

Subject: Re: Re: [virtio-comment] Re: [PATCH v2 05/11] transport-fabrics: introduce Keyed Transmission


On Thu, Jun 01, 2023 at 09:09:49PM +0800, zhenwei pi wrote:
> 
> 
> On 6/1/23 19:33, Stefan Hajnoczi wrote:
> > On Thu, Jun 01, 2023 at 05:02:45PM +0800, zhenwei pi wrote:
> > > 
> > > 
> > > On 6/1/23 00:20, Stefan Hajnoczi wrote:
> > > > On Thu, May 04, 2023 at 04:19:04PM +0800, zhenwei pi wrote:
> > > > > Keyed transmission is used for message oriented communication(Ex RDMA),
> > > > > also add virtio-blk read/write 8K example.
> > > > > 
> > > > > Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
> > > > > ---
> > > > >    transport-fabrics.tex | 178 ++++++++++++++++++++++++++++++++++++++++++
> > > > >    1 file changed, 178 insertions(+)
> > > > > 
> > > > > diff --git a/transport-fabrics.tex b/transport-fabrics.tex
> > > > > index c02cf26..7711321 100644
> > > > > --- a/transport-fabrics.tex
> > > > > +++ b/transport-fabrics.tex
> > > > > @@ -317,3 +317,181 @@ \subsubsection{Buffer Mapping Definition}\label{sec:Virtio Transport Options / V
> > > > >                        |......|
> > > > >                        +------+  -> 8193
> > > > >    \end{lstlisting}
> > > > > +
> > > > > +\paragraph{Keyed Transmission}\label{sec:Virtio Transport Options / Virtio Over Fabrics / Transmission Protocol / Commands Definition / Keyed Transmission}
> > > > > +Command and Segment Descriptors are transmitted in a message within a
> > > > > +connection, and buffer is transmitted by remote memory access.  The layout in message:
> > > > 
> > > > With RDMA it is theoretically possible to implement virtqueues without
> > > > messages in the data path (i.e. by using something similar to vring with
> > > > RDMA). Why did you decide to use a mixed messages + RDMA approach
> > > > instead of a 100% RDMA approach?
> > > > 
> > > 
> > > Hi,
> > > 
> > > To reduce networking RTTs. From my experience, a single RDMA message (event
> > > based) takes at least 6us.

What is the cost of 1 8KB RDMA WRITE vs 2 4KB RDMA WRITEs?

I'm asking because if 6us is per RDMA transfer, then it's better to
avoid exposing scatter-gather lists (descriptors) to the other side and
instead provide contiguous memory and accept the cost of memcpy on the
receiving side.

On the other hand, if the cost is mostly determined by the amount of
data transferred, then it's better to expose scatter-gather lists so
data is received in the final memory location where it is consumed.
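
To make the comparison concrete, here is a rough libibverbs sketch of the two
options (the names qp, mr_a, mr_b, remote_addr, dst_a, dst_b and rkey are
hypothetical and assumed to be set up already on a connected reliable QP):

#include <infiniband/verbs.h>
#include <stdint.h>

/* Option 1: one 8KB RDMA WRITE. The sender gathers two local 4KB fragments
 * and they land contiguously at remote_addr; the receiver then has to memcpy
 * the data into its final scattered locations. */
static int post_one_8k_write(struct ibv_qp *qp,
                             void *buf_a, struct ibv_mr *mr_a,
                             void *buf_b, struct ibv_mr *mr_b,
                             uint64_t remote_addr, uint32_t rkey)
{
	struct ibv_sge sge[2] = {
		{ .addr = (uintptr_t)buf_a, .length = 4096, .lkey = mr_a->lkey },
		{ .addr = (uintptr_t)buf_b, .length = 4096, .lkey = mr_b->lkey },
	};
	struct ibv_send_wr wr = {
		.wr_id = 1, .sg_list = sge, .num_sge = 2,
		.opcode = IBV_WR_RDMA_WRITE, .send_flags = IBV_SEND_SIGNALED,
	};
	struct ibv_send_wr *bad_wr;

	wr.wr.rdma.remote_addr = remote_addr;
	wr.wr.rdma.rkey = rkey;
	return ibv_post_send(qp, &wr, &bad_wr);
}

/* Option 2: two 4KB RDMA WRITEs. Each fragment lands directly at its final
 * remote destination (dst_a, dst_b), so there is no receive-side memcpy, but
 * the data is transferred as two RDMA operations. */
static int post_two_4k_writes(struct ibv_qp *qp,
                              void *buf_a, struct ibv_mr *mr_a, uint64_t dst_a,
                              void *buf_b, struct ibv_mr *mr_b, uint64_t dst_b,
                              uint32_t rkey)
{
	struct ibv_sge sge_a = { .addr = (uintptr_t)buf_a, .length = 4096, .lkey = mr_a->lkey };
	struct ibv_sge sge_b = { .addr = (uintptr_t)buf_b, .length = 4096, .lkey = mr_b->lkey };
	struct ibv_send_wr wr_b = {
		.wr_id = 3, .sg_list = &sge_b, .num_sge = 1,
		.opcode = IBV_WR_RDMA_WRITE, .send_flags = IBV_SEND_SIGNALED,
	};
	struct ibv_send_wr wr_a = {
		.wr_id = 2, .sg_list = &sge_a, .num_sge = 1,
		.opcode = IBV_WR_RDMA_WRITE, .send_flags = IBV_SEND_SIGNALED,
		.next = &wr_b,
	};
	struct ibv_send_wr *bad_wr;

	wr_a.wr.rdma.remote_addr = dst_a;
	wr_a.wr.rdma.rkey = rkey;
	wr_b.wr.rdma.remote_addr = dst_b;
	wr_b.wr.rdma.rkey = rkey;
	return ibv_post_send(qp, &wr_a, &bad_wr);
}

Measuring these two against each other should show whether the per-transfer
cost or the per-byte cost dominates.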

> > > This approach has a chance to send a command (including data segments) in 1
> > > networking RTT, and receive a completion (including data segments) in 1
> > > networking RTT. I tried to design a 100% RDMA approach (mapping a vring to
> > > the remote side, which then accesses this vring by RDMA READ/WRITE),
> > > but I failed to find a way to achieve it.
> > 
> > The goal is to minimize the number of RDMA transfers. Each area of
> > memory should be located on the system that is polling constantly (busy
> > waiting) and the other side occasionally sends an RDMA WRITE request.
> > 
> > This idea requires bi-directional RDMA where both initiator and target
> > make memory accessible to the other side. Is this possible?
> > 
> > The target owns the Available Ring, a descriptor table similar to those
> > used by the Split and Packed Virtqueue layouts that is used by the
> > driver to submit virtqueue buffers to the device. The target sends a key
> > for the Available Ring to the initiator during virtqueue setup. The
> > initiator sends RDMA WRITEs that fill in virtqueue descriptors. Indirect
> > descriptors are supported, but the target will need to use RDMA READs to
> > load the indirect descriptor table, so there is overhead. Even regular
> > non-indirect descriptors have overhead because an RDMA READ is required
> > to read the payload. The best approach for small virtqueue elements is
> > to inline the payload in the Available Ring descriptor so no additional
> > RDMA transfers are needed (this achieves a similar effect to your approach
> > of using messages + RDMA, but with pure RDMA). The target polls the
> > Available Ring to detect available buffers.
> > 
> > The initiator sends a key for the Used Ring to the target during
> > virtqueue setup. The target sends RDMA WRITEs that fill in used
> > elements. The initiator polls the Used Ring to detect used buffers.
> > 
> > I'm not sure if the Used Ring makes sense as RDMA memory. Maybe it's
> > better to send a message over the reliable connection instead so that
> > Used Buffer Notifications can support interrupts and not just polling.
> > 
> 
> I guess RDMA WRITE WITH IMM would be fine for this approach.
> 
> > This is a new virtqueue layout. It's only worthwhile implementing it if
> > the Available Ring RDMA performance is significantly better than the
> > current approach.
> > 
> > Stefan
> 
> I agree with your approach of maintaining the vring. If I understand correctly,
> here is an example of a virtio-blk 4k write:
> 1. The initiator writes the 3 vring descriptors by RDMA WRITE WITH IMM (the IMM
> data carries the VQ control message). This uses 1 networking RTT.
> 2. The target handles the WRITE WITH IMM and reads the initiator's remote memory
> for desc[0] and desc[1]. This uses 1 networking RTT. (I did not find the 2 keys
> for desc[0] and desc[1] in your approach, but I think they can be provided in
> step 1 by adding more memory.)
> 3. The target handles the virtio-blk write request and writes the data for
> desc[2] back to the initiator's memory by RDMA WRITE WITH IMM (the IMM data
> carries the control message). This uses 1 networking RTT.
> 
> 
> So we use at least 3 RTTs with this approach. If unfortunately the u32 imm_data
> is too small to carry the whole control message, we may need more RTTs.
> 
> Sorry, the earlier "I failed to find a way to achieve it" means that I failed
> to find a way to complete a single request in 2 RTTs.

1 RDMA WRITE WITH IMM for the available buffer + 1 RDMA WRITE WITH IMM
for the used buffer is theoretically possible when all virtqueue
buffer elements are inlined. This way Step 2 can be eliminated.
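
Just to illustrate what "inlined" could mean here, a hypothetical Available
Ring element layout (not part of this series; field names are made up):

#include <stdint.h>

/* Hypothetical element of the target-owned Available Ring. With INLINE set
 * the payload travels inside the same RDMA WRITE WITH IMM, so the target
 * never needs an RDMA READ to fetch it. */
struct rdma_avail_desc {
	uint64_t addr;       /* initiator memory, ignored when INLINE is set */
	uint32_t rkey;       /* rkey for addr, ignored when INLINE is set */
	uint32_t len;        /* element length in bytes */
	uint16_t flags;      /* e.g. INLINE, WRITE (device-writable), NEXT */
	uint16_t next;       /* next element of the same buffer in the batch */
	uint8_t  payload[];  /* element data when INLINE is set */
};

When every element of a request is inlined like this, only the RDMA WRITE WITH
IMM for the used buffer remains on the completion side.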

In theory it's possible to supply multiple available buffers in 1 RDMA
WRITE WITH IMM and complete multiple used buffers in 1 RDMA WRITE WITH
IMM when the virtqueue access pattern allows batching. An optimal RDMA
virtqueue protocol has a 1 RDMA WRITE WITH IMM to N virtqueue buffer
relationship, not a 1:1 relationship.
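
For example (again only a sketch, nothing defined anywhere yet), the 32-bit
imm_data of the publishing RDMA WRITE WITH IMM could describe the whole batch,
so the polling side knows exactly how many new entries to consume:

#include <stdint.h>

/* Hypothetical imm_data encoding for a batch of ring entries:
 * low 16 bits = starting ring index, high 16 bits = number of entries
 * written by this RDMA WRITE WITH IMM. Usable in both directions
 * (Available Ring and Used Ring). */
static inline uint32_t ring_batch_imm(uint16_t start_idx, uint16_t count)
{
	return ((uint32_t)count << 16) | start_idx;
}

static inline void ring_batch_parse(uint32_t imm, uint16_t *start_idx, uint16_t *count)
{
	*start_idx = (uint16_t)(imm & 0xffff);
	*count = (uint16_t)(imm >> 16);
}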

One more idea to play with: VIRTIO has flexible message framing, so
devices must process a virtqueue buffer the same way regardless of whether
it has 1 large element or many small elements. Therefore the virtqueue
RDMA protocol does not need to preserve the virtqueue element count and
sizes from the driver. For example, the target can offer a list of
key/length pairs that the initiator RDMA WRITEs the virtqueue buffer
contents into. For a virtio-blk device that would be a struct
virtio_blk_outhdr followed by a large page-aligned buffer for the I/O
data to be transferred. Then the device always receives a properly
aligned and contiguous buffer. Unfortunately this approach breaks down
when the virtqueue carries requests that are organized very differently,
but it might be useful when there is a most common request type.
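
As a sketch of that last idea (hypothetical layout, nothing from the patch
series): the target could advertise one pre-registered slot per outstanding
request at setup time, and the initiator fills it with plain RDMA WRITEs:

#include <stdint.h>

/* Hypothetical per-request slot advertised by a virtio-blk target. The
 * initiator writes struct virtio_blk_outhdr into the header area and the I/O
 * payload into the page-aligned data area; the target writes the one-byte
 * status back on completion. Names and sizes are illustrative only. */
struct blk_req_slot {
	uint64_t hdr_addr;    /* fits struct virtio_blk_outhdr */
	uint32_t hdr_rkey;
	uint32_t hdr_len;

	uint64_t data_addr;   /* large, page-aligned data area */
	uint32_t data_rkey;
	uint32_t data_len;

	uint64_t status_addr; /* 1 byte, written by the target on completion */
	uint32_t status_rkey;
	uint32_t status_len;
};

Because the data area is contiguous and aligned, the device-side handling stays
simple regardless of how the driver originally split the request.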

Stefan



