virtio-comment message



Subject: RE: RE: Re: Re: Re: [PATCH v2 06/11] transport-fabrics: introduce command set


> From: zhenwei pi <pizhenwei@bytedance.com>
> Sent: Thursday, June 8, 2023 9:39 PM


> > We should start with first establishing the data transfer model covering 512B
> > to 1M context and take up the optimizations as extensions.
> >
> >
> 
> Hi, Parav
> 
> What do you think about another RDMA inline proposal in '[PATCH v2 11/11]
> transport-fabrics: support inline data for keyed transmission'?
> 
> 1, use a feature command to get the target max recv buffer size, for example 16k
> 2, use a feature command to set the initiator max recv buffer size, for example 16k
> If the size of the payload is less than the max recv buffer size, a single RDMA
> SEND is enough. For example, virtio-blk writes 8k: 16 + 8192 < 16384, which
> means a single RDMA SEND is fine.

Let me read it.
From the above short description, it appears that every receive buffer posted must be of size 16K.
And if the sender chooses not to send data inline, a large amount of buffer space is wasted.
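
To make sure I follow steps 1 and 2 above, the sender side decision seems to reduce to a size check. A minimal sketch, assuming this reading of the proposal; the function and variable names below are mine, not from the patch:

#include <stdbool.h>
#include <stddef.h>

/* Sketch only: carry the payload inline in a single RDMA SEND when the
 * command plus payload fits in the max recv buffer size learned from the
 * peer via the feature command; otherwise fall back to the keyed
 * (RDMA READ/WRITE) path. */
static bool use_single_rdma_send(size_t cmd_len, size_t payload_len,
                                 size_t peer_max_recv_size)
{
        /* e.g. the 8k virtio-blk write above: 16 + 8192 < 16384 */
        return cmd_len + payload_len < peer_max_recv_size;
}

With the example numbers above, use_single_rdma_send(16, 8192, 16384) returns true, so the whole request goes out as one RDMA SEND.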

For a read-only or read-heavy workload, the buffer wastage on the target side is close to 98% or so, assuming a 64B command size.
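
Spelling that out (my arithmetic, assuming the 16K receive buffers from the example above and a 64B command):

    used per posted buffer   = 64 B (the read command itself)
    wasted per posted buffer = 16384 - 64 = 16320 B
    wastage                  = 16320 / 16384, roughly 99.6% of every posted buffer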

And when the receive buffers are full, the sender is stalled for a full round trip before it can enqueue the command.

