virtio-comment message



Subject: Re: [virtio] Re: [virtio-comment] [PATCH requirements 7/7] net-features: Add header data split requirements


On Monday, 2023-08-14 at 08:44:11 -04, Willem de Bruijn wrote:
> On Mon, Aug 14, 2023 at 8:01 AM David Edmondson
> <david.edmondson@oracle.com> wrote:
>>
>>
>> On Monday, 2023-07-24 at 06:34:21 +03, Parav Pandit wrote:
>> > Add header data split requirements for the receive packets.
>> >
>> > Signed-off-by: Parav Pandit <parav@nvidia.com>
>> > ---
>> >  net-workstream/features-1.4.md | 13 +++++++++++++
>> >  1 file changed, 13 insertions(+)
>> >
>> > diff --git a/net-workstream/features-1.4.md b/net-workstream/features-1.4.md
>> > index 37820b6..a64e356 100644
>> > --- a/net-workstream/features-1.4.md
>> > +++ b/net-workstream/features-1.4.md
>> > @@ -11,6 +11,7 @@ together is desired while updating the virtio net interface.
>> >  3. Virtqueue notification coalescing re-arming support
>> >  4. Virtqueue receive flow filters (RFF)
>> >  5. Device timestamp for tx and rx packets
>> > +6. Header data split for the receive virtqueue
>> >
>> >  # 3. Requirements
>> >  ## 3.1 Device counters
>> > @@ -306,3 +307,15 @@ struct virtio_net_rff_delete {
>> >     point of reception from the network.
>> >  3. The device should provide a receive packet timestamp in a single DMA
>> >     transaction along with the rest of the receive completion fields.
>> > +
>> > +## 3.6 Header data split for the receive virtqueue
>> > +1. The device should be able to DMA the packet header and data to two different
>> > +   memory locations; this enables the driver and networking stack to perform
>> > +   zero-copy to application buffer(s).
>> > +2. The driver should be able to configure maximum header buffer size per
>> > +   virtqueue.
>> > +3. The header buffer should be in physically contiguous memory per virtqueue.
>> > +4. The device should be able to indicate header data split in the receive
>> > +   completion.
>> > +5. The device should be able to zero-pad the header buffer when the received
>> > +   header is shorter than the CPU cache line size.
>>
>> What's the use case for this (item 5)?
>
> Without zero padding, each header write results in a
> read-modify-write, possibly over PCIe. That can significantly depress
> throughput.

Understood. So the padding could be anything; we just want to write a
full cache line.
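
To illustrate the point, here is a minimal C sketch of the padding idea
being discussed (the function name, buffer layout, and 64-byte line size
are illustrative assumptions, not part of the proposed spec; real line
size is platform-dependent):

```c
#include <stdint.h>
#include <string.h>

#define CACHE_LINE_SIZE 64 /* assumed line size; platform-dependent */

/* Illustrative sketch: after hdr_len bytes of header have been written
 * into buf, zero the remainder of the last cache line touched, so the
 * writer can emit full-line writes instead of a read-modify-write
 * (which is the cost Willem describes, possibly over PCIe). */
static void pad_header_to_cache_line(uint8_t *buf, size_t hdr_len)
{
    /* Round hdr_len up to the next cache-line boundary. */
    size_t padded = (hdr_len + CACHE_LINE_SIZE - 1) &
                    ~(size_t)(CACHE_LINE_SIZE - 1);
    /* Zero the tail of the line; the header bytes are untouched. */
    memset(buf + hdr_len, 0, padded - hdr_len);
}
```

A 40-byte header in a 64-byte line would get 24 bytes of zeros appended,
making the whole line writable without first reading it back.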
-- 
Woke up in my clothes again this morning, don't know exactly where I am.

