OASIS Mailing List Archives

virtio-comment message



Subject: Re: [virtio-comment] Re: [PATCH v19] virtio-net: support inner header hash


On Thu, Jun 29, 2023 at 11:31:58AM +0800, Jason Wang wrote:
> 
> On 2023/6/28 18:10, Michael S. Tsirkin wrote:
> > On Wed, Jun 28, 2023 at 11:46:22AM +0800, Jason Wang wrote:
> > > On Wed, Jun 28, 2023 at 12:35 AM Heng Qi <hengqi@linux.alibaba.com> wrote:
> > > > 1. Currently, a received encapsulated packet has an outer and an inner header, but
> > > > the virtio device is unable to calculate the hash for the inner header. The same
> > > > flow can traverse through different tunnels, resulting in the encapsulated
> > > > packets being spread across multiple receive queues (refer to the figure below).
> > > > However, in certain scenarios, we may need to direct these encapsulated packets of
> > > > the same flow to a single receive queue. This facilitates the processing
> > > > of the flow by the same CPU to improve performance (warm caches, less locking, etc.).
> > > > 
> > > >                 client1                    client2
> > > >                    |        +-------+         |
> > > >                    +------->|tunnels|<--------+
> > > >                             +-------+
> > > >                                |  |
> > > >                                v  v
> > > >                        +-----------------+
> > > >                        | monitoring host |
> > > >                        +-----------------+
> > > > 
> > > > To achieve this, the device can calculate a symmetric hash based on the inner headers
> > > > of the same flow.
> > > > 
> > > > 2. Legacy tunneling protocols may lack the entropy fields (such as an outer UDP
> > > > source port) that modern protocols carry in the outer header. Multiple flows with
> > > > the same outer header but different inner headers are then directed to the same
> > > > receive queue, resulting in poor receive performance.
> > > > 
> > > > To address this limitation, the inner header hash lets the device advertise the
> > > > capability to calculate the hash over the inner packet, restoring good receive
> > > > performance.
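
[Editorial note: a minimal sketch of the symmetry requirement described above. The XOR fold below is only an illustration of how a device might make the inner-tuple hash order-independent, so that both directions of a flow land on the same receive queue; it is not the Toeplitz hash virtio-net actually specifies, and all names here are hypothetical.]

```c
#include <stdint.h>

/* Hypothetical inner 4-tuple extracted from the encapsulated packet. */
struct inner_tuple {
    uint32_t saddr, daddr;   /* inner IPv4 addresses */
    uint16_t sport, dport;   /* inner TCP/UDP ports */
};

/* XOR-folding makes the input order-independent: swapping source and
 * destination produces the same folded words, hence the same hash, so
 * both directions of a flow select the same receive queue. */
static uint32_t symmetric_inner_hash(const struct inner_tuple *t)
{
    uint32_t addr = t->saddr ^ t->daddr;
    uint32_t port = (uint32_t)t->sport ^ (uint32_t)t->dport;
    /* Any deterministic mix of the folded words will do for illustration. */
    return addr * 2654435761u ^ port;
}
```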
> > > > 
> > > > Fixes: https://github.com/oasis-tcs/virtio-spec/issues/173
> > > > Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
> > > > Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > Reviewed-by: Parav Pandit <parav@nvidia.com>
> > > > ---
> > > > v18->v19:
> > > >          1. Have a single structure instead of two. @Michael S . Tsirkin
> > > >          2. Some small rewrites. @Michael S . Tsirkin
> > > >          3. Rebase to master.
> > > > 
> > > > v17->v18:
> > > >          1. Some rewording suggestions from Michael (Thanks!).
> > > >          2. Use 0 to disable inner header hash and remove
> > > >             VIRTIO_NET_HASH_TUNNEL_TYPE_NONE.
> > > > v16->v17:
> > > >          1. Some small rewrites. @Parav Pandit
> > > >          2. Add Parav's Reviewed-by tag (Thanks!).
> > > > 
> > > > v15->v16:
> > > >          1. Remove the hash_option. In order to delimit the inner header hash and RSS
> > > >             configuration, the ability to configure the outer src udp port hash is given
> > > >             to RSS. This is orthogonal to inner header hash, which will be done in the
> > > >             RSS capability extension topic (considered as an RSS extension together
> > > >             with the symmetric toeplitz hash algorithm, etc.). @Parav Pandit @Michael S . Tsirkin
> > > >          2. Fix a 'field' typo. @Parav Pandit
> > > > 
> > > > v14->v15:
> > > >          1. Add tunnel hash option suggested by @Michael S . Tsirkin
> > > >          2. Adjust some descriptions.
> > > > 
> > > > v13->v14:
> > > >          1. Move supported_hash_tunnel_types from config space into cvq command. @Parav Pandit
> > > I may have missed some discussions, but this complicates provisioning a lot.
> > > 
> > > With it in the config space, type-agnostic provisioning through
> > > config space + feature bits just works.
> > > 
> > > If we move it to cvq only, we need a device-specific provisioning interface.
> > > 
> > > Thanks
> > Yeah, that's what I said too. Debugging too.  I think we should build a
> > consistent solution that allows accessing config space through DMA,
> > separately from this effort.  Parav, do you think you can live with this
> > approach so this specific proposal can move forward?
> 
> 
> We can probably go another way: invent a new device configuration space
> capability with a fixed size, like the PCI configuration access capability?
> 
> struct virtio_pci_cfg_cap {
>         struct virtio_pci_cap cap;
>         u8 dev_cfg_data[4]; /* Data for device configuration space access. */
> };
> 
> So it won't grow as the size of the device configuration space grows.
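
[Editorial note: a minimal sketch of the windowed access pattern such a capability implies. The structures and helpers below are hypothetical stand-ins for programming cap.offset and reading dev_cfg_data[4] through the PCI capability; a real driver would issue these as PCI config-space writes and reads.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simulated 4-byte window into the device configuration space.
 * 'backing' stands in for the device-side config area. */
struct cfg_window {
    const uint8_t *backing;   /* simulated device config space */
    uint32_t offset;          /* currently selected offset */
};

static void window_select(struct cfg_window *w, uint32_t off)
{
    w->offset = off;          /* would be a write to cap.offset */
}

static void window_read4(struct cfg_window *w, uint8_t out[4])
{
    memcpy(out, w->backing + w->offset, 4);   /* would read dev_cfg_data */
}

/* Driver-side loop: the capability stays fixed-size no matter how large
 * the device config space becomes; we slide the 4-byte window across it. */
static void read_dev_cfg(struct cfg_window *w, uint8_t *dst, size_t len)
{
    uint8_t chunk[4];
    for (size_t off = 0; off < len; off += 4) {
        window_select(w, (uint32_t)off);
        window_read4(w, chunk);
        size_t n = (len - off < 4) ? (len - off) : 4;
        memcpy(dst + off, chunk, n);
    }
}
```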
> 
> Thanks

It is true, it does not have to be DMA strictly speaking.

The basic issue is with synchronous access.

If we change the capability in some way to allow asynchronous access, then
that works. E.g. make the device change the length to 0, and the driver
must poll before considering the operation done.
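
[Editorial note: a rough sketch of that polling scheme, with the length field treated as the completion flag the device clears. All names are illustrative, not from the spec.]

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Driver kicks a config-space access, then spins on the capability's
 * length field; the device writes it back to 0 when the access is done.
 * On a virtualized transport each read of the field may be one more
 * trap/exit, which is the cost of this scheme. */
static bool cfg_access_wait(_Atomic uint32_t *length_field,
                            unsigned max_polls)
{
    for (unsigned i = 0; i < max_polls; i++) {
        if (atomic_load_explicit(length_field, memory_order_acquire) == 0)
            return true;    /* device finished the access */
    }
    return false;           /* timed out; operation still in flight */
}
```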

Having said that, this will increase the number of VM exits even more.
Not a fan; DMA seems cleaner.

-- 
MST


