virtio-comment message


Subject: Re: [PATCH requirements v4 5/7] net-features: Add n-tuple receive flow filters requirements

On 2023/8/16 6:31 PM, Parav Pandit wrote:

From: Heng Qi <hengqi@linux.alibaba.com>
Sent: Wednesday, August 16, 2023 1:08 PM
From: Parav Pandit <parav@nvidia.com>
Sent: Tuesday, August 15, 2023 1:16 PM

Add virtio net device requirements for receive flow filters.

Signed-off-by: Parav Pandit <parav@nvidia.com>
- Addressed comments from Satananda, Heng, David
- removed context specific wording, replaced with destination
- added group create/delete examples and updated requirements
- added optional support to use cvq for flow filter commands
- added example of transporting flow filter commands over cvq
- made group size to be 16-bit
- added concept of 0->n max flow filter entries based on max count
- added concept of 0->n max flow group based on max count
- split field bitmask to separate command from other filter
- rewrote rx filter processing chain order with respect to existing
    filter commands and rss
- made flow_id flat across all groups
- split setup and operations requirements
- added design goal
- worded requirements more precisely
- fixed comments from Heng Li
- renamed receive flow steering to receive flow filters
- clarified byte offset in match criteria
   net-workstream/features-1.4.md | 151
   1 file changed, 151 insertions(+)

diff --git a/net-workstream/features-1.4.md
b/net-workstream/features-1.4.md index cb72442..78bb3d2 100644
--- a/net-workstream/features-1.4.md
+++ b/net-workstream/features-1.4.md
@@ -9,6 +9,7 @@ together is desired while updating the virtio net interface.
   1. Device counters visible to the driver
   2. Low latency tx and rx virtqueues for PCI transport
   3. Virtqueue notification coalescing re-arming support
+4. Virtqueue receive flow filters (RFF)

   # 3. Requirements
   ## 3.1 Device counters
@@ -183,3 +184,153 @@ struct vnet_rx_completion {
      notifications until the driver rearms the notifications of the virtqueue.
   2. When the driver rearms the notification of the virtqueue, the device
      to notify again if notification coalescing conditions are met.
+## 3.4 Virtqueue receive flow filters (RFF)
+0. Design goal:
+   To filter and/or to steer packet based on specific pattern match to a
+   specific destination to support application/networking stack driven
+   processing.
+1. Two use cases are: to support the Linux netdev set_rxnfc() interface,
+   and to support the netdev feature NETIF_F_NTUPLE aka ARFS.
+### 3.4.1 control path
+1. The number of flow filter operations/sec can reach 100k/sec or more.
+   Hence, flow filter operations must be done over a queueing interface
+   using one or more queues.
+2. The device should be able to expose the supported flow filter queue
+   count and its start vq index to the driver.
+3. As each device may be operating at a different performance level, the
+   start vq index and count may be different for each device. Secondly, it is
+   inefficient for the device to provide flow filter capabilities via a config
+   space region. Hence, the device should be able to share these attributes
+   using a dma interface, instead of transport registers.
+4. Since flow filters are enabled much later in the driver life cycle, driver
+   will likely create these queues when flow filters are enabled.
Regarding this description, I want to say that ARFS will be enabled at runtime.
But ethtool RFF will be used at any time as long as the device is ready.

Yes, but ethtool RFF is a blocking callback in which slow tasks such as q creation can be done, only when one wants to add flows.
ARFS is anyway controlled using the set_features() callback.

Combining what was discussed in today's meeting, flow vqs and ctrlq are
mutually exclusive, so if flow vqs are supported, then ethtool RFF can use flow vqs.

0. Flow filter queues and flow filter commands on cvq are mutually exclusive.

1. When flow queues are supported, the driver should create flow filter queues
and use them.
(Since cvq is not enabled for flow filters, any flow filter command coming on cvq
must fail.)

2. If the driver wants to use flow filters over cvq, the driver must explicitly
enable flow filters on cvq via a command; once enabled on the cvq, the driver
cannot use flow filter queues.
This eliminates any synchronization needed by the device among different types
of queues.


Well, the "likely create these queues when flow filters are enabled"
described here is confusing,
because if ethtool RFF is used, we need to create a flow vq in the probe stage.

Current spec wording limits one to creating queues before DRIVER_OK.
But with the introduction of the _RESET bit, one can create an empty queue and disable it (reset it! What a grand name).

And re-enable it during ethtool callbacks.
This would be a workaround to dynamically create the queue.

Yes, this is a workaround; we can just set the number of flow vqs for the device, but neither allocate resources nor enable them. But this is not exhaustive, because xdp may also require dynamic q creation/destruction.

There are several other reasons:
1. The behavior of dynamically creating flow vq will break the current virtio
   spec. Please see the "Device Initialization" chapter. ctrlq, as a
   configuration queue similar to flow vq, is also created in the probe phase.
   So if we support "dynamically creating", we need to update the spec.

2. Flow vq is similar to a transmit q, and does not need to fill descriptors in
   advance, so the consumption of resources is relatively small.

Only the queue descriptors' memory is consumed, which is not a lot.
But the concept of creating a resource without consuming it is just bad.
We learned the lesson from the mlx5 driver that dynamic creation is efficient.
Many parts of the Linux kernel are also moving in this direction, all the way up to dynamically allocated individual msix vectors.

Ok. I got it.

So we should strive to enable them dynamically and improve the virtio spec.

It should be an orthogonal feature; sadly, that is how the RING_RESET feature is done. :(

RING_RESET is performed without changing the number of queues. But what you said above is a workaround.

3. Dynamic creation of virtqueues seems to be a new thread of the virtio spec,
   and it should also be applicable to rxqs and txqs. We can temporarily
   support creating flow vq in the probe stage, and subsequent dynamic
   creation can be an extension.

So, should we create the flow vqs at the initial stage of the driver probe?
One option is to follow the above workaround.
Second option is to add a feature bit to indicate dynamic Q creation.

I'm leaning towards the second option, which makes the work orthogonal and also works in the case of XDP.

