virtio-comment message

Subject: RE: [PATCH requirements 5/7] net-features: Add n-tuple receive flow filters requirements

> From: Heng Qi <hengqi@linux.alibaba.com>
> Sent: Wednesday, August 2, 2023 8:55 PM

> Hi, Parav. Sorry for not responding sooner; I've been busy with other things recently.
> Yes, RFF has two scenarios, set_rxnfc and ARFS, both of which will affect the
> packet steering on the device side.
> I think manually configured rules should have higher priority than ARFS
> automatic configuration.
> This behavior is intuitive and consistent with other drivers. Therefore, the
> processing chain on a rx packet is:
> {mac,vlan,promisc rx filters} -> {set_rxnfc} -> {ARFS} -> {rss/hash config}.
Within the RFF context, the priority among multiple RFF entries is governed by the concept of a group.
So the above two users of RFF will create two groups, assign a priority to each, and achieve the desired processing order.
> There are also priorities within set_rxnfc and ARFS respectively.
> 1. set_rxnfc has both the exact match and the mask match. Exact
> matches should have higher priority.
> Suppose there are two rules,
> 	rule1: {"tcpv4", "src-ip:"} -> rxq1
> 	rule2: {"tcpv4", "src-ip:", "dst-port: 8989"} -> rxq2. Received
> rx packets whose src-ip matches and whose dst-port is 8989 should match
> rule2 instead of rule1.
Yes. The driver should be able to set the priority within the group as well for the above scenario.
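
As a rough sketch (the struct names and layout below are only my illustration, not from the draft; le16 denotes a little-endian field as elsewhere in the virtio spec):

struct virtio_net_ff_group {
	le16 group_id;       /* e.g. one group for set_rxnfc, one for ARFS */
	le16 group_priority; /* lower value = evaluated earlier */
};

struct virtio_net_ff_entry_hdr {
	le16 group_id;       /* group this entry belongs to */
	le16 entry_priority; /* priority within the group, e.g. the exact
	                      * match rule placed above the mask match rule */
	/* match criteria, action and destination follow */
};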

> The rules of set_rxnfc come from manual configuration, the number of these
> rules is small and we may not need grouping for this. And ctrlq can meet
> the configuration rate,
Yes, but having a single interface for the two use cases enables the device implementation to avoid building driver-interface-specific infrastructure.
Both can be handled by a unified interface.

> 2. ARFS only has the exact match.
> For ARFS, since there is only one matching rule for a certain flow, there is
> no need for group?
Groups define the priority between the two types of rules.
Within the ARFS domain we don't need a group.

However, instead of starting with a limit of only two groups, it is better to have some flexibility to support multiple groups.
A device can define one, two, or more groups.
So if a use case arises in the future, the interface won't be limiting.

> We may need different types of tables, such as a UDPv4 flow table and a TCPv4
> flow table, to speed up the lookup for different flow types.
> Besides, the high rate and large number of configuration rules means that we
> need flow vq.
Yes, though I am not sure if those tables should be exposed to the driver.
I am thinking that a device may be able to decide on the count of tables it can create.

> Therefore, although set_rxnfc and ARFS share a set of infrastructure, there are
> still some differences, such as configuration rate and quantity. So do we need to
> add two features (VIRTIO_NET_F_RXNFC and VIRTIO_NET_F_ARFS) for
> set_rxnfc and ARFS respectively, and ARFS can choose flow vq?
Not really, as one interface can fulfill both needs without attaching it to a specific OS interface.

> In this way, is it more conducive to advancing the work of RFF (such as
> accelerating the advancement of set_rxnfc)?
Both use cases are equally and immediately usable, so we can advance this easily using a single interface now.

> > +
> > +### 3.4.1 control path
> > +1. The number of flow filter operations/sec can range from 100k/sec to 1M/sec
> > +   or even more. Hence flow filter operations must be done over a queueing
> > +   interface using one or more queues.
> This is only for ARFS; for devices that only want to support set_rxnfc, they
> don't provide VIRTIO_NET_F_ARFS and don't consider implementing a flow vq.
Well, once the device implements the flow vq, it will service both cases.
A simple device implementation that only cares about RXNFC can implement the flow vq in semi-software, serving a very small number of requests/sec.
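
To make the queueing model concrete, a flow vq request could look roughly like this (an illustrative sketch; the command names and fields are my assumptions, not the draft):

struct virtio_net_ff_request {
	le16 cmd;      /* e.g. FF_ADD, FF_MOD, FF_DEL */
	le16 group_id; /* group the entry belongs to */
	le32 flow_id;  /* driver-chosen handle, used by FF_MOD/FF_DEL */
	/* for FF_ADD/FF_MOD: match criteria, action, destination */
};

The driver enqueues such requests like any other virtqueue buffers, and the device returns a status on completion; this is what lets the rate scale to 100k-1M operations/sec without loading the ctrlq.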

> > +2. The device should be able to expose one or more supported flow filter queue
> > +   count and its start vq index to the driver.
> > +3. As each device may be operating for different performance characteristic,
> > +   start vq index and count may be different for each device. Secondly, it is
> > +   inefficient for device to provide flow filters capabilities via a config space
> > +   region. Hence, the device should be able to share these attributes using
> > +   dma interface, instead of transport registers.
> > +4. Since flow filters are enabled much later in the driver life cycle, driver
> > +   will likely create these queues when flow filters are enabled.
> I understand that the number of flow vqs is not reflected in
> max_virtqueue_pairs. And a new vq is created at runtime; is this supported in
> the existing virtio spec?
We are extending the virtio spec now, if it is not already supported.
But yes, it is supported, because max_virtqueue_pairs will not expose the count of the flow vqs
(similar to how we did the AQ).
And the flow vq is not a _pair_ anyway, so we cannot expose it there.
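
As an illustrative sketch of points 2 and 3 above (field names are my assumptions), the dma-shared attributes could be:

struct virtio_net_ff_caps {
	le16 flow_vq_index; /* start vq index of the first flow vq */
	le16 flow_vq_count; /* number of flow vqs */
};

The driver would read this structure over the dma interface instead of transport registers, and max_virtqueue_pairs stays untouched.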

> > +5. Flow filter operations are often accelerated by device in a hardware. Ability
> > +   to handle them on a queue other than control vq is desired. This achieves near
> > +   zero modifications to existing implementations to add new operations on new
> > +   purpose built queues (similar to transmit and receive queue).
> > +6. The filter masks are optional; the device should be able to expose if it
> > +   supports filter masks.
> > +7. The driver may want to have priority among group of flow entries; to facilitate
> > +   the device support grouping flow filter entries by a notion of a group. Each
> > +   group defines priority in processing flow.
> > +8. The driver and group owner driver should be able to query supported device
> > +   limits for the flow filter entries.
> > +
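For points 6 to 8, the queryable device limits could be sketched as (again, only illustrative names):

struct virtio_net_ff_limits {
	le32 max_flow_entries; /* total flow filter entries */
	le16 max_groups;       /* number of groups supported */
	le16 max_priorities;   /* priority levels within a group */
	u8   mask_supported;   /* non-zero if field masks are supported */
};
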
> > +### 3.4.2 flow operations path
> > +1. The driver should be able to define a receive packet match criteria, an
> > +   action and a destination for a packet.
> When the user does not specify a destination when configuring a rule, do we
> need a default destination?
I think we should not give such an option to the driver.
A human/end user may not have a destination, but the driver should be able to decide on a predictable destination.
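
To illustrate the match/action/destination tuple (a hypothetical layout, not the draft wording):

struct virtio_net_ff_rule {
	le16 match_type; /* e.g. ipv4 + tcp 5-tuple */
	u8   match[16];  /* match keys, layout defined by match_type */
	u8   mask[16];   /* optional field mask */
	u8   action;     /* e.g. FF_DROP or FF_FORWARD */
	le16 dest_rx_vq; /* destination receive vq index; the driver fills a
	                  * predictable default (e.g. rx vq 0) even when the
	                  * end user did not specify one */
};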

> > For example, an ipv4 packet with a
> > +   multicast address to be steered to the receive vq 0. The second example is
> > +   ipv4, tcp packet matching a specified IP address and tcp port tuple to
> > +   be steered to receive vq 10.
> > +2. The match criteria should include exact tuple fields well-defined such as mac
> > +   address, IP addresses, tcp/udp ports, etc.
> > +3. The match criteria should also optionally include the field mask.
> > +4. The match criteria may optionally also include specific packet byte offset
> > +   pattern, match length, mask instead of RFC defined fields.
> > +   length, and matching pattern, which may not be defined in the standard RFC.
> Is there a description error here?
I didn't follow your comment. Do you mean there is an error in the above description?

> > +5. Action includes (a) dropping or (b) forwarding the packet.
> > +6. Destination is a receive virtqueue index.
> Since the concept of RSS context does not yet exist in the virtio spec, did
> we say that we also support carrying RSS context information when
> negotiating the RFF feature? For example, RSS context configuration commands
> and structures, etc.
> Or support RSS context functionality as a separate feature in another thread?
Support RSS context as separate feature.

> A related point to consider is that when a user inserts a rule with an rss context,
> the RSS context cannot be deleted, otherwise the device will cause undefined
> behavior.
Yes, for now we can keep rss context as separate feature.
