Subject: RE: [EXT] [virtio] [PATCH requirements 5/7] net-features: Add n-tuple receive flow filters requirements



> -----Original Message-----
> From: Parav Pandit <parav@nvidia.com>
> Sent: Wednesday, August 2, 2023 1:15 AM
> To: Satananda Burla <sburla@marvell.com>; virtio-comment@lists.oasis-
> open.org
> Cc: Shahaf Shuler <shahafs@nvidia.com>; hengqi@linux.alibaba.com;
> virtio@lists.oasis-open.org
> Subject: RE: [EXT] [virtio] [PATCH requirements 5/7] net-features: Add
> n-tuple receive flow filters requirements
> 
> 
> > From: Satananda Burla <sburla@marvell.com>
> > Sent: Wednesday, August 2, 2023 12:48 PM
> 
> [..]
> > > +7. The device should process packet receive filters programmed via control vq
> > > +   commands first in the processing chain.
> > > +7. The device should process RFF entries before RSS configuration, i.e.,
> > > +   when there is a miss on the RFF entry, RSS configuration applies if
> > > +   it exists.
> > > +8. To summarize the processing chain on a rx packet is:
> > > +   {mac,vlan,promisc rx filters} -> {receive flow filters} -> {rss/hash config}.
> > Shouldn't this be
> >                                                          |-match-> {RFF processing}
> > {mac,vlan,promisc rx filters} -> {receive flow filters} -|
> >                                                          |-no match-> {rss/hash config}.
> I likely didn't understand your suggestion.
> 
> In the filter chain, the first filters are the promiscuous, mac, and vlan
> filters, which are programmed through the cvq.
> If the mac filter of the cvq drops the packet, the packet does not reach the
> newly introduced RFF filter.
> 
> This is because RFF entries are steering rules (like RSS).
> They do not override the existing mac/vlan filters from the OS/driver point of view.
> 
> > The above looks like RSS processing will always happen.
> 
> Oh my bad.
> RSS applies only on the no_match case; I didn't clarify that enough.
> 
> Fixing it.
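To make the corrected order concrete, here is a minimal per-packet sketch of the chain as described (every function and type name below is made up for illustration; none of it is spec text):

```
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch only: models the per-packet processing order being
 * discussed. The stubs stand in for device-internal logic and are not part
 * of the virtio spec. */
struct rx_packet { int dummy; };

static bool cvq_rx_filters_accept(struct rx_packet *pkt) { (void)pkt; return true; }
static bool rff_match_and_steer(struct rx_packet *pkt)   { (void)pkt; return false; }
static void rss_or_default_steer(struct rx_packet *pkt)  { (void)pkt; }

static void device_rx_process(struct rx_packet *pkt)
{
	/* 1. mac/vlan/promisc filters programmed via cvq run first;
	 *    a dropped packet never reaches the RFF entries. */
	if (!cvq_rx_filters_accept(pkt))
		return;

	/* 2. receive flow filters (RFF): on a match, the RFF entry picks
	 *    the destination. */
	if (rff_match_and_steer(pkt))
		return;

	/* 3. RSS/hash config applies only on an RFF no_match, if it exists. */
	rss_or_default_steer(pkt);
}

int main(void)
{
	struct rx_packet pkt = { 0 };

	device_rx_process(&pkt);
	printf("packet processed\n");
	return 0;
}
```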
> 
> > > +9. If multiple entries are programmed which has overlapping attributes for a
> > > +   received packet, the driver to define the location/priority of the entry.
> > If the driver does not provide a group, or provides the same group with 2
> > rules that have the same match part, does the new entry overwrite the old
> > one for exact matches (no mask)? And for rules with masks, does the rule
> > with the longest match take precedence, or does the latest added rule take
> > precedence?
> > > +10. The filter entries are usually short in size of few tens of bytes,
> > > +   for example IPv6 + TCP tuple would be 36 bytes, and ops/sec rate is
> > > +   high, hence supplying fields inside the queue descriptor is preferred for
> > > +   up to a certain fixed size, say 56 bytes.
> > > +11. A flow filter entry consists of (a) match criteria, (b) action,
> > > +    (c) destination and (d) a unique 32 bit flow id, all supplied by the
> > > +    driver.
> > > +12. The driver should be able to query and delete a flow filter entry by
> > > +    the device by the flow id.
> > The flow id here seems to be used as a rule index. Can this be returned by
> > the device instead of being sent by the driver? A 32 bit value to store
> > might impose undue restrictions on devices that have lesser capacity. Or
> > could there be a restriction that the flow id cannot exceed the value
> > returned by the device as the capacity?
> > > +
> I had thought about it as well.
> The main reason for the driver to choose the value is the live migration
> scenario.
> If the device chooses _any_ id, then one vendor may choose A and another
> vendor may choose B, and it may not work across migration.
> 
> So one way is to keep the driver-supplied id; we need to keep it detached
> from the capacity.
Ok. I was proposing that everybody agrees to use an index value in 0-n per
group. I am fine with the size limitation described below.
> 
> Alternatively,
> we can add a max_value in the device provisioning flow to ease the device
> implementation,
> like how we have max_vqs per device (num_queues in
> virtio_pci_common_cfg).
> 
> (The max capacity is already present in the flow_filter_capabilities
> struct below.)
Yes, this was my alternate suggestion as well. We could have a max value
in the provisioning flow.
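To sketch how the bound could be enforced on the driver side (only the max_flow_filters fields mirror the flow_filter_capabilities struct quoted below; the other names and values are made up):

```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch of bounding the driver-chosen flow id by a device-reported capacity.
 * Only max_flow_filters_per_group/max_flow_filters mirror the quoted
 * capabilities struct; everything else is illustrative. */
struct flow_filter_caps {
	uint32_t max_flow_filters_per_group;
	uint32_t max_flow_filters;
};

static bool flow_id_is_valid(const struct flow_filter_caps *caps, uint32_t flow_id)
{
	/* The driver still chooses the id (needed for live migration), but
	 * must not exceed what the device advertised at provisioning time. */
	return flow_id < caps->max_flow_filters;
}

int main(void)
{
	struct flow_filter_caps caps = {
		.max_flow_filters_per_group = 1024,
		.max_flow_filters = 4096,
	};

	printf("flow id 100 valid: %d\n", flow_id_is_valid(&caps, 100));
	printf("flow id 70000 valid: %d\n", flow_id_is_valid(&caps, 70000));
	return 0;
}
```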
> 
> > > +### 3.4.3 interface example
> > > +
> > > +Flow filter capabilities to query using a DMA interface:
> > > +
> > > +```
> > > +struct flow_filter_capabilities {
> > > +	u8 flow_groups;
> > > +	u16 num_flow_filter_vqs;
> > > +	u16 start_vq_index;
> > > +	u32 max_flow_filters_per_group;
> > > +	u32 max_flow_filters;
> > > +	u64 supported_packet_field_mask_bmap[4];
> > > +};
> > > +
> > > +
> > > +```
> > > +
> > > +1. Flow filter entry add/modify, delete:
> > > +
> > > +struct virtio_net_rff_add_modify {
> > > +	u8 flow_op;
> > > +	u8 group_id;
> > > +	u8 padding[2];
> > > +	le32 flow_id;
> > > +	struct match_criteria mc;
> > > +	struct destination dest;
> > > +	struct action action;
> > > +
> > > +	struct match_criteria mask;	/* optional */
> > > +};
> > > +
> > > +2. Flow filter entry delete:
> > > +struct virtio_net_rff_delete {
> > > +	u8 flow_op;
> > > +	u8 group_id;
> > > +	u8 padding[2];
> > > +	le32 flow_id;
> > > +};
> > > --
> > > 2.26.2
> > >
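For completeness, the delete-by-flow-id path from item 12 maps onto the quoted struct fairly directly; a rough driver-side sketch (the typedefs and the flow_op value are placeholders, not spec constants):

```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Placeholder typedefs for the spec's u8/le32 notation; RFF_OP_DELETE is a
 * made-up opcode purely for illustration. */
typedef uint8_t  u8;
typedef uint32_t le32;

#define RFF_OP_DELETE 2

struct virtio_net_rff_delete {
	u8   flow_op;
	u8   group_id;
	u8   padding[2];
	le32 flow_id;
};

/* Build a delete command keyed by the driver-chosen flow id. */
static void rff_build_delete(struct virtio_net_rff_delete *cmd,
			     u8 group_id, le32 flow_id)
{
	memset(cmd, 0, sizeof(*cmd));
	cmd->flow_op  = RFF_OP_DELETE;
	cmd->group_id = group_id;
	cmd->flow_id  = flow_id;
}

int main(void)
{
	struct virtio_net_rff_delete cmd;

	rff_build_delete(&cmd, 0, 100);
	printf("delete: op=%u group=%u flow_id=%u\n",
	       (unsigned)cmd.flow_op, (unsigned)cmd.group_id,
	       (unsigned)cmd.flow_id);
	return 0;
}
```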

