virtio-comment message



Subject: Re: [virtio-comment] Re: [PATCH v6 2/5] virtio-net: Add flow filter capabilities read commands


On Mon, Nov 27, 2023 at 10:19:51AM +0000, Parav Pandit wrote:
> > runtime configuration, absolutely. We found out writeable config space fields
> > are painful in a variety of ways, the main one being that the device can't report errors.
> > So we generally avoid writeable config space.  But read-only - no good reason
> > to avoid them for init time things.
> > 
> The good reason, as explained in the doc, is to not place them based on a policy of read-only vs read-write,
> but instead based on the access pattern: whether it is needed early in driver initialization or can be read at a later point.
> 
> And basing it on the access pattern has given more flexibility to implementations across sw and hw based devices without any loss.

Right. Whatever is done in driver probe.
And why? Basically because this makes initialization robust and
simple, and no parallelism is used during probe anyway, so
a queue makes no sense - it gives zero advantage.
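
To put it concretely - a rough sketch only, nothing from your patch, and the
flow filter field names below are invented for this example - probe reads
config fields strictly one after another on a single thread, so a queue-based
GET has nothing to parallelize:

#include <stdint.h>

/* Hypothetical read-only capability fields placed in device config space;
 * the names are made up for this example, not taken from the spec. */
struct ff_caps {
	uint32_t max_rules;
	uint32_t max_groups;
};

/* Probe-time flow, sketched: each read completes before the next one
 * starts, so posting descriptors to a queue and waiting for completions
 * would only add latency per field, with no parallelism to gain. */
static void probe_read_caps(const volatile struct ff_caps *cfg,
			    struct ff_caps *out)
{
	out->max_rules  = cfg->max_rules;
	out->max_groups = cfg->max_groups;
}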


And if you are going to require that the driver allow or disallow ethtool
commands based on this info, then it is clearly initialization time
in my book.
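
Roughly like this - a sketch with made-up names ('ff_max_rules' and the helper
below are not from your patch) - the limit is read once at probe and every
later ethtool request is checked against the cached value:

#include <errno.h>
#include <stdint.h>

struct vnet_priv {
	uint32_t ff_max_rules;		/* filled in once during probe */
	uint32_t ff_rules_in_use;
};

/* Later ethtool-style request: allowed or rejected purely from the value
 * cached at probe time, before anything is sent to the device. */
static int vnet_add_flow_rule(struct vnet_priv *vp)
{
	if (vp->ff_rules_in_use >= vp->ff_max_rules)
		return -ENOSPC;		/* disallow based on probe-time info */
	vp->ff_rules_in_use++;
	return 0;			/* then program the rule via cvq */
}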


> From the device side it has a consistent view of get/set via a single channel = cvq when dealing with the guest driver.
> 
> The two non-NVIDIA examples of such devices are the Microsoft MANA NIC and the Amazon ENA NIC.
> 
> > 
> > 
> > 
> > > >
> > > > Then hardware offload guys come and say that in the PCI spec the current
> > > > transport is forcing the use of on-device memory, and they want to build
> > > > cheap offload PCI based devices. Fine, let's build a transport variant that
> > > > does not force this.
> > >
> > > All new capabilities and control are over the cvq. What is baked in until 1.2 is
> > > sort of legacy.
> > 
> > Just repeating this will not make everyone agree.
> > 
> 
> I see that Satananda is also fine with Marvell using CVQ.
> 
> > > >  And
> > > > we want optional compatibility, so let's also find a way to do that.
> > > > This makes much more sense than forcing transport specific issues on
> > > > everyone.
> > > >
> > > Trying to attribute this to some transport-specific issue is just not aligned
> > > with the spec as written today.
> > >
> > > > To add to that, what did not historically scale well is transport-specific
> > > > registers.
> > > Then you should have put the VQ notification coalescing functionality in a
> > > horrible virtio_net_config register, like how queue reset is in the common
> > > config space.
> > 
> > No, because none of the 4 commands - VIRTIO_NET_CTRL_NOTF_COAL_TX_SET,
> > VIRTIO_NET_CTRL_NOTF_COAL_RX_SET,
> > VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET and
> > VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET - are initialization time.
> > They are not capabilities: they control and inspect device runtime state.
> > 
> > If we had a VIRTIO_NET_CTRL_NOTF_COAL_CAP_GET, that would belong in
> > config space.
> > 
> I understood you.
> As we propose above, reading those caps is not a must during the initialization phase.

Yes, it is a must at initialization time.
In my book initialization time is what happens in probe.
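
The split I keep pointing at, as a sketch: the first struct follows the
coalescing command payload already in the spec; the second one is hypothetical
- roughly what a VIRTIO_NET_CTRL_NOTF_COAL_CAP_GET would report - and being
static and read-only it could just as well be a config space field read at
probe:

#include <stdint.h>

/* Runtime control: state that changes while the device runs; belongs on
 * the cvq (this mirrors the VIRTIO_NET_CTRL_NOTF_COAL_TX_SET payload). */
struct virtio_net_ctrl_coal_tx {
	uint32_t tx_max_packets;
	uint32_t tx_usecs;
};

/* Capability: a static, read-only limit the driver needs once, at probe.
 * The field names here are invented for illustration only. */
struct virtio_net_notf_coal_caps {
	uint32_t max_packets;
	uint32_t max_usecs;
};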

> Hence, cvq delivers better scale (as with stats) and a consistent get/set.
> 
> > > Thank God that mistake was not made and many similar mistakes were
> > > avoided.
> > 
> > Let's not get emotional here please.
> > 
> > > > That was a bad design, with all transports doing exactly the same
> > > > thing in slightly different ways.  And what you are advocating for
> > > > with CVQ is exactly replicating the bad design, not the good one.
> > >
> > > The CVQ design is what extends the current spec in a good way. It is followed
> > > by many other non-NVIDIA NICs listed in the doc for reference.
> > 
> > I don't know what you are referring to here.  Register maps are all over the
> > place. It's a simple, standard, well understood practice.
> > 
> I provided various examples of virtio and non-NVIDIA devices which prefer to avoid registers.
> Satananda from Marvell also expressed their view.
> 
> > We have some niche uses due to the need for extreme VF counts; this forces
> > DMA for them, but that is not a good reason to force it for everyone.
> PFs benefit equally from this. Some of our hw devices have a large PF count.
> 
> > The sooner you just stop forcing this down everyone's throat, the faster we can
> > make progress on things that matter.
> 
> CVQ is an established, good configuration channel, so we all want to utilize it instead of being forced to grow the config space, which is very hard to maintain and implement.

Specifically this thing, or generally?
This thing - I don't think I understand what this capability is, at this point,
so I am asking you to clarify much more.
If it's a huge array it can't fit in config space by definition,
but I don't understand why it needs to be.


Generally - forget it. Device config space is one of the few things that
virtio driver writers for different OSes actually get consistently
right. It is simple, easy to use and understand. Limitations - it is slow,
serialized and small, so it is only for probe time and only for small amounts of data.
Stop trying to kill it as a general capability and replace it with a hodge
podge of commands; I'm not going to support that.
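
To spell out what I mean by small amounts of data - a sketch with made-up
field and command names: small, fixed-size, read-only limits fit fine in
config space and get read at probe; anything big or variable-length obviously
cannot live there, and that is where a GET command answered by DMA into a
driver buffer makes sense:

#include <stdint.h>

/* Small, fixed-size, read-only limits: fine as config space fields. */
struct ff_config_caps {
	uint32_t max_rules;
	uint16_t max_match_fields;
	uint16_t reserved;
};

/* Big or variable-length data (say, a long list of supported match types):
 * fetched with a command; the device writes the result into a buffer the
 * driver provides. All names here are illustrative, not from the spec. */
struct ff_get_match_types_cmd {
	uint64_t buf_addr;	/* driver buffer the device DMAs into */
	uint32_t buf_len;
	uint32_t reserved;
};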


> I will address your comments in v7 regarding honoring the max limits in the driver and device.


