

Subject: RE: [virtio] RE: [virtio-comment] proposal: use admin command (and aq) of the device to query config space


> From: Jason Wang <jasowang@redhat.com>
> Sent: Thursday, August 3, 2023 8:26 AM
> 
> On Wed, Aug 2, 2023 at 5:57 PM Parav Pandit <parav@nvidia.com> wrote:
> >
> >
> >
> > > From: Jason Wang <jasowang@redhat.com>
> > > Sent: Wednesday, August 2, 2023 3:02 PM
> > >
> > > On Wed, Aug 2, 2023 at 5:07 PM Parav Pandit <parav@nvidia.com> wrote:
> > > >
> > > >
> > > > > From: Jason Wang <jasowang@redhat.com>
> > > > > Sent: Wednesday, August 2, 2023 2:23 PM
> > > > >
> > > > > On Tue, Aug 1, 2023 at 3:09 PM Parav Pandit <parav@nvidia.com>
> > > > > wrote:
> > > > > >
> > > > > > One line proposal:
> > > > > > Let's use a new admin command and admin q for all device types
> > > > > > to query the device config space for new fields (always).
> > > > >
> > > > > Before we mandate anything via the admin command, we first need
> > > > > to invent an admin command over the MMIO interface; otherwise it
> > > > > will always be an issue for nesting.
> > > > >
> > > > Nesting can be an independent requirement in itself.
> > >
> > > I don't understand here. If you tie new fields to the DMA interface,
> > > it basically means nesting won't get any new features unless:
> > >
> > > 1) it's a PCI VF
> > > 2) SR-IOV emulation is done
> > > 3) admin virtqueue emulation is done
> > >
> > > If you want to treat nesting devices differently from others, it
> > > would be a nightmare to maintain.
> > >
> > New fields for sure are tied to the DMA interface.
> 
> This is different from what you've said above
> 
> "Let's use the new admin command and admin q for all device types to ..."
> 
> I'm simply replying to your proposal to tie new fields to the admin
> command(queue).
> 
New fields go over the queuing interface.
Existing fields stay as config space registers for backward compatibility.

Optionally, existing fields can be queried over the queue as well, so a newly written driver can always prefer the queue over the registers.
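
To make the split concrete, here is a rough sketch of what the query could look like as an admin command; the command name and struct layout below are hypothetical placeholders, not something the spec defines today:

/* Hypothetical admin command to read device config fields via DMA.
 * VIRTIO_ADMIN_CMD_DEV_CFG_READ and this layout are placeholders only.
 */
struct virtio_admin_dev_cfg_read {
        le32 offset;    /* byte offset into device config space */
        le32 length;    /* number of bytes to read */
};
/* The device DMA-writes 'length' bytes of config data into the command's
 * result buffer; fields below the device-chosen starting offset stay
 * readable as plain registers for old drivers.
 */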

> > And nesting can also get it, just like how a VQ works in nested mode.
> 
> We should consider reusing an existing one like the cvq, or inventing
> lightweight and self-contained methods.

True; in this proposal, option_3 is the cvq.

> The admin virtqueue doesn't fit; an admin command may, but we need an
> MMIO interface for the admin command.
>
Keeping nesting aside for a moment: the AQ fits, but it has the inefficiencies that I listed in the first email.
The inefficiency exists mainly for those devices which already have a cvq.

So a new device, or an existing device without a cvq, can add a cvq when the need arises to achieve what net, gpu, and crypto will do now.
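
For illustration, the cvq flavor of the same query could be a new control class; the class/command values and struct below are made up and not allocated anywhere:

/* Hypothetical cvq command for config query; numbers are placeholders. */
#define VIRTIO_NET_CTRL_DEV_CFG         0x40    /* example, not allocated */
#define VIRTIO_NET_CTRL_DEV_CFG_READ    0

struct virtio_net_ctrl_dev_cfg_read {
        le32 offset;    /* byte offset into device config space */
        le32 length;    /* number of bytes to read */
};
/* Device-writable part of the descriptor chain: 'length' bytes of config
 * data, followed by the usual ack byte.
 */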


> >
> > > > >
> > > > > 1) device configuration space is transport independent; some
> > > > > transports already use DMA to access the device configuration
> > > > > space
> > > > You can say ccw instead of "some". :)
> > >
> > > Kind of, but the transport vq proposal goes the same way.
> > >
> > We debated many times that the wording "transport vq" is wrong, as it
> > is _not_ going to transport driver notifications.
> > Anyway, there is nothing to discuss here, so focusing on the main items below.
> >
> > > >
> > > > > 2) device configuration space is not read-only; we've already
> > > > > had several examples of using it for writes
> > > > >
> > > > It is even worse to have it writable.
> > >
> > > Well, what I meant is that it's not necessarily read-only and not
> > > necessarily a register interface.
> > >
> > I took PCI as the most common interface, and net and blk as the devices
> > experiencing high growth in features and device-specific config space.
> >
> > This isn't really a normative part of the spec. The key takeaway is
> > that, for the common things, it is read-only and a register.
> 
> As mentioned in the past, when developing the spec, we should look at
> what it can be.
> 
> My point is to stick to the device configuration space but invent a DMA
> interface to access it; then we are all fine.
> 
Yes, option_3 uses the cvq, which utilizes the DMA interface.
Option_2 uses the aq.
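
A minimal driver-side sketch of how the two options could coexist; the helper names and struct are invented for illustration, and both paths carry the same offset/length request:

struct vdev {
        bool has_cvq;
        /* ... */
};

/* Hypothetical helpers; neither exists in the spec today. */
int cvq_dev_cfg_read(struct vdev *d, u32 off, u32 len, void *buf); /* option_3 */
int aq_dev_cfg_read(struct vdev *d, u32 off, u32 len, void *buf);  /* option_2 */

int dev_cfg_read(struct vdev *d, u32 off, u32 len, void *buf)
{
        /* Prefer the device's own cvq when present; fall back to the aq. */
        if (d->has_cvq)
                return cvq_dev_cfg_read(d, off, len, buf);
        return aq_dev_cfg_read(d, off, len, buf);
}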

> >
> > > >
> > > > > > It is growing rapidly.
> > > > > > Some devices may even be multi-functionality devices in the
> > > > > > coming future, such as a net + clock + rdma device.
> > > > > > For a PCI transport implementing such ever-growing capabilities,
> > > > > > configuration as plain registers is burdensome.
> > > > >
> > > > > We already have the fixed-size VIRTIO_PCI_CAP_PCI_CFG. What's
> > > > > wrong with that?
> > > > >
> > > > The wrong part is: it is still an indirect, slow, sub-optimal
> > > > register interface.
> > >
> > > Do we really care about the performance here?
> > When it comes to bulk data transfer, in the range of a few hundred
> > bytes, and looking at a 5+ year period, then yes, reading using an
> > indirect register is slow.
> 
> What kind of configuration requires a few hundred bytes? We should not
> duplicate the work of provisioning into device configuration space.
> 
The new steering feature is taking up 16 bytes just in the requirements phase. When we do the design, we will find more needs.
The counters bitmap, too.
Timestamping needs similar, several tens of bytes.
Many requirements that are being worked on sum up to ~100 bytes.
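
To put a number on the "indirect and slow" point above: VIRTIO_PCI_CAP_PCI_CFG is a 4-byte window, and each 4 bytes cost several PCI config-space accesses. A simplified sketch of the mechanism, where pci_cfg_write8/32 and pci_cfg_read32 are stand-ins for the driver's config-space accessors and 'cap' is the config-space offset of the capability:

/* Read 'len' bytes of device config through the 4-byte window.
 * Each iteration reprograms bar/offset/length in the capability,
 * then reads the window.
 */
static void cfg_window_read(u8 cap, u8 bar, u32 off, u32 len, u32 *buf)
{
        for (u32 done = 0; done < len; done += 4) {
                pci_cfg_write8(cap + offsetof(struct virtio_pci_cap, bar), bar);
                pci_cfg_write32(cap + offsetof(struct virtio_pci_cap, offset),
                                off + done);
                pci_cfg_write32(cap + offsetof(struct virtio_pci_cap, length), 4);
                buf[done / 4] = pci_cfg_read32(cap +
                                offsetof(struct virtio_pci_cfg_cap, pci_cfg_data));
        }
        /* ~100 bytes => 25 iterations x 4 config accesses => ~100 config
         * cycles, each of which may trap in a virtualized environment.
         */
}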

> > And after all it is still an indirect register, which does not have
> > the time-variability part.
> > So it is not fulfilling the requirement at all.
> >
> > > And if it is one of your major
> > > concerns, it's better to explain it along with the "ever-growing" concern.
> > >
> > I thought it was clear that it is still a register, even slower than
> > the current one, and still has the same issues.
> >
> > > > A VQ is decoupled from the transport already.
> > > > So, no flexibility is broken.
> > > > And yet you suggested the transport-dependent VIRTIO_PCI_CAP_PCI_CFG
> > > > above, making it further wrong. :)
> > > >
> > >
> > > The context here is that you want to mandate any new fields to be DMA.
> > > DMA is obviously transport specific. There are transports that don't
> > > use DMA at all (e.g. the shared memory).
> > >
> >
> > A VQ, surprisingly, does the DMA without being transport specific.
> > A net device mandates tx packets via a vq, a console device mandates
> > receive and transmit queues, and a crypto device mandates a control vq.
> >
> > What is proposed here is no different...
> >
> > Shared memory is not for bulk data transfer in the virtio spec.
> > We don't see "shared memory" as a transport in the "Virtio Transport
> > Options" section.
> 
> You can see some examples in the kernel drivers. Mandating DMA excludes
> any shared memory proposal in the future.
> 
Not really; when someone comes up with shared memory, a new feature bit can expose it.

> >
> > > > > > 3. A device must be able to choose the field starting from which
> > > > > > the driver must query such configuration via the DMA interface.
> > > > > > This field offset must be greater than the currently defined
> > > > > > configuration fields.
> > >
> > > [...]
> > >
> > > > > >
> > > > > > d. Any other option?
> > > > >
> > > > > Transport virtqueue on top of admin virtqueue will address this
> > > > > seamlessly.
> > > > >
> > > > :)
> > > >
> > > > Don't see why one would create a few more objects on top of the aq
> > > > when the aq or cvq itself can fulfil the need.
> > > > Can you please elaborate?
> > >
> > > If cvq can work, there's no need for any other methods.
> > The cvq is not present for all devices; at the same time, not all
> > devices are experiencing high growth of config space either.
> 
> Adding a cvq is much easier than inventing (duplicating) the work of a transport.
> 
+1.

