
virtio-dev message



Subject: RE: [PATCH 2/5] Introduce VIRTIO_F_ADMIN_VQ_INDIRECT_DESC/VIRTIO_F_ADMIN_VQ_IN_ORDER



> From: Jason Wang <jasowang@redhat.com>
> Sent: Wednesday, January 26, 2022 10:35 AM
> 
> On 2022/1/25 11:52 PM, Parav Pandit wrote:
> > Hi Jason,
> >
> >> From: Jason Wang <jasowang@redhat.com>
> >> Sent: Tuesday, January 25, 2022 8:59 AM
> >>
> >> On 2022/1/19 12:48 PM, Parav Pandit wrote:
> >>>> From: Jason Wang <jasowang@redhat.com>
> >>>> Sent: Wednesday, January 19, 2022 9:33 AM
> >>>>
> >>>>
> >>>> It it means IMS, there's already a proposal[1] that introduce MSI
> >>>> commands via the admin virtqueue. And we had similar requirement
> >>>> for virtio-MMIO[2] and managed device or SF [3], so I would rather
> >>>> to introduce IMS (need a better name though) as a basic facility
> >>>> instead of tie it to any specific transport.
> >>>>
> >>> IMS of [1] is an interrupt configuration done by the virtio driver for
> >>> the device it is driving, which needs a queue.
> >>> So regardless of the device type (PCI PF/VF/SF/ADI), there is a desire
> >>> to have a generic admin queue not tied to any device type.
> >>> And the AQ in this proposal serves exactly this purpose.
> >>>
> >>> A device configuring its own IMS vectors vs. the PCI PF configuring a
> >>> VF's maximum MSI-X vector count are two different functionalities.
> >>> Both of these commands can ride on a generic queue.
> >>> However, the queue is not the same, because the PF owns its own admin
> >>> queue (for VF MSI-X config), while a VF or SF operates its own admin
> >>> queue (for IMS config).
> >>
> >> So I think in the next version we need to clarify:
> >>
> >> 1) is there a single admin virtqueue shared by all the VFs and PF
> >>
> >> or
> >>
> >> 2) per VF/PF admin virtqueue, and how does the driver know how to
> >> find the corresponding admin virtqueue
> >>
> > The admin queue is not per VF.
> > Let's take concrete examples.
> > 1. For example, the PCI PF can have one AQ.
> > This AQ carries commands to query/configure the MSI-X vectors of VFs.
> >
> > 2. In the second example, the PCI PF is creating/destroying SFs. This is
> again done using the AQ of the PCI PF.
> >
> > 3. A PCI VF has its own AQ to configure some of its own generic attributes;
> I don't know which those are today.
> 
> 
> So this could be useful if we can create an SF on top of a VF. But as discussed,
> we'd better generalize the concept (management device vs. managed device).
> 
It does not matter whether the SF is created over a PCI PF or VF. It is on top of a PCI virtio device.
When/if someone creates an SF over a PCI VF, the PCI VF is the management device, and the PCI SF is the managed device.

When/if an SF is created over a PCI PF, the PCI PF is the management device, and the PCI SF is the managed device.

In either case the AQ on the PCI device is transporting the SF create/destroy commands.

> > The AQ inherently allows out-of-order command execution.
> > It shouldn't face contention. For example, a 1K-depth AQ should be serving
> hundreds of descriptor commands in parallel for SF creation, VF MSI-X config
> and more.
> >
> > Which area/commands etc you think can lead to the contention?
> 
> 
> Unless we have a self-contained descriptor which contains a per-descriptor
> writeback address. Even if we have OOO, the enqueue and dequeue still need
> to be serialized?
>
No, we don't need to define any new behavior.
The AQ always behaves the way a VQ behaves when VIRTIO_F_IN_ORDER is not negotiated.
Any synchronization needed is done in the driver, like today. Usually, when posting descriptors, it needs to hold a lock for a short interval.
This cannot lead to contention, because the descriptor posting time is very short.
 
> 
> >
> >> 2) if we have per vf admin virtqueue, it still doesn't scale since it
> >> occupies more hardware resources
> >>
> > That is too heavy and doesn't scale. The proposal is to not have a per-VF
> admin queue.
> > The proposal is to have one admin queue per virtio device.
> 
> 
> Ok.
> 
> 
> >
> >>>> I do see one advantage is that the admin virtqueue is transport
> >> independent
> >>>> (or it could be used as a transport).
> >>>>
> >>> I am yet to read the transport part from [1].
> >>
> >> Yes, the main goal is to be compatible with SIOV.
> >>
> > The admin queue is a command interface transport on which higher layer
> services can be built.
> > This includes SR-IOV config and SIOV config.
> > And v2 enables SIOV command implementations whenever they are ready.
> >
> >>>>> 4. Improve documentation around msix config to link to sriov
> >>>>> section of
> >> virtio
> >>>> spec
> >>>>> 5. Describe the error that, if the VF is bound to the device, admin
> >>>>> commands targeting the VF can fail; describe this error code
> >>>>> Did I miss anything?
> >>>>>
> >>>>> I am yet to receive your feedback on groups: if/why they are needed,
> >>>>> why/if they must be in this proposal, and what prevents doing them
> >>>>> as a follow-on.
> >>>>> Cornelia, Jason,
> >>>>> Can you please review current proposal as well before we revise v2?
> >>>> If I understand correctly, most of the features (except for the
> >>>> admin virtqueue in_order stuffs) are not specific to the admin
> >>>> virtqueue. As discussed in the previous versions, I still think it's better:
> >>>>
> >>>> 1) adding sections in the basic device facility or data structure
> >>>> for provisioning and MSI
> >>>> 2) introduce admin virtqueue on top as an device interface for
> >>>> those features
> >>>>
> >>> I didn't follow your suggestion. Can you please explain?
> >>> Specifically "data structure for provisioning and MSI"..
> >>
> >> I meant:
> >>
> >> There's a chapter "Basic Facilities of a Virtio Device", we can
> >> introduce the concepts there like:
> >>
> >> 1) Managed device and Management device (terminology proposed by
> >> Michael), and can use PF and VF as a example
> >>
> >> 2) Managed device provisioning (the data structure to specify the
> >> attributes of a managed device (VF))
> >>
> >> 3) MSI
> >>
> > The above is a good idea. I will revisit v2 if it is not arranged this way.
> >
> >> And then we can introduced admin virtqueue in either
> >>
> >> 1) transport part
> >>
> >> or
> >>
> >> 2) PCI transport
> >>
> > It is not specific to PCI transport, and currently it is not a transport either.
> 
> 
> Kind of, it allows configuring some basic attributes somehow. I think we'd
> better try not to couple any features to the admin virtqueue.
> 
I am fine with defining a virtio_mgmt_cmd that can somehow be issued without the admin queue.
For example, struct virtio_fs_req is detached from the request queue, but the only way it can be issued today is via the request queue.
So we can draft the specification this way.

But I repeatedly fail to see an explanation of why that is needed.
Where in the recent spec is a new queue added whose request structure is detached from the queue?
I would like to see a reference to the spec that indicates that:
a. struct virtio_fs_req can be issued by means other than the request queue.
b. Currently, negotiation is done by such-and-such feature bit to do so via the request queue.
c. "Hence, down the road, something else can be used to carry struct virtio_fs_req instead of the request queue."

That would give a good explanation of why the admin queue should follow some recently added queue whose structure is detached from the queue
(not just in the form of the structure name, but also in the form of feature negotiation plumbing, etc.).

Otherwise, detaching the mgmt command from the admin queue is a vague requirement to me, one that doesn't require detachment.

> > Certainly. The admin queue is transport independent.
> > PCI MSI-X configuration is a PCI-transport-specific command, so its
> structures are defined accordingly.
> > It is similar to struct virtio_pci_cap, struct virtio_pci_common_cfg, etc.
> >
> > Any other transport will have transport-specific interrupt configuration,
> so it will be defined accordingly whenever that occurs.
> > For example, IMS for VF or IMS for SF.
> 
> 
> I don't think IMS is PCI-specific stuff; we had similar requests for MMIO.
Sure, but even for that there will be an SF-specific command for IMS configuration.
The main difference of this command from the VF variant will be the SF identifier vs. the VF identifier.

