virtio-dev message



Subject: Re: [PATCH 2/5] Introduce VIRTIO_F_ADMIN_VQ_INDIRECT_DESC/VIRTIO_F_ADMIN_VQ_IN_ORDER


On Wed, Jan 26, 2022 at 1:58 PM Parav Pandit <parav@nvidia.com> wrote:
>
>
>
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Wednesday, January 26, 2022 11:15 AM
> > > It does not matter if the SF is created over PCI PF or VF. It's on top of the PCI virtio
> > device.
> > > When/if someone creates SF over PCI VF, PCI VF is management device, and
> > PCI SF is managed device.
> > >
> > > When/if SF is created over PCI PF, PCI PF is management device, and PCI SF is
> > managed device.
> > >
> > > In either case the AQ on the PCI device is transporting SF create/destroy
> > commands.
> >
> > That's exactly what I meant.
> Ok. cool. So we are in sync here. :)
>
> >
> > Probably but it really depends on the magnitude of the objects that you want to
> > manage via the admin virtqueue. 1K queue size may work for 1K objects but not
> > for 10K or 100K.
> >
> We can have a higher queue depth.
> Not sure if all 10K will be active at the same time, even though 10K or 100K devices exist in total.
> We don't see that with current Linux subfunction users.

Not specific to this proposal, but we do see requirements of at least 10K+.

>
> > The lock is not the only thing that needs to care, the (busy) waiting for the
> > completion of the command may still take time.
> There is no need for busy waiting for completion.

Yes, that's why I put "busy" in parentheses.

> It's an admin command issued from process context; it should be like a blk request.
> When the completion arrives, a notifier will wake the caller.
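
As an aside, here is a minimal sketch of that non-busy-wait flow in a Linux-style
driver, assuming a completion-based wait. The admin_cmd structure, the helper
names, and the callback wiring below are illustrative only, not part of this
proposal:

#include <linux/completion.h>
#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

struct admin_cmd {
	struct completion done;	/* signalled by the admin VQ callback */
};

/* Issued from process context: submit, kick, then sleep until completion. */
static int admin_cmd_exec(struct virtqueue *avq, struct scatterlist *sgs[],
			  unsigned int out_sgs, unsigned int in_sgs,
			  struct admin_cmd *cmd)
{
	int err;

	init_completion(&cmd->done);
	err = virtqueue_add_sgs(avq, sgs, out_sgs, in_sgs, cmd, GFP_KERNEL);
	if (err)
		return err;
	virtqueue_kick(avq);

	wait_for_completion(&cmd->done);	/* sleep, no busy polling */
	return 0;	/* caller then parses the device-written response buffer */
}

/* Admin VQ callback (runs when the completion interrupt fires): wake the waiter. */
static void admin_vq_done(struct virtqueue *avq)
{
	unsigned int len;
	struct admin_cmd *cmd;

	while ((cmd = virtqueue_get_buf(avq, &len)))
		complete(&cmd->done);
}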
>
> > > I am fine by defining virtio_mgmt_cmd that somehow can be issued without
> > the admin queue.
> > > For example, struct virtio_fs_req is detached from the request queue, but
> > only way it can be issued today is with request queue.
> > > So we can draft the specification this way.
> > >
> > > But I repeatedly fail to see an explanation of why that is needed.
> > > Where in the recent spec is a new queue added that has its request structure
> > detached from the queue?
> > > I would like to see a reference to the spec that indicates:
> > > a. struct virtio_fs_req can be issued by means other than the request queue;
> > > b. currently the negotiation to do so via a request queue is done by so-and-so feature bit;
> > > c. "hence down the road something else can be used to carry struct virtio_fs_req instead of the request queue".
> > >
> > > And that will give a good explanation of why the admin queue should follow some
> > recently added queue which has its structure detached from the queue
> > > (not just in the form of the structure name, but also in the form of feature negotiation
> > plumbing, etc.).
> > >
> > > Otherwise, detaching the mgmt. cmd from the admin queue is a vague requirement to
> > me that doesn't require detachment.
> >
> > So what I meant is not specific to any type of device. Device specific operations
> > should be done via virtqueue.
> >
> > What I see is, we should not limit the interface for the device independent basic
> > device facility to be admin virtqueue only:
> Can you explain why?

For

1) vendors and transports that don't want to use the admin virtqueue
2) a simpler interface for L1

>
> >
> > E.g. for IMS, we should allow it to be configured in various ways.
> >
> IMS configuration is very abstract.
> Let's talk specifics.
> I want to make sure I understand what you mean by IMS configuration:
>
> Do you mean the HV is configuring the number of IMS vectors for the VF/SF?
> If it's this, then it is similar to how the HV provisions MSI-X for a VF.

It can be done by introducing a capability in the PF?

struct msix_provision {
	u32 device_select;	/* selects which VF/SF to provision */
	u16 msix_vectors;	/* number of MSI-X vectors to assign */
	u16 padding;
};
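
As a rough sketch, a PF driver could then program such a capability along these
lines; the capability ID, field offsets, and overall flow below are assumptions
for illustration, not something defined by the spec or this thread:

#include <linux/pci.h>

#define MSIX_PROV_CAP_ID	0x09	/* hypothetical: e.g. a vendor-specific capability */
#define MSIX_PROV_DEV_SELECT	0x04	/* assumed offset of device_select in the cap */
#define MSIX_PROV_VECTORS	0x08	/* assumed offset of msix_vectors in the cap */

/* Provision nvec MSI-X vectors for one VF/SF through the PF capability. */
static int provision_msix(struct pci_dev *pf, u32 device_id, u16 nvec)
{
	int pos = pci_find_capability(pf, MSIX_PROV_CAP_ID);

	if (!pos)
		return -ENODEV;

	/* Select the target device, then write the vector count. */
	pci_write_config_dword(pf, pos + MSIX_PROV_DEV_SELECT, device_id);
	pci_write_config_word(pf, pos + MSIX_PROV_VECTORS, nvec);
	return 0;
}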

>
> Or do you mean a guest driver of a VF or SF is configuring its IMS to later consume for the VQ?
> If it's this, then I explained that the admin queue is not the vehicle to do so, and we discussed the other structure yesterday.

Yes, I guess that's the nesting case I mentioned above.

>
> > 1) transport-independent way: e.g. the admin virtqueue (which will eventually
> > become another transport)
> >
> IMS by the guest driver cannot be configured via the AQ.

Yes, that's one point.

>
> > or
> >
> > 2) transport-specific way, e.g. a simple PCI(e) capability or MMIO registers.
> >
> This is practical.

Right.

>
> > >
> > > > > Certainly. Admin queue is transport independent.
> > > > > PCI MSI-X configuration is a PCI transport-specific command, so the structures
> > > > are defined accordingly.
> > > > > It is similar to struct virtio_pci_cap, struct virtio_pci_common_cfg etc.
> > > > >
> > > > > Any other transport will have transport specific interrupt
> > > > > configuration. So it
> > > > will be defined accordingly whenever that occurs.
> > > > > For example, IMS for VF or IMS for SF.
> > > >
> > > >
> > > > I don't think IMS is PCI-specific stuff; we had similar requests for MMIO.
> > > Sure, but even for that there will be an SF-specific command for IMS
> > configuration.
> > > The main difference of this command from the VF one will be the SF identifier vs the
> > VF identifier.
> >
> > I think it's not hard to have a single identifier and just say it's transport specific?
> It is hard when SFs are not defined.
>
> > Or simply reserving IDs for VF.
> When SFs are not defined, it doesn't make sense to reserve any bytes for them.
> Linux has a 4-byte SF identifier.
> The community might go the UUID way or some other way.
> We cannot define arbitrary bytes that may/may not be enough.
>
> When SF is defined, it will have an SF identifier anyway, and it will be super easy to define a new vector configuration structure for SF.
> Let's not overload the VF MSI-X configuration proposal by intermixing it with SF.

That's fine.

Thanks


