Subject: Re: [PATCH v2 1/4] Add virtio Admin Virtqueue



On 2022/1/30 11:30 PM, Michael S. Tsirkin wrote:
On Sun, Jan 30, 2022 at 05:12:46PM +0200, Max Gurtovoy wrote:
On 1/30/2022 4:41 PM, Michael S. Tsirkin wrote:
On Sun, Jan 30, 2022 at 11:56:30AM +0200, Max Gurtovoy wrote:
On 1/30/2022 11:40 AM, Michael S. Tsirkin wrote:
On Sun, Jan 30, 2022 at 11:13:38AM +0200, Max Gurtovoy wrote:
On 1/29/2022 5:53 AM, Jason Wang wrote:
On Fri, Jan 28, 2022 at 11:52 PM Michael S. Tsirkin <mst@redhat.com> wrote:
On Fri, Jan 28, 2022 at 04:49:34PM +0100, Cornelia Huck wrote:
On Fri, Jan 28 2022, "Michael S. Tsirkin" <mst@redhat.com> wrote:

On Fri, Jan 28, 2022 at 01:14:14PM +0100, Cornelia Huck wrote:
On Mon, Jan 24 2022, Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
+\section{Admin Virtqueues}\label{sec:Basic Facilities of a Virtio Device / Admin Virtqueues}
+
+The admin virtqueue is used to send administrative commands to
+manipulate various features of the device and/or, if possible, of
+another device within the same group (e.g. PCI VFs of a parent PCI
+PF device are grouped together; these devices can optionally be
+managed by their parent PCI PF using its admin virtqueue).
+
+Use of the admin virtqueue is negotiated via the VIRTIO_F_ADMIN_VQ
+feature bit.
+
+The admin virtqueue index may vary among different device types.
So, my understanding is:
- any device type may or may not support the admin vq
- if the device type wants to be able to accommodate the admin vq, it
      also needs to specify where it shows up when the feature is negotiated

Do we expect that eventually all device types will need to support the
admin vq (if some use case comes along that will require all devices to
participate, for example?)
I suspect yes. And that's one of the reasons why I'd rather we had a
device-independent way to locate the admin queue. There are fewer
transports than device types.
So, do we want to bite the bullet now and simply say that every device
type has the admin vq as the last vq if the feature is negotiated?
Should be straightforward for the device types that have a fixed number
of vqs, and doable for those that have a variable amount (two device
types are covered by this series anyway.) I think we need to put it with
the device types, as otherwise the numbering of virtqueues could change
in unpredictable ways with the admin vq off/on.
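
For illustration (a hypothetical sketch, not from the patch series),
take virtio-net under an "admin vq is last" rule:

#include <stdbool.h>

/* Hypothetical: compute the admin vq index for virtio-net with
 * n_pairs rx/tx queue pairs. The pairs occupy indices
 * 0..2*n_pairs-1, the control vq (if VIRTIO_NET_F_CTRL_VQ is
 * negotiated) takes the next index, and the admin vq the one after
 * that; toggling either feature renumbers the admin vq.
 */
static inline unsigned int vnet_admin_vq_index(unsigned int n_pairs,
                                               bool has_ctrl_vq)
{
        return 2 * n_pairs + (has_ctrl_vq ? 1 : 0);
}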
Well that only works once. The next thing we'll need we won't be able to
make the last one ;) So I am inclined to add a per-transport field that
gives the admin queue number.
Technically, there's no need to use the same namespace for admin
virtqueue if it has a dedicated notification area. If we go this way,
we can simply use 0 as queue index for admin virtqueue.
Or we can use index 0xFFFF for admin virtqueue for compatibility.
I think I'd prefer a register with the #. For example we might want
to limit the # of VQs in order to pass extra data with the kick write.
So you are suggesting adding a new cfg_type (#define
VIRTIO_PCI_CAP_ADMIN_CFG 10)?

that will look something like:

struct virtio_pci_admin_cfg {
        le32 queue_index;       /* read-only for driver */
        le16 queue_size;        /* read-write */
        le16 queue_msix_vector; /* read-write */
        le16 queue_enable;      /* read-write */
        le16 queue_notify_off;  /* read-only for driver */
        le64 queue_desc;        /* read-write */
        le64 queue_driver;      /* read-write */
        le64 queue_device;      /* read-write */
        le16 queue_notify_data; /* read-only for driver */
        le16 queue_reset;       /* read-write */
};

instead of re-using struct virtio_pci_common_cfg?


or do you prefer extending struct virtio_pci_common_cfg with "le16
admin_queue_index; /* read-only for driver */"?
The latter. Other transports will need this too.
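
I.e. something along these lines (a sketch only; appending the field
keeps all existing offsets intact):

struct virtio_pci_common_cfg {
        /* ... all existing fields unchanged ... */
        le16 admin_queue_index; /* read-only for driver; appended at
                                   the end so existing field offsets
                                   are preserved */
};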


Cornelia has another idea, which is that instead of
adding just the admin queue register to all transports,
we instead add a misc_config structure to all
transports, working basically like the device-specific config
but device-independent. For now it will only have
a single le16 admin_queue_index register.

For PCI we would thus add it with VIRTIO_PCI_CAP_MISC_CFG

The point here is that we are making it easier to add
more fields just like admin queue index in the future.
OK.

#define VIRTIO_PCI_CAP_MISC_CFG 10

and

struct virtio_pci_misc_cfg {
     le16 admin_queue_index; /* read-only for driver */
};

Is this agreed by all for v3, instead of the net and blk AQ index
definitions?
We need to add it to MMIO and CCW I guess too.
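
On the PCI side a driver would then locate it like any other virtio
capability. Roughly (a sketch; find_virtio_cap() is a hypothetical
helper that walks the vendor capability list and maps the structure
that a capability of the given cfg_type points to):

static u16 read_admin_queue_index(struct virtio_pci_device *vp)
{
        struct virtio_pci_misc_cfg __iomem *misc =
                find_virtio_cap(vp, VIRTIO_PCI_CAP_MISC_CFG);

        if (!misc)
                return 0xFFFF; /* no misc cfg, e.g. VIRTIO_F_ADMIN_VQ
                                  not offered */
        return ioread16(&misc->admin_queue_index);
}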


I wonder how useful this is.

E.g. for PCI we have an equation to calculate the queue notify
address; if the device chooses to use a dedicated notify area for
each queue, the admin queue will probably end up as the last one
anyway.
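
For reference, that formula (from VIRTIO_PCI_CAP_NOTIFY_CFG) is:

    notify_addr = cap.offset + queue_notify_off * notify_off_multiplier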

And I think the admin_queue_index should be stable regardless of
which features have been negotiated?

Thanks



This is Cornelia's idea; we'll need her response.



Thanks

Another advantage of this approach is that
we can make sure the admin queue gets a page by itself (which can be
useful if we want to allow a guest access to the regular vqs but not
to the admin queue) even if the regular vqs share a page. This will
help devices use less memory space.
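
Concretely (hypothetical layout, 4K pages, BAR number made up):

/* regular vqs: BAR4 + 0x0000, notify_off_multiplier = 0, so all
 *              regular queues share a single doorbell page;
 * admin vq:    BAR4 + 0x1000, reached via its own capability, in a
 *              page the hypervisor can leave unmapped in the guest.
 */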

--
MST



