virtio-dev message



Subject: Re: [PATCH 2/5] Introduce VIRTIO_F_ADMIN_VQ_INDIRECT_DESC/VIRTIO_F_ADMIN_VQ_IN_ORDER



On 2022/1/20 4:25 PM, Parav Pandit wrote:
Hi Jason,

From: Parav Pandit
Sent: Wednesday, January 19, 2022 10:18 AM


From: Jason Wang <jasowang@redhat.com>
Sent: Wednesday, January 19, 2022 9:33 AM


If it means IMS, there's already a proposal [1] that introduces MSI
commands via the admin virtqueue. And we had a similar requirement for
virtio-MMIO [2] and the managed device or SF [3], so I would rather
introduce IMS (it needs a better name, though) as a basic facility instead
of tying it to any specific transport.

IMS in [1] is an interrupt configuration done by the virtio driver for the device it is
driving, which needs a queue.
So regardless of whether the device type is a PCI PF/VF/SF/ADI, there is a desire to have a
generic admin queue that is not attached to the device type.
And the AQ in this proposal serves exactly this purpose.

A device configuring its own IMS vectors vs. a PCI PF configuring a VF's maximum MSI-X
vector count are two different functionalities.
Both of these commands can ride on a generic queue.
However, the queue is not the same, because
the PF owns its own admin queue (for VF MSI-X config), while a VF or SF operates its own
admin queue (for IMS config).

So a good example is:
1. The PCI PF configures 8 MSI-X or 16 IMS vectors for the VF using the PF_AQ in the HV.
2. The PCI VF, when using IMS, configures the IMS data, vector, mask, etc. using the VF_AQ in
the GVM.
Both functions will have the AQ feature bit set.
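
For illustration only, a rough sketch of what the two command payloads could look like; the
structure names and fields below are my guesses, not taken from this patch series:

struct vf_msix_cfg_cmd {		/* hypothetical: issued by the HV on the PF's AQ */
	le16 vf_id;			/* which VF to configure */
	le16 msix_vector_count;		/* e.g. 8 */
};

struct ims_entry_cfg_cmd {		/* hypothetical: issued by the guest on the VF/SF's own AQ */
	le16 vector_index;		/* which IMS entry to program */
	le16 flags;			/* e.g. mask/unmask */
	le32 data;			/* interrupt data */
	le64 addr;			/* interrupt address */
};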

Fair enough, so we have more users of the admin queue than just MSI-X config.

If we treat the AQ as any other VQ, then the IMS configuration of #2 above, done by the virtio driver of the VF or SF, cannot be done through the AQ.
Because the AQ needs to be live before setting DRIVER_OK.
And the spec mandates: "The driver MUST configure the other virtqueue fields before enabling the virtqueue with queue_enable."
This includes the vector configuration.
So the AQ would have to operate in polling mode, which is not so good.
And it cannot be disabled to update the MSI-X/IMS vector, because the spec says: "The driver MUST NOT write a 0 to queue_enable."
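
To make the ordering problem concrete, here is a simplified bring-up sketch using field names
from struct virtio_pci_common_cfg; the flow is only illustrative (MMIO accessors and
endianness conversion omitted):

static void admin_vq_bringup(volatile struct virtio_pci_common_cfg *cfg,
			     u16 aq_index)
{
	cfg->queue_select = aq_index;

	/* The vector must be chosen here, before queue_enable. If that
	 * vector can only be programmed through the AQ itself, the AQ is
	 * not usable yet, so the only option is NO_VECTOR, i.e. polling. */
	cfg->queue_msix_vector = VIRTIO_MSI_NO_VECTOR;

	cfg->queue_enable = 1;	/* writing 0 later is forbidden by the spec */

	/* ... configure the remaining queues, then set DRIVER_OK ... */
}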

Making a special exception for the AQ is equally not good for a physical device implementation, given that all the queue configuration is located in a single config space.

So a better approach for IMS configuration would be to have a limited set of config registers to bootstrap the IMS configuration.
Something like:

struct virtio_pci_dev_interrupt_cfg {
	le16 max_device_interrupts; /* read-only: maximum number of IMS interrupts */


Is it better to use the same terminology as the MSI capability, like "max_device_vectors"?


	le16 refer_index; /* write-only: interrupt index to program */
	u8 status;	/* read-only: response from the device, 0x0 = success, 0x1 = busy, other values = error codes */
	u8 enable;	/* enable/disable the interrupt referred to by refer_index */


Any reason we need this, assuming we already have PCI_NO_VECTOR in the common cfg? I think what would be better to have is the ability to mask and unmask a vector.


	le16 reserved_pad;
	u64 addr; /* write-only: interrupt address used in MSI-X TLPs */
	u32 data; /* write-only: interrupt data */
};


Is it better to split the above into different commands?

Thanks


This way, a VF or SF can configure a large number of vectors before starting the queue and without on-chip resources.
The only on-chip resource required per VF is the 20 bytes of the above cfg registers.
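
As a usage illustration, a minimal sketch of how a VF/SF driver might program one vector
through the registers proposed above; the programming order (index, addr/data, enable, then
poll status) is my assumption and is not spelled out here (MMIO accessors and endianness
conversion omitted):

static int ims_program_vector(volatile struct virtio_pci_dev_interrupt_cfg *cfg,
			      u16 index, u64 msg_addr, u32 msg_data)
{
	if (index >= cfg->max_device_interrupts)
		return -1;		/* index out of range */

	cfg->refer_index = index;	/* select the interrupt entry to program */
	cfg->addr = msg_addr;		/* address used in the interrupt TLP */
	cfg->data = msg_data;		/* data used in the interrupt TLP */
	cfg->enable = 1;		/* enable the referred entry */

	while (cfg->status == 0x1)	/* 0x1 = busy */
		;			/* a real driver would bound this wait */

	return (cfg->status == 0x0) ? 0 : -1;
}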


