OASIS Mailing List Archives
virtio-comment message



Subject: Re: [virtio-comment] [PATCH V3 RESEND 2/4] Introduce the commands set of the transport vq




On 8/16/2022 2:21 PM, Michael S. Tsirkin wrote:
On Tue, Aug 16, 2022 at 01:55:24PM +0800, Zhu, Lingshan wrote:

On 8/10/2022 8:58 PM, Michael S. Tsirkin wrote:
On Wed, Aug 10, 2022 at 04:49:25PM +0800, Zhu, Lingshan wrote:
I meant having MSI vectors and letting the virtqueue refer to the MSI vectors.

The current design requires a device with 1024 virtqueues to store 1024 MSI
entries. With the indirection of an MSI vector array, the device is free to
have anywhere from 1 to 1024 MSI entries.
I am not sure I get your point. How can the queues decide the MSI entries?
It should be the driver and the platform that set a vq's MSI vector; the
driver has to decide, control, and handle the MSI interrupts.

IMHO there are two options:
1) Every virtqueue stores its own MSI vector; per-vq MSI vectors certainly
give better performance. But if MSI resources are limited, the driver may
decide to share MSI vectors among the vqs (1024 queues served by anywhere
from 1 to 1024 MSI entries), and there should be proper interrupt handlers
in the driver.

2) Add a device-scope MSI entry (only for vqs, not the config interrupt,
because we don't want to look into the config space every time we receive a
vq interrupt), so that all interrupts from the device go through this one
MSI entry. This is optional and can help save resources, but it is complex:
e.g., first set a device-scope MSI for all vqs, then set an MSI for vq1; in
that case vq1 needs to use its own MSI to send interrupts, shadowing the
device-scope MSI. This is complex and maybe not worth it; I think the
platforms (x86, ARM) have enough MSI resources for one vector per vq.
Consider a PCI device with subfunctions for example. It maintains
MSI-X vectors in an MSI-X table. vq has to select a vector from
that table. Which one?
1- vq can map to a vector number (this is what PCI does)
2- vq can just imply vector number e.g. if vector number == vq number
3- vq can include a copy of vector itself

3 will force us to come up with a way to mask vectors in the transport,
not nice.
I am not sure that, for SIOV SF devices, the parent PCI device needs to
store all MSI information in its MSI table.

These MSI vectors are not for the PCI device to send interrupts; they are
for the SFs, so we don't even need the PCI APIs.

Every SF's vq should be able to store its own MSI vector. For this
transport vq, I think it provides a way to set a vector for the vq; the
vector may not be a copy of a PCI vector, nor map to any PCI vector.

Thanks
Doing this will break things like VFIO passthrough of the parent.
So I would say it should at least be optional.
Would you please help explain how this can break VFIO passthrough of the parent device?

As the SIOV spec says, an SF may use MSI-X, but that is just one case. A vq
can use generic MSI (not limited to PCI) to send interrupts; in that case,
the vq's MSI vectors are not in the PCI MSI-X table.


And may I then suggest you peruse the relevant spec and add the
requisite ton of functionality, then go read at least the Linux source to
see whether doing things like masking interrupt vectors using a command
that then needs to block waiting for an interrupt is even practical to
integrate into existing OSes.

I mean, you can do all this, but you had better be prepared to then
reimplement all of the functionality in the MSI-X spec.
It was not put there on a whim.
We provided commands to mask/unmask MSI vectors in the last version.
I can bring them back.

I will add a command to read the pending interrupt bit of a vq.

Thanks,
Zhu Lingshan


