Subject: Re: [virtio-comment] Re: [PATCH V2 4/6] virtio-pci: implement VIRTIO_F_QUEUE_STATE




On 11/6/2023 12:12 PM, Parav Pandit wrote:
From: Zhu, Lingshan <lingshan.zhu@intel.com>
Sent: Monday, November 6, 2023 9:01 AM

On 11/3/2023 11:50 PM, Parav Pandit wrote:
From: virtio-comment@lists.oasis-open.org
<virtio-comment@lists.oasis-open.org> On Behalf Of Zhu, Lingshan
Sent: Friday, November 3, 2023 8:27 PM

On 11/3/2023 7:35 PM, Parav Pandit wrote:
From: Zhu Lingshan <lingshan.zhu@intel.com>
Sent: Friday, November 3, 2023 4:05 PM

This patch adds two new le16 fields to common configuration
structure to support VIRTIO_F_QUEUE_STATE in PCI transport layer.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
---
    transport-pci.tex | 18 ++++++++++++++++++
    1 file changed, 18 insertions(+)

diff --git a/transport-pci.tex b/transport-pci.tex index
a5c6719..3161519 100644
--- a/transport-pci.tex
+++ b/transport-pci.tex
@@ -325,6 +325,10 @@ \subsubsection{Common configuration structure layout}\label{sec:Virtio Transport
            /* About the administration virtqueue. */
            le16 admin_queue_index;         /* read-only for driver */
            le16 admin_queue_num;         /* read-only for driver */
+
+	/* Virtqueue state */
+        le16 queue_avail_state;         /* read-write */
+        le16 queue_used_state;          /* read-write */
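
To make the proposed layout concrete, the tail of the common configuration structure would read roughly as below. This is a sketch following the spec's struct conventions; the offsets are my assumption based on the virtio 1.2 layout with the admin queue fields at 0x3c/0x3e, and are not part of the patch:

/* Sketch of the patched tail of struct virtio_pci_common_cfg.
 * le16 is the spec's little-endian 16-bit type; offsets are
 * illustrative, not normative. */
struct virtio_pci_common_cfg_tail {
        /* About the administration virtqueue. */
        le16 admin_queue_index;   /* 0x3c, read-only for driver */
        le16 admin_queue_num;     /* 0x3e, read-only for driver */

        /* Virtqueue state (this patch): the available/used ring
         * indices of the queue currently addressed by queue_select. */
        le16 queue_avail_state;   /* 0x40, read-write */
        le16 queue_used_state;    /* 0x42, read-write */
};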
This tiny register read/write interface does not work effectively for
128 virtio-net queues.
There are also in-flight, out-of-order descriptors to consider for block devices.
Hence toy registers like this do not work.
Do you know there is a queue_select? Why would this not work? Do you
know how the other queue-related fields work?
:)
Yes. If you notice, a critical spec bug fix related to queue_reset was done when it
was introduced, so that live migration can _actually_ work.
When queue_select is done serially for 128 queues, it takes a lot of time to
read this slow register interface, for this state + inflight descriptors + more.
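
For what it is worth, the access pattern being debated looks like the following. This is a minimal sketch in C, where cfg_read16()/cfg_write16() are hypothetical MMIO accessors into the common configuration capability; the offsets of the new fields are assumptions consistent with the struct sketch above:

#include <stdint.h>

#define COMMON_CFG_QUEUE_SELECT      0x16  /* existing spec field */
#define COMMON_CFG_QUEUE_AVAIL_STATE 0x40  /* assumed offset */
#define COMMON_CFG_QUEUE_USED_STATE  0x42  /* assumed offset */

extern uint16_t cfg_read16(uint32_t offset);
extern void     cfg_write16(uint32_t offset, uint16_t val);

/* Save the ring indices of every queue serially: write the queue
 * index to queue_select, then read that queue's state registers. */
static void save_queue_states(uint16_t num_queues,
                              uint16_t *avail, uint16_t *used)
{
        for (uint16_t i = 0; i < num_queues; i++) {
                cfg_write16(COMMON_CFG_QUEUE_SELECT, i);
                avail[i] = cfg_read16(COMMON_CFG_QUEUE_AVAIL_STATE);
                used[i]  = cfg_read16(COMMON_CFG_QUEUE_USED_STATE);
        }
}

With 128 queues this is 3 * 128 = 384 register accesses for the ring indices alone, before any in-flight descriptor state; whether that is prohibitive is exactly the point of contention in this thread.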
Interesting; virtio has worked in this pattern for many years, right?
All these years, 400Gbps and 800Gbps virtio was not present, and such queue counts were not in hw.
These registers are the control path in config space; how do 400G or 800G affect them?
See the virtio common cfg: you will find the max number of vqs there, num_queues.
Devices didn't support LM.
Many limitations existed all these years, and the TC is improving and expanding on them.
So all these years do not matter.
Not sure what you are talking about; haven't we initialized devices and vqs
through config space for years? What's wrong with this mechanism?
Are you questioning virtio-pci fundamentals?

Like how to set a queue size and enable it?
Those are meant to be used before the DRIVER_OK stage, as they are init-time
registers.
Not to keep abusing them afterwards..
Don't you need to set queue_size at the destination side?
No.
But src/dst does not matter.
queue_size is to be set before DRIVER_OK like the rest of the registers, as all queues must be created before the driver_ok phase.
queue_reset was a last-moment exception.
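
For reference, the init-time ordering being appealed to here is roughly the following; a sketch reusing the hypothetical cfg_* accessors and the queue_select offset from the snippet above. device_status is a single byte at 0x14 per the existing layout:

#define COMMON_CFG_DEVICE_STATUS 0x14  /* u8 */
#define COMMON_CFG_QUEUE_SIZE    0x18
#define COMMON_CFG_QUEUE_ENABLE  0x1c
#define VIRTIO_STATUS_DRIVER_OK  0x04

extern uint8_t cfg_read8(uint32_t offset);
extern void    cfg_write8(uint32_t offset, uint8_t val);

static void setup_queue(uint16_t index, uint16_t size)
{
        cfg_write16(COMMON_CFG_QUEUE_SELECT, index);
        cfg_write16(COMMON_CFG_QUEUE_SIZE, size);
        /* ... program descriptor/driver/device area addresses ... */
        cfg_write16(COMMON_CFG_QUEUE_ENABLE, 1);
}

static void driver_init_queues(uint16_t num_queues, const uint16_t *sizes)
{
        for (uint16_t i = 0; i < num_queues; i++)
                setup_queue(i, sizes[i]);
        /* Only after every queue is configured does the driver set
         * DRIVER_OK, preserving the status bits already set. */
        cfg_write8(COMMON_CFG_DEVICE_STATUS,
                   cfg_read8(COMMON_CFG_DEVICE_STATUS) | VIRTIO_STATUS_DRIVER_OK);
}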
Create a queue? Nvidia-specific?

For standard virtio, you need to read the number of enabled vqs at the source side and then enable them at the destination, so queue_size matters;
there is nothing to create.
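
Concretely, the destination-side flow described here might look as follows; a sketch under the same assumptions as the snippets above, where the size/avail/used arrays hold state previously saved from the source device:

static void restore_queues(uint16_t num_enabled, const uint16_t *size,
                           const uint16_t *avail, const uint16_t *used)
{
        for (uint16_t i = 0; i < num_enabled; i++) {
                cfg_write16(COMMON_CFG_QUEUE_SELECT, i);
                cfg_write16(COMMON_CFG_QUEUE_SIZE, size[i]);
                /* With VIRTIO_F_QUEUE_STATE, the saved ring indices
                 * are written back before the queue is enabled. */
                cfg_write16(COMMON_CFG_QUEUE_AVAIL_STATE, avail[i]);
                cfg_write16(COMMON_CFG_QUEUE_USED_STATE,  used[i]);
                cfg_write16(COMMON_CFG_QUEUE_ENABLE, 1);
        }
}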



