Subject: Re: [virtio-comment] [PATCH 5/5] virtio-pci: implement VIRTIO_F_QUEUE_STATE




On 9/12/2023 2:11 PM, Parav Pandit wrote:
From: Jason Wang <jasowang@redhat.com>
Sent: Tuesday, September 12, 2023 9:48 AM

On Mon, Sep 11, 2023 at 2:47 PM Parav Pandit <parav@nvidia.com> wrote:


From: Jason Wang <jasowang@redhat.com>
Sent: Monday, September 11, 2023 12:01 PM

On Mon, Sep 11, 2023 at 12:12 PM Parav Pandit <parav@nvidia.com> wrote:
Hi Michael,

From: virtio-comment@lists.oasis-open.org
<virtio-comment@lists.oasis-open.org> On Behalf Of Jason Wang
Sent: Monday, September 11, 2023 8:31 AM

On Wed, Sep 6, 2023 at 4:33 PM Michael S. Tsirkin <mst@redhat.com> wrote:
On Wed, Sep 06, 2023 at 04:16:37PM +0800, Zhu Lingshan wrote:
This patch adds two new le16 fields to the common configuration
structure to support VIRTIO_F_QUEUE_STATE in the PCI transport layer.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
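
For illustration only, the two fields might sit at the tail of struct virtio_pci_common_cfg roughly as sketched below; the field names queue_avail_state and queue_used_state are assumptions for this sketch, not quoted from the patch:

struct virtio_pci_common_cfg {
        /* ... existing fields up to queue_reset elided ... */
        le16 queue_notify_data;   /* existing field */
        le16 queue_reset;         /* existing field */
        /* Illustrative names only: per-queue state of the currently
         * selected queue, valid when VIRTIO_F_QUEUE_STATE is
         * negotiated. */
        le16 queue_avail_state;   /* available ring index */
        le16 queue_used_state;    /* used ring index */
};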

I do not see why this would be PCI-specific at all.
This is the PCI interface for live migration. The facility is not specific to
PCI.
It can choose to reuse the common configuration or not, but the
semantics are general enough to be used by other transports. We
can introduce one for MMIO for sure.

But besides, I thought work on live migration would use the admin queue.
This was explicitly one of the motivators.
Please find the proposal that uses administration commands for
device migration of passthrough devices at [1].
[1] https://lists.oasis-open.org/archives/virtio-comment/202309/msg00061.html
This proposal couples live migration with several requirements, and
suffers from the exact issues I've mentioned below.

It does not.
Can you please list which ones?

In some cases, it's even worse (coupling with PCI/SR-IOV, second
state machine other than the device status).

There is no state machine in [1].
Don't the migration modes of "active, stop, freeze" form a state machine?

Huh, no. Each mode stops/starts a specific thing.
Just because one series did only suspend/resume while the other series also covered the P2P mode, that does not make it a state machine.
If you call suspend/resume states, then that is still a 2-state state machine. :)
Why is P2P needed for live migration?
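
For reference, the three modes under discussion could be written as a plain enumeration; the names below echo the wording in this thread, not the exact definitions in [1]:

enum virtio_dev_migration_mode {
        VIRTIO_MIG_MODE_ACTIVE = 0, /* device operates normally */
        VIRTIO_MIG_MODE_STOP   = 1, /* device stops its operation */
        VIRTIO_MIG_MODE_FREEZE = 2, /* device state frozen for transfer */
};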

It is not coupled with PCI/SR-IOV either.
It supports the PCI/SR-IOV transport, and in the future other transports too as they
evolve.
For example:

+struct virtio_dev_ctx_pci_vq_cfg {
+        le16 vq_index;
+        le16 queue_size;
+        le16 queue_msix_vector;
+        le64 queue_desc;
+        le64 queue_driver;
+        le64 queue_device;
+};
+\end{lstlisting}

And does this mean we will have commands for MMIO and other transports?
There are multiple transports, so yes, some field structures in the device context will have PCI-specific items.

(Most of the fields, except the msix vector, are general enough.) And it's just a partial
implementation of the queue-related functionality of the common cfg, so I
wonder how it can work.

As I already explained in the cover letter, the device context will evolve from v0 to v1 to cover more.
True, most fields are general; it has a few PCI-specific fields, which were not worth splitting out into a separate structure.
And replicating a small number of structs for MMIO is not a problem either, as it does not complicate the transport.
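
For illustration, a hypothetical MMIO counterpart could reuse the same layout minus the MSI-X field; this is a sketch only, not part of the proposal in [1]:

struct virtio_dev_ctx_mmio_vq_cfg {
        le16 vq_index;
        le16 queue_size;
        le64 queue_desc;
        le64 queue_driver;
        le64 queue_device;
};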


