

Subject: Re: [virtio-comment] Re: [PATCH v1 3/8] device-context: Define the device context fields for device migration




On 11/1/2023 3:54 PM, Parav Pandit wrote:
From: Zhu, Lingshan <lingshan.zhu@intel.com>
Sent: Wednesday, November 1, 2023 1:17 PM

On 11/1/2023 3:03 PM, Parav Pandit wrote:
From: Zhu, Lingshan <lingshan.zhu@intel.com>
Sent: Wednesday, November 1, 2023 12:26 PM

On 11/1/2023 2:50 PM, Parav Pandit wrote:
From: Zhu, Lingshan <lingshan.zhu@intel.com>
Sent: Wednesday, November 1, 2023 12:09 PM

On 11/1/2023 2:37 PM, Michael S. Tsirkin wrote:
On Wed, Nov 01, 2023 at 05:42:56AM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Wednesday, November 1, 2023 11:01 AM

On Wed, Nov 01, 2023 at 02:54:47AM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Tuesday, October 31, 2023 3:44 PM

On Tue, Oct 31, 2023 at 05:42:29PM +0800, Zhu, Lingshan wrote:
Your answer is not relevant to this discussion at all.
Why?
Because we were discussing the schemes where registers are not used.
One example of that was IMS. It does not matter whether it is MSI or MSI-X.
As explained in Intel's commit message, the key focus for IMS is "queue memory", not some hw register like MSI or MSI-X.
You know the device always needs to know an address and the data to send an MSI, right?
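
(As an aside, a minimal C sketch of that address/data pair: whatever the storage is - an MSI capability, an MSI-X table entry, or IMS "queue memory" - the device holds the same two values per interrupt source. The struct name and layout here are illustrative assumptions, not spec text.)

#include <stdint.h>

/* Illustrative only: regardless of where it is stored (MSI capability,
 * MSI-X table entry, or IMS "queue memory"), the device needs these
 * two values to signal an interrupt. */
struct msi_message {
        uint64_t address; /* target address the device writes to */
        uint32_t data;    /* payload identifying the vector */
};
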
So if virtio is to use IMS then we'll need to add interfaces to program IMS, I think. As part of that patch - it's reasonable to assume - we will also need to add a way to retrieve IMS so it can be migrated.
However, what this example demonstrates is that the approach taken by this proposal to migrate control path structures - namely, by defining a structure used just for migration - means that we will need to come up with a migration interface each time. And that is unfortunate.

When the device supports a new feature, it has new functionality, hence the live migration side also gets updated.
However, the live migration driver does not have to understand what is inside the control path structures. It is just a byte stream.
Only if the hypervisor live migration driver is involved in emulation will it parse them, and that is fine, as with the other control structures.

The point is that any new field needs to be added in two places now, and that is not great at all.

Most control structs are well defined, so only the type field is added on the migration driver side.
This is a very low overhead field and is handled in a generic way for all device types and for all common types.
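
(For the record, a minimal sketch of what such a typed, opaque device-context entry could look like; the struct and the VIRTIO_DEV_CTX_VQ_CFG name are assumptions for illustration, not the proposal's actual wire format.)

#include <stdint.h>

/* Illustrative device-context entry: a small typed header plus an
 * opaque payload. The migration driver only needs the header to move
 * the bytes; only an emulating hypervisor would parse the payload. */
struct virtio_dev_ctx_entry {
        uint16_t type;     /* e.g. a hypothetical VIRTIO_DEV_CTX_VQ_CFG */
        uint16_t reserved;
        uint32_t length;   /* payload length in bytes */
        uint8_t payload[]; /* opaque byte stream */
};
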
Weird, not what I see. E.g. you seem to have a structure duplicating queue fields. Each new field will have to be added there in addition to the transport.

We need a stronger compatibility story here, I think.

One way to show how it's designed to work would be to split the patches. For example, add queue notify data and queue reset separately.
I didn't follow the suggestion. Can you explain splitting the patches and its relation to the structure?
Another is to add an MSI-X table migration option for when the MSI-X table is passed through to the guest.
Yes, this will be added in the future when there is an actual hypervisor for it.
You are tying the architecture to an extremely implementation-specific detail.

Hypervisors *already* have to migrate the MSI-X table, just in a hypervisor-specific way. The queue vector is an index into this table.
So the index is migrated through the device but the table itself has to be trapped and emulated by the hypervisor? Give me a break.
I agree, the MSI table could be R/W anyway.
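
(For context, the per-vector layout being discussed is fixed by the PCI specification: each MSI-X table entry is 16 bytes, and a virtqueue's queue vector is simply an index into an array of these. The struct name below is illustrative; the fields follow the PCI spec.)

#include <stdint.h>

/* One MSI-X table entry, 16 bytes per vector (PCI specification). */
struct msix_table_entry {
        uint32_t msg_addr_lo;    /* message address, lower 32 bits */
        uint32_t msg_addr_hi;    /* message address, upper 32 bits */
        uint32_t msg_data;       /* data the device writes to signal */
        uint32_t vector_control; /* bit 0: per-vector mask */
};
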
Please explain the motivation: why can it not be added when there is _real_ software which will use it?
I asked to add it when it is needed. Why is it a must right now, if it is?
If there is any software that would like to use it, please explain which it is and how it will use it.
As MST has pointed out, hypervisors already migrate the MSI table.
So I suggest you read the QEMU code to find the answer.
Huh, this is not the answer.

Michael asked to "add MSIX table migration option for when MSIX table is passed through to guest."
Currently it is not. When a hypervisor adds it in the future, it can be added. What prevents this addition in the future?
I asked a very simple question: to explain the use case and the hypervisor that wants to transfer the MSI-X table by using the device context.
You don't answer it...
I assume there is no software that uses it.

If there is one, please share and it should be added.
I can give you some hints:
When the VM freezes in the "stop_window" of live migration, the hypervisor owns the device, and it can access the MSI table of the device. So I don't see MSI configuration blocking live migration.
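
(A minimal sketch of the stop-window flow described here, assuming hypothetical helpers vm_pause(), dev_read_msix_entry() and ctx_append(); this is not QEMU's actual API, just the shape of the argument.)

#include <stdint.h>
#include <stddef.h>

struct msix_table_entry {
        uint32_t msg_addr_lo, msg_addr_hi, msg_data, vector_control;
};

/* Hypothetical hypervisor internals, declared only for illustration. */
void vm_pause(void);
struct msix_table_entry dev_read_msix_entry(unsigned vector);
void ctx_append(const void *buf, size_t len);

/* Once the VM is paused in the stop window, the hypervisor owns the
 * device and can read each MSI-X entry into the migration stream. */
void save_interrupt_state(unsigned num_vectors)
{
        vm_pause(); /* enter the stop window */
        for (unsigned i = 0; i < num_vectors; i++) {
                struct msix_table_entry e = dev_read_msix_entry(i);
                ctx_append(&e, sizeof(e)); /* send to destination */
        }
}
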
We don't want another long, long thread in which nobody develops their knowledge.

Exactly. So explain the use case: which software will use it? I don't see any hypervisor using it today _from_ the device context.
see above
And if this is a vendor-specific issue, then it should not be relevant to the spec.
What vendor-specific issue are you talking about? Are you by some means _implying_ that not transferring the MSI-X table is the proposer's vendor limitation?
Hell no.
Hypervisors transfer the MSI state to the destination as explained above, and this routine works; a reference is QEMU.
I have a hard time following your suggestion.
So do you want the MSI-X table as part of the device context or not?
MSI is a part of the device context; I suggest you read the QEMU code for the answers.

You will also find that part is already done.

So I don't know why this is a topic worthy of discussion.

If you have questions, I still suggest you read the QEMU code.



