Subject: Re: [virtio-comment] Re: [PATCH v1 3/8] device-context: Define the device context fields for device migration




On 10/12/2023 6:09 PM, Parav Pandit wrote:
From: Zhu, Lingshan <lingshan.zhu@intel.com>
Sent: Thursday, October 12, 2023 3:30 PM

On 10/11/2023 6:54 PM, Parav Pandit wrote:
From: Zhu, Lingshan <lingshan.zhu@intel.com>
Sent: Wednesday, October 11, 2023 3:38 PM

The system admin can choose to pass through only some of the devices to nested guests, so passing the PF through to the L1 guest is not a good idea, because there can be many devices still working for the host or L1.
Possible. One size does not fit all.
What I expressed are the most common scenarios that users care about.
Don't block existing use cases, don't break the userspace; nested is common.
Nothing is broken, as the virtio spec does not have any single construct to support migration.
If nested is common, can you share performance numbers with a real virtio device with/without 2-level nesting?
I frankly don't know what they look like.
virtio devices support nesting; I mean don't break this use case. And end users accept the performance overhead of nesting; this is not related to this topic.

Can you show an example of virtio device nesting and live migration already supported, where the device has _done_ the live migration?
Due to which you claim that the new feature of admin command-based owner and member devices breaks something?
The current virtio/kvm/qemu stack supports nesting. You can try to set up a nested VM to check the result.

Please don't use the verb "break".
Your proposal is the first of its kind that supports migrating a nested device.
This is why new patches for a config register or admin command do not break anything existing.
If your proposal doesn't support nesting, you break nested use cases.

In the second use case, where one wants to bind only one member device to one VM, I think the same plumbing can be extended to have another VF take the role of the migration device instead of the owner device.
I don't see a good way to do passthrough and also do in-band migration without a lot of device-specific trap and emulation.
I also don't know the CPU performance numbers with 3 levels of nested page table translation, which to my understanding cannot be accelerated by the current CPU.
host_PA->L1_QEMU_VA->L1_Guest_PA->L1_QEMU_VA->L2_Guest_PA and so on; there can be performance overhead, but it can be done.
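As a purely illustrative aside (not from this thread), here is a minimal C sketch of the chain of indirections discussed above. Each translation level is reduced to a constant offset, since the real multi-level page-table walk per step is exactly what the sketch abstracts away; all names and numbers are made up.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t addr_t;

/* Hypothetical single-level translation: real hardware walks a page table
 * per level; a constant offset stands in for that walk here. */
static addr_t translate(addr_t addr, addr_t base)
{
        return base + addr;
}

int main(void)
{
        addr_t l2_guest_pa = 0x1000;                           /* L2 guest physical address      */
        addr_t l1_qemu_va  = translate(l2_guest_pa, 0x100000); /* VA of L2's QEMU, running in L1 */
        addr_t l1_guest_pa = translate(l1_qemu_va,  0x200000); /* L1 guest physical address      */
        addr_t l0_qemu_va  = translate(l1_guest_pa, 0x300000); /* VA of L1's QEMU, on the host   */
        addr_t host_pa     = translate(l0_qemu_va,  0x400000); /* host physical address          */

        printf("L2 GPA 0x%" PRIx64 " reaches host PA 0x%" PRIx64 " after 4 translation steps\n",
               l2_guest_pa, host_pa);
        return 0;
}

Each extra nesting level adds one more such step, which is where the performance overhead mentioned above comes from.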

So admin vq migration still doesn't work for nested; this is surely a blocker.
In the specific case where member devices are located at different nesting levels, it does not.
So you got the point; this series should not be merged.
What prevents you from having a peer VF take the role of the migration driver?
Basically, what I am proposing is: connect two VFs to the L1 guest. One VF is the migration driver, one VF is passed through to the L2 guest. And the same scheme works.
A peer VF? A management VF? That still breaks the existing use case. And how do you transfer ownership of the L2 VF from the PF to the L1 VF?
A peer management VF which services admin commands (like the PF).
Ownership of the admin commands is delegated to the management VF.
Interesting. Do you plan to cook a patch implementing this?
No. I am hoping that you can help draft those patches for the nested case to work when one wants to hand off a single VM to a single nested guest VM.
I will not be able to test any of the nested things or show their performance value either, as I don't see how the rest of the ecosystem can match up for the nested case.
Hence, your expertise in drafting an extension for nested is desired.
I see it does not support nested. As MST once pointed out, a management VF sounds awkward.

Does it really make sense?

How do you transfer the ownership?
An additional ownership delegation by a new admin command.
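For illustration only, here is a minimal C sketch of what such a delegation command might look like. The opcode, the structure name and every field below are hypothetical, loosely modelled on the generic admin command framing, and are not defined in the virtio spec or in this series.

#include <stdint.h>

typedef uint16_t le16;
typedef uint64_t le64;

#define VIRTIO_ADMIN_CMD_DELEGATE_OWNERSHIP 0xfff0 /* hypothetical opcode, not in the spec */

struct virtio_admin_cmd_delegate_ownership {
        /* Device-readable part */
        le16    opcode;              /* VIRTIO_ADMIN_CMD_DELEGATE_OWNERSHIP */
        le16    group_type;          /* e.g. the SR-IOV member group */
        uint8_t reserved1[12];
        le64    group_member_id;     /* member VF whose ownership is being delegated */
        le64    delegate_member_id;  /* hypothetical: the peer management VF in the L1 guest */
        /* Device-writable part */
        le16    status;
        le16    status_qualifier;
        uint8_t reserved2[4];
};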
If you think this can work, do you want to cook a patch to implement this before submitting this live migration series?
How do you maintain a different group?
One to one assignment.
same as above
How do you isolate the groups?
Not sure what it means. An explicit group is created and VFs are placed in this group.
VF resources are on the PF, right?
How do you keep the guest or host secure?
Please be specific. It's a very broad question when it comes to defining the interface.
Without isolation, can it be attacked?
How do you manage the overlaps?
Overlaps between?
Between the host PF and the L1 VF.
How do you implement the hardware to support that?
Please consult your board designers. It is hard to say how to implement something generically.
So you don't have an idea.
How do you change the PCI routing?
Why does anything need to be changed in PCI routing?
Do you place the PF and the management VF in an ACL group?
Does the L1 management VF's member device belong to the PF physically?

It does not break any existing deployments.
We are talking about nested; don't break nested.
The virtio spec for nested is not defined yet. Hence nothing is broken. Please avoid using the verb _break_.
virtio nested has worked for many years.


