
virtio-comment message


Subject: Re: [virtio-comment] Re: [PATCH v1 3/8] device-context: Define the device context fields for device migration

On Thu, Oct 12, 2023 at 10:09:30AM +0000, Parav Pandit wrote:
> > From: Zhu, Lingshan <lingshan.zhu@intel.com>
> > Sent: Thursday, October 12, 2023 3:30 PM
> > 
> > On 10/11/2023 6:54 PM, Parav Pandit wrote:
> > >> From: Zhu, Lingshan <lingshan.zhu@intel.com>
> > >> Sent: Wednesday, October 11, 2023 3:38 PM
> > >>
> > >>>> The system admin can choose to pass through only some of the devices
> > >>>> for nested guests, so passing the PF through to the L1 guest is not a good
> > >>>> idea, because there can be many devices still working for the host or L1.
> > >>> Possible. One size does not fit all.
> > >>> What I expressed are the most common scenarios that users care about.
> > >> don't block existing use cases, don't break userspace; nested is common.
> > > Nothing is broken, as the virtio spec does not have a single construct to
> > support migration.
> > > If nested is common, can you share performance numbers with a real virtio
> > device with/without 2-level nesting?
> > > I frankly don't know what they look like.
> > virtio devices support nesting; I mean don't break this use case. End users
> > accept the performance overhead of nesting, so that is not related to this topic.
> > 
> Can you show an example where virtio device nesting and live migration are already supported, where the device has _done_ the live migration?
> On what basis do you claim that the new feature of admin-command-based owner and member devices breaks something?
> Please don't use the verb "break".
> Your proposal is the first of its kind that supports migrating a nested device.
> This is why new patches for config registers or admin commands do not break anything existing.

Wording aside, new features should support as wide a variety of configs
as possible; if some config is not supported, there should be
a very good reason.

> > >
> > >>>>> In the second use case, where one wants to bind only one member device
> > >>>>> to one VM, I think the same plumbing can be extended to have another
> > >>>>> VF take
> > >>>> the role of the migration device instead of the owner device.
> > >>>>> I don't see a good way to pass through and also do in-band
> > >>>>> migration without a
> > >>>> lot of device-specific trap and emulation.
> > >>>>> I also don't know the CPU performance numbers with 3 levels of
> > >>>>> nested page
> > >>>> table translation, which to my understanding cannot be accelerated
> > >>>> by current CPUs.
> > >>>> host_PA->L1_QEMU_VA->L1_Guest_PA->L1_QEMU_VA->L2_Guest_PA and so on,
> > >>>> there can be performance overhead, but it can be done.
> > >>>>
> > >>>> So admin vq migration still doesn't work for nested; this is surely a blocker.
> > >>> In the specific case where member devices are located at different
> > >>> nesting levels, it does not.
> > >> So you got the point; this series should not be merged.
> > >>> What prevents you from having a peer VF take the role of migration driver?
> > >>> Basically, what I am proposing is: connect two VFs to the L1 guest.
> > >>> One VF is the
> > >> migration driver; one VF is passed through to the L2 guest.
> > >>> And same scheme works.
> > >> A peer VF? A management VF? That still breaks the existing use case. And how
> > >> do you transfer ownership of the L2 VF from the PF to the L1 VF?
> > > A peer management VF which services admin command (like PF).
> > > Ownership of admin command is delegated to the management VF.
> > interesting, do you plan to cook a patch implementing this?
> No. I am hoping that you can help to draft those patches for the nested case to work when one wants to hand off a single VM to a single nested guest VM.
> I will not be able to test any of the nested things or show their performance value either, as I don't see how the rest of the ecosystem can match up for nesting.
> Hence, your expertise in drafting an extension for nesting is desired.
> > Does that really make sense?
> > 
> > How do you transfer the ownership?
> An additional ownership delegation by a new admin command.
> > How do you maintain a different group?
> One to one assignment.
> > How do you isolate the groups?
> Not sure what that means. An explicit group is created and VFs are placed in this group.
> > How do you keep the guest or host secure?
> Please be specific. It's a very broad question when it comes to defining the interface.
> > How do you manage the overlaps?
> Overlaps between?
> > How do you implement the hardware to support that?
> Please consult your board designers. It is hard to say how to implement something in generic terms.
> > How do you change the PCI routing?
> Why would anything need to be changed in PCI routing?
> > > It does not break any existing deployments.
> > we are talking about nested, don't break nested
> The virtio spec for nested is not defined yet. Hence nothing is broken. Please avoid using the verb _break_.

Well, people are passing virtio devices through to nested guests.
Ideally such configs should, somehow, support nested hypervisors
migrating nested guests. Considering that e.g. write tracking
needs decent performance for live migration to deserve the name,
I doubt pulling data across PCIe with synchronous MMIO
operations with no pipelining will work well enough.
At the same time, if the maintenance cost at the spec level is
low and the feature is self-contained, then why not.
This one, however, is poking at existing registers with subtle semantics.
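For a rough sense of why synchronous MMIO polling scales poorly, here is a back-of-envelope sketch. All latency and bandwidth numbers are illustrative assumptions, not figures from the virtio spec or any real device:

```python
# Back-of-envelope model: scanning a device-side dirty-page bitmap over PCIe
# with one blocking MMIO read per register, vs. one pipelined DMA transfer.
# All constants below are assumptions chosen for illustration only.

MMIO_ROUND_TRIP_US = 1.0   # assumed latency of one synchronous MMIO read
DMA_BANDWIDTH_GBPS = 8.0   # assumed usable PCIe DMA bandwidth, GB/s
BITS_PER_READ = 32         # one 32-bit register read covers 32 pages

def sync_mmio_scan_ms(pages: int) -> float:
    """Time for one bitmap pass using a blocking MMIO read per 32 pages."""
    reads = (pages + BITS_PER_READ - 1) // BITS_PER_READ
    return reads * MMIO_ROUND_TRIP_US / 1000.0

def dma_scan_ms(pages: int) -> float:
    """Time to pull the same bitmap in one pipelined DMA transfer."""
    bitmap_bytes = (pages + 7) // 8
    return bitmap_bytes / (DMA_BANDWIDTH_GBPS * 1e9) * 1000.0

pages = 16 * 1024 * 1024 * 1024 // 4096  # a 16 GiB guest with 4 KiB pages
print(f"MMIO: {sync_mmio_scan_ms(pages):.1f} ms, DMA: {dma_scan_ms(pages):.3f} ms")
```

Under these assumed numbers, one un-pipelined MMIO pass over the bitmap takes on the order of a hundred milliseconds, while a single DMA transfer of the same data is well under a millisecond, which is the gap the paragraph above is worried about.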

