Subject: Re: [virtio-comment] Re: [PATCH v1 3/8] device-context: Define the device context fields for device migration
On 10/12/2023 7:37 PM, Parav Pandit wrote:
> From: Zhu, Lingshan <lingshan.zhu@intel.com>
> Sent: Thursday, October 12, 2023 4:40 PM
>> On 10/12/2023 6:09 PM, Parav Pandit wrote:
>>> From: Zhu, Lingshan <lingshan.zhu@intel.com>
>>> Sent: Thursday, October 12, 2023 3:30 PM
>>>> On 10/11/2023 6:54 PM, Parav Pandit wrote:
>>>>> From: Zhu, Lingshan <lingshan.zhu@intel.com>
>>>>> Sent: Wednesday, October 11, 2023 3:38 PM
>>>>>> The system admin can choose to passthrough only some of the devices to nested guests, so passing the PF through to the L1 guest is not a good idea, because there can be many devices still working for the host or for L1.
>>>>> Possible. One size does not fit all. What I expressed are the most common scenarios that users care about.
>>>> don't block existing use cases, don't break the userspace; nested is common.
>>> Nothing is broken, as the virtio spec does not have any construct to support migration. If nested is common, can you share the performance numbers for a real virtio device with/without 2-level nesting? I frankly don't know what they look like.
>> virtio devices support nested; I mean don't break this use case. And end users accept the performance overhead in nested; this is not related to this topic.
>>> Can you show an example of virtio device nesting and live migration already supported, where the device has _done_ the live migration, due to which you claim that the new feature of admin command-based owner and member devices breaks something?
>> current virtio/kvm/qemu support nested.
> Sure, two of the 3 components are not part of the virtio spec. Hence, they are not broken.
you want virtio to work with them, right? don't break this.
> Please don't use the verb "break". Your proposal is the first of its kind that supports migrating a nested device. This is why the new patches of config registers or admin commands do not break anything existing.

if your proposal doesn't support nested, you break nested use cases.

>>> In the second use case, where one wants to bind only one member device to one VM, I think the same plumbing can be extended to have another VF take the role of the migration device instead of the owner device. I don't see a good way to passthrough and also do in-band migration without a lot of device-specific trap and emulation. I also don't know the CPU performance numbers with 3 levels of nested page table translation, which to my understanding cannot be accelerated by current CPUs.
>> host_PA->L1_QEMU_VA->L1_Guest_PA->L1_QEMU_VA->L2_Guest_PA and so on; there can be performance overhead, but it can be done. So admin vq migration still doesn't work for nested; this is surely a blocker.
> In the specific case of member devices located at different nest levels, it does not.

so you got the point, so this series should not be merged.

>>>>> What prevents you from having a peer VF do the role of the migration driver? Basically, what I am proposing is: connect two VFs to the L1 guest. One VF is the migration driver, one VF is passthrough to the L2 guest. And the same scheme works.
>>>> A peer VF? A management VF? still breaks the existing use case. and how do you transfer ownership of the L2 VF from the PF to the L1 VF?
>>> A peer management VF which services admin commands (like the PF). Ownership of admin commands is delegated to the management VF.
>> interesting, do you plan to cook a patch implementing this?
> No.
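An editorial aside for readers following the translation argument above: the chain of address hops can be sketched as a toy model in which each nesting level adds one more table walk. This is only an illustration (page-granular dicts, invented page numbers), not real EPT/NPT semantics:

```python
# Toy model of multi-level nested address translation: each nesting
# level adds one more table walk, so an L2 guest access composes
# L2 page -> L1 page -> host page. All mappings are illustrative.

PAGE = 4096

def make_map(pairs):
    """Build a page-granular translation table from (src_page, dst_page) pairs."""
    return {src: dst for src, dst in pairs}

def translate(addr, *levels):
    """Walk 'addr' through each translation level in order, counting walks."""
    walks = 0
    page, off = addr // PAGE, addr % PAGE
    for table in levels:
        page = table[page]   # a missing entry would be a "fault" (KeyError)
        walks += 1
    return page * PAGE + off, walks

# L2 guest page 7 -> L1 page 3 -> host page 12
l2_to_l1 = make_map([(7, 3)])
l1_to_host = make_map([(3, 12)])

host_addr, walks = translate(7 * PAGE + 5, l2_to_l1, l1_to_host)
print(host_addr, walks)   # 49157 2  (12*4096+5, two walks)
```

The point of contention maps onto the `walks` counter: every extra nesting level lengthens the walk, which is the overhead Lingshan concedes exists but considers acceptable.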
> I am hoping that you can help to draft those patches for the nested case to work, when one wants to hand off a single VM to a single nested guest VM.
> I will not be able to test any of the nested things or show their performance value either, as I don't see how the rest of the ecosystem can match up for nested. Hence, your expertise in drafting the extension for nested is desired.
> The answer to your below question about patch drafting is here. If you can help to extend it, that will be good.
where are the draft patches?
>>>> Does it really make sense? How do you transfer the ownership?
>>> An additional ownership delegation by a new admin command.
>> if you think this can work, do you want to cook a patch implementing this before you submit this live migration series?
> I answered this already above.
talk is cheap, show me your patch
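An editorial aside: no such delegation command exists in the virtio spec today, which is exactly what is being argued. Purely to make the proposal concrete, here is a hypothetical wire encoding; the opcode, field names, and sizes are all invented for illustration:

```python
# Hypothetical "ownership delegation" admin command encoding.
# NOT part of the virtio spec -- opcode and layout are invented here
# only to give the discussion a concrete shape.
import struct

DELEGATE_OPCODE = 0x8000   # invented opcode for illustration

def encode_delegate_cmd(group_member_id, delegate_to_vf):
    """Pack: opcode (le16), reserved (le16), member id (le64), target VF (le64)."""
    return struct.pack("<HHQQ", DELEGATE_OPCODE, 0,
                       group_member_id, delegate_to_vf)

# Delegate admin ownership of group member 5 to (hypothetical) management VF 1:
cmd = encode_delegate_cmd(group_member_id=5, delegate_to_vf=1)
print(len(cmd))   # 20 bytes
```

Any real proposal would of course have to define this in the spec's group administration command framework, which is the gap Lingshan's "show me your patch" points at.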
>> How do you maintain a different group?
> One-to-one assignment.

same as above

>>>> How do you isolate the groups?
>>> Not sure what that means. An explicit group is created and VFs are placed in this group.
>> VF resources are on the PF, right?
> Which resource?

Before jumping to resources, maybe you want to answer "group isolation"?

>>>> How do you keep the guest or host secure?
>>> Please be specific. It is a very broad question when it comes to defining the interface.
>> without isolation, can it be attacked?
> What isolation are you talking about? I am suggesting that one VF, as a dummy PF, is given the role of servicing admin commands.

>>>> How do you manage the overlaps?
>>> Overlaps between?
>> the host PF and the L1 VF
> The L1 VF works at its own level. The host PF works at its own level. This is true nesting.

>>>> How do you implement the hardware to support that?
>>> Please consult your board designers. It is hard to say how to implement something generically.
>> so you don't have an idea :)
> Right, I do not have an idea for Intel boards. I was suggesting a management VF that can service the admin commands.

>> How do you change the PCI routing?
> Why does anything need to change in PCI routing?

>> do you place the PF and the management VF in an ACL group?
> An ACL group at which layer?

>> So does the L1 management VF's member device belong to the PF physically?
> Yes.
Answer all the questions above; if you think a management VF can work, please show me your patch.
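An editorial aside on the "group isolation" question above: one way to read the dispute is as a toy access-control model, sketched below. The PF owns all VFs, and a "management VF" (Parav's suggestion, not defined in the spec) would be granted admin rights over an explicit subset only. All names and the delegation rule are assumptions made for illustration:

```python
# Toy model of the debated group hierarchy: isolation here simply means
# a management VF cannot issue admin commands for members outside the
# subset delegated to it. Not spec behavior -- an illustrative sketch.

class Group:
    def __init__(self, owner, members):
        self.owner = owner
        self.members = set(members)

    def can_admin(self, requester, member):
        # Only the group owner may administer, and only its own members.
        return requester == self.owner and member in self.members

pf_group = Group(owner="PF", members={"VF1", "VF2", "VF3", "VF4"})
# Hypothetically delegate VF3/VF4 to management VF2 for the L1 guest:
l1_group = Group(owner="VF2", members={"VF3", "VF4"})

print(l1_group.can_admin("VF2", "VF3"))   # True
print(l1_group.can_admin("VF2", "VF1"))   # False: outside the delegated subset
```

The unanswered questions in the thread (overlap with the host PF, PCI routing, ACL placement) are exactly the parts this toy model glosses over.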
>>>>> It does not break any existing deployments.
>>>> we are talking about nested, don't break nested
>>> The virtio spec for nested is not defined yet. Hence nothing is broken. Please avoid using the verb _break_.
>> virtio nested has worked for many years
> I replied: your "break" comment is not applicable to the virtio spec, nor does it apply to any existing software you listed. As Michael said, software-based nesting is used. See if actual HW-based devices can implement it or not. Many components of the CPU cannot do N-level nesting either, but maybe virtio can. I don't know how yet.
two facts: 1. virtio has worked for nested for years 2. your admin vq LM solution does not work for nested