OASIS Mailing List Archives
virtio-comment message



Subject: Re: [virtio-comment] [PATCH v1 1/8] admin: Add theory of operation for device migration


On Thu, Oct 19, 2023 at 05:31:37AM +0000, Parav Pandit wrote:
> > How could we reach any agreement without an accurate definition of
> > "passthrough", which is key to understanding each other?
> 
> I replied a few times in past emails, but since those email threads are so long, it is easy to miss.
> 
> Passthrough definition:
> a. the virtio member device is mapped to the guest VM,
> b. only the PCI config space and MSI-X of a member device are intercepted by the hypervisor,
> c. the virtio config space, virtio cvqs, and data vqs of a member device are accessed directly by the guest VM without interception by the hypervisor.
> 
> (Why b? No grand reason; it is just how the hypervisors that the virtio member device integrates into work.)

I think it's a reasonable use case, though of course not at all the only
way to design a system. Some more ways:
2- intercept everything except data vqs and cvqs
	I think this is a reasonable way to build the system and has a bunch
	of advantages short term. The main disadvantage as compared to
	passthrough is the need to keep config space coherent with
	device operation - the way to do it is device specific and
	might get fragile.

3- intercept everything except data vqs
	Here we get another problem in isolating some vqs but not
	others. The problem becomes bigger in that you also
	need to forward control vq commands to the device.

Also, with both of the above options, there is a question of how
we communicate with the device to keep the control path
and data path in sync when the device's DMA is mapped to the guest.
Using PASIDs for isolation might work, but again, support is
far from universal, so we can't really assume it as
the only way in the spec.

Absent PASID, the popular approach seems to be a shadow vq, which basically gives us

4- software intercept for everything
       Clearly that's a lot of CPU overhead; I do not think we can focus on that
       as the only way in the spec, though some hypervisors might
       already have so much migration overhead that
       virtio can afford any amount of overhead and it won't be
       measurable.


I also note that some or all of the intercepts can come and go at any time. For
example, a common setup is that if the target VCPUs are running, then the IOMMU
will inject interrupts directly into the guest; if not, you generally trap
to the hypervisor. Similarly, a shadow vq might be active just temporarily.

Which approach is best? I feel that ideally virtio would find ways to support
them all rather than mandating one policy in the spec.

-- 
MST


