Subject: Re: [virtio-comment] [PATCH v1 1/8] admin: Add theory of operation for device migration


On Fri, Oct 13, 2023 at 11:41:21AM +0000, Parav Pandit wrote:
> 
> > From: virtio-comment@lists.oasis-open.org <virtio-comment@lists.oasis-
> > open.org> On Behalf Of Michael S. Tsirkin
> > Sent: Friday, October 13, 2023 4:56 PM
> 
> > > This is the question you never answer, even though I keep asking.
> > 
> > It is, fundamentally, a question of supporting as many architectures as we can
> > as opposed to being opinionated.
> > 
> > On one end of the spectrum, the device is completely under guest control
> > and anything external has to trap to the hypervisor.
> > None of the existing implementations are there; at the least, the PCI
> > config space is typically under hypervisor control.
> > What Parav calls "passthrough" is, I think, built along these lines:
> > memory and interrupts go straight to the guest, while config space is
> > trapped and emulated.
> > On the other end of the spectrum is trapping everything in the hypervisor.
> > Your "2 to 3 registers" proposal is not there either, but it is, I think,
> > closer to that end of the arc.
> > 
> > Any new feature should ideally be a building block supporting as many
> > approaches as possible. Fundamentally that requires a level of indirection,
> > as usual :) Having two completely distinct interfaces for that straight off
> > the bat? Gimme a break.
> 
> There are two approaches.

I know of many more than two.
There are as many approaches as there are hypervisor implementations.
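
To illustrate the point about indirection, here is a minimal C sketch
(every identifier invented for this example, not proposed spec text):
one guest-access path consults a per-region policy, so each hypervisor
picks its own point on the curve without needing a new interface.

/*
 * Minimal sketch, not spec text: all identifiers here are invented
 * for illustration. One guest-access path consults a per-region
 * policy, so hypervisors at different points on the passthrough/
 * emulation curve reuse the same machinery.
 */
#include <stdint.h>
#include <stdio.h>

enum region_policy {
	POLICY_PASSTHROUGH,	/* access goes straight to the device */
	POLICY_TRAP_EMULATE,	/* access exits to the hypervisor */
};

/* Each hypervisor fills this in differently; that is the point. */
static enum region_policy policy_for(uint64_t gpa)
{
	return gpa < 0x1000 ? POLICY_TRAP_EMULATE	/* e.g. PCI config */
			    : POLICY_PASSTHROUGH;	/* e.g. vq notify */
}

static void device_mmio_write(uint64_t gpa, uint32_t val)
{
	printf("forwarded to device: gpa=0x%llx val=0x%x\n",
	       (unsigned long long)gpa, val);
}

static void emulate_write(uint64_t gpa, uint32_t val)
{
	printf("emulated in VMM:     gpa=0x%llx val=0x%x\n",
	       (unsigned long long)gpa, val);
}

/* The single entry point: the level of indirection lives here. */
static void guest_mmio_write(uint64_t gpa, uint32_t val)
{
	if (policy_for(gpa) == POLICY_PASSTHROUGH)
		device_mmio_write(gpa, val);
	else
		emulate_write(gpa, val);
}

int main(void)
{
	guest_mmio_write(0x0004, 0x1);	/* trapped region */
	guest_mmio_write(0x2000, 0x1);	/* passthrough region */
	return 0;
}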

> 1. Pass through a virtio member device to the guest
> Only the PCI config space and the MSI-X table are trapped.
> The MSI-X table is trapped due to a CPU/platform limitation; I will not go into that detail for the moment.
> All the rest of the virtio member device interface is passed through to the guest.
> This includes:
> (a) the virtio common config space
> (b) the virtio device-specific config space
> (c) the cvq, if present
> (d) the I/O vqs and any other vqs
> (e) any shared memory
> (f) any new constructs that arise in coming years
> 
> If one wants nesting, the member device should support nesting, and it will still be able to pass through to the next level.
> To my knowledge, most CPUs support a single level of nesting, that is, VMM and VM.
> Any deeper nesting involves a good amount of emulation of privileged operations.
> 
> If virtio is to be even more efficient than the rest of the platform, I propose that the member device support nesting, so that the VMM->VM_L1 and VM_L1->VM_L2 constructs are the same.
> This gives the best of both worlds: nesting support and passthrough.
> And since it is a layered approach, it naturally works for the nested case.
> 
> 2. Data path accelerated in the device, everything else emulated
> This method makes sense when the underlying device is not a native virtio device.
> 
> But if, for some reason, one wants to build the infrastructure anyway, we can attempt to find the common pieces between methods #1 and #2.

We can't just build a new interface each time someone wants a slightly
different point on the passthrough/emulation curve.
I feel this is an important point for TC members to agree on.
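
To make the "common pieces" concrete, a hedged sketch in the same vein
(C, invented names; the region list follows items (a)-(f) above): the
two methods become two fillings of one per-region policy table, and it
is the table, not two separate interfaces, that the spec would need to
enable.

/*
 * Sketch only, all names invented: the two methods expressed as two
 * fillings of one per-region policy table. The common piece is the
 * table itself, not either particular filling.
 */
#include <stdio.h>

enum policy { PASSTHROUGH, TRAP_EMULATE };

struct region {
	const char *name;
	enum policy p;
};

/* Method #1: passthrough member device; only the PCI config space
 * and the MSI-X table trap to the hypervisor. */
static const struct region method1[] = {
	{ "pci config space",      TRAP_EMULATE },
	{ "msix table",            TRAP_EMULATE },
	{ "virtio common config",  PASSTHROUGH  },
	{ "device config space",   PASSTHROUGH  },
	{ "cvq",                   PASSTHROUGH  },
	{ "io vqs",                PASSTHROUGH  },
	{ "shared memory",         PASSTHROUGH  },
};

/* Method #2: only the data path is accelerated; the rest emulated. */
static const struct region method2[] = {
	{ "pci config space",      TRAP_EMULATE },
	{ "msix table",            TRAP_EMULATE },
	{ "virtio common config",  TRAP_EMULATE },
	{ "device config space",   TRAP_EMULATE },
	{ "cvq",                   TRAP_EMULATE },
	{ "io vqs",                PASSTHROUGH  },
	{ "shared memory",         TRAP_EMULATE },
};

static void dump(const char *title, const struct region *r, size_t n)
{
	printf("%s\n", title);
	for (size_t i = 0; i < n; i++)
		printf("  %-22s %s\n", r[i].name,
		       r[i].p == PASSTHROUGH ? "passthrough" : "trap+emulate");
}

int main(void)
{
	dump("method #1 (passthrough):",
	     method1, sizeof(method1) / sizeof(method1[0]));
	dump("method #2 (data path only):",
	     method2, sizeof(method2) / sizeof(method2[0]));
	return 0;
}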

-- 
MST


