virtio-comment message


Subject: Re: [virtio-comment] [PATCH V2 2/2] virtio: introduce STOP status bit

On 2021/7/15 5:26 PM, Stefan Hajnoczi wrote:
On Thu, Jul 15, 2021 at 09:38:55AM +0800, Jason Wang wrote:
On 2021/7/15 12:22 PM, Max Gurtovoy wrote:
On 7/14/2021 6:07 PM, Stefan Hajnoczi wrote:
It requires much more work than the simple virtqueue interface (the main
issue is that the function is not self-contained in a single function):

1) How to interact with the existing device status state machine?
2) How to make it work in a nested environment?
3) How to migrate the PF?
4) Do we need to allow more control than just stop/freeze of the device
in the admin virtqueue? If yes, how do we handle concurrent access from
the PF and VF?
5) How is it expected to work with non-PCI virtio devices?
I guess your device splitting proposal addresses some of these things?

Max probably has the most to say about these points.

If you want more input I can try to answer too, but I personally am not
developing devices that need this right now, so I might not be the best
person to propose solutions.
I think we mentioned this in the past and agreed that the only common
entity between my solution for virtio VF migration and this proposal is
the new admin control queue.

I can prepare a draft for this.

In our solution the PF will manage the migration process for its VFs
using the PF admin queue. The PF is not migratable.

That limits the use cases.

I don't know who is using nested environments in production, so I don't
know whether it is worth talking about.

There should be plenty of users for the nested case.
Yes, nested virtualization is becoming available in clouds, etc. I think
nested virtualization support should be part of the design.

But if you would like to implement it for testing, no problem. The VF
at level n would probably be seen as a PF at level n+1, so it can
manage the migration process for its nested VFs.

The PF dependency makes the design almost impossible to use in a nested environment.
I'm not sure I understood Max's example, but first I want to check I
understand yours:

A physical PF is passed through to an L1 guest. L2 guests are assigned
VFs created by the L1 guest from the PF.

Now we want to live migrate the L1 guest to another host. We need to
migrate the PF and its VFs are automatically included since there is no
migration from the L2 perspective?

Yes, and I believe the more common case is this: the PF belongs to L0,
and we want to migrate an L2 guest.

This can hardly work in the current design.

The reason is that the function is not self-contained in the VF.

For question 5), which non-PCI devices are interested in live migration?

Why not? Virtio supports transports other than PCI (CCW, MMIO).
Yes, VIRTIO isn't tied to PCI and the migration functionality should be
mappable to other transports.

Luckily the admin virtqueue approach maps naturally to other transports.
What requires more thought is how the admin virtqueue is
enumerated/managed on those other transports.

So the admin virtqueue is really one way to go, but we can't mandate it in the spec. Sometimes it would be hard to define where the admin virtqueue needs to be located, considering the transport may lack the concept of something like a PF.

To me the most valuable part of the admin virtqueue is that it sits in the PF (or management device), where the DMA is naturally isolated.
