OASIS Mailing List Archives
virtio-comment message



Subject: RE: [virtio-comment] [PATCH 5/5] virtio-pci: implement VIRTIO_F_QUEUE_STATE


> From: Zhu, Lingshan <lingshan.zhu@intel.com>
> Sent: Tuesday, September 12, 2023 1:06 PM
> 
> On 9/12/2023 2:52 PM, Parav Pandit wrote:
> >> From: Zhu, Lingshan <lingshan.zhu@intel.com>
> >> Sent: Tuesday, September 12, 2023 12:13 PM
> >>
> >> Why is P2P needed for Live Migration?
> > A peer device may be accessing the virtio device. Hence, all the devices must
> first be stopped, as in [1], while still allowing them to accept driver
> notifications from the peer device.
> > Once all the devices are stopped, each device is then frozen so that it makes
> no further device context updates. At that point the final device context can be
> read by the owner driver.
> Is it beyond the spec? Is this an Nvidia-specific use case unrelated to virtio
> live migration?
Not at all Nvidia specific.
And not at all beyond the specification.
PCI is by far the most common transport for virtio.
Hence, the spec proposed in [1] covers it.

It is the baseline implementation in a leading OS such as the Linux kernel.

A decade-mature stack like VFIO recommends P2P support as a baseline; without it, migration of multiple devices can fail, because the hypervisor has no knowledge of whether two devices are interacting.

[1] https://lists.oasis-open.org/archives/virtio-comment/202309/msg00071.html
