OASIS Mailing List Archives
virtio-comment message

Subject: Re: [virtio-comment] [PATCH V2 2/2] virtio: introduce STOP status bit

On 7/21/2021 1:48 PM, Stefan Hajnoczi wrote:
On Tue, Jul 20, 2021 at 04:09:27PM +0300, Max Gurtovoy wrote:
On 7/20/2021 3:57 PM, Stefan Hajnoczi wrote:
On Tue, Jul 20, 2021 at 03:27:00PM +0300, Max Gurtovoy wrote:
On 7/20/2021 6:02 AM, Jason Wang wrote:
On 2021/7/19 8:43 PM, Stefan Hajnoczi wrote:
On Fri, Jul 16, 2021 at 10:03:17AM +0800, Jason Wang wrote:
On 2021/7/15 6:01 PM, Stefan Hajnoczi wrote:
On Thu, Jul 15, 2021 at 09:35:13AM +0800, Jason Wang wrote:
On 2021/7/14 11:07 PM, Stefan Hajnoczi wrote:
On Wed, Jul 14, 2021 at 06:29:28PM +0800, Jason Wang wrote:
On 2021/7/14 5:53 PM, Stefan Hajnoczi wrote:
On Tue, Jul 13, 2021 at 08:16:35PM +0800, Jason Wang wrote:
On 2021/7/13 6:00 PM, Stefan Hajnoczi wrote:
On Tue, Jul 13, 2021 at 11:27:03AM +0800, Jason Wang wrote:
On 2021/7/12 5:57 PM, Stefan Hajnoczi wrote:
On Mon, Jul 12, 2021 at 12:00:39PM +0800, Jason Wang wrote:
On 2021/7/11 4:36 PM, Michael S. Tsirkin wrote:
On Fri, Jul 09, 2021 at 07:23:33PM +0200, Eugenio Perez Martin wrote:
basically the difference between vhost/vDPA's selective passthrough approach and VFIO's full passthrough approach.
We can't do VFIO full passthrough for migration anyway; some kind of mdev is required, but that duplicates the current vp_vdpa driver.
I'm not sure that's true. Generic VFIO PCI migration can probably be achieved without mdev:
1. Define a migration PCI Capability that indicates support for implementing the migration interface in hardware instead of an mdev driver.
So I think it still depends on the driver to implement the migration state, which is vendor specific.
The current VFIO migration interface depends on a device-specific
software mdev driver but here I'm showing that the physical device can
implement the migration interface so that no device-specific driver code
is needed.
This is not what I read from the patch:

  * device_state: (read/write)
  *      - The user application writes to this field to inform the vendor
  *        driver about the device state to be transitioned to.
  *      - The vendor driver should take the necessary actions to change the
  *        device state. After successful transition to a given state, the
  *        vendor driver should return success on write(device_state, state)
  *        system call. If the device state transition fails, the vendor
  *        driver should return an appropriate -errno for the fault condition.

The vendor driver needs to mediate between the uAPI and the actual device.
We've been building an infrastructure for VFIO PCI devices over the last few months.

Hopefully it will be merged into kernel 5.15.
Do you have links to patch series or a brief description of the VFIO API
features that are on the roadmap?
We divided it into a few patchsets.

The entire series can be found at:


We'll first add support for mlx5 device suspend/resume (ConnectX-6 and

The driver is ready in the series above.
I looked briefly and it seems to implement the existing
VFIO_REGION_TYPE_MIGRATION API for mlx5 devices? I thought
"infrastructure for VFIO PCI devices" meant you were adding new
VFIO/mdev migration APIs.

No, why do we need a new API?

We created an infrastructure for vendors to develop vendor-specific/protocol-specific vfio_pci drivers.

These drivers can add support for migration.

The next driver to be developed in our context is virtio_vfio_pci.

And for that we probably need a standard. I'd prefer that we not develop an mlx_virtio_vfio_pci driver for NVIDIA virtio PCI devices, but instead have a standard way to do migration.

