Subject: Re: [virtio-dev] [PATCH v1 6/6] vhost-user: add VFIO based accelerators support


On Mon, Feb 05, 2018 at 06:47:51PM +0100, Paolo Bonzini wrote:
> On 25/01/2018 05:03, Tiwei Bie wrote:
> > The key difference from PCI passthrough is that, in this case, only
> > the data path of the device (e.g. DMA ring, notify region and
> > queue interrupt) is passed through to the VM, while the device
> > control path (e.g. PCI configuration space and MMIO regions) is
> > still defined and emulated by QEMU.
> > 
> > The benefits of keeping virtio device emulation in QEMU compared
> > with virtio device PCI passthrough include (but are not limited to):
> > 
> > - consistent device interface for guest OS in the VM;
> > - max flexibility on the hardware (i.e. the accelerators) design;
> > - leveraging the existing virtio live-migration framework;
> > 
> > The virtual IOMMU isn't supported by the accelerators for now,
> > because vhost-user currently lacks an efficient way to share the
> > VM's IOMMU tables with the vhost backend. That is also why the
> > software implementation of virtual IOMMU support in the vhost-user
> > backend can't support dynamic mapping well. Once this problem is
> > solved in vhost-user, the virtual IOMMU can be supported by
> > accelerators too, and the IOMMU feature bit check in this patch
> > can be removed.
> 
> I don't understand why this would use vhost-user.  vhost-user is meant
> for connecting to e.g. a user-space switch that is shared between
> multiple virtual machines.

Yeah, you're right!

The commit log you quoted is talking about the benefits
of vDPA (i.e. passing through only the data path), which
isn't specific to vhost-user.

The usage of vhost-user you described is exactly why we
want to use vhost-user. In our case, the accelerator for
each VM is a PCI VF device, and the PCI card carries the
vswitch logic (the VFs are the switch ports that connect
the VMs). So the card is a vswitch accelerator shared
between multiple VMs. If we extend vhost-user, QEMU can
keep using the vhost-user interface to connect to the
user-space switch, which now has an optional accelerator.
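
From the guest-facing side nothing changes: QEMU is still launched
with an ordinary vhost-user netdev pointing at the backend's socket.
A rough sketch (the socket path, sizes and IDs below are just
placeholders, not taken from this patch series):

  qemu-system-x86_64 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=chr0,path=/tmp/vhost-user0.sock \
      -netdev type=vhost-user,id=net0,chardev=chr0 \
      -device virtio-net-pci,netdev=net0

Whether the backend behind that socket forwards packets purely in
software or hands the rings over to a VF is invisible at this level.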

More details can be found in the "Why extend vhost-user
for vDPA" section of the cover letter:

----- START -----

Why extend vhost-user for vDPA
==============================

We have already implemented various virtual switches (e.g. OVS-DPDK)
based on vhost-user for VMs in the cloud. They are purely software
running on CPU cores. When we have accelerators for such NFVi
applications, it would be ideal if the applications could keep using
the original interface (i.e. vhost-user netdev) with QEMU, with the
infrastructure able to decide when and how to switch between CPU and
accelerators within that interface. The switching (i.e. between CPU
and accelerators) can then be done flexibly and quickly inside the
applications.

----- END -----

I'll try to add this information to the commit log. Thanks!

Best regards,
Tiwei Bie

> 
> In this case, there would be one VFIO device per VM (because different
> VMs must be in different VFIO groups). So I don't understand the
> benefit of configuring the control path of the VFIO device outside QEMU.
> 
> Paolo

