virtio-dev message



Subject: Re: [virtio-dev] Re: [PATCH v1 6/6] vhost-user: add VFIO based accelerators support


On Fri, Jan 26, 2018 at 11:41:27AM +0800, Jason Wang wrote:
> On Jan 26, 2018 07:59, Michael S. Tsirkin wrote:
> > > The virtual IOMMU isn't supported by the accelerators for now,
> > > because vhost-user currently lacks an efficient way to share
> > > the IOMMU table in the VM with the vhost backend. That's why the
> > > software implementation of virtual IOMMU support in the
> > > vhost-user backend can't support dynamic mapping well.
> > What exactly is meant by that? vIOMMU seems to work for people.
> > It's not that fast if you change mappings all the time,
> > but e.g. dpdk within the guest doesn't do that.
> 
> Yes, the software implementation supports dynamic mapping for sure. I
> think the point is that the current vhost-user backend cannot program
> the hardware IOMMU, so it cannot let a hardware accelerator work
> together with the software vIOMMU.

The vhost-user backend can program the hardware IOMMU.
Currently the vhost-user backend (or more precisely, the
vDPA driver in the vhost-user backend) uses the memory
table (delivered by the VHOST_USER_SET_MEM_TABLE message)
to program the IOMMU via vfio, and that's why accelerators
can use the GPA (guest physical address) in descriptors
directly.

Theoretically, we could use the IOVA mapping info (delivered
by the VHOST_USER_IOTLB_MSG message) to program the IOMMU,
and accelerators would then be able to use IOVA. But the
problem is that in vhost-user QEMU won't push all the IOVA
mappings to the backend directly; the backend needs to ask
for that info when it encounters a new IOVA. Such a design
and implementation won't work well for dynamic mappings
anyway and can't be supported by hardware accelerators.

> I think
> that's another call to implement the offloaded path inside qemu, which
> has complete support for VFIO co-operating with the vIOMMU.

Yes, that's exactly what we want. After revisiting the
last paragraph in the commit message, I found it's not
really accurate. The practicality of supporting dynamic
mappings is a common issue for QEMU; it also exists for
vfio (hw/vfio in QEMU). If QEMU needs to trap all the
map/unmap events, the data path performance can't be
high. If we want to thoroughly fix this issue, especially
for vfio (hw/vfio in QEMU), we need to have the offload
path Jason mentioned in QEMU. And I think accelerators
could use it too.

Best regards,
Tiwei Bie

> 
> Thanks
> 
> > 
> > > Once this problem is solved
> > > in vhost-user, virtual IOMMU can be supported by accelerators
> > > too, and the IOMMU feature bit checking in this patch can be
> > > removed.
> > Given it works with software backends right now, I suspect
> > this will be up to you guys to address.
> > 
> 

