virtio-dev message



Subject: Re: [virtio-dev] Re: [PATCH v2 0/6] Extend vhost-user to support VFIO based accelerators


On Wed, Mar 28, 2018 at 06:33:01PM +0300, Michael S. Tsirkin wrote:
> On Wed, Mar 28, 2018 at 08:24:07PM +0800, Tiwei Bie wrote:
> > > > Update notes
> > > > ============
> > > > 
> > > > IOMMU feature bit check is removed in this version, because:
> > > > 
> > > > The IOMMU feature is negotiable: when an accelerator is used and
> > > > it doesn't support a virtual IOMMU, its driver simply won't provide
> > > > this feature bit when the vhost library queries its features. And if
> > > > it does support the virtual IOMMU, its driver can provide this
> > > > feature bit. So it's not reasonable to add this limitation in this
> > > > patch set.
> > > 
> > > Fair enough. Still:
> > > Can hardware on Intel platforms actually support IOTLB requests?
> > > Don't you need to add support for vIOMMU shadowing instead?
> > > 
> > 
> > For the hardware I have, I guess it can't for now.
> 
> So VFIO in QEMU has support for vIOMMU shadowing.
> Can you use that somehow?

Yeah, I guess we can use it in some way. Actually, supporting
vIOMMU is quite an interesting feature. It would provide
better security, and in the hardware backend case there
would be no performance penalty with static mappings once
the backend has received all the mappings. I think it could
be done as a separate work. Based on your previous suggestion
in this thread, I have split the guest notification offload
and the host notification offload (I'll send the new version
very soon), and I plan to let this patch set focus on fixing
the most critical performance issue: the host notification
offload. With this fix, using a hardware backend with
vhost-user could get a very big performance boost and become
much more practical. So maybe we can focus on fixing this
critical performance issue first. What do you think?
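
By the way, regarding the feature negotiation described in the
update notes above: with DPDK's vhost library, a backend driver
that can't handle a virtual IOMMU could simply mask the bit out
of what the library reports. Just a rough sketch (the socket path
and setup function here are hypothetical):

#include <rte_vhost.h>

#ifndef VIRTIO_F_IOMMU_PLATFORM
#define VIRTIO_F_IOMMU_PLATFORM 33
#endif

/* Hypothetical backend setup: a driver that can't handle a virtual
 * IOMMU masks VIRTIO_F_IOMMU_PLATFORM out of the features the vhost
 * library advertises, so the feature is never negotiated. */
static int setup_backend(const char *sock_path)
{
        if (rte_vhost_driver_register(sock_path, 0) < 0)
                return -1;
        return rte_vhost_driver_disable_features(sock_path,
                        1ULL << VIRTIO_F_IOMMU_PLATFORM);
}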

> 
> Ability to run DPDK within the guest seems important.

I think vIOMMU isn't a must for running DPDK in the guest.
For Linux guests we also have igb_uio and uio_pci_generic to
run DPDK, and for FreeBSD guests we have nic_uio. They don't
need a vIOMMU, and they can offer the best performance.
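
For example, once a device is bound to igb_uio or uio_pci_generic,
a DPDK application just initializes the EAL and the ports show up;
no vIOMMU is involved. Roughly (a sketch, assuming a recent DPDK
where rte_eth_dev_count_avail() is available):

#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
        /* The EAL scans PCI devices bound to igb_uio/uio_pci_generic
         * (or nic_uio on FreeBSD) and takes them over directly. */
        if (rte_eal_init(argc, argv) < 0)
                rte_exit(EXIT_FAILURE, "EAL init failed\n");

        printf("%u ports available\n", rte_eth_dev_count_avail());
        return 0;
}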

Best regards,
Tiwei Bie

> 
> -- 
> MST
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> 

