Subject: Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication


On Tue, May 23, 2017 at 07:09:05PM +0800, Wei Wang wrote:
> On 05/20/2017 04:44 AM, Michael S. Tsirkin wrote:
> > On Fri, May 19, 2017 at 05:00:37PM +0800, Wei Wang wrote:
> > > > > That being said, we compared against vhost-user, instead of
> > > > > vhost_net, because vhost-user is the one used in NFV, which we
> > > > > think is a major use case for vhost-pci.
> > > > If this is true, why not draft a pmd driver instead of a kernel one?
> > > Yes, that's right. There are actually two directions for the vhost-pci
> > > driver implementation - a kernel driver and a dpdk pmd. The QEMU-side
> > > device patches were posted first for discussion, because once the device
> > > part is ready, we will be able to have the related team work on the pmd
> > > driver as well. As usual, the pmd driver would give much better
> > > throughput.
> > For a PMD to work though, the protocol will need to support vIOMMU.
> > I'm not asking you to add it right now, since it's still work in
> > progress for vhost-user at this point, but it is something you will
> > have to keep in mind. Further, reviewing the vhost-user iommu patches
> > might be a good idea for you.
> > 
> 
> For the dpdk pmd case, I'm not sure vIOMMU is necessary - since vhost-pci
> only needs to share a piece of memory between the two VMs, we could send
> the info for just that piece of memory, instead of sending the entire VM's
> memory and using vIOMMU to restrict the accessible portion.
> 
> Best,
> Wei
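
As an aside on the vIOMMU point above: the vhost-user iommu patches
referenced earlier let the backend learn translations one mapping at a
time, via messages shaped like the kernel's vhost IOTLB entry. A minimal
sketch of that layout, as found in linux/vhost.h, shown here only to
illustrate the granularity a vhost-pci protocol would need to match:

#include <linux/types.h>

/* Per-mapping update: one IOVA range, its backing address, and the
 * permitted access - the unit the vhost-user iommu work operates on. */
struct vhost_iotlb_msg {
	__u64 iova;	/* I/O virtual address as seen by the device */
	__u64 size;	/* length of the mapping in bytes */
	__u64 uaddr;	/* backing userspace virtual address */
	__u8  perm;	/* VHOST_ACCESS_RO / _WO / _RW */
	__u8  type;	/* VHOST_IOTLB_UPDATE, INVALIDATE, MISS, ... */
};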
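
And for the single-region idea: sharing just one piece of memory could be
as small as one descriptor plus an fd, loosely modeled on vhost-user's
VhostUserMemoryRegion. This is only a sketch with illustrative field
names, not something from the posted vhost-pci patches:

#include <stdint.h>

/* Hypothetical single-region share message: VM2 maps only this region,
 * instead of receiving every memory region of VM1. */
struct region_share_msg {
	uint64_t guest_phys_addr;	/* GPA of the shared region in VM1 */
	uint64_t memory_size;		/* region size in bytes */
	uint64_t userspace_addr;	/* QEMU vaddr backing the region */
	uint64_t mmap_offset;		/* offset into the accompanying fd */
};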

I am not sure I understand what you are saying here. My understanding is
that at the moment, with VM1 using virtio and VM2 using vhost-pci, all of
VM1's memory is exposed to VM2. If VM1 is using a userspace driver, it
needs a way for the kernel to limit the memory regions that are accessible
to the device. At the moment this is done by VFIO, by means of interacting
with a vIOMMU.
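
To make that concrete, here is a minimal sketch of the VFIO path: the
userspace driver maps exactly one buffer for device DMA through VFIO's
type1 interface, and anything it does not map stays invisible to the
device. The group number and IOVA below are made up for illustration:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/42", O_RDWR);	/* illustrative group */

	/* Attach the group to the container, pick the type1 IOMMU model. */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* The one buffer the device is allowed to reach. */
	size_t sz = 4096;
	void *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	struct vfio_iommu_type1_dma_map map;
	memset(&map, 0, sizeof(map));
	map.argsz = sizeof(map);
	map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
	map.vaddr = (uintptr_t)buf;	/* process virtual address */
	map.iova  = 0x100000;		/* address the device will use */
	map.size  = sz;

	if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map))
		perror("VFIO_IOMMU_MAP_DMA");
	return 0;
}

Inside a guest, programming the vIOMMU this way is what gives the kernel
that control; without a vIOMMU there is nothing for the map to program,
hence the protocol requirement above.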

-- 
MST

