




Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication

On 2017-05-19 23:33, Stefan Hajnoczi wrote:
On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
On 2017-05-18 11:03, Wei Wang wrote:
On 05/17/2017 02:22 PM, Jason Wang wrote:
On 2017-05-17 14:16, Jason Wang wrote:
On 2017-05-16 15:12, Wei Wang wrote:

Care to post the driver codes too?

OK. It may take some time to clean up the driver code before
posting it. In the meantime, you can take a look at the
draft in the repo here:

Interesting - it looks like there's one copy on the tx side. We used
to have zerocopy support for tun for VM2VM traffic. Could you
please try to compare it with your vhost-pci-net by:

We can analyze the whole data path - from VM1's network stack
sending packets to VM2's network stack receiving them. The number
of copies is actually the same for both.
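
For readers of the archive who are unfamiliar with the zero-copy TX model
being compared here, a rough userspace analogue is the kernel's MSG_ZEROCOPY
socket API (Linux 4.14+, glibc 2.27+ for the constants). The sketch below
only illustrates the pin-pages-and-notify-on-completion idea; it is not the
vhost-net/tun zerocopy path from the patches:

    #include <errno.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Sketch only: opt a connected TCP socket into zero-copy TX and send
     * one buffer. The in-kernel vhost-net/tun zerocopy discussed above
     * follows the same model but is implemented entirely in the kernel. */
    static int send_zerocopy(int fd, const void *buf, size_t len)
    {
        int one = 1;

        /* Opt in once per socket (Linux >= 4.14). */
        if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
            return -errno;

        /* The pages backing buf are pinned by the kernel; buf must not be
         * reused until a completion arrives on the socket error queue. */
        if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
            return -errno;

        return 0;
    }

In a real program the completion would be read back with recvmsg() and
MSG_ERRQUEUE before reusing the buffer; that part is omitted here.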
That's why I'm asking you to compare the performance. The only reason for
vhost-pci is performance. You should prove it.
There is another reason for vhost-pci besides maximum performance:

vhost-pci makes it possible for end-users to run networking or storage
appliances in compute clouds.  Cloud providers do not allow end-users to
run custom vhost-user processes on the host, so you need vhost-pci.


Then it has non-NFV use cases, and the question goes back to the performance comparison between vhost-pci and zerocopy vhost_net. If it does not perform better, it is less interesting, at least in this case.

