Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication




On 2017-05-23 13:47, Wei Wang wrote:
On 05/23/2017 10:08 AM, Jason Wang wrote:


On 2017-05-22 19:46, Wang, Wei W wrote:
On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
On 2017-05-19 23:33, Stefan Hajnoczi wrote:
On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
On 2017-05-18 11:03, Wei Wang wrote:
On 05/17/2017 02:22 PM, Jason Wang wrote:
On 2017-05-17 14:16, Jason Wang wrote:
On 2017-05-16 15:12, Wei Wang wrote:
Hi:

Care to post the driver codes too?

OK. It may take some time to clean up the driver code before posting it. You can take a first look at the draft in the repo here:
https://github.com/wei-w-wang/vhost-pci-driver

Best,
Wei
Interesting, it looks like there's one copy on the tx side. We used to
have zerocopy support in tun for VM2VM traffic. Could you please
try to compare it with your vhost-pci-net by:

We can analyze the whole data path - from VM1's network stack
sending packets to VM2's network stack receiving packets. The
number of copies is actually the same for both.
That's why I'm asking you to compare the performance. The only reason
for vhost-pci is performance. You should prove it.
There is another reason for vhost-pci besides maximum performance:

vhost-pci makes it possible for end-users to run networking or storage
appliances in compute clouds.  Cloud providers do not allow end-users
to run custom vhost-user processes on the host, so you need vhost-pci.

Stefan
Then it has non-NFV use cases, and the question goes back to the performance comparison between vhost-pci and zerocopy vhost_net. If it does not perform
better, it is less interesting, at least in this case.

I can probably share what we got for vhost-pci vs. vhost-user:
https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
Right now, I don’t have the environment to add the vhost_net test.

Thanks, the number looks good. But I have some questions:

- Is the number measured with your vhost-pci kernel driver code?

Yes, the kernel driver code.

Interesting. In the above link, "l2fwd" was used in the vhost-pci testing, and I want to know more about the test configuration: if l2fwd is the one from DPDK, how did you make it work with the kernel driver (maybe a packet socket, I think)? If not, how did you configure it (e.g., through a bridge, act_mirred, or something else)? And on the OVS-DPDK side, were DPDK l2fwd + PMD used in the testing?
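
(For concreteness, a packet-socket based l2fwd along the lines guessed at above could look roughly like the sketch below. The interface names veth0/veth1 and the single recv/send loop are illustrative assumptions, not the configuration behind the numbers in the PDF.)

/* Hypothetical l2fwd sketch on AF_PACKET sockets: every frame received
 * on ifin is retransmitted unchanged on ifout.  Needs root; error
 * handling is minimal. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>

static int open_raw(const char *ifname)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_ll sll = {
        .sll_family   = AF_PACKET,
        .sll_protocol = htons(ETH_P_ALL),
        .sll_ifindex  = if_nametoindex(ifname),
    };
    if (!sll.sll_ifindex) { perror("if_nametoindex"); exit(1); }
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind"); exit(1);
    }
    return fd;
}

int main(int argc, char **argv)
{
    const char *ifin  = argc > 1 ? argv[1] : "veth0";   /* assumed names */
    const char *ifout = argc > 2 ? argv[2] : "veth1";
    int rx = open_raw(ifin);
    int tx = open_raw(ifout);
    unsigned char buf[2048];

    for (;;) {
        ssize_t n = recv(rx, buf, sizeof(buf), 0);      /* one copy in  */
        if (n > 0 && send(tx, buf, n, 0) < 0)           /* one copy out */
            perror("send");
    }
    return 0;
}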


- Have you tested packet sizes other than 64B?

Not yet.

Better to test more sizes, since the time spent on a 64B copy should be very short.
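
(As a rough illustration of why larger sizes matter, a naive memcpy microbenchmark like the sketch below shows how the per-packet copy cost grows with frame size. The sizes and iteration count are arbitrary assumptions; this is not the benchmark behind any numbers in this thread.)

/* Naive sketch: time memcpy() for a few frame sizes to show how the
 * per-packet copy cost scales.  Sizes and iteration count are arbitrary. */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    static char src[65536], dst[65536];
    const size_t sizes[] = { 64, 256, 1024, 1500, 9000, 65536 };
    const long iters = 1000000;

    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long j = 0; j < iters; j++) {
            memcpy(dst, src, sizes[i]);
            /* compiler barrier (GCC/Clang) so the copies are not elided */
            __asm__ __volatile__("" : : "g"(dst) : "memory");
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%6zu bytes: %8.1f ns per copy\n", sizes[i], ns / iters);
    }
    return 0;
}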


- Is zerocopy supported in OVS-dpdk? If yes, is it enabled in your test?
Zerocopy is not used in the test, but I don't think zerocopy can bring
a 2x increase in throughput.

I agree, but we need to prove this with numbers.

Thanks

On the other hand, we haven't put effort into optimizing
the draft kernel driver yet.

Best,
Wei



