Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
On 05/23/2017 02:32 PM, Jason Wang wrote:
> On 05/23/2017 13:47, Wei Wang wrote:
>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>> On 05/22/2017 19:46, Wang, Wei W wrote:
>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>> On 05/19/2017 23:33, Stefan Hajnoczi wrote:
>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>> On 05/18/2017 11:03, Wei Wang wrote:
>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>> On 05/17/2017 14:16, Jason Wang wrote:
>>>>>>>>>> On 05/16/2017 15:12, Wei Wang wrote:
>>>>>>>>>>>> Hi:
>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>> OK. It may take some time to clean up the driver code before
>>>>>>>>>>> posting it. You can first have a look at the draft in the repo
>>>>>>>>>>> here:
>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>
>>>>>>>>>>> Best,
>>>>>>>>>>> Wei
>>>>>>>>>> Interesting, it looks like there is one copy on the tx side. We
>>>>>>>>>> used to have zerocopy support for tun for VM2VM traffic. Could
>>>>>>>>>> you please try to compare it with your vhost-pci-net by:
>>>>>>>> We can analyze the whole data path - from VM1's network stack
>>>>>>>> sending packets to VM2's network stack receiving packets. The
>>>>>>>> number of copies is actually the same for both.
>>>>>>> That's why I'm asking you to compare the performance. The only
>>>>>>> reason for vhost-pci is performance. You should prove it.
>>>>>> There is another reason for vhost-pci besides maximum performance:
>>>>>> vhost-pci makes it possible for end-users to run networking or
>>>>>> storage appliances in compute clouds. Cloud providers do not allow
>>>>>> end-users to run custom vhost-user processes on the host, so you
>>>>>> need vhost-pci.
>>>>>>
>>>>>> Stefan
>>>>> Then it has non-NFV use cases, and the question goes back to the
>>>>> performance comparison between vhost-pci and zerocopy vhost_net. If
>>>>> it does not perform better, it is less interesting, at least in this
>>>>> case.
>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>> Right now, I don't have the environment to add the vhost_net test.
>>> Thanks, the number looks good. But I have some questions:
>>> - Is the number measured through your vhost-pci kernel driver code?
>> Yes, the kernel driver code.
> Interesting: in the above link, "l2fwd" was used in the vhost-pci
> testing. I want to know more about the test configuration. If l2fwd is
> the one that dpdk has, I want to know how you make it work with a
> kernel driver (maybe through a packet socket, I think?). If not, I
> want to know how you configure it (e.g. through a bridge, act_mirred,
> or something else). And in the OVS-dpdk case, is dpdk l2fwd + pmd used
> in the testing?
Oh, that l2fwd is a kernel module from OPNFV vsperf
(http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html).

Both the legacy and the vhost-pci cases use the same l2fwd module. No bridge is used; the module itself works at L2, forwarding packets between two net devices.

Best,
Wei
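For readers who haven't looked at vsperf's l2fwd, here is a minimal sketch of what such a bridge-less L2 forwarder can look like as a kernel module. This is illustrative only: the rx_handler-based design, the module parameters, and the placeholder device names eth1/eth2 are assumptions for the sketch, not the actual vsperf l2fwd implementation.

/*
 * l2fwd_sketch.c - illustrative L2 forwarder between two net devices.
 * Not the OPNFV vsperf l2fwd source; a minimal sketch of the idea:
 * register an rx_handler on each device and retransmit every received
 * frame on the other device, with no bridge in between.
 */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/rtnetlink.h>

static char *dev1 = "eth1";	/* placeholder names, set at load time */
static char *dev2 = "eth2";
module_param(dev1, charp, 0444);
module_param(dev2, charp, 0444);

static struct net_device *nd1, *nd2;

/* Steal each received frame from the stack and xmit it on the peer device. */
static rx_handler_result_t l2fwd_handle_frame(struct sk_buff **pskb)
{
	struct sk_buff *skb = *pskb;
	struct net_device *out = rcu_dereference(skb->dev->rx_handler_data);

	skb = skb_share_check(skb, GFP_ATOMIC);
	if (!skb)
		return RX_HANDLER_CONSUMED;
	*pskb = skb;

	skb->dev = out;
	skb_push(skb, ETH_HLEN);	/* restore the Ethernet header */
	dev_queue_xmit(skb);
	return RX_HANDLER_CONSUMED;
}

static int __init l2fwd_init(void)
{
	int err = -ENODEV;

	nd1 = dev_get_by_name(&init_net, dev1);
	nd2 = dev_get_by_name(&init_net, dev2);
	if (!nd1 || !nd2)
		goto put;

	rtnl_lock();
	err = netdev_rx_handler_register(nd1, l2fwd_handle_frame, nd2);
	if (!err) {
		err = netdev_rx_handler_register(nd2, l2fwd_handle_frame, nd1);
		if (err)
			netdev_rx_handler_unregister(nd1);
	}
	rtnl_unlock();
	if (err)
		goto put;
	return 0;

put:
	if (nd1)
		dev_put(nd1);
	if (nd2)
		dev_put(nd2);
	return err;
}

static void __exit l2fwd_exit(void)
{
	rtnl_lock();
	netdev_rx_handler_unregister(nd1);
	netdev_rx_handler_unregister(nd2);
	rtnl_unlock();
	dev_put(nd1);
	dev_put(nd2);
}

module_init(l2fwd_init);
module_exit(l2fwd_exit);
MODULE_LICENSE("GPL");

Loading a module like this with the two guest-side net devices as parameters (the parameter names dev1/dev2 are part of the sketch, not vsperf's) would shuttle every frame arriving on one device out of the other, which matches the "no bridge, pure L2 forwarding between two net devices" setup described above.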