virtio-dev message



Subject: Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication




On 2017-05-25 20:22, Jason Wang wrote:

Even with a vhost-pci to virtio-net configuration, I think rx zerocopy could be achieved, though it is not implemented in your driver (it would probably be easier in a pmd).

Yes, it would be easier with a dpdk pmd. But I don't think it would be important in the NFV use case,
since the data usually flows in one direction.

Best,
Wei


I would say let's not give up on any possible performance optimization now. You can always do it in the future.

If you still want to keep the copy on both the tx and rx paths, you should:

- measure performance with packet sizes larger than 64B, not only 64B
- consider whether it's a good idea to do the copy in the vcpu thread, or whether to move it to one or more dedicated threads (see the sketch after this list)
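For the second point, here is a minimal sketch of what moving the per-packet copy off the vcpu thread could look like, assuming a simple producer/copier split over a mutex-protected ring. Everything below (the job struct, ring size, threading model) is made up for illustration and is not taken from the vhost-pci driver:

/*
 * Toy sketch only: the per-packet copy is handed from a "vcpu-like"
 * producer thread to a dedicated copier thread through a small
 * mutex-protected ring, instead of being done inline by the producer.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 64
#define PKT_SIZE  1518            /* worth measuring beyond 64B packets */
#define NUM_PKTS  100000

struct job {
    const uint8_t *src;           /* source buffer (e.g. peer VM memory) */
    uint8_t       *dst;           /* local destination buffer */
    size_t         len;
};

static struct job ring[RING_SIZE];
static unsigned head, tail;       /* head: producer, tail: copier */
static int done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* Dedicated copier thread: drains the ring and performs the memcpy(). */
static void *copier(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !done)
            pthread_cond_wait(&cond, &lock);
        if (head == tail && done) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        struct job j = ring[tail % RING_SIZE];
        tail++;
        pthread_cond_signal(&cond);      /* wake a producer waiting on a full ring */
        pthread_mutex_unlock(&lock);

        memcpy(j.dst, j.src, j.len);     /* the per-packet copy, off the vcpu thread */
    }
}

int main(void)
{
    static uint8_t src[PKT_SIZE], dst[PKT_SIZE];
    pthread_t tid;

    pthread_create(&tid, NULL, copier, NULL);

    /* Producer ("vcpu") only publishes descriptors; it never copies. */
    for (int i = 0; i < NUM_PKTS; i++) {
        pthread_mutex_lock(&lock);
        while (head - tail == RING_SIZE) /* ring full: wait for the copier */
            pthread_cond_wait(&cond, &lock);
        ring[head % RING_SIZE] = (struct job){ src, dst, PKT_SIZE };
        head++;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    pthread_join(tid, NULL);

    printf("enqueued %d packets of %d bytes for off-thread copying\n",
           NUM_PKTS, PKT_SIZE);
    return 0;
}

In a real driver the hand-off would be batched and lock-free, but the shape is the same: the vcpu-side code only publishes descriptors, and the copy cost is paid on another core.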

Thanks

And more importantly, since you care seriously about NFV, I would really suggest drafting a pmd for vhost-pci and using it for benchmarking. That is the real-life case, and OVS dpdk is known to not be optimized for kernel drivers.

Good performance numbers can help us examine the correctness of both the design and the implementation.
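For reference, here is a toy sketch of the burst-receive routine such a pmd might expose, in the poll-mode style dpdk uses. vpci_ring, vpci_desc and vhostpci_rx_burst are hypothetical stand-ins, not the real vhost-pci memory layout or the dpdk API:

/*
 * Illustrative only: a toy burst-receive routine in poll-mode style.
 * All structure and function names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 256
#define MAX_PKT   2048

/* Hypothetical descriptor pointing into the peer VM's mapped memory. */
struct vpci_desc {
    const uint8_t *addr;
    uint32_t       len;
};

struct vpci_ring {
    struct vpci_desc  desc[RING_SIZE];
    volatile uint32_t head;   /* advanced by the peer (producer) */
    uint32_t          tail;   /* advanced by this pmd (consumer) */
};

struct pkt {
    uint8_t  data[MAX_PKT];
    uint32_t len;
};

/* Receive up to nb_pkts packets in one burst, polling the ring. */
static uint16_t
vhostpci_rx_burst(struct vpci_ring *r, struct pkt *pkts, uint16_t nb_pkts)
{
    uint16_t n = 0;

    while (n < nb_pkts && r->tail != r->head) {
        const struct vpci_desc *d = &r->desc[r->tail % RING_SIZE];

        pkts[n].len = d->len < MAX_PKT ? d->len : MAX_PKT;
        memcpy(pkts[n].data, d->addr, pkts[n].len);   /* the copy under discussion */
        r->tail++;
        n++;
    }
    return n;
}

int main(void)
{
    static struct vpci_ring ring;
    static uint8_t peer_buf[64] = "hello from the peer VM";
    struct pkt burst[32];

    /* Fake one descriptor "published" by the peer VM. */
    ring.desc[0].addr = peer_buf;
    ring.desc[0].len  = sizeof(peer_buf);
    ring.head = 1;

    uint16_t got = vhostpci_rx_burst(&ring, burst, 32);
    printf("received %u packet(s), first one %u bytes\n",
           (unsigned)got, (unsigned)burst[0].len);
    return 0;
}

The memcpy() marks where the copy under discussion sits; an rx zerocopy variant would hand out pointers into the mapped peer memory instead of copying into local buffers.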

Thanks

