Subject: Re: [virtio-dev] [PATCH v3 0/7] Vhost-pci for inter-VM communication

On 12/05/2017 04:49 PM, Avi Cohen (A) wrote:

-----Original Message-----
From: Jason Wang [mailto:jasowang@redhat.com]
Sent: Tuesday, 05 December, 2017 9:19 AM
To: Wei Wang; virtio-dev@lists.oasis-open.org; qemu-devel@nongnu.org;
mst@redhat.com; marcandre.lureau@redhat.com; stefanha@redhat.com;
Cc: jan.kiszka@siemens.com; Avi Cohen (A); zhiyong.yang@intel.com
Subject: Re: [virtio-dev] [PATCH v3 0/7] Vhost-pci for inter-VM communication

On December 5, 2017 15:15, Wei Wang wrote:
On 12/05/2017 03:01 PM, Jason Wang wrote:

On December 5, 2017 11:33, Wei Wang wrote:
Vhost-pci is a point-to-point inter-VM communication solution. This
patch series implements the vhost-pci-net device setup and
emulation. The device is implemented as a virtio device, and it is
set up via the vhost-user protocol to get the necessary info (e.g. the
memory info of the remote VM and the vring info).

Currently, only the fundamental functions are implemented. More
features, such as MQ and live migration, will be added in future versions.

The DPDK PMD of vhost-pci has been posted to the dpdk mailing list here:

v2->v3 changes:
1) static device creation: instead of creating and hot-plugging the
     device when receiving a vhost-user msg, the device is now created
     via the QEMU boot command line.
2) removed vqs: the receive vq (rq) and ctrlq are removed in this version.
      - receive vq: the receive vq is not needed anymore. The PMD
                    shares the remote txq and rxq: it grabs packets from
                    the remote txq to receive, and puts packets into the
                    remote rxq to send.
      - ctrlq: the ctrlq is replaced by the first 4KB metadata area of
               the device BAR-2.
3) simpler implementation: the entire implementation has been reduced
     from ~1800 LOC to ~850 LOC.

Any performance numbers you can share?

Hi Jason,

Performance testing and tuning on the data plane is in progress (btw,
that wouldn't affect the device part patches).
If possible, could we start the device part patch review in the meantime?


Hi Wei:

Will do, but basically, the cover letter lacks the motivation for vhost-pci, and I want
to see some numbers first since I doubt it can outperform the existing data paths.

[Avi Cohen (A)]
Hi Wei
I can try testing to get **numbers** - I can do it now, but I need a little help from you.
I've started by downloading, building, and installing the driver (https://github.com/wei-w-wang/vhost-pci-driver) into the guest kernel,
  **without** downloading the 2nd patch, the device (https://github.com/wei-w-wang/vhost-pci-device).
But my guest kernel was corrupted after reboot (kernel panic / out of memory). Can you tell me the steps to apply these patches?
Best Regards

The kernel driver does have some bugs in some environments, so while it might be a good source for getting a feel for how it works, I wouldn't recommend testing it yourself at this point.

We are currently focusing on the DPDK PMD, and won't get back to the kernel driver until everything is merged. Sorry about that.

