OASIS Mailing List Archives



virtio-dev message


Subject: Re: [RESEND Patch v1 00/37] Implementation of vhost-pci for inter-vm communication

Hi Marc-André,

Do you have any comments on the responses? Thanks.

On 12/20/2016 12:32 PM, Wei Wang wrote:
Hi Marc-André, thanks for the comments.

On 12/20/2016 12:43 AM, Marc-André Lureau wrote:
Hi Wei,

On Mon, Dec 19, 2016 at 7:00 AM Wei Wang <wei.w.wang@intel.com <mailto:wei.w.wang@intel.com>> wrote:

    This patch series implements vhost-pci, a point-to-point based
    inter-vm communication solution. The QEMU side implementation
    includes the protocol extension, vhost-pci device emulation, and
    device management. The current device implementation is based on
    virtio 1.0, but it can easily be upgraded to support the upcoming
    virtio 1.1.

    The current QEMU implementation supports the polling mode driver
    on both sides to receive packets. More features, such as interrupt
    support, live migration support, and protected memory accesses,
    will be added later.

I highly appreciate the effort you put into splitting the patch series and commenting on each patch, although some of the splits are probably superfluous.

I will see if I can combine some of the unnecessary splits in the next version.

High level question: why do you need to create the device dynamically? I would rather have the following qemu setup:

-chardev socket,id=chr,path=.. -device vhost-pci-net,chardev=chr

This would also avoid some global state (vp_slave etc)

With the above command line, the slave QEMU will create and plug in the vhost-pci-net device at QEMU boot time. I think this won't work, because at slave boot time the master, in most cases, hasn't yet connected to the slave to transmit the info (e.g. memory info) that's required for the device setup. For example, the device BAR can't be constructed without the memory info passed from the master, and if the device is created and plugged into the VM without a BAR at the beginning, we don't get another chance to add a BAR on the fly once the device is already driven by the guest. That's the reason I keep a global vp_slave to manage the vhost-pci devices (dynamically creating/destroying a device when requested by the master).
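To make the ordering constraint concrete, here is a minimal sketch (not from the patch series; plain Python, with an arbitrary byte payload standing in for the real vhost-user memory-table message) of why the slave must defer device creation until the master has connected and sent its memory info:

```python
import socket

def slave_create_device(chan):
    """Slave side: device creation is deferred until the master's
    memory info arrives, because the device BAR exposing the master
    VM's memory is sized from that info."""
    mem_info = chan.recv(1024)  # hypothetical SET_MEM_TABLE-style payload
    return {"bar_size": len(mem_info)}  # the device, modeled as a dict

# Model the master/slave connection with a socketpair.
master, slave = socket.socketpair()
master.sendall(b"\x00" * 64)  # master transmits 64 bytes of memory-table info
dev = slave_create_device(slave)  # only now can the BAR be constructed
```

A statically created device (as in the `-device vhost-pci-net,chardev=chr` proposal) would have to exist before `mem_info` is available, which is the problem Wei describes.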

Regarding the protocol changes to support slave requests: I tried to explain this before; apparently I didn't manage to. Simply adding chardev frontend handlers is not enough to support bidirectional communication. Until now, QEMU's code expects an immediate reply after a request. With your protocol change, it must also consider that the slave may send a request before the master's request reaches the slave handler. So every request/reply write()/read() must now handle interleaved requests from the slave to be race-free (the master can read back a request when it expects a reply). That's not a trivial change, which is why I proposed in the past to have a secondary channel for slave->master communication. Not only would this be easier to implement, but the protocol documentation would also be simpler; the cost is just one additional unix socket (which I proposed to set up and pass with ancillary data on the main channel).
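The fd-passing idea can be sketched as follows (not from the series; a hedged Python model in which `send_fd`/`recv_fd` are hypothetical helper names and the real vhost-user messages are not shown). The master creates the secondary channel and hands one end to the slave as SCM_RIGHTS ancillary data on the main channel:

```python
import array
import socket

def send_fd(chan, fd, msg=b"open-slave-channel"):
    """Pass a file descriptor over a unix socket via SCM_RIGHTS ancillary data."""
    chan.sendmsg([msg],
                 [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                   array.array("i", [fd]))])

def recv_fd(chan):
    """Receive a short message plus one file descriptor from a unix socket."""
    fds = array.array("i")
    msg, anc, _, _ = chan.recvmsg(64, socket.CMSG_LEN(fds.itemsize))
    for level, ctype, data in anc:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:fds.itemsize])
    return msg, fds[0]

# Main vhost-user channel plus a dedicated slave->master channel.
main_master, main_slave = socket.socketpair()
sec_master, sec_slave = socket.socketpair()

# Master hands the slave its end of the secondary channel.
send_fd(main_master, sec_slave.fileno())

# Slave receives the fd; slave-initiated requests now travel on the
# secondary channel, so request/reply pairs on the main channel can
# never be interleaved with requests going the other way.
msg, fd = recv_fd(main_slave)
slave_chan = socket.socket(fileno=fd)
slave_chan.sendall(b"slave-request")
got = sec_master.recv(64)  # master reads the slave request off the secondary channel
```

With separate channels, each direction keeps the simple "write request, read reply" discipline, which is the race-freedom argument above.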

I don't disagree with the second-channel method. That implementation isn't in upstream QEMU yet; are you planning to get it in first, so that we can upstream vhost-pci based on it?

    RESEND change: Fixed some coding style issues

There are some spelling errors to be fixed, and perhaps some variables/fields to rename (asyn -> async, crq -> ctrlq?). That can be addressed in a detailed review.

Sure, I will fix them.

