virtio-comment message

Subject: Re: [virtio-comment] Re: [Qemu-devel] [PATCH] *** Vhost-pci RFC v2 ***


On 09/01/2016 09:05 PM, Marc-André Lureau wrote:
On Thu, Sep 1, 2016 at 4:13 PM Wei Wang <wei.w.wang@intel.com> wrote:

    On 09/01/2016 04:49 PM, Marc-André Lureau wrote:
    > Hi
    >
    > On Thu, Sep 1, 2016 at 12:19 PM Wei Wang <wei.w.wang@intel.com> wrote:
    >
    >     On 08/31/2016 08:30 PM, Marc-André Lureau wrote:
    >
    >>     - If it could be made not pci-specific, a better name for the
    >>     device could be simply "driver": the driver of a virtio device.
    >>     Or the "slave" in vhost-user terminology - consumer of virtq. I
    >>     think you prefer to call it "backend" in general, but I find it
    >>     more confusing.
    >
    >     Not really. A virtio device has its own driver (e.g. a virtio-net
    >     driver for a virtio-net device). A vhost-pci device plays the role
    >     of a backend (just like vhost_net, vhost_user) for a virtio
    >     device. If we use the "device/driver" naming convention, the
    >     vhost-pci device is part of the "device". But I actually prefer to
    >     use "frontend/backend" :) If we check QEMU's
    >     doc/specs/vhost-user.txt, it also uses "backend" in its descriptions.
    >
    >
    > Yes, but it uses "backend" freely, without any definition, and to name
    > eventually different things. (At least "slave" is being defined as the
    > consumer of virtq, but I think some people don't like to use that word.)
    >

    I think most people know the concept of backend/frontend; that's
    probably why they usually don't explicitly explain it in a doc. If
    you guys don't have an objection, I suggest using it in the
    discussion :)  The goal here is to get the design finalized first. When
    it comes to the final spec wording phase, we can decide which
    description is more appropriate.


"backend" is too broad for me. Instead I would stick to something closer to what we want to name and define. If it's the consumer of virtq, then why not call it that way.

OK. Let me get used to it (provider VM - frontend, consumer VM - backend).


    > Have you thought about making the device not pci specific? I don't
    > know much about mmio devices nor s/390, but if devices can hotplug
    > their own memory (I believe mmio can), then it should be possible to
    > define a device generic enough.

    Not yet. I think the main difference would be the way to map the
    frontend VM's memory (in our case, we use a BAR). Other things should
    be generic.


I hope some more knowledgeable people will chime in.

That would be great.
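
For readers less familiar with the BAR-based mapping mentioned above, here is a
minimal sketch of how a consumer-side guest driver could map such a BAR with the
standard Linux PCI API. This is an illustration under my own assumptions: the BAR
index, the "vhost-pci" name, and the probe fragment below are not taken from the RFC.

/*
 * Hypothetical probe fragment, not the actual vhost-pci driver: map the
 * BAR that exposes the provider VM's memory, once, at probe time.
 */
#include <linux/module.h>
#include <linux/pci.h>

#define VHOST_PCI_MEM_BAR 2   /* assumed BAR index carrying the provider VM memory */

static int vhost_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        void __iomem *provider_mem;
        int rc;

        rc = pci_enable_device(pdev);
        if (rc)
                return rc;

        rc = pci_request_regions(pdev, "vhost-pci");
        if (rc)
                goto err_disable;

        /* Map the whole BAR once (maxlen 0 = full BAR); every backend
         * functionality bound to this device would reuse this single mapping. */
        provider_mem = pci_iomap(pdev, VHOST_PCI_MEM_BAR, 0);
        if (!provider_mem) {
                rc = -ENOMEM;
                goto err_release;
        }

        pci_set_drvdata(pdev, provider_mem);
        return 0;

err_release:
        pci_release_regions(pdev);
err_disable:
        pci_disable_device(pdev);
        return rc;
}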



    >
    >>     - Why is it required or beneficial to support multiple "frontend"
    >>     devices over the same "vhost-pci" device? It could simplify
    >>     things if it was a single device. If necessary, that could also
    >>     be interesting as a vhost-user extension.
    >
    >     We call it "multiple backend functionalities" (e.g. vhost-pci-net,
    >     vhost-pci-scsi..). A vhost-pci driver contains multiple such
    >     backend functionalities, because in this way they can reuse
    >     (share) the same memory mapping. To be more precise, a vhost-pci
    >     device supplies the memory of a frontend VM, and all the backend
    >     functionalities need to access the same frontend VM memory, so we
    >     consolidate them into one vhost-pci driver to use one vhost-pci
    >     device.
    >
    >
    > That's what I imagined. Do you have a use case for that?

    Currently, we only have the network use cases. I think we can design it
    that way (multiple backend functionalities), which is more generic (not
    just limited to network usages). When implementing it, we can first have
    the network backend functionality (i.e. vhost-pci-net) implemented. In
    the future, if people are interested in other backend functionalities, I
    think it should be easy to add them.


My question is not about the support of various kinds of devices (that is clearly a worthy goal to me), but about supporting several frontend/provider devices simultaneously on the same vhost-pci device: is this required or necessary? I think it would simplify things if it was 1-1 instead; I would like to understand why you propose a different design.

It is not required, but it is beneficial, I think. As mentioned above, those consumer-side functionalities all access the same provider VM's memory, so I think one vhost-pci device is enough to hold that memory. When it comes to the consumer guest kernel, we only need to ioremap that memory once. Also, a pair of controlqs is enough to handle the control path messages between all those functionalities and QEMU. I think the design also looks compact this way. What do you think?

If we make it an N-N model (each functionality gets its own vhost-pci device), then QEMU and the guest kernel need to repeat that memory setup N times.
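
To make the single-device model above concrete, a possible consumer-guest-side
layout could look like the sketch below. The names and fields are hypothetical
illustrations only, not taken from the RFC.

/* Illustrative only: one vhost-pci device per provider VM.  The provider
 * memory is mapped once, and one controlq pair serves every backend
 * functionality (vhost-pci-net, vhost-pci-scsi, ...). */
struct vhost_pci_dev {
        void __iomem *provider_mem;        /* provider VM memory, mapped once    */
        struct virtqueue *ctrl_rxq;        /* single controlq pair shared by     */
        struct virtqueue *ctrl_txq;        /*   all backend functionalities      */
        struct list_head functionalities;  /* registered backend functionalities */
};

In the per-functionality alternative, each functionality would carry its own copy
of provider_mem (and the mapping behind it), which is the duplication described
above.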

Best,
Wei

