Subject: Re: [RFC PATCH] docs/interop: define STANDALONE protocol feature for vhost-user


"Michael S. Tsirkin" <mst@redhat.com> writes:

> On Tue, Jul 04, 2023 at 01:36:00PM +0100, Alex Bennée wrote:
>> Currently QEMU has to know some details about the back-end to be able
>> to setup the guest. While various parts of the setup can be delegated
>> to the backend (for example config handling) this is a very piecemeal
>> approach.
>
>> This patch suggests a new feature flag (VHOST_USER_PROTOCOL_F_STANDALONE)
>> which the back-end can advertise which allows a probe message to be
>> sent to get all the details QEMU needs to know in one message.
>
> The reason we do piecemeal is that these existing pieces can be reused
> as others evolve or fall by wayside.

Sure, I have no objection in principle, but then we turn code like:

        if (dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_STANDALONE)) {
            err = vhost_user_get_backend_specs(dev, errp);
            if (err < 0) {
                error_setg_errno(errp, EPROTO, "vhost_get_backend_specs failed");
                return -EPROTO;
            }
        }

to

        if (dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_ID) &&
            dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_CFGSZ) &&
            dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MINVQ) &&
            dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MAXVQ)
        ) {
            err = vhost_user_get_virtio_id(dev, errp);
            if (err < 0) {
                error_setg_errno(errp, EPROTO, "vhost_get_backend_id failed");
                return -EPROTO;
            }
            err = vhost_user_get_virtio_cfgsz(dev, errp);
            if (err < 0) {
                error_setg_errno(errp, EPROTO, "vhost_get_backend_cfgsz failed");
                return -EPROTO;
            }
            err = vhost_user_get_virtio_minvq(dev, errp);
            if (err < 0) {
                error_setg_errno(errp, EPROTO, "vhost_get_backend_minvq failed");
                return -EPROTO;
            }
            err = vhost_user_get_virtio_maxvq(dev, errp);
            if (err < 0) {
                error_setg_errno(errp, EPROTO, "vhost_get_backend_maxvq failed");
                return -EPROTO;
            }
            dev->specs.valid = true;
        }

for little gain IMHO.
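
For reference, the reply to the single probe message is just the four
32-bit fields from the spec hunk further down, i.e. something like
this (field names are mine, not taken from the patch):

        /* sketch of the probe reply payload, not the actual patch */
        typedef struct VhostUserBackendSpecs {
            uint32_t device_id;   /* VirtIO device ID */
            uint32_t config_size; /* config space size, c.f. VHOST_USER_GET_CONFIG */
            uint32_t min_vqs;     /* minimum number of vqs supported */
            uint32_t max_vqs;     /* maximum number of vqs supported, >= min_vqs */
        } VhostUserBackendSpecs;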

> For example, I can think of instances where you want to connect
> specifically to e.g. networking backend, and specify it
> on command line. Reasons could be many, e.g. for debugging,
> or to prevent connecting to wrong device on wrong channel
> (kind of like type safety).

I don't quite follow what you are trying to say here.

> What is the reason to have 1 message? startup latency?
> How about we allow pipelining several messages then?
> Will be easier.

I'm not overly worried about performance because this is all at
start-up. I am worried about excessive complexity, though. We already
have quite a lot of interacting protocol messages.

>
>
>> 
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> 
>> ---
>> Initial RFC for discussion. I intend to prototype this work with QEMU
>> and one of the rust-vmm vhost-user daemons.
>> ---
>>  docs/interop/vhost-user.rst | 37 +++++++++++++++++++++++++++++++++++++
>>  hw/virtio/vhost-user.c      |  8 ++++++++
>>  2 files changed, 45 insertions(+)
>> 
>> diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
>> index 5a070adbc1..85b1b1583a 100644
>> --- a/docs/interop/vhost-user.rst
>> +++ b/docs/interop/vhost-user.rst
>> @@ -275,6 +275,21 @@ Inflight description
>>  
>>  :queue size: a 16-bit size of virtqueues
>>  
>> +Backend specifications
>> +^^^^^^^^^^^^^^^^^^^^^^
>> +
>> ++-----------+-------------+------------+------------+
>> +| device id | config size |   min_vqs  |   max_vqs  |
>> ++-----------+-------------+------------+------------+
>> +
>> +:device id: a 32-bit value holding the VirtIO device ID
>> +
>> +:config size: a 32-bit value holding the config size (see ``VHOST_USER_GET_CONFIG``)
>> +
>> +:min_vqs: a 32-bit value holding the minimum number of vqs supported
>> +
>> +:max_vqs: a 32-bit value holding the maximum number of vqs supported, must be >= min_vqs
>> +
>
> looks like a weird set of info.

It's basically the information you need for -device vhost-user-device to
start up (essentially the information the stubs set as they start up).
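
Concretely, a generic realize could consume it roughly like this (a
sketch only; the exact virtio helper signatures differ between QEMU
versions, and the output handler name is made up):

        /* sketch: generic vhost-user-device realize driven by the specs;
         * vdev is the VirtIODevice being realized */
        virtio_init(vdev, dev->specs.device_id, dev->specs.config_size);
        for (uint32_t i = 0; i < dev->specs.max_vqs; i++) {
            /* 64 is an arbitrary maximum ring size for the sketch; the
             * rings themselves are still allocated by the driver */
            virtio_add_queue(vdev, 64, vu_device_handle_output);
        }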

> why would we want # of vqs and not their sizes?

I thought the vrings themselves were allocated by the driver. We only
need the number of vqs so we can allocate the tracking structures.
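
All the front-end does with the number is size its bookkeeping,
something like:

        dev->nvqs = dev->specs.max_vqs;
        dev->vqs = g_new0(struct vhost_virtqueue, dev->nvqs);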

> why config size but not config itself?

We already have GET_CONFIG and SET_CONFIG, but without knowing the size
of the config space we can't properly set it up.
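
Once the size is known, the existing message can be used unchanged,
roughly (again a sketch; vhost_dev_get_config() as in current QEMU):

        uint8_t *config = g_malloc0(dev->specs.config_size);
        if (vhost_dev_get_config(dev, config, dev->specs.config_size, errp) < 0) {
            g_free(config);
            return -EPROTO;
        }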

<snip>

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro

