OASIS Mailing List Archives

virtio-dev message


Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

On 06/13/2017 05:04 PM, Jason Wang wrote:

On 06/13/2017 15:17, Wei Wang wrote:
On 06/13/2017 02:29 PM, Jason Wang wrote:
The issue is: what if there's a mismatch of max #sgs between qemu and vhost?
When the vhost backend is used, QEMU is not involved in the data path. The vhost backend directly gets what is offered by the guest from the vq. Why would there be a mismatch of max #sgs between QEMU and vhost, and what is the QEMU-side max #sgs used for? Thanks.
You need to query the backend's max #sgs in this case at least, no? If not, how do you know the value is supported by the backend?


Here is my thought: the vhost backend already supports 1024 sgs, so I think it might not be necessary to query the max sgs that the vhost backend supports. In the setup phase, when QEMU detects that the backend is vhost, it assumes 1024 max sgs are supported, instead of making an extra call to query.

We can probably assume the vhost kernel backend supports up to 1024 sgs. But what about other vhost-user backends?

So far, I haven't seen any vhost backend implementation supporting less than 1024 sgs.

Since vhost-user is an open protocol, we cannot check each implementation (some may even be closed source). For safety, we need an explicit clarification on this.

And what you said here makes me ask one of my questions in the past:

Do we have a plan to extend 1024 to a larger value, or does 1024 look good for the coming years? If we only care about 1024, there's no need for a new config field; a feature flag is more than enough. If we want to extend it to e.g. 2048, we definitely need to query the vhost backend's limit (even for vhost-kernel).

According to the virtio spec (e.g. 2.4.4), the guest is discouraged from using unreasonably large descriptor chains. If possible, I would suggest using 1024 as the largest number of descriptors that the guest can chain, even when we have a larger queue size in the future. That is:

    if (backend == QEMU backend)
        config.max_chain_size = 1023; /* defined by the QEMU backend implementation */
    else if (backend == vhost)
        config.max_chain_size = 1024;

It is transparent to the guest. From the guest's point of view, all it knows is the value it reads from config.max_chain_size.

So it is not actually transparent; the guest at least needs to see and check this value. So the question remains, since you only care about two cases in fact:

- backend supports 1024
- backend supports <1024 (qemu or whatever other backends)

So it looks like a new feature flag is more than enough. If the device (backend) supports this feature, it can guarantee that 1024 sgs are supported, no?

That wouldn't be enough. For example, suppose a QEMU 3.0 backend supports max_chain_size=1023, while a QEMU 4.0 backend supports max_chain_size=1021. How would the guest know the actual max size from the same feature flag? Would it still chain 1023 descriptors with QEMU 4.0?

