virtio-dev message



Subject: Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size




On 06/13/2017 14:08, Wei Wang wrote:
On 06/13/2017 11:55 AM, Jason Wang wrote:

On 06/13/2017 11:51, Wei Wang wrote:
On 06/13/2017 11:19 AM, Jason Wang wrote:

On 06/13/2017 11:10, Wei Wang wrote:
On 06/13/2017 04:43 AM, Michael S. Tsirkin wrote:
On Mon, Jun 12, 2017 at 05:30:46PM +0800, Wei Wang wrote:
Ping for comments, thanks.
This was only posted a week ago, which might be a bit too short for some
people.
OK, sorry for the push.
A couple of weeks is more reasonable before you ping.  Also, I
sent a bunch of comments on Thu, 8 Jun 2017.  You should probably
address these.

I responded to the comments. The main question is that I'm not sure
why we need the vhost backend to support VIRTIO_F_MAX_CHAIN_SIZE.
IMHO, that should be a feature proposed to solve the possible issue
caused by the QEMU-implemented backend.
The issue is: what if there's a mismatch of the max #sgs between QEMU
and vhost?

When the vhost backend is used, QEMU is not involved in the data path.
The vhost backend directly gets what is offered by the guest from the vq.
Why would there be a mismatch of max #sgs between QEMU and vhost, and
what is the QEMU-side max #sgs used for? Thanks.
You need to query the backend's max #sgs in this case at least, no? If
not, how do you know the value is supported by the backend?

Thanks

Here is my thought: the vhost backend already supports 1024 sgs, so I
think it might not be necessary to query the max sgs that the vhost
backend supports. In the setup phase, when QEMU detects that the backend
is vhost, it assumes 1024 max sgs are supported, instead of making an
extra call to query.
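
A rough sketch of that setup-phase decision (the helper name and limits
below are purely illustrative, not actual QEMU code):

#include <stdbool.h>

#define VIRTQUEUE_MAX_SIZE        1024   /* assumed vhost chain limit */
#define QEMU_BACKEND_MAX_CHAIN     256   /* hypothetical limit of the QEMU-implemented backend */

/* Illustrative only: pick the tx queue size from the backend type,
 * assuming a vhost backend can always handle 1024-entry chains. */
static unsigned int choose_tx_queue_size(bool backend_is_vhost,
                                         unsigned int requested)
{
    unsigned int limit = backend_is_vhost ? VIRTQUEUE_MAX_SIZE
                                          : QEMU_BACKEND_MAX_CHAIN;
    return requested < limit ? requested : limit;
}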

We can probably assume the vhost kernel backend supports up to 1024 sgs. But what about other vhost-user backends?

And what you said here brings up one of the questions I asked in the past:

Do we have a plan to extend 1024 to a larger value, or does 1024 look good for the coming years? If we only care about 1024, there's no need for a new config field; a feature flag is more than enough. If we want to extend it to e.g. 2048, we definitely need to query the vhost backend's limit (even for vhost-kernel).

Thanks
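
To make the two options concrete, here is a hypothetical sketch from the
driver's point of view (none of the names below come from the virtio spec
or QEMU): a feature bit fixes the limit at 1024, while a config field
would let the device report the exact limit it queried from its backend.

#include <stdint.h>

/* Hypothetical names, for illustration only. */
#define VIRTIO_F_MAX_CHAIN_1024   (1ULL << 50)   /* "chains up to 1024 are OK" */

struct virtio_net_config_ext {
    uint16_t max_chain_size;    /* exact limit reported by the device */
};

static uint16_t guest_max_chain(uint64_t features,
                                const struct virtio_net_config_ext *cfg)
{
    if (cfg && cfg->max_chain_size)
        return cfg->max_chain_size;          /* config field: exact backend limit */
    if (features & VIRTIO_F_MAX_CHAIN_1024)
        return 1024;                         /* feature flag: fixed 1024 limit */
    return 256;                              /* conservative default otherwise */
}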


The advantage is that people who are using the vhost backend can upgrade
to the 1024 tx queue size by applying only the QEMU patches. Adding an
extra call to query the size would require patching their vhost backend
(e.g. vhost-user), which is difficult for them.


Best,
Wei







