Subject: Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

On 06/13/2017 02:31 PM, Jason Wang wrote:

On June 13, 2017 at 14:13, Wei Wang wrote:
On 06/13/2017 11:59 AM, Jason Wang wrote:

On June 13, 2017 at 11:55, Jason Wang wrote:
The issue is: what if there's a mismatch of the max #sgs between QEMU and the vhost backend?

When the vhost backend is used, QEMU is not involved in the data
path. The vhost backend directly gets what the guest offers from the vq.
FYI, QEMU will try to fall back to userspace if something goes wrong
with vhost-kernel (e.g. missing IOMMU support). This doesn't work for
vhost-user, but it does work for vhost-kernel.


That wouldn't be a problem. When it falls back to the QEMU backend,
"max_chain_size" will be set according to the QEMU backend's limit
(e.g. 1023), and the guest will read the max_chain_size register.
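To illustrate the idea above, here is a minimal sketch of the guest side, assuming a hypothetical max_chain_size value read from device config space (the function name and constants are illustrative, not actual virtio driver code): the guest simply never builds a descriptor chain longer than whatever limit the active backend advertises, so a fallback from vhost to the QEMU backend just means a smaller advertised value.

```c
#include <stdint.h>

/* Illustrative limits taken from the thread's discussion:
 * the QEMU userspace backend supports up to 1023 chained
 * descriptors, vhost-kernel is assumed to support 1024. */
#define QEMU_BACKEND_MAX_CHAIN  1023
#define VHOST_BACKEND_MAX_CHAIN 1024

/* Hypothetical helper: clamp the chain length the guest wants
 * to build to the max_chain_size the device advertised. */
static uint16_t guest_chain_limit(uint16_t device_max_chain_size,
                                  uint16_t desired_chain_len)
{
    return desired_chain_len < device_max_chain_size
               ? desired_chain_len
               : device_max_chain_size;
}
```

With this scheme, a guest that wants a 2048-descriptor chain against a QEMU backend would be clamped to 1023, while shorter chains pass through unchanged.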

What if there's a backend that supports less than 1023? Or what if, in the future, we increase the limit to e.g. 2048?

I agree the potential issue is that it's assumed (hardcoded) that the vhost
backend supports a chain size of 1024. This only becomes an issue if a vhost
backend implementation supports less than 1024 in the future, and that
can be solved by introducing another feature bit at that point.
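The escape hatch described above could look roughly like the sketch below. Everything here is an assumption for illustration: the feature-bit name and number are hypothetical, not values from the virtio spec. If the new feature is negotiated, the driver trusts the device's advertised max_chain_size; otherwise it falls back to the hardcoded legacy limit.

```c
#include <stdint.h>

/* Hypothetical feature bit — the name and bit position are
 * assumptions for this sketch, not actual virtio definitions. */
#define VIRTIO_NET_F_MAX_CHAIN_SIZE 25

/* Limit assumed when the feature was not negotiated. */
#define LEGACY_MAX_CHAIN_SIZE 1023

/* If the feature was negotiated, use the device-advertised limit;
 * otherwise fall back to the legacy hardcoded one. */
static uint16_t effective_chain_size(uint64_t negotiated_features,
                                     uint16_t advertised_max)
{
    if (negotiated_features & (1ULL << VIRTIO_NET_F_MAX_CHAIN_SIZE))
        return advertised_max;
    return LEGACY_MAX_CHAIN_SIZE;
}
```

A backend supporting less than 1023 (or more than 1024) would simply advertise its real limit once the feature bit is in place, so old drivers keep the legacy behavior and new drivers honor the advertised value.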

If that's acceptable, it will be easy for customers to upgrade their
already-deployed products to use a 1024 tx queue size.
