virtio-dev message


Subject: Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

On 06/16/2017 11:15 PM, Michael S. Tsirkin wrote:
On Fri, Jun 16, 2017 at 06:10:27PM +0800, Wei Wang wrote:
On 06/16/2017 04:57 PM, Jason Wang wrote:

On 06/16/2017 11:22, Michael S. Tsirkin wrote:
I think the issues can be solved by VIRTIO_F_MAX_CHAIN_SIZE.

For now, how about splitting it into two series of patches:
1) enable the 1024 tx queue size for vhost-user, to let the users of
vhost-user easily use the 1024 queue size.
Fine with me. 1) will get the property from the user but override it on
!vhost-user. Do we need a protocol flag? It seems prudent, but then we get
back to the cross-version migration issues that are still pending a solution.
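
(For illustration, a minimal sketch of the override described above: take the
user-requested size, but clamp it when the backend is not vhost-user. The
constants, type names and clamping limit are illustrative assumptions, not the
actual QEMU code.)

    #include <stdio.h>

    /* Illustrative limit only, not taken from the actual patch. */
    #define LEGACY_TX_QUEUE_MAX 512    /* safe for in-QEMU backends */

    enum backend_type { BACKEND_QEMU, BACKEND_VHOST_USER };

    /* Honor the user-supplied tx queue size, but override it when the
     * peer is not vhost-user. */
    static unsigned effective_tx_queue_size(unsigned requested,
                                            enum backend_type backend)
    {
        if (backend != BACKEND_VHOST_USER && requested > LEGACY_TX_QUEUE_MAX) {
            fprintf(stderr, "tx queue size %u unsupported by this backend, "
                    "clamping to %u\n", requested, LEGACY_TX_QUEUE_MAX);
            return LEGACY_TX_QUEUE_MAX;
        }
        return requested;
    }

    int main(void)
    {
        printf("%u\n", effective_tx_queue_size(1024, BACKEND_QEMU));       /* 512  */
        printf("%u\n", effective_tx_queue_size(1024, BACKEND_VHOST_USER)); /* 1024 */
        return 0;
    }

(With the real patch the requested size would come from a device property,
presumably something along the lines of
-device virtio-net-pci,tx_queue_size=1024 on the QEMU command line.)
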
What do you have in mind about the protocol flag?
Merely this: older clients might be confused if they get
an s/g list with 1024 entries.
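
(To make the concern concrete: a slave that preallocated per-request
bookkeeping for shorter descriptor chains could overrun its buffers when a
1024-entry chain arrives. A rough sketch of the defensive check such a slave
would need; the structure and limit below are hypothetical, not libvhost-user
or DPDK code.)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical slave built with room for at most 512 descriptors
     * per scatter/gather chain. */
    #define SLAVE_MAX_CHAIN 512

    struct desc_chain {
        uint32_t num_descs;     /* descriptors in this s/g chain */
        /* ... guest addresses, lengths, flags ... */
    };

    /* Reject, rather than overrun, a chain longer than the slave was
     * built for: the breakage a protocol flag (or the proposed
     * VIRTIO_F_MAX_CHAIN_SIZE negotiation) is meant to avoid. */
    static int slave_check_chain(const struct desc_chain *c)
    {
        if (c->num_descs > SLAVE_MAX_CHAIN) {
            fprintf(stderr, "chain of %u descriptors exceeds slave limit %u\n",
                    c->num_descs, SLAVE_MAX_CHAIN);
            return -1;
        }
        return 0;
    }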

I don't disagree with adding that. But the client (i.e. the vhost-user
slave) is a host userspace program, and users can easily patch their
host-side applications if there is any issue, so maybe we don't need to
be too prudent about that, do we?

Also, the protocol flag looks like a duplicate of what we plan to add
in the next step - the common virtio feature flag,
VIRTIO_F_MAX_CHAIN_SIZE, which is more general and can be used
across different backends.
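
(Since VIRTIO_F_MAX_CHAIN_SIZE is only a proposal at this point, the following
is just a sketch of how it could look from the driver side: negotiate the
feature bit, then read a max_chain_size config field and cap s/g chains
accordingly. The bit number, field name and fallback value are placeholders,
not anything from the spec.)

    #include <stdint.h>

    /* Placeholder bit number for the proposed feature. */
    #define VIRTIO_F_MAX_CHAIN_SIZE 31

    struct virtio_net_config_ext {
        /* ... existing virtio-net config fields ... */
        uint16_t max_chain_size;  /* valid only if the feature was negotiated */
    };

    /* Driver-side sketch: cap scatter/gather chain length at the
     * device-advertised value, otherwise fall back to a conservative
     * default. */
    static uint16_t chain_size_limit(uint64_t negotiated_features,
                                     const struct virtio_net_config_ext *cfg)
    {
        if (negotiated_features & (1ULL << VIRTIO_F_MAX_CHAIN_SIZE)) {
            return cfg->max_chain_size;
        }
        return 128;  /* arbitrary conservative fallback for this sketch */
    }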

Btw, I just tested the patch for 1), and it works fine with migration from
a patched to a non-patched version of QEMU. I'll send it out. Please have a
look.

Marc Andre, what's the status of that work?

2) enable VIRTIO_F_MAX_CHAIN_SIZE, to enhance robustness.
Rather, to support it for more backends.
OK, assuming we want to support different values of the max chain size in
the future. Cross-backend migration would then be problematic; consider the
case of migrating from a backend with 2048 (vhost-user) to one that only
supports 1024.

I think that wouldn't be a problem. If changing the backend can result in a
change of config.max_chain_size, a configuration change notification can be
injected into the guest, and the guest will then read the new value.
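
(A rough sketch of that guest-side path, reusing the hypothetical
max_chain_size field from the sketch above: the config-changed handler simply
re-reads the field and updates the driver's limit. Note that it does nothing
about requests already queued under the old limit, which is exactly the
problem raised in the reply below.)

    #include <stdint.h>

    struct vnet_driver_state {
        uint16_t max_chain_size;  /* limit used when building new s/g chains */
    };

    /* Called when the device injects a configuration change notification,
     * e.g. after migrating to a backend with a different limit. */
    static void on_config_changed(struct vnet_driver_state *drv,
                                  uint16_t new_max_chain_size)
    {
        /* Requests already in the queue under the old, larger limit are
         * not handled here; that is the hard part. */
        drv->max_chain_size = new_max_chain_size;
    }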

This might not be supportable by all guests, e.g. some requests might
already be in the queue. I'm not against reconfiguring devices across
migration, but I think it's a big project. As a first step I would focus on
keeping the configuration consistent across migrations.

Would it be common and fair for vendors to migrate from a new QEMU
to an old QEMU, which would downgrade the services that they provide
to their users?

Even if such a downgrade happens for some reason, I think it is
sensible to sacrifice something for the transition (e.g. drop the
unsupported requests from the queue), right?

On the other hand, packet drops are normally handled at the protocol
layer, e.g. by TCP. Also, some amount of packet loss is usually
acceptable during live migration.

