Subject: Re: [virtio-dev] [PATCH] virtio-net: use mtu size as buffer length for big packets




On 8/9/2022 11:15 PM, Michael S. Tsirkin wrote:
On Wed, Aug 10, 2022 at 02:14:07AM -0400, Michael S. Tsirkin wrote:
On Tue, Aug 09, 2022 at 04:24:23PM -0700, Si-Wei Liu wrote:

On 8/9/2022 3:49 PM, Parav Pandit wrote:
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Tuesday, August 9, 2022 6:26 PM
To: Parav Pandit <parav@nvidia.com>
Cc: Si-Wei Liu <si-wei.liu@oracle.com>; Jason Wang <jasowang@redhat.com>; Gavin Li <gavinl@nvidia.com>; Hemminger, Stephen <stephen@networkplumber.org>; davem <davem@davemloft.net>; virtualization <virtualization@lists.linux-foundation.org>; Virtio-Dev <virtio-dev@lists.oasis-open.org>; jesse.brandeburg@intel.com; alexander.h.duyck@intel.com; kubakici@wp.pl; sridhar.samudrala@intel.com; loseweigh@gmail.com; Gavi Teitz <gavi@nvidia.com>
Subject: Re: [virtio-dev] [PATCH] virtio-net: use mtu size as buffer length for big packets

On Tue, Aug 09, 2022 at 09:49:03PM +0000, Parav Pandit wrote:
From: Michael S. Tsirkin <mst@redhat.com>
Sent: Tuesday, August 9, 2022 5:38 PM
[..]
I think the virtio-net driver doesn't differentiate MTU and MRU, in
which case the receive buffer will be reduced to fit the 1500B
payload size when the mtu is lowered to 1500 from 9000.
How? Say the driver reduces the mXu to 1500 and is improved to post
buffers of 1500 bytes.
The device doesn't know about it, because mtu in config space is an RO field.
The device keeps dropping 9K packets because the posted buffers are 1500
bytes.
This is because the device follows the spec: "The device MUST NOT pass
received packets that exceed mtu".
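
To make the sizing concrete, a minimal sketch of the logic under discussion; the helper name rx_pages_for_mtu() and VIRTNET_MAX_MTU are illustrative, not the actual patch:

#include <linux/kernel.h>  /* DIV_ROUND_UP */
#include <linux/mm.h>      /* PAGE_SIZE */

#define VIRTNET_MAX_MTU 65535 /* fallback when VIRTIO_NET_F_MTU is not negotiated */

/* Size big-packet rx buffers from the advised mtu instead of
 * always allocating for the 64K maximum. */
static unsigned int rx_pages_for_mtu(unsigned int mtu)
{
	if (!mtu)
		mtu = VIRTNET_MAX_MTU;
	/* Round up to whole pages so one maximum-sized packet fits. */
	return DIV_ROUND_UP(mtu, PAGE_SIZE);
}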


The "mtu" here is the device config field, which is

          /* Default maximum transmit unit advice */

It is the field from struct virtio_net_config.mtu. right?
This is RO field for driver.
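
For reference, the field in question, abridged from include/uapi/linux/virtio_net.h:

#include <linux/if_ether.h>     /* ETH_ALEN */
#include <linux/virtio_types.h> /* __virtio16 */

struct virtio_net_config {
	/* The config defining mac address (if VIRTIO_NET_F_MAC) */
	__u8 mac[ETH_ALEN];
	/* See VIRTIO_NET_F_STATUS and VIRTIO_NET_S_* above */
	__virtio16 status;
	/* Maximum number of each of transmit and receive queues;
	 * see VIRTIO_NET_F_MQ and VIRTIO_NET_CTRL_MQ. */
	__virtio16 max_virtqueue_pairs;
	/* Default maximum transmit unit advice */
	__virtio16 mtu;
	/* remaining fields omitted */
};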

there is no guarantee the device will not get a bigger packet.
Right. That is what I also hinted.
Hence, allocating buffers worth up to the mtu is safer.
yes

When the user overrides it, the driver can be further optimized to honor
such a new value on rx buffer posting.

no, not without a feature bit promising the device won't get wedged.

I mean to say that, as it stands today, the driver can decide to post smaller buffers with a larger mtu.
Why should the device be affected by it?
(I am not proposing such a weird configuration, but asking for the sake of correctness.)
I am also confused about how the device can be wedged in this case.
Yeah, sorry. I misunderstood the code. It can't be.
Here's a problem as I see it. Let's say we reduce the mtu.
Small buffers are added. Now we increase the mtu.
The device will drop all packets until the small buffers are consumed.

Should we maybe make this depend on the vq reset ability?
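
A sketch of that window, with illustrative names (example_priv, example_change_mtu, and the rx_pages_for_mtu() helper from the earlier sketch), not the in-tree virtio-net code:

#include <linux/mm.h>        /* PAGE_SIZE */
#include <linux/netdevice.h> /* struct net_device, netdev_priv */

struct example_priv {
	unsigned int rx_buf_len; /* used only for future rx postings */
};

static int example_change_mtu(struct net_device *dev, int new_mtu)
{
	struct example_priv *priv = netdev_priv(dev);

	dev->mtu = new_mtu;
	/* Buffers already posted to the rx virtqueue keep their old
	 * size. If new_mtu grew (1500 -> 9000), a device enforcing
	 * "MUST NOT pass received packets that exceed mtu" drops 9K
	 * packets until every stale 1500-byte buffer is consumed,
	 * unless the queue is drained first, e.g. via a vq reset. */
	priv->rx_buf_len = rx_pages_for_mtu(new_mtu) * PAGE_SIZE;
	return 0;
}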
To be honest I am not sure it's worth it; very few users change the MTU on the fly with traffic ongoing. In most cases I've seen, users change it only once, at deployment time. Even if they change it on the fly, they need to be aware of the consequences and the implication of packet loss. On real devices, an mtu change can end up with a link status change, and in that case there is usually no guarantee that packets arriving during that window will be kept.

While I understand this would introduce a slight functional regression, the worst case for packet loss, should the device drop packets, is bounded: at most a full queue's worth of the smaller buffers. For correctness and elegance I don't mind introducing a specific feature for changing the MTU, or even relying on vq_reset is fine. device_reset would be too heavyweight for this special use case, IMHO.

-Siwei


--
MST


