

Subject: Re: [virtio-dev] [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big packets



On 9/7/2022 11:15 AM, Gavin Li wrote:


On 9/7/2022 10:17 AM, Jason Wang wrote:


On 2022/9/1 10:10, Gavin Li wrote:
Currently add_recvbuf_big() allocates MAX_SKB_FRAGS segments for big
packets even when GUEST_* offloads are not present on the device.
However, if guest GSO is not supported, it would be sufficient to
allocate segments to cover just the MTU size and no further.
Allocating the maximum amount of segments results in a large waste of
buffer space in the queue, which limits the number of packets that can
be buffered and can result in reduced performance.

Therefore, if guest GSO is not supported, use the MTU to calculate the
optimal amount of segments required.

When guest offload is enabled at runtime, the RQ already holds buffers
of less than 64K bytes. So when a 64KB packet arrives, all packets of
that size will be dropped, and the RQ becomes unusable.

So this means that during the set_guest_offloads() phase, RQs would have
to be destroyed and recreated, which requires almost a full driver reload.

If VIRTIO_NET_F_CTRL_GUEST_OFFLOADS has been negotiated, the driver
should always treat the guest offloads as GSO enabled.

Accordingly, for now the assumption is that if guest GSO has been
negotiated then it has been enabled, even if it's actually been disabled
at runtime through VIRTIO_NET_F_CTRL_GUEST_OFFLOADS.


Nit: Actually, it's not an assumption but the behavior of the code
itself. Since we don't try to change guest offloading in probe, it's
OK to check GSO via the negotiated features?

The commit log description above is incorrect; it carried over from an
intermediate patch.

Actually, GSO always takes priority. If it is offered, the driver will always post 64K worth of buffers.
When it is not offered, the MTU is honored.

Let me repost v6 with the commit log corrected as above.
Thanks

Thanks



Below are the iperf TCP test results over a Mellanox NIC, using vDPA for
1 VQ, queue size 1024, before and after the change, with the iperf
server running over the virtio-net interface.

MTU (Bytes)   Bandwidth (Gbit/s)
              Before   After
1500          22.5     22.4
9000          12.8     25.9

Signed-off-by: Gavin Li <gavinl@nvidia.com>
Reviewed-by: Gavi Teitz <gavi@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
changelog:
v4->v5
- Addressed comments from Michael S. Tsirkin
- Improve commit message
v3->v4
- Addressed comments from Si-Wei
- Rename big_packets_sg_num with big_packets_num_skbfrags
v2->v3
- Addressed comments from Si-Wei
- Simplify the condition check to enable the optimization
v1->v2
- Addressed comments from Jason, Michael, Si-Wei.
- Remove the flag of guest GSO support, set sg_num for big packets and
  use it directly
- Recalculate sg_num for big packets in virtnet_set_guest_offloads
- Replace the round up algorithm with DIV_ROUND_UP
---
 drivers/net/virtio_net.c | 37 ++++++++++++++++++++++++-------------
 1 file changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f831a0290998..dbffd5f56fb8 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -225,6 +225,9 @@ struct virtnet_info {
 	/* I like... big packets and I cannot lie! */
 	bool big_packets;
 
+	/* number of sg entries allocated for big packets */
+	unsigned int big_packets_num_skbfrags;
+
 	/* Host will merge rx buffers for big packets (shake it! shake it!) */
 	bool mergeable_rx_bufs;
 
@@ -1331,10 +1334,10 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 	char *p;
 	int i, err, offset;
 
-	sg_init_table(rq->sg, MAX_SKB_FRAGS + 2);
+	sg_init_table(rq->sg, vi->big_packets_num_skbfrags + 2);
 
-	/* page in rq->sg[MAX_SKB_FRAGS + 1] is list tail */
-	for (i = MAX_SKB_FRAGS + 1; i > 1; --i) {
+	/* page in rq->sg[vi->big_packets_num_skbfrags + 1] is list tail */
+	for (i = vi->big_packets_num_skbfrags + 1; i > 1; --i) {
 		first = get_a_page(rq, gfp);
 		if (!first) {
 			if (list)
@@ -1365,7 +1368,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 
 	/* chain first in list head */
 	first->private = (unsigned long)list;
-	err = virtqueue_add_inbuf(rq->vq, rq->sg, MAX_SKB_FRAGS + 2,
+	err = virtqueue_add_inbuf(rq->vq, rq->sg, vi->big_packets_num_skbfrags + 2,
 				  first, gfp);
 	if (err < 0)
 		give_pages(rq, first);
@@ -3690,13 +3693,27 @@ static bool virtnet_check_guest_gso(const struct virtnet_info *vi)
 		virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO);
 }
 
+static void virtnet_set_big_packets_fields(struct virtnet_info *vi, const int mtu)
+{
+	bool guest_gso = virtnet_check_guest_gso(vi);
+
+	/* If device can receive ANY guest GSO packets, regardless of mtu,
+	 * allocate packets of maximum size, otherwise limit it to only
+	 * mtu size worth only.
+	 */
+	if (mtu > ETH_DATA_LEN || guest_gso) {
+		vi->big_packets = true;
+		vi->big_packets_num_skbfrags = guest_gso ? MAX_SKB_FRAGS : DIV_ROUND_UP(mtu, PAGE_SIZE);
+	}
+}
+
 static int virtnet_probe(struct virtio_device *vdev)
 {
 	int i, err = -ENOMEM;
 	struct net_device *dev;
 	struct virtnet_info *vi;
 	u16 max_queue_pairs;
-	int mtu;
+	int mtu = 0;
 
 	/* Find if host supports multiqueue/rss virtio_net device */
 	max_queue_pairs = 1;
@@ -3784,10 +3801,6 @@ static int virtnet_probe(struct virtio_device *vdev)
 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
 	spin_lock_init(&vi->refill_lock);
 
-	/* If we can receive ANY GSO packets, we must allocate large ones. */
-	if (virtnet_check_guest_gso(vi))
-		vi->big_packets = true;
-
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
 		vi->mergeable_rx_bufs = true;
 
@@ -3853,12 +3866,10 @@ static int virtnet_probe(struct virtio_device *vdev)
 
 		dev->mtu = mtu;
 		dev->max_mtu = mtu;
-
-		/* TODO: size buffers correctly in this case. */
-		if (dev->mtu > ETH_DATA_LEN)
-			vi->big_packets = true;
 	}
 
+	virtnet_set_big_packets_fields(vi, mtu);
+
 	if (vi->any_header_sg)
 		dev->needed_headroom = vi->hdr_len;



---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org