Subject: Re: [virtio] A plea on the first draft...


"Michael S. Tsirkin" <mst@redhat.com> writes:
> On Wed, Oct 30, 2013 at 03:07:04PM +1030, Rusty Russell wrote:
>> Hi all,
>> 
>>         I'd really appreciate it if you could make time during the next
>> week to comb your various areas of the draft to see if there are any
>> places we can remove old features or advice.
>> 
>> Skimming through, there are three I am aware of:
>> 
>> 1) We should unconditionally recommend resetting the device as the first
>>    step.  It's good practice.
>> 
>> 2) We should not suggest that notifications occur in the
>>    no-free-descriptor case.  It's bad advice, and only helps with buggy
>>    hosts.
>> 
>> 3) We should make the net header a constant size by always having a
>>    "num_buffers" field, and insisting that it be 1 if !VIRTIO_NET_F_MRG_RXBUF.
>> 
>> Cheers,
>> Rusty.
>
> I am not sure I agree with 3.

Yep, I put them together here, because I'm lazy (I edited the draft as I
re-read it).

> In the end it does not seem to simplify either guests or hosts.

Having a fixed-size structure is definitely a simplification.  See the
rough patch to the Linux kernel below (it simply removes the old
variable-size header handling outright; we won't actually do this!)
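
To spell out the driver-side win (a throwaway sketch, not part of the
patch below; descs_for_packet() is made up, the layout just mirrors the
uapi change in patch 1):

#include <linux/types.h>

/* One fixed-size header for every device.  The receive path reads
 * num_buffers unconditionally; a device that did not negotiate
 * VIRTIO_NET_F_MRG_RXBUF must have set it to 1. */
struct virtio_net_hdr {
	__u8  flags;
	__u8  gso_type;
	__u16 hdr_len;
	__u16 gso_size;
	__u16 csum_start;
	__u16 csum_offset;
	__u16 num_buffers;	/* 1 in the non-mergeable case */
};

static int descs_for_packet(const struct virtio_net_hdr *hdr)
{
	/* No VIRTIO_NET_F_MRG_RXBUF check needed any more. */
	return hdr->num_buffers;
}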

> And if you implement transitional drivers, it complicates
> non-mergeable buffer handling.

Not at all, since we already have a toggle for the header size in the
driver.  I didn't write a patch for this one.
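
Roughly what that toggle looks like (just an illustration, sizes
hard-coded; this isn't the driver's actual helper):

#include <stdbool.h>
#include <stddef.h>

/* A transitional driver already picks between two header sizes; the
 * only change is the condition it toggles on. */
static size_t vnet_hdr_len(bool modern, bool mergeable_rx_bufs)
{
	if (modern || mergeable_rx_bufs)
		return 12;	/* fixed header including num_buffers */

	return 10;	/* legacy header without num_buffers */
}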

> Agree with 1 and 2.
> Let's split them so we can vote separately?

Yes, I'll put that in the Agenda separately.

> What would simplify drivers a lot is if we said that
> all devices must support mergeable buffers
> if enabled by driver.
> That's a large chunk of code in guest.
>
> Want to write a text like this?

But it's nice to have simple devices, too.

Perhaps we should drop big packet support, though?  That is, if
mergeable buffers are not negotiated, we only support receiving
ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN (1518-byte) packets?

That should simplify things.

See patch 2 below.
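
For the record, 1518 is just what add_recvbuf_small() already
allocates, i.e. the driver's existing MAX_PACKET_LEN (constants from
if_ether.h; back-of-the-envelope only):

#include <linux/if_ether.h>

/* ETH_HLEN (14) + VLAN_HLEN (4) + ETH_DATA_LEN (1500) = 1518 bytes,
 * so the small receive buffers we already post are always enough. */
#define MAX_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)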





>
>
>> diff --git a/virtio-v1.0-wd01-part1-specification.txt b/virtio-v1.0-wd01-part1-specification.txt
>> index ce79c05..fc04d4a 100644
>> --- a/virtio-v1.0-wd01-part1-specification.txt
>> +++ b/virtio-v1.0-wd01-part1-specification.txt
>> @@ -492,7 +492,7 @@ how to communicate with the specific device.
>>  2.2.1. Device Initialization
>>  ---------------------------
>>  
>> -1. Reset the device. This is not required on initial start up.
>> +1. Reset the device.
>>  
>>  2. The ACKNOWLEDGE status bit is set: we have noticed the device.
>>  
>> @@ -561,10 +561,6 @@ operates as follows:
>>  
>>  1. Place the buffer(s) into free descriptor(s).
>>  
>> -  (a) If there are no free descriptors, the guest may choose to
>> -    notify the device even if notifications are suppressed (to
>> -    reduce latency).[8]
>> -
>>  2. Place the id of the buffer in the next ring entry of the
>>    available ring.
>>  
>> @@ -1955,7 +1951,7 @@ case, the packet itself is preceeded by a header:
>>  		u16 gso_size;
>>  		u16 csum_start;
>>  		u16 csum_offset;
>> -	/* Only if VIRTIO_NET_F_MRG_RXBUF: */
>> +		/* Ignored for transmitted packets. */
>>  		u16 num_buffers;
>>  	};
>>  
>> @@ -2001,12 +1997,9 @@ the different features the driver negotiated.
>>      the VIRTIO_NET_HDR_GSO_ECN bit may be set in “gso_type” as
>>      well, indicating that the TCP packet has the ECN bit set.[18]
>>  
>> -3. If the driver negotiated the VIRTIO_NET_F_MRG_RXBUF feature,
>> -  the num_buffers field is set to zero.
>> -
>> -4. The header and packet are added as one output buffer to the
>> +3. The header and packet are added as one output buffer to the
>>    transmitq, and the device is notified of the new entry (see "2.4.1.4.
>> -  Notifying The Device").[19]
>> +  Notifying The Device").
>>  
>>  2.4.1.5.1.1. Packet Transmission Interrupt
>>  -----------------------------------------
>> @@ -2046,17 +2039,17 @@ packets until no more are found, then re-enable them.
>>  
>>  Processing packet involves:
>>  
>> -1. If the driver negotiated the VIRTIO_NET_F_MRG_RXBUF feature,
>> -  then the “num_buffers” field indicates how many descriptors
>> -  this packet is spread over (including this one). This allows
>> -  receipt of large packets without having to allocate large
>> +1. The “num_buffers” field indicates how many descriptors this packet
>> +  is spread over (including this one).  The device must set this to 1
>> +  if VIRTIO_NET_F_MRG_RXBUF has not been negotiated.
>> +
>> +  This allows receipt of large packets without having to allocate large
>>    buffers. In this case, there will be at least “num_buffers” in
>>    the used ring, and they should be chained together to form a
>>    single packet. The other buffers will not begin with a struct
>>    virtio_net_hdr.
>>  
>> -2. If the VIRTIO_NET_F_MRG_RXBUF feature was not negotiated, or
>> -  the “num_buffers” field is one, then the entire packet will be
>> +2. If the “num_buffers” field is one, then the entire packet will be
>>    contained within this buffer, immediately following the struct
>>    virtio_net_hdr.
>>  
>> @@ -3581,11 +3574,6 @@ separate cache lines.
>>  [7] These fields are kept here because this is the only part of the
>>  virtqueue written by the device
>>  
>> -[8] The Linux drivers do this only for read-only buffers: for
>> -write-only buffers, it is assumed that the driver is merely
>> -trying to keep the receive buffer ring full, and no notification
>> -of this expected condition is necessary.
>> -
>>  [9] https://lists.linux-foundation.org/mailman/listinfo/virtualization
>>  
>>  [11] Even if it does mean documenting design or implementation
>> @@ -3620,9 +3608,6 @@ as a guarantee of the transport header size.
>>  [18] This case is not handled by some older hardware, so is called out
>>  specifically in the protocol.
>>  
>> -[19] Note that the header will be two bytes longer for the
>> -VIRTIO_NET_F_MRG_RXBUF case.
>> -
>>  [20] Obviously each one can be split across multiple descriptor
>>  elements.
>>  
>> 

PATCH1: put num_buffers in struct virtio_net_hdr

 drivers/net/virtio_net.c        | 109 ++++++++++++++++------------------------
 include/uapi/linux/virtio_net.h |   6 ---
 2 files changed, 44 insertions(+), 71 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 77f4b3ce71b6..487894d2a4b5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -134,13 +134,6 @@ struct virtnet_info {
 	struct notifier_block nb;
 };
 
-struct skb_vnet_hdr {
-	union {
-		struct virtio_net_hdr hdr;
-		struct virtio_net_hdr_mrg_rxbuf mhdr;
-	};
-};
-
 struct padded_vnet_hdr {
 	struct virtio_net_hdr hdr;
 	/*
@@ -148,7 +141,7 @@ struct padded_vnet_hdr {
 	 * QEMU bug, and data sg buffer shares same page with this header sg.
 	 * This padding makes next sg 16 byte aligned after virtio_net_hdr.
 	 */
-	char padding[6];
+	char padding[4];
 };
 
 /* Converting between virtqueue no. and kernel tx/rx queue no.
@@ -174,9 +167,9 @@ static int rxq2vq(int rxq)
 	return rxq * 2;
 }
 
-static inline struct skb_vnet_hdr *skb_vnet_hdr(struct sk_buff *skb)
+static inline struct virtio_net_hdr *skb_vnet_hdr(struct sk_buff *skb)
 {
-	return (struct skb_vnet_hdr *)skb->cb;
+	return (struct virtio_net_hdr *)skb->cb;
 }
 
 /*
@@ -239,8 +232,8 @@ static struct sk_buff *page_to_skb(struct receive_queue *rq,
 {
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct sk_buff *skb;
-	struct skb_vnet_hdr *hdr;
-	unsigned int copy, hdr_len, offset;
+	struct virtio_net_hdr *hdr;
+	unsigned int copy, offset;
 	char *p;
 
 	p = page_address(page);
@@ -250,19 +243,14 @@ static struct sk_buff *page_to_skb(struct receive_queue *rq,
 	if (unlikely(!skb))
 		return NULL;
 
-	hdr = skb_vnet_hdr(skb);
+	memcpy(skb_vnet_hdr(skb), p, sizeof(*hdr));
+	len -= sizeof(*hdr);
 
-	if (vi->mergeable_rx_bufs) {
-		hdr_len = sizeof hdr->mhdr;
-		offset = hdr_len;
-	} else {
-		hdr_len = sizeof hdr->hdr;
+	if (vi->mergeable_rx_bufs)
+		offset = sizeof(*hdr);
+	else
 		offset = sizeof(struct padded_vnet_hdr);
-	}
-
-	memcpy(hdr, p, hdr_len);
 
-	len -= hdr_len;
 	p += offset;
 
 	copy = len;
@@ -299,11 +287,11 @@ static struct sk_buff *page_to_skb(struct receive_queue *rq,
 
 static int receive_mergeable(struct receive_queue *rq, struct sk_buff *skb)
 {
-	struct skb_vnet_hdr *hdr = skb_vnet_hdr(skb);
+	struct virtio_net_hdr *hdr = skb_vnet_hdr(skb);
 	struct page *page;
 	int num_buf, i, len;
 
-	num_buf = hdr->mhdr.num_buffers;
+	num_buf = hdr->num_buffers;
 	while (--num_buf) {
 		i = skb_shinfo(skb)->nr_frags;
 		if (i >= MAX_SKB_FRAGS) {
@@ -314,7 +302,7 @@ static int receive_mergeable(struct receive_queue *rq, struct sk_buff *skb)
 		page = virtqueue_get_buf(rq->vq, &len);
 		if (!page) {
 			pr_debug("%s: rx error: %d buffers missing\n",
-				 skb->dev->name, hdr->mhdr.num_buffers);
+				 skb->dev->name, hdr->num_buffers);
 			skb->dev->stats.rx_length_errors++;
 			return -EINVAL;
 		}
@@ -336,7 +324,7 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 	struct virtnet_stats *stats = this_cpu_ptr(vi->stats);
 	struct sk_buff *skb;
 	struct page *page;
-	struct skb_vnet_hdr *hdr;
+	struct virtio_net_hdr *hdr;
 
 	if (unlikely(len < sizeof(struct virtio_net_hdr) + ETH_HLEN)) {
 		pr_debug("%s: short packet %i\n", dev->name, len);
@@ -374,13 +362,11 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 	stats->rx_packets++;
 	u64_stats_update_end(&stats->rx_syncp);
 
-	if (hdr->hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
+	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		pr_debug("Needs csum!\n");
-		if (!skb_partial_csum_set(skb,
-					  hdr->hdr.csum_start,
-					  hdr->hdr.csum_offset))
+		if (!skb_partial_csum_set(skb, hdr->csum_start, hdr->csum_offset))
 			goto frame_err;
-	} else if (hdr->hdr.flags & VIRTIO_NET_HDR_F_DATA_VALID) {
+	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID) {
 		skb->ip_summed = CHECKSUM_UNNECESSARY;
 	}
 
@@ -388,9 +374,9 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 	pr_debug("Receiving skb proto 0x%04x len %i type %i\n",
 		 ntohs(skb->protocol), skb->len, skb->pkt_type);
 
-	if (hdr->hdr.gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
 		pr_debug("GSO!\n");
-		switch (hdr->hdr.gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
 		case VIRTIO_NET_HDR_GSO_TCPV4:
 			skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
 			break;
@@ -402,14 +388,14 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 			break;
 		default:
 			net_warn_ratelimited("%s: bad gso type %u.\n",
-					     dev->name, hdr->hdr.gso_type);
+					     dev->name, hdr->gso_type);
 			goto frame_err;
 		}
 
-		if (hdr->hdr.gso_type & VIRTIO_NET_HDR_GSO_ECN)
+		if (hdr->gso_type & VIRTIO_NET_HDR_GSO_ECN)
 			skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
 
-		skb_shinfo(skb)->gso_size = hdr->hdr.gso_size;
+		skb_shinfo(skb)->gso_size = hdr->gso_size;
 		if (skb_shinfo(skb)->gso_size == 0) {
 			net_warn_ratelimited("%s: zero gso size.\n", dev->name);
 			goto frame_err;
@@ -432,7 +418,7 @@ static int add_recvbuf_small(struct receive_queue *rq, gfp_t gfp)
 {
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct sk_buff *skb;
-	struct skb_vnet_hdr *hdr;
+	struct virtio_net_hdr *hdr;
 	int err;
 
 	skb = __netdev_alloc_skb_ip_align(vi->dev, MAX_PACKET_LEN, gfp);
@@ -442,7 +428,7 @@ static int add_recvbuf_small(struct receive_queue *rq, gfp_t gfp)
 	skb_put(skb, MAX_PACKET_LEN);
 
 	hdr = skb_vnet_hdr(skb);
-	sg_set_buf(rq->sg, &hdr->hdr, sizeof hdr->hdr);
+	sg_set_buf(rq->sg, hdr, sizeof(*hdr));
 
 	skb_to_sgvec(skb, rq->sg + 1, 0, skb->len);
 
@@ -673,66 +659,59 @@ static void free_old_xmit_skbs(struct send_queue *sq)
 
 static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
 {
-	struct skb_vnet_hdr *hdr;
+	struct virtio_net_hdr *hdr;
 	const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest;
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned num_sg;
-	unsigned hdr_len;
 	bool can_push;
 
 	pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest);
-	if (vi->mergeable_rx_bufs)
-		hdr_len = sizeof hdr->mhdr;
-	else
-		hdr_len = sizeof hdr->hdr;
-
 	can_push = vi->any_header_sg &&
 		!((unsigned long)skb->data & (__alignof__(*hdr) - 1)) &&
-		!skb_header_cloned(skb) && skb_headroom(skb) >= hdr_len;
+		!skb_header_cloned(skb) && skb_headroom(skb) >= sizeof(*hdr);
 	/* Even if we can, don't push here yet as this would skew
 	 * csum_start offset below. */
 	if (can_push)
-		hdr = (struct skb_vnet_hdr *)(skb->data - hdr_len);
+		hdr = (struct virtio_net_hdr *)(skb->data - sizeof(*hdr));
 	else
 		hdr = skb_vnet_hdr(skb);
 
 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
-		hdr->hdr.flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-		hdr->hdr.csum_start = skb_checksum_start_offset(skb);
-		hdr->hdr.csum_offset = skb->csum_offset;
+		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+		hdr->csum_start = skb_checksum_start_offset(skb);
+		hdr->csum_offset = skb->csum_offset;
 	} else {
-		hdr->hdr.flags = 0;
-		hdr->hdr.csum_offset = hdr->hdr.csum_start = 0;
+		hdr->flags = 0;
+		hdr->csum_offset = hdr->csum_start = 0;
 	}
 
 	if (skb_is_gso(skb)) {
-		hdr->hdr.hdr_len = skb_headlen(skb);
-		hdr->hdr.gso_size = skb_shinfo(skb)->gso_size;
+		hdr->hdr_len = skb_headlen(skb);
+		hdr->gso_size = skb_shinfo(skb)->gso_size;
 		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
-			hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
+			hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
 		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
-			hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
+			hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
 		else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP)
-			hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_UDP;
+			hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
 		else
 			BUG();
 		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCP_ECN)
-			hdr->hdr.gso_type |= VIRTIO_NET_HDR_GSO_ECN;
+			hdr->gso_type |= VIRTIO_NET_HDR_GSO_ECN;
 	} else {
-		hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_NONE;
-		hdr->hdr.gso_size = hdr->hdr.hdr_len = 0;
+		hdr->gso_type = VIRTIO_NET_HDR_GSO_NONE;
+		hdr->gso_size = hdr->hdr_len = 0;
 	}
 
-	if (vi->mergeable_rx_bufs)
-		hdr->mhdr.num_buffers = 0;
+	hdr->num_buffers = 0;
 
 	if (can_push) {
-		__skb_push(skb, hdr_len);
+		__skb_push(skb, sizeof(*hdr));
 		num_sg = skb_to_sgvec(skb, sq->sg, 0, skb->len);
 		/* Pull header back to avoid skew in tx bytes calculations. */
-		__skb_pull(skb, hdr_len);
+		__skb_pull(skb, sizeof(*hdr));
 	} else {
-		sg_set_buf(sq->sg, hdr, hdr_len);
+		sg_set_buf(sq->sg, hdr, sizeof(*hdr));
 		num_sg = skb_to_sgvec(skb, sq->sg + 1, 0, skb->len) + 1;
 	}
 	return virtqueue_add_outbuf(sq->vq, sq->sg, num_sg, skb, GFP_ATOMIC);
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 172a7f00780c..dfa685d8d6e8 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -88,12 +88,6 @@ struct virtio_net_hdr {
 	__u16 gso_size;		/* Bytes to append to hdr_len per frame */
 	__u16 csum_start;	/* Position to start checksumming from */
 	__u16 csum_offset;	/* Offset after that to place checksum */
-};
-
-/* This is the version of the header to use when the MRG_RXBUF
- * feature has been negotiated. */
-struct virtio_net_hdr_mrg_rxbuf {
-	struct virtio_net_hdr hdr;
 	__u16 num_buffers;	/* Number of merged rx buffers */
 };
 
PATCH2: remove support for big packets without MRG_RXBUF.

 drivers/net/virtio_net.c | 63 +++---------------------------------------------
 1 file changed, 3 insertions(+), 60 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 487894d2a4b5..2f9a0cf88d39 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -97,9 +97,6 @@ struct virtnet_info {
 	/* # of queue pairs currently used by the driver */
 	u16 curr_queue_pairs;
 
-	/* I like... big packets and I cannot lie! */
-	bool big_packets;
-
 	/* Host will merge rx buffers for big packets (shake it! shake it!) */
 	bool mergeable_rx_bufs;
 
@@ -329,14 +326,14 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 	if (unlikely(len < sizeof(struct virtio_net_hdr) + ETH_HLEN)) {
 		pr_debug("%s: short packet %i\n", dev->name, len);
 		dev->stats.rx_length_errors++;
-		if (vi->mergeable_rx_bufs || vi->big_packets)
+		if (vi->mergeable_rx_bufs)
 			give_pages(rq, buf);
 		else
 			dev_kfree_skb(buf);
 		return;
 	}
 
-	if (!vi->mergeable_rx_bufs && !vi->big_packets) {
+	if (!vi->mergeable_rx_bufs) {
 		skb = buf;
 		len -= sizeof(struct virtio_net_hdr);
 		skb_trim(skb, len);
@@ -439,52 +436,6 @@ static int add_recvbuf_small(struct receive_queue *rq, gfp_t gfp)
 	return err;
 }
 
-static int add_recvbuf_big(struct receive_queue *rq, gfp_t gfp)
-{
-	struct page *first, *list = NULL;
-	char *p;
-	int i, err, offset;
-
-	/* page in rq->sg[MAX_SKB_FRAGS + 1] is list tail */
-	for (i = MAX_SKB_FRAGS + 1; i > 1; --i) {
-		first = get_a_page(rq, gfp);
-		if (!first) {
-			if (list)
-				give_pages(rq, list);
-			return -ENOMEM;
-		}
-		sg_set_buf(&rq->sg[i], page_address(first), PAGE_SIZE);
-
-		/* chain new page in list head to match sg */
-		first->private = (unsigned long)list;
-		list = first;
-	}
-
-	first = get_a_page(rq, gfp);
-	if (!first) {
-		give_pages(rq, list);
-		return -ENOMEM;
-	}
-	p = page_address(first);
-
-	/* rq->sg[0], rq->sg[1] share the same page */
-	/* a separated rq->sg[0] for virtio_net_hdr only due to QEMU bug */
-	sg_set_buf(&rq->sg[0], p, sizeof(struct virtio_net_hdr));
-
-	/* rq->sg[1] for data packet, from offset */
-	offset = sizeof(struct padded_vnet_hdr);
-	sg_set_buf(&rq->sg[1], p + offset, PAGE_SIZE - offset);
-
-	/* chain first in list head */
-	first->private = (unsigned long)list;
-	err = virtqueue_add_inbuf(rq->vq, rq->sg, MAX_SKB_FRAGS + 2,
-				  first, gfp);
-	if (err < 0)
-		give_pages(rq, first);
-
-	return err;
-}
-
 static int add_recvbuf_mergeable(struct receive_queue *rq, gfp_t gfp)
 {
 	struct page *page;
@@ -519,8 +470,6 @@ static bool try_fill_recv(struct receive_queue *rq, gfp_t gfp)
 	do {
 		if (vi->mergeable_rx_bufs)
 			err = add_recvbuf_mergeable(rq, gfp);
-		else if (vi->big_packets)
-			err = add_recvbuf_big(rq, gfp);
 		else
 			err = add_recvbuf_small(rq, gfp);
 
@@ -1328,7 +1277,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
 		struct virtqueue *vq = vi->rq[i].vq;
 
 		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
-			if (vi->mergeable_rx_bufs || vi->big_packets)
+			if (vi->mergeable_rx_bufs)
 				give_pages(&vi->rq[i], buf);
 			else
 				dev_kfree_skb(buf);
@@ -1564,12 +1513,6 @@ static int virtnet_probe(struct virtio_device *vdev)
 	vi->config_enable = true;
 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
 
-	/* If we can receive ANY GSO packets, we must allocate large ones. */
-	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
-	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) ||
-	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN))
-		vi->big_packets = true;
-
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
 		vi->mergeable_rx_bufs = true;
 



