virtio-comment message

Subject: Re: [virtio-comment] Re: [PATCH RESEND] virtio-net: support setting coalescing params for multiple vqs




On 2023/12/21 11:37, Jason Wang wrote:
On Wed, Dec 20, 2023 at 10:40 PM Heng Qi <hengqi@linux.alibaba.com> wrote:
Currently, each time the driver updates the coalescing parameters for a vq, it must kick the device and wait for the ctrlq response to return.

If a command can only update the parameters of one vq, then when parameters are updated frequently (as with netdim), the ctrlq on the device is kicked frequently, which increases the device's CPU scheduling overhead; the number and cost of device DMA operations also grow.

Merging the updated parameters of multiple vqs into one command effectively reduces the number of device kicks and device DMA operations.

Test results show that this greatly improves the efficiency of the ctrlq in responding to multiple vq coalescing parameter updates issued by the driver.
So netdim is per virtqueue, to make use of this, you need to batch the
netdim requests first.

Yes. I think the code makes this more intuitive.
Please see the following PoC test code, combined with the current upstream virtio netdim code:

+struct ctrl_coal_vq_entry {
+	u16 vqn;
+	u16 reserved;
+	int usecs;
+	int packets;
+};
+
+struct virtio_net_ctrl_mrg_coal_vq {
+	int num_entries;
+	struct ctrl_coal_vq_entry coal_vqs[];
+};
+

@@ -313,12 +335,18 @@ struct virtnet_info {

	struct control_buf *ctrl;

+	struct virtio_net_ctrl_mrg_coal_vq *ctrl_coal_vqs;
+


@@ -4390,6 +4586,7 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
	if (!vi->rq)
		goto err_rq;

+	vi->ctrl_coal_vqs = kzalloc(sizeof(struct virtio_net_ctrl_mrg_coal_vq) +
+				    vi->max_queue_pairs * sizeof(struct ctrl_coal_vq_entry),
+				    GFP_KERNEL);


+	struct scatterlist sg;
+	for (i = 0; i < vi->curr_queue_pairs; i++) {
+		vi->ctrl_coal_vqs->coal_vqs[i].vqn = rxq2vq(i);
+		vi->ctrl_coal_vqs->coal_vqs[i].usecs = 8;
+		vi->ctrl_coal_vqs->coal_vqs[i].packets = 16;
+	}
+	vi->ctrl_coal_vqs->num_entries = i;
+	sg_init_one(&sg, vi->ctrl_coal_vqs,
+		    sizeof(struct virtio_net_ctrl_mrg_coal_vq) +
+		    vi->ctrl_coal_vqs->num_entries * sizeof(struct ctrl_coal_vq_entry));
+	vi->req_num++;
+	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
+				  VIRTIO_NET_CTRL_NOTF_COAL_VQS_SET, &sg))

 #define VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET		3
+#define VIRTIO_NET_CTRL_NOTF_COAL_VQS_SET		4

And if you do that, you can batch the commands as well.


Here is the batched-commands approach we have tried:

+		for (i = 0; i < vi->curr_queue_pairs; i++) {
+			err = virtnet_pre_send_cmd(vi, &vi->vq_ctrls[i], 8, 16);
+		}
+
+		if (unlikely(!virtqueue_kick(vi->cvq)))
+			return vi->ctrl->status == VIRTIO_NET_OK;
+
+		for (i = 0; i < vi->curr_queue_pairs; i++) {
+			while (!virtqueue_is_broken(vi->cvq) && !virtqueue_get_buf(vi->cvq, &tmp))
+				cpu_relax();
+		}
+	}


+
+static bool virtnet_pre_send_cmd(struct virtnet_info *vi, struct vq_coal *vq_ctrls,
+				 u32 max_usecs, u32 max_packets)
+{
+	struct scatterlist hdr, out, stat;
+	struct scatterlist *sgs[4];
+	unsigned out_num = 0;
+	int ret;
+
+	BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ));
+
+	vq_ctrls->hdr.class = VIRTIO_NET_CTRL_NOTF_COAL;
+	vq_ctrls->hdr.cmd = VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET;
+	sg_init_one(&hdr, &vq_ctrls->hdr, sizeof(vq_ctrls->hdr));
+	sgs[out_num++] = &hdr;
+
+	vq_ctrls->coal_vq.vqn = cpu_to_le16(0);
+	vq_ctrls->coal_vq.coal.max_usecs = cpu_to_le32(max_usecs);
+	vq_ctrls->coal_vq.coal.max_packets = cpu_to_le32(max_packets);
+	sg_init_one(&out, &vq_ctrls->ho, sizeof(vq_ctrls->ho));
+	sgs[out_num++] = &out;
+
+	vq_ctrls->status = ~0;
+	sg_init_one(&stat, &vq_ctrls->status, sizeof(vq_ctrls->status));
+	sgs[out_num] = &stat;
+
+	BUG_ON(out_num + 1 > ARRAY_SIZE(sgs));
+	ret = virtqueue_add_sgs(vi->cvq, sgs, out_num, 1, vi, GFP_ATOMIC);
+	if (ret < 0) {
+		dev_warn(&vi->vdev->dev,
+			 "Failed to add sgs for command vq: %d\n", ret);
+		return false;
+	}


Overall, batching requests is sufficient, because the current major overhead is the number of DMA operations. For example, for a device with 256 queues:

For the current upstream code, the overhead is 256 kicks + 256*8 DMA operations.
The overhead of batched commands is 1 kick + 256*8 DMA operations.
The overhead of batched requests is 1 kick + 8 DMA operations.

The 8 DMA operations per command break down as follows:
- read the avail idx: 1 time
- pull the avail ring entries: 1 time
- pull the descs pointed to by the avail ring: 3 times
- pull the hdr and out bufs pointed to by those descs: 2 times
- write the status buf pointed to by the status desc: 1 time

Thanks!

Then you get one kick for several coal requests?

Or are you saying you run out of ctrl vq?

Thanks


This publicly archived list offers a means to provide input to the
OASIS Virtual I/O Device (VIRTIO) TC.

In order to verify user consent to the Feedback License terms and
to minimize spam in the list archive, subscription is required
before posting.

Subscribe: virtio-comment-subscribe@lists.oasis-open.org
Unsubscribe: virtio-comment-unsubscribe@lists.oasis-open.org
List help: virtio-comment-help@lists.oasis-open.org
List archive: https://lists.oasis-open.org/archives/virtio-comment/
Feedback License: https://www.oasis-open.org/who/ipr/feedback_license.pdf
List Guidelines: https://www.oasis-open.org/policies-guidelines/mailing-lists
Committee: https://www.oasis-open.org/committees/virtio/
Join OASIS: https://www.oasis-open.org/join/


