Subject: Re: [RFC PATCH v3] can: virtio: Initial virtio CAN driver.


Hello Vincent,

Sometimes it's close to the weekend and already late. I have to correct myself.

On 12.05.23 11:53, Vincent MAILHOL wrote:

+static netdev_tx_t virtio_can_start_xmit(struct sk_buff *skb,
+                                        struct net_device *dev)
+{
+       struct virtio_can_priv *priv = netdev_priv(dev);
+       struct canfd_frame *cf = (struct canfd_frame *)skb->data;
+       struct virtio_can_tx *can_tx_msg;
+       struct virtqueue *vq = priv->vqs[VIRTIO_CAN_QUEUE_TX];
+       struct scatterlist sg_out[1];
+       struct scatterlist sg_in[1];
+       struct scatterlist *sgs[2];
This 2 here ...

+       /* Normal queue stop when no transmission slots are left */
+       if (atomic_read(&priv->tx_inflight) >= priv->can.echo_skb_max ||
+           vq->num_free == 0 || (vq->num_free < 2 &&
Replace the Magic number 2 with a #define.

... is this 2 here.

Obviously, with my previous answer I switched into panic mode, already thinking about how to explain indirect descriptors and all kinds of virtio details and the expression in depth, not realizing any more that something different had been requested.

Appropriate answer:

/* CAN TX needs 2 descriptors: 1 device readable and 1 device writable */
#define CAN_TX_DESC (1 + 1)

Or something with ARRAY_SIZE(sgs) to get the number of elements in sgs, keeping the first 2 in the sgs declaration above as it is.
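
Just to make it concrete, a sketch of how the check could then look with the define above (not compiled, just written down here):

	struct scatterlist *sgs[CAN_TX_DESC];
	...
	/* Normal queue stop when no transmission slots are left */
	if (atomic_read(&priv->tx_inflight) >= priv->can.echo_skb_max ||
	    vq->num_free == 0 || (vq->num_free < CAN_TX_DESC &&
	    !virtio_has_feature(vq->vdev, VIRTIO_RING_F_INDIRECT_DESC))) {
		netif_stop_queue(dev);
		netdev_dbg(dev, "TX: Normal stop queue\n");
	}

With the ARRAY_SIZE() variant the comparison would be vq->num_free < ARRAY_SIZE(sgs) instead, which cannot go out of sync with the declaration.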

And then I'll have to think again about whether I really want to keep sg_in[1] and sg_out[1] as arrays. Not now; now it's the weekend.
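
For the record, a sketch of what the non-array variant could look like with sg_init_one(); the can_tx_msg field names here are only placeholders, not the real ones:

	struct scatterlist sg_out;
	struct scatterlist sg_in;
	struct scatterlist *sgs[CAN_TX_DESC];
	...
	/* 1 device readable descriptor carrying the TX message */
	sg_init_one(&sg_out, &can_tx_msg->tx_out, sizeof(can_tx_msg->tx_out));
	sgs[0] = &sg_out;
	/* 1 device writable descriptor for the result */
	sg_init_one(&sg_in, &can_tx_msg->tx_in, sizeof(can_tx_msg->tx_in));
	sgs[1] = &sg_in;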

+           !virtio_has_feature(vq->vdev, VIRTIO_RING_F_INDIRECT_DESC))) {
+               netif_stop_queue(dev);
+               netdev_dbg(dev, "TX: Normal stop queue\n");
+       }
+
+       spin_unlock_irqrestore(&priv->tx_lock, flags);
+
+kick:
+       if (netif_queue_stopped(dev) || !netdev_xmit_more()) {
+               if (!virtqueue_kick(vq))
+                       netdev_err(dev, "%s(): Kick failed\n", __func__);
+       }
+
+       return xmit_ret;
+}


