Subject: Re: Re: [PATCH v2 07/11] transport-fabrics: introduce opcodes
On Fri, Jun 02, 2023 at 04:39:24PM +0800, zhenwei pi wrote:
> On 6/1/23 04:55, Stefan Hajnoczi wrote:
> > On Thu, May 04, 2023 at 04:19:06PM +0800, zhenwei pi wrote:
> > Does virtio_of_op_get_device_feature return the feature bits offered by
> > the device, or does it update to reflect the negotiated feature bits
> > after virtio_of_op_set_driver_feature?
>
> virtio_of_op_get_device_feature returns the same feature bits after
> virtio_of_op_set_driver_feature, because 1) the device feature is the
> capability of the device, and 2) a target may be shared by multiple
> initiators.
>
> For now, I don't see any dependence on getting the driver feature. Do you
> have any concerns about this?

No, that sounds good. I just want the semantics to be clearly defined
because VIRTIO transports differ in whether the driver can read back
feature bits after negotiation. Doing so is not necessary because the
Device Status Field already indicates whether or not feature bit
negotiation was successful.

> > > +
> > > +\paragraph{virtio_of_op_set_driver_feature}\label{sec:Virtio Transport Options / Virtio Over Fabrics / Transmission Protocol / Opcodes Definition / virtio_of_op_set_driver_feature}
> > > +
> > > +virtio_of_op_set_driver_feature is used to set the driver feature, for the control queue only.
> > > +The initiator MUST issue a \nameref{sec:Virtio Transport Options / Virtio Over Fabrics / Transmission Protocol / Commands Definition / Feature Command},
> > > +and specify the value field of the Common Command as le64.
> > > +
> > > +The initiator uses the feature_select field to select which feature bits to set.
> > > +Value 0x0 selects Feature Bits 0 to 63, 0x1 selects Feature Bits 64 to 127, etc.
> > > +
> > > +\paragraph{virtio_of_op_get_num_queues}\label{sec:Virtio Transport Options / Virtio Over Fabrics / Transmission Protocol / Opcodes Definition / virtio_of_op_get_num_queues}
> > > +
> > > +virtio_of_op_get_num_queues is used to get the number of queues, for the control queue only.
> > > +The initiator MUST issue a \nameref{sec:Virtio Transport Options / Virtio Over Fabrics / Transmission Protocol / Commands Definition / Common Command},
> > > +and read from the value field of the Completion as le16.
> > > +
> > > +\paragraph{virtio_of_op_get_queue_size}\label{sec:Virtio Transport Options / Virtio Over Fabrics / Transmission Protocol / Opcodes Definition / virtio_of_op_get_queue_size}
> > > +
> > > +virtio_of_op_get_queue_size is used to get the size of a specified queue, for the control queue only.
> > > +The initiator MUST issue a \nameref{sec:Virtio Transport Options / Virtio Over Fabrics / Transmission Protocol / Commands Definition / Queue Command} with the specified queue_id,
> > > +and read from the value field of the Completion as le16.
> >
> > Is it possible to set the queue size? For example, the PCI transport
> > allows the driver to lower the queue size but not increase it (see
> > 4.1.5.1.3 Virtqueue Configuration).
>
> Agree. Because a target may be shared by multiple initiators, it's not
> reasonable to set the queue size on the target; the queue size only
> affects the initiator itself.
> For example, a target supports queue size 1024. initiatorX uses a queue
> size of 128, and initiatorY uses 1024. Do you have any suggestions about
> this?

I assumed that there is a 1:1 mapping between VIRTIO Over Fabrics targets
(TVQN + Target ID) and VIRTIO devices. I expected initiatorY's Connect
Command to be rejected by the target when initiatorX is already connected.
Therefore there is no conflict between two initiators choosing different
queue sizes.

Anyway, I see no issue with allowing the initiator to reduce the queue
size. This allows the target to allocate fewer resources to the device
until the next device reset.

Stefan