
Subject: Re: [virtio-comment] [PATCH v3 1/4] Add VIRTIO_RING_F_INDIRECT_SIZE


On Thursday, 24 March 2022 11:36:50 CET Michael S. Tsirkin wrote:
> On Thu, Mar 24, 2022 at 10:16:00AM +0100, Christian Schoenebeck wrote:
> > On Wednesday, 23 March 2022 13:35:00 CET Michael S. Tsirkin wrote:
> > > On Wed, Mar 23, 2022 at 11:20:07AM +0100, Christian Schoenebeck wrote:
> > > > On Monday, 21 March 2022 23:13:55 CET Michael S. Tsirkin wrote:
> > > > > On Mon, Mar 21, 2022 at 10:23:07AM +0100, Christian Schoenebeck wrote:
> > > > > > On Sunday, 20 March 2022 22:52:16 CET Michael S. Tsirkin wrote:
> > > > > > > On Sun, Mar 20, 2022 at 06:43:53PM +0100, Christian Schoenebeck wrote:
> > > > > > > OK let's go back and agree on what we are trying to achieve.
> > > > > > > The github issue and the cover letter imply that while indirect
> > > > > > > descriptors would normally allow huge tables, we artificially
> > > > > > > limit them to queue size, and you want to be able to relax that.
> > > > > > 
> > > > > > Correct, that's my motivation for all of this.
> > > > > 
> > > > > Okay. So I think that given this, we can limit the total number
> > > > > of non-indirect descriptors, including non-indirect ones in a
> > > > > chain + all the ones in the indirect pointer table if any, and
> > > > > excluding the indirect descriptor itself, and this will address
> > > > > the issue you are describing here, right?
> > > > 
> > > > As far as I understand your suggestion, yes, it would cover:
> > > > 
> > > > A. My use case [1]: allowing indirect table length > queue size.
> > > > B. Stefan's use case [2]: forcing indirect table length < queue size.
> > > > C. Your use case: forcing chain (within FIFOs) length < queue size.
> > > > 
> > > > [1] https://lore.kernel.org/netdev/cover.1640870037.git.linux_oss@crudebyte.com/
> > > > [2] https://lore.kernel.org/all/YXpmwP6RtvY0BmSM@stefanha-x1.localdomain/
> > > > 
> > > > However it would not cover:
> > > > 
> > > > D. Your other use case: blocking indirect tables.
> > > 
> > > Well, the practical use-case I have in mind is actually for when using
> > > mergeable buffers with the network device: in practice there's never an
> > > s/g list, but there's no way for the device to know that. So maybe we
> > > can declare that indirect must not be used when there's a single
> > > descriptor in the table (especially when the new feature is accepted),
> > > and then just limiting s/g to 1 will do the trick in this specific case.
> > > 
> > > > E. Potential other people's need: max. chain length (within FIFOs) !=
> > > >    max. indirect table length.
> > > 
> > > I don't understand this one, sorry.
> > 
> > Example: say you would want to allow large indirect tables, but OTOH want
> > to block any chained (pre-allocated) direct descriptors in the vring FIFOs
> > themselves, e.g.:
> > 	Queue Size (max. # "buffers" in FIFO) = 127
> > 	Indirect Size (max. # segments in indirect descr. table) = 65535
> > 	Direct Size (max. # chained descr. per "buffer" in FIFO) = 1
> > 	Total Size (max. # sum of all descr. per "buffer") = 65536
> > 
> > Your suggestion would not cover this example. Why would somebody want
> > that? Because of device/driver simplification
> 
> It does not simplify the driver, does it? It's always an extra restriction
> on it.

Why not? If it's the driver that sets Direct Size = 1, then it can ignore 
handling chained direct descriptors altogether. Sounds like a simplification 
to me.

> > (i.e. same motivation you apparently
> > had in mind for blocking indirect tables for network devices)
> 
> OK I got this one. Device writers do complain about the complexity.
> However the tradeoff in limiting what a device accepts is the ability to
> migrate to a different device. So there's an advantage to making this a
> separate feature bit so that it's clear to people it's a low-end device.
> 
> > and to prevent one side from starving because the other side filled the
> > entire FIFO with large bulk data for a single message.
> 
> As for this last one, I don't see why it would be the device's problem.
> If the driver wants to starve the device it can always do so; if it wants
> to use indirect aggressively to avoid filling up the queue it can do so
> too.

To actually make it possible to enforce this kind of policy: "do not use the 
FIFO for large bulk data, use indirect space for that instead".

> > > > Instead of mixing different limits with each other yet again, what
> > > > about using 3 distinct fields for those limits instead:
> > > > 
> > > > 1. max. indirect table length (any descriptor in the table)
> > > > 2. max. chain length (currently: descriptors within FIFOs)
> > > > 3. max. total descriptors per buffer (sum of all descriptors, both
> > > >    from FIFOs and indirect tables, including the descriptor pointing
> > > >    to a table).
> > > 
> > > So this doesn't actually address at least what you called "your
> > > use-case" above. Well it almost does, but not exactly because
> > > of the extra indirect descriptor.
> > 
> > Of course it would cover it.
> > And of course you would have to either subtract
> > or add one descriptor somewhere,
> 
> Well, indirect descriptors can be chained with VIRTQ_DESC_F_NEXT, right?
> The number of descriptors in the chain is not really predictable; it's
> not always 1.

I don't think so:

  2.6.5.3.1 Driver Requirements: Indirect Descriptors
  ..
  "A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and VIRTQ_DESC_F_NEXT in
  flags."

So as far as I can see, the number of direct descriptors is currently 
always exactly one if an indirect table is used.

Best regards,
Christian Schoenebeck
