Subject: RE: [virtio-dev] [PATCH v2] virtio: Improve queue_reset polarity to match to default reset state



> From: Jason Wang <jasowang@redhat.com>
> Sent: Wednesday, April 27, 2022 11:15 PM
> 
> On Wed, Apr 27, 2022 at 11:39 PM Parav Pandit <parav@nvidia.com> wrote:
> >
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: Wednesday, April 27, 2022 11:30 AM
> > >
> > > On Wed, Apr 27, 2022 at 02:51:36PM +0000, Parav Pandit wrote:
> > > >
> > > > > From: Jason Wang <jasowang@redhat.com>
> > > > > Sent: Wednesday, April 27, 2022 7:30 AM
> > > > >
> > > > > On Wed, Apr 27, 2022 at 6:26 PM Parav Pandit
> > > > > <parav@nvidia.com> wrote:
> > > > > >
> > > > > > example flow:
> > > > > > a) 0,0 -> device init time value
> > > > > > b) 1,0 -> vq is enabled by driver and working
> > > > >
> > > > > Did you see my reply in V1? What's the reason for using a
> > > > > write-to-clear behavior that is different from the device status?
> > > > >
> > > > > We can simply make this 1, 1 here and let the driver write
> > > > > 0 to reset the virtqueue.
> > > > >
> > > > > And if we do this, queue_enable and queue_reset are always
> > > > > the same, so we can simply reuse queue_enable.
> > > > >
> > > > Yes, I know we can make this work using a new feature bit + a
> > > > single queue_enable register.
> > > > I replied to that in v0 to Michael.
> > >
> > > A bigger question in my eyes is that down the road we might want to
> > > be able to stop the ring without having it lose state.
> > > The natural interface for that seems to be writing 0 to queue enable.
> > Why queue_enable and not queue_reset?
> >
> > To me this interface is unlikely to be performant or useful for such
> > a case. When we want to pause/stop the VQ and query the state, we
> > need a performant scheme that can even work in a batch for all the
> > VQs. At that point, programming 64 registers to pause/stop VQs
> > without losing state and querying their indices etc. won't be
> > scalable with a register interface.
> 
> The register interface to sync indices has already been implemented in real
> hardware for years.
> 
Sure. I meant that the use case of pausing a VQ and restarting it later will require more plumbing than just an enable/disable register.
And to do that, a register interface won't be performant or scalable, so for the wider use case it may not be a good choice.
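
To make the scalability point concrete, stopping N VQs through per-queue
registers looks roughly like the sketch below (illustrative only: the
struct layout, names and the 'done' polarity are my placeholders, not
the registers defined by this patch).

    #include <stdint.h>

    #define NUM_VQS 64

    /* Hypothetical per-VQ register window; the layout is illustrative
     * only. */
    struct vq_regs {
        volatile uint16_t queue_enable;
        volatile uint16_t queue_reset;
    };

    /* Stopping every VQ through registers means one write plus one
     * busy-wait poll per queue -- nothing batches across queues. */
    static void stop_all_vqs(struct vq_regs *vq, uint16_t reset_done)
    {
        for (int i = 0; i < NUM_VQS; i++) {
            vq[i].queue_reset = 1;          /* request reset of VQ i */
            while (vq[i].queue_reset != reset_done)
                ;                           /* spin until device acks */
        }
    }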

I also explained the other reason, that we lose the state information with this busy-wait register, in my other reply to v2, and summarized it in the GitHub issue at Michael's request.

> > I imagine an AQ (likely) or some other interface.
> 
> So do the queue_enable registers; we need to write 1 to queue_enable
> for each virtqueue before DRIVER_OK.
> 
> Whether to allow writing 0 to queue_enable is orthogonal to scalability.

Sure, let's sync after you get a chance to go through my other reply to Michael about why we lose the state with a single busy-wait register,
and hence why the queue_reset register (with the fix) is better.
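
For anyone skimming the thread, the two directions being weighed look
roughly like this from the driver side (a sketch under assumed names,
not spec text for either proposal):

    #include <stdint.h>

    /* Sketch only, contrasting the two designs discussed above; the
     * struct layout and function names are invented for illustration. */
    struct common_cfg {
        volatile uint16_t queue_enable;
        volatile uint16_t queue_reset;
    };

    /* Jason's direction: no second register; writing 0 to queue_enable
     * resets the VQ, so "reset" and "never enabled" read the same. */
    static void reset_via_enable(struct common_cfg *cfg)
    {
        cfg->queue_enable = 0;
    }

    /* This patch: a dedicated queue_reset register, so the enabled,
     * disabled and reset-completed states stay distinguishable. */
    static void reset_via_queue_reset(struct common_cfg *cfg, uint16_t done)
    {
        cfg->queue_reset = 1;               /* request the reset */
        while (cfg->queue_reset != done)
            ;                               /* poll; 'done' polarity per the fix */
    }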

