

Subject: Re: [PATCH v3] virtio-snd: add virtio sound device specification


On 19.12.2019 08:17, Gerd Hoffmann wrote:
   Hi,

The driver can still split the data from the application into a set of
smaller virtio buffers.  And this is how I would write a driver.  Create
a bunch of buffers, period_bytes each, enough to cover buffer_bytes.
Then go submit them, re-use them round-robin (each time the device
completes a buffer, re-fill and re-submit it), without a special case
with one big buffer for the (playback) stream start.
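
A minimal driver-side sketch of that scheme, in plain C.  All names and
sizes here are assumptions for illustration only; submit_buffer() just
stands in for the real virtqueue submission call:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PERIOD_BYTES  4096u
#define BUFFER_BYTES  (8u * PERIOD_BYTES)
#define NUM_PERIODS   (BUFFER_BYTES / PERIOD_BYTES)

static uint8_t stream_buf[BUFFER_BYTES];

/* Placeholder for putting one chunk on the tx virtqueue. */
static void submit_buffer(void *data, size_t len)
{
    printf("queued %zu bytes at %p\n", len, data);
}

/* Stream start: queue every chunk once; no special one-big-buffer case. */
static void start_playback(void)
{
    for (size_t i = 0; i < NUM_PERIODS; i++)
        submit_buffer(&stream_buf[i * PERIOD_BYTES], PERIOD_BYTES);
}

/* Completion callback: re-fill the chunk with the next period of audio
 * from the application and put it straight back on the queue. */
static void on_buffer_complete(void *data, size_t len)
{
    /* ... copy the next len bytes of samples into data ... */
    submit_buffer(data, len);
}

int main(void)
{
    start_playback();
    /* The device would later complete buffers one by one: */
    on_buffer_complete(&stream_buf[0], PERIOD_BYTES);
    return 0;
}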

Yes, but what about the device side? Most likely, it will copy the contents
of the buffer to another location and immediately complete the request
(otherwise it is not clear when to complete it).

qemu has rather small backend buffers, to keep latencies low.
So, yes, it would copy data to the backend buffers.  No, it would most
likely not copy over everything immediately.  It would most likely leave
buffers in the virtqueue, read the data piecewise in the audio backend
timer callback, and complete the virtio request once it has read all the
data.  So there wouldn't be a huge gap between completing the request and
feeding the data to the host hardware.
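
For illustration, a device-side sketch of that behaviour (this is not
QEMU code; the types and helpers below are assumptions):

#include <stddef.h>
#include <stdint.h>

struct pending_req {
    const uint8_t *data;   /* guest audio payload               */
    size_t len;            /* total payload size                */
    size_t pos;            /* how much has been consumed so far */
};

/* Stand-in for pushing samples into the (small) host backend buffer;
 * returns how much was actually accepted. */
size_t backend_write(const void *buf, size_t len)
{
    (void)buf;
    return len;
}

/* Stand-in for completing the virtio request (used ring, interrupt). */
void complete_request(struct pending_req *req)
{
    (void)req;
}

/* Called from the audio backend timer with the amount of space that is
 * currently free in the backend buffer.  The request stays in the
 * virtqueue until everything has been read. */
void timer_cb(struct pending_req *req, size_t backend_free)
{
    size_t chunk = req->len - req->pos;

    if (chunk > backend_free)
        chunk = backend_free;

    req->pos += backend_write(req->data + req->pos, chunk);

    if (req->pos == req->len)
        complete_request(req);   /* only now is the buffer "gone" */
}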

Yes, there is a gap, and this causes the following two concerns:

1. An application can rewind its pointer and rewrite part of this buffer. The
worst case is if the application rewrites the part currently being read by the
device.

2. This approach looks like shared memory on a smaller scale, and the whole
point of the messages is lost, because sending a message assumes that what's
gone is gone.


Another reason for this (beside keeping latencies low) is that it is a
good idea to avoid or at least limit guest-triggered allocations on the
host for security reasons.

Yes, that makes sense.


While talking about latencies:  What happened to the idea to signal
additional device latencies (backend buffering and processing ...) to
the driver?

Yeah, such additional latency is runtime in nature, and we could report it
as part of the I/O request completion. We only need to choose the units
(time or bytes).
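
For example, if bytes were chosen, the per-request completion status could
carry it.  A possible layout; the latency_bytes field name is just an
assumption for illustration, not part of the v3 draft:

#include <stdint.h>

typedef uint32_t le32;           /* little-endian 32-bit, as in the spec */

struct virtio_snd_pcm_status {
    le32 status;                 /* request status code                  */
    le32 latency_bytes;          /* extra device-side latency observed   */
                                 /* at completion: backend buffering,    */
                                 /* mixing, resampling, ...              */
};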


In addition, the driver must advance the
hardware position, and completing the request is the only possible place to do
this. In most cases, all this will not coincide with the period notification.

So, period notification should be triggered when the device has actually
completed processing the data (i.e. host playback finished)?  I can see
that this can be useful.  I think qemu wouldn't be able to support that
with the current audio backend design, but that isn't a blocker given
this is an optional feature.  Also I think the spec should clarify how
period notification is supposed to work and how it relates to buffer
completion and latencies.

This means that the notification itself is an optional feature that may or may
not be enabled for message-based transport.

Which transports are expected to support that?
If multiple transports:  Can a device signal it supports these
notifications for one transport but not for another?

I think this does not depend on transport, but take a look at the comment
below.


I think it's better not to set queued buffer size restrictions.

Let's make it SHOULD then.

Again, if periods are not used, then there's no period_bytes. And they are not
used, for example, in most low-latency applications.

(Well, technically speaking, low-latency applications in Linux must configure
a period size for ALSA. But then they usually suppress period notifications
and send very small chunks of data, which usually do not correspond to the
size of the period at all.)

So you want to pass the data on to the device in whatever chunks the
userspace application hands to the driver?  Ok, reasonable.

Should we define period_bytes as upper limit then, i.e. virtio buffers
should be period_bytes or smaller?

I think it's becoming more and more complicated, and we should revise
everything.

Now we have only optional notifications (periods and xrun). But let's say
that the device MAY send the xrun notification and MUST send the period
notification, and remove them from the feature bits.

Then we define the requirements for message-based transport (see the sketch
below):
1. the tx and rx queues are filled with buffers of period_bytes size,
2. buffer_bytes % period_bytes == 0,
3. the total size of the buffers in a queue is buffer_bytes.

It is still not clear when to complete playback buffers, but at least such
an approach makes the driver interrupt-driven.

Basically, it is what you said, but with small changes.
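
A small sketch of the queue arrangement implied by requirements 1-3
(hypothetical names, just to make the arithmetic explicit):

#include <assert.h>
#include <stddef.h>

struct stream_params {
    size_t buffer_bytes;
    size_t period_bytes;
};

/* Number of period_bytes-sized buffers to keep on the tx/rx queue. */
size_t queued_buffers(const struct stream_params *p)
{
    assert(p->period_bytes != 0);
    assert(p->buffer_bytes % p->period_bytes == 0);   /* requirement 2 */

    /* Requirements 1 and 3: buffer_bytes / period_bytes buffers of
     * period_bytes each, so the queue holds buffer_bytes in total. */
    return p->buffer_bytes / p->period_bytes;
}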


Completely different topic:  I think we should consider adding tags to
streams.  Some background:  In qemu we have two almost identical hda
codecs: hda-duplex and hda-micro.  The only difference is that
hda-duplex has the input stream tagged as "line-in" and the output
stream as "line-out", whereas hda-micro has the input channel tagged as
"microphone" and the output channel as "speaker".  hda-micro was created
because some Windows application refused to accept "line-in" as audio
source.  So applications clearly do care, and in case you have two input
streams it would help applications pick a reasonable default.

Should it be a fixed-size field in the capabilities structure?
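
Purely as an illustration of the fixed-size variant; the structure and
field names below are assumptions, not taken from the draft:

#include <stdint.h>

#define VIRTIO_SND_TAG_LEN  32   /* assumed maximum length, NUL-padded */

struct stream_caps_with_tag {
    /* ... existing per-stream capability fields ... */
    uint8_t tag[VIRTIO_SND_TAG_LEN];   /* e.g. "line-in", "microphone" */
};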



cheers,
   Gerd



--
Anton Yakovlev
Senior Software Engineer

OpenSynergy GmbH
Rotherstr. 20, 10245 Berlin

Phone: +49 30 60 98 54 0
E-Mail: anton.yakovlev@opensynergy.com

www.opensynergy.com

Handelsregister/Commercial Registry: Amtsgericht Charlottenburg, HRB 108616B
Geschäftsführer/Managing Director: Regis Adjamah

