
Subject: Re: [virtio-dev] [PATCH] snd: Add virtio sound device specification

On 30.10.2019 13:11, Mark Brown wrote:
On Tue, Oct 29, 2019 at 02:16:04PM +0100, Anton Yakovlev wrote:
On 29.10.2019 13:18, Mark Brown wrote:
On Tue, Oct 29, 2019 at 11:14:52AM +0100, Anton Yakovlev wrote:
On 28.10.2019 17:05, Liam Girdwood wrote:

What about buffer and period size capabilities ?

In this specification we assume that the buffer size is up to the guest's decision.
Also, period size is an ALSA-only concept and should not be mentioned here
at all (we are not tied only to ALSA).

Period size isn't just an ALSA thing - it's a widely supported and used
hardware feature.  When using a ring buffer (or just sending large
amounts of data at once, but mainly ring buffers) it is useful to know
how the hardware is progressing through the data without having to use
CPU timers (which may not be well synced with the audio clock), the
period interrupts do that.

Periods here are a kind of notification frequency, right? They're supposed to be
used for sending notifications to the driver?

Yes, the driver gets a notification every time the hardware makes it
through a period of data.

If a device implementation has a reliable way to receive and forward such notifications to a driver, then it's fine. The reason we didn't add this to the spec is that usually a device interacts with, or itself implements, some kind of software mixer, and all communication with the real hardware is hidden behind multiple abstraction layers. In such cases it's either very difficult or outright impossible to map period-based notifications onto whatever the software mixer provides to its clients.

Perhaps it could be an optional feature.

+struct virtio_snd_pcm_xfer {
+    le32 stream;
+    u8 data[];
+    le32 status;
+    le32 actual_length;

Not following this, is actual_length the size of data[]? If so, it must
precede data[].

No, the actual_length field is supposed to be used by the device side to report
the actual number of bytes read from/written to a buffer. In a real-world
scenario, if an I/O request contains N bytes, a device can play/capture *up
to* N bytes. Thus, it's required to report this length back to the driver.
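[Editor's note: a sketch of the "up to N bytes" semantics described above. The struct layout and the hw_space parameter are illustrative, not from the spec: the driver submits len bytes, the device consumes whatever fits, and reports the consumed amount back via actual_length.]

```c
#include <stdint.h>

/* Illustrative status footer: what the device reports back per request. */
struct xfer_status {
    uint32_t status;        /* 0 = OK in this sketch */
    uint32_t actual_length; /* bytes actually consumed by the device */
};

/* The device plays at most hw_space bytes of the len-byte request
 * and reports the consumed amount, which may be less than len. */
static struct xfer_status device_play(const uint8_t *data, uint32_t len,
                                      uint32_t hw_space)
{
    struct xfer_status st;
    st.actual_length = len < hw_space ? len : hw_space;
    st.status = 0;
    (void)data; /* a real device would copy st.actual_length bytes here */
    return st;
}
```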

So really there are two structs here: a header struct with just stream in
it, and then a tail struct with the length and status information which
the consumer locates by looking at the end of the buffer and working
backwards.

Yes, it is. Since both of these structures are coupled together and must be
sent at the same time, in the spec they are represented as one "structure".
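[Editor's note: one way to read the combined layout quoted above, sketched as two separate structs. Field names follow the quoted struct; the helper that works backwards from the end of the buffer is illustrative.]

```c
#include <stdint.h>
#include <string.h>

/* Fixed header, placed before the variable-length payload. */
struct xfer_hdr { uint32_t stream; };

/* Fixed footer, placed after the payload; the consumer finds it by
 * working back from the end of the buffer. */
struct xfer_ftr {
    uint32_t status;
    uint32_t actual_length;
};

static struct xfer_ftr read_footer(const uint8_t *buf, size_t buf_len)
{
    struct xfer_ftr ftr;
    memcpy(&ftr, buf + buf_len - sizeof(ftr), sizeof(ftr));
    return ftr;
}
```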

OK...  that is a fairly weird and confusing way of writing things.

Not sure how to describe this in the context of the specification in a better way.

Since xrun conditions are higher-level concepts, we decided to delegate
such issues to the guest application itself. It helps to keep the overall
design simpler. And it seems there's not much we can do if an xrun condition
happens on the device side.

I'm not so sure about that - in a message based system I would expect
the device side to be able to at least say "I am throwing away data" for
cases where it's arriving too fast to be consumed and the device ran out
of buffer space, and ideally also say if it underflows.  It's more of a
high level concept with free running ring buffers where the device can
just happily keep reading from or writing to the buffer regardless of
whether the host is paying attention to it.

That's the point. The spec describes a device, and the device is just supposed
to read from/write to the hardware buffer regardless of its content.

The rate at which data is consumed or produced in an audio system is
driven by the audio hardware.  This is very likely to not be synced to
other clock domains in the system so users are reliant on the hardware
to drive the rate at which data is driven through the system and hence
need support from it for figuring out the rate to transfer data and
handle any problems that occur.  Good error reporting is a part of that,
it makes it much easier to diagnose problems if you can get a direct
and clear report of what went wrong rather than having to for example
listen to the audio and work back from that.  Like I say this is
especially true with a message based system where software has less
visibility of how the hardware is progressing through the buffer.

Yes, it's true. And it's not that we are against having error reporting. The problem is that there's no deterministic way to treat such errors in a pv solution.

It's like with the periods discussed above. If a device is a native ALSA application and talks directly with the hardware in exclusive mode, then it can signal the actual underflow/overflow condition reported by the ALSA layer. But if a device is a client of a software mixer server, then the reported error might be server-specific (for example, "underflow" might mean that the server's queue is empty, while the real hardware buffer still contains 10 ms or 50 ms of frames for playback; i.e., the guest application probably has enough time to provide new data).

I would say it's an issue with mixing together the driver and device contexts, where the driver context is always predictable but the device's is not. And sometimes that can lead to unexpected behavior.

In the end, the main source of xrun conditions is delayed execution of a virtual machine or an application in it (double scheduling and so on). And 44100 Hz is still ~44100 frames per second. After different experiments, we decided to stick to the simpler design.

Again, it could be an optional feature if the device has a reliable source of error notifications.

Also, the main problem with xruns is: what to do with this information? Let's
assume the device can report something like "I'm going to run out of frames
for playback soon" or "I have no more frames for playback"; then what? In a
low-latency scenario, the user-space application provides small chunks of
data, which means the hardware buffer will be empty in something like 95% of
cases. And the only thing the driver can do is somehow indicate the xrun to
the upper layer (which will not be true from the application's point of view
if the hardware pointer has not crossed the application pointer yet in ALSA).
I.e., exactly the same result as when there's no notification from the
device side.

Error handling is the application's problem, usually it will either give
up entirely or restart the stream.  Restarting is useful since if you've
got a situation where clocks have drifted apart what'll often happen is
that you'll get constant glitches which are extremely distracting to
users, when you restart the drift gets reset and things will be OK for
as long as it takes for drift to reaccumulate.  Obviously that's not
great and the system should be trying to avoid it but it's at least
mitigation and like I say the explicit notification that something went
wrong can be very helpful in diagnosing problems.
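[Editor's note: a sketch of the restart strategy described above, with a mocked-up stream standing in for a real audio API. All names here are hypothetical; in ALSA the analogous pattern is snd_pcm_writei() returning -EPIPE and the application re-preparing the stream.]

```c
#include <errno.h>

struct stream { int xruns; int prepared; };

/* Mocked write: fails with -EPIPE while the stream is stopped. */
static int write_frames(struct stream *s, int nframes)
{
    if (!s->prepared)
        return -EPIPE;  /* explicit xrun report from the device */
    return nframes;     /* all frames accepted */
}

static int write_with_recovery(struct stream *s, int nframes)
{
    int ret = write_frames(s, nframes);
    if (ret == -EPIPE) {
        s->xruns++;       /* diagnosable: a clear report of what went wrong */
        s->prepared = 1;  /* restart: accumulated drift is reset */
        ret = write_frames(s, nframes);
    }
    return ret;
}
```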

Yes, I agree. I described our concerns in a comment above.

Anton Yakovlev
Senior Software Engineer

OpenSynergy GmbH
Rotherstr. 20, 10245 Berlin

Phone: +49 30 60 98 54 0
E-Mail: anton.yakovlev@opensynergy.com


Handelsregister/Commercial Registry: Amtsgericht Charlottenburg, HRB 108616B
Geschäftsführer/Managing Director: Regis Adjamah
