Subject: Re: [virtio-dev] [PATCH] snd: Add virtio sound device specification


On 12.11.2019 19:02, Liam Girdwood wrote:
> On Tue, 2019-11-12 at 17:05 +0100, Jean-Philippe Brucker wrote:
>>> This would be a good improvement, it's less copying and would likely
>>> improve user experience, however the buffer ptr still suffers latency
>>> as it's queued (same for stream positions going the other way).
>>
>> Right if the queuing overhead is still too large, then I don't think
>> the current virtqueues can be used.
>
> They can be used for non low latency audio, I don't see any issues if
> latency is not important. e.g. media playback, some gaming, most system
> notifications.

It can support low latency as well. We are not talking about a high-throughput device like block or network devices. The bit rate here is constant and quite low (compared with those), so even with a message-based approach it is not a problem to write/read frames close to the actual hw pointer.
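Just to put a rough number on "quite low" (a back-of-the-envelope sketch; the stream parameters below are only an example, not anything mandated by the spec):

  #include <stdio.h>
  #include <stdint.h>

  /* Rough estimate of the payload a message-based transport has to move
   * for a single PCM stream. Example parameters only. */
  int main(void)
  {
      const uint32_t rate_hz      = 48000; /* sample rate */
      const uint32_t channels     = 2;     /* stereo */
      const uint32_t sample_bytes = 2;     /* 16-bit samples */
      const uint32_t period_us    = 10000; /* 10 ms period */

      uint64_t bytes_per_sec    = (uint64_t)rate_hz * channels * sample_bytes;
      uint64_t bytes_per_period = bytes_per_sec * period_us / 1000000;

      printf("byte rate:  %llu bytes/s (~%llu KiB/s)\n",
             (unsigned long long)bytes_per_sec,
             (unsigned long long)(bytes_per_sec / 1024));
      printf("per period: %llu bytes every %u us\n",
             (unsigned long long)bytes_per_period, period_us);
      return 0;
  }

That is on the order of 190 KiB/s and a couple of KiB per period, which is nothing compared with what block or network devices push through their virtqueues.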

And here I want to talk about the real issue, which I have already mentioned a few times: operating system schedulers. You describe a low latency solution, but you haven't explained how you are going to support realtime properties. The closer to the hw position we read/write, the easier it is to miss a deadline.

The fact is that tasks can be delayed. Using RT schedulers in soft realtime OSes does not help much, since they do not give you any guarantees. Thus, the delay can be random and quite large, and it fully depends on the number of available cores (and their speed). If a VM has dedicated cores, that's great. But usually, with a type 2 hypervisor, we will have shared cores, which makes the situation even worse.
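The kind of delay I mean is easy to observe for yourself; a minimal userspace sketch (plain Linux, nothing virtio-specific) that measures how late a periodic 10 ms wakeup actually fires could look like this:

  #include <stdio.h>
  #include <stdint.h>
  #include <time.h>

  /* Wake up every 10 ms (roughly one audio period) and record the worst
   * observed wakeup delay. Sketch only, error handling omitted. */
  static int64_t ts_diff_ns(struct timespec a, struct timespec b)
  {
      return (int64_t)(a.tv_sec - b.tv_sec) * 1000000000LL +
             (a.tv_nsec - b.tv_nsec);
  }

  int main(void)
  {
      const long period_ns = 10 * 1000 * 1000;
      struct timespec next, now;
      int64_t worst = 0;

      clock_gettime(CLOCK_MONOTONIC, &next);
      for (int i = 0; i < 1000; i++) {
          next.tv_nsec += period_ns;
          if (next.tv_nsec >= 1000000000L) {
              next.tv_nsec -= 1000000000L;
              next.tv_sec += 1;
          }
          clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
          clock_gettime(CLOCK_MONOTONIC, &now);
          int64_t late = ts_diff_ns(now, next);
          if (late > worst)
              worst = late;
      }
      printf("worst wakeup delay: %lld us\n", (long long)(worst / 1000));
      return 0;
  }

On an idle machine the worst case stays small, but on a loaded host with shared cores it can easily exceed a whole audio period, and that is exactly the deadline miss I am worried about.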

Since low latency applications in a guest are not virtualization-aware, they will push latency to the limit. On the other hand, the virtio driver is virtualization-aware and could assist somehow. In our draft solution with the message-based approach, we artificially increase the latency just enough to avoid an xrun condition under the worst possible delay (we simply put additional silence at the beginning of a stream). And it's still far from ideal, since the length of that additional latency depends heavily on the hardware and virtualization setup.
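Roughly what I mean by that extra silence, as a simplified sketch of the idea (not the actual driver code; the worst-case delay value is a setup-dependent guess, which is exactly the problem):

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* Prepend enough silence to a stream so that even the worst expected
   * scheduling delay cannot drain the buffer below the hw pointer. */
  static size_t prepad_silence(uint8_t *buf, size_t buf_size,
                               uint32_t rate_hz, uint32_t frame_bytes,
                               uint32_t worst_delay_us /* tuning guess */)
  {
      /* frames needed to cover the worst-case delay */
      uint64_t frames = (uint64_t)rate_hz * worst_delay_us / 1000000;
      size_t bytes = (size_t)frames * frame_bytes;

      if (bytes > buf_size)
          bytes = buf_size;

      /* 0 is silence for signed PCM formats such as S16; other formats
       * would need their format-specific silence value here. */
      memset(buf, 0, bytes);
      return bytes; /* caller starts writing real samples after this offset */
  }

The whole scheme stands or falls with the choice of worst_delay_us, and that is what depends on the hardware and the virtualization setup.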

With the zero-copy (and friends) approach, how are you going to deal with such issues? Because (in the general case) you might not have enough cores to run things smoothly, and deadlines will be missed quite often.


--
Anton Yakovlev
Senior Software Engineer

OpenSynergy GmbH
Rotherstr. 20, 10245 Berlin


