
virtio-dev message


Subject: Re: [virtio-dev] [PATCH] snd: Add virtio sound device specification

On Wed, 2019-11-13 at 10:44 +0100, Anton Yakovlev wrote:
> On 12.11.2019 19:02, Liam Girdwood wrote:
> > On Tue, 2019-11-12 at 17:05 +0100, Jean-Philippe Brucker wrote:
> > > > This would be a good improvement; it's less copying and would
> > > > likely improve the user experience. However, the buffer ptr
> > > > still suffers latency as it's queued (same for stream
> > > > positions going the other way).
> > > 
> > > Right, if the queuing overhead is still too large, then I don't
> > > think the current virtqueues can be used.
> > 
> > They can be used for non-low-latency audio; I don't see any
> > issues if latency is not important, e.g. media playback, some
> > gaming, most system notifications.
> It can support low latency as well. 

Not really; without zero-copy buffering and immediate position
reporting it's difficult to get latency below a certain threshold. As
discussed, most use cases don't care, but some use cases do. 
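To make the "immediate position reporting" idea concrete, here is a
minimal sketch; the struct layout and names are invented for
illustration and are not part of the proposed spec. The device
publishes its hardware pointer in a shared status page, so the guest
reads it directly instead of waiting for a queued message:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Invented layout for illustration: a status page shared between
 * device and driver. The device updates hw_ptr as it consumes
 * frames; the guest reads it directly, no virtqueue round trip. */
struct snd_shm_status {
    _Atomic uint64_t hw_ptr;   /* frames consumed by the device */
    uint32_t buffer_frames;    /* ring size in frames */
};

/* Frames the guest may still write without overwriting data the
 * device has not yet consumed. appl_ptr is the guest's write
 * pointer into the same ring. */
static uint64_t writable_frames(struct snd_shm_status *s,
                                uint64_t appl_ptr)
{
    uint64_t hw = atomic_load(&s->hw_ptr);  /* immediate position */
    return s->buffer_frames - (appl_ptr - hw);
}
```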

> We are not talking about a high-throughput device like block or
> network devices. The bit stream here is constant and quite low (by
> comparison), so even with a message-based approach it's not a
> problem to write/read frames near the actual hw pointer.

Throughput has no relation to latency. MMC cards have great
throughput but high latency: e.g. recording a WAV to MMC using a
small application buffer (say 8-16 kB) will consistently overflow the
buffer, since MMC write latency is poor (even though, once a write
starts, throughput is very high). 
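The arithmetic behind that example, as a small helper (the 48 kHz /
16-bit / stereo format is an assumed, typical capture format):

```c
#include <stdint.h>

/* How long an application buffer of a given size lasts for a PCM
 * capture stream. If a single MMC write stalls for longer than
 * this, the capture buffer overflows. */
static double buffer_ms(uint32_t bytes, uint32_t rate,
                        uint32_t channels, uint32_t bytes_per_sample)
{
    uint32_t bytes_per_frame = channels * bytes_per_sample;
    return 1000.0 * bytes / (double)(rate * bytes_per_frame);
}
/* buffer_ms(8192, 48000, 2, 2)  -> ~42.7 ms
 * buffer_ms(16384, 48000, 2, 2) -> ~85.3 ms */
```

So the whole buffer covers well under 100 ms of audio, which a slow
flash write can easily exceed.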

> And here I want to talk about the real issue, which I already
> mentioned a few times: operating system schedulers. You describe a
> low-latency solution, but you didn't explain how you are going to
> support realtime properties. The closer to the hw position we
> read/write, the easier it is to miss a deadline.

This is hypervisor/HW/guest specific and unrelated to this proposal. 

In general, if I have guest core affinity, then hypervisor context
switching for that guest is in the low tens of µs (depending on the
hypervisor and HW, of course). This is good enough for low-latency
audio.
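A back-of-envelope check of that claim, assuming a small (hypothetical)
period of 48 frames at 48 kHz:

```c
#include <stdint.h>

/* Duration of one audio period in microseconds. One period of 48
 * frames at 48 kHz is 1000 µs, so ~10-30 µs of hypervisor context
 * switching consumes only a few percent of each period. */
static double period_us(uint32_t frames, uint32_t rate)
{
    return 1e6 * frames / (double)rate;
}
/* period_us(48, 48000) -> 1000 µs */
```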

> The fact: tasks can be delayed. Using RT schedulers in soft-realtime
> OSes does not help much, since they do not give you any guarantees.
> Thus the delay can be random and quite large, and it fully depends
> on the number of available cores (and their speed). If a VM has
> dedicated cores, that's great. But usually for a type 2 hypervisor
> we will have shared cores, which makes the situation even worse.
> Since low-latency applications in a guest are not
> virtualization-aware, they will push latency to the limit. On the
> other side, the virtio driver is virtualization-aware and could
> assist somehow. In our draft solution with the message-based
> approach we artificially increase the latency enough to avoid an
> xrun condition under the worst possible delay (we just put
> additional silence at the beginning of a stream). And it's still far
> from ideal, since the length of that additional latency highly
> depends on the hardware and virtualization setup.
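The silence prepad described in the quoted draft amounts to something
like the following sketch; the worst-case delay value is an assumed,
setup-dependent estimate, not something the spec defines:

```c
#include <stdint.h>

/* Prepad the stream with enough silent frames to cover the worst
 * expected scheduling delay, so the device does not underrun even
 * if the guest task is late. worst_delay_us is an assumed,
 * hardware/hypervisor-dependent estimate. */
static uint32_t silence_frames(uint32_t rate, uint32_t worst_delay_us)
{
    /* round up so the cover is never short */
    return (uint32_t)(((uint64_t)rate * worst_delay_us + 999999)
                      / 1000000);
}
/* silence_frames(48000, 10000) -> 480 frames (10 ms of silence) */
```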

I'm saying there is nothing wrong with the message-based approach for
most use cases. We just need to build support into the configuration
structures for zero copy and immediate position reporting, so that
guests/drivers that can support it can use it.

I would like the PCM configuration data to be robust enough to be able
to report such capabilities and for guests to configure these
capabilities if supported. I know a lot of the buffer sharing APIs are
at proposal stage, so I think we are good as long as we have headroom
to add these later (without breaking any backward compatibility).
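One possible shape for such capability reporting, purely as an
illustration; these feature-bit names and the struct are invented,
not proposed spec values:

```c
#include <stdint.h>

/* Invented capability bits: the device offers optional low-latency
 * features in the PCM config, the guest uses them only if present,
 * and unknown bits are ignored, preserving backward compatibility. */
#define SND_PCM_F_ZEROCOPY  (1u << 0)  /* shared-memory data buffers */
#define SND_PCM_F_HWPOS     (1u << 1)  /* immediate position reporting */

struct pcm_caps {          /* illustrative, not from the spec */
    uint32_t features;     /* bitmask offered by the device */
};

/* True only if both low-latency features are available. */
static int can_use_low_latency(const struct pcm_caps *c)
{
    uint32_t need = SND_PCM_F_ZEROCOPY | SND_PCM_F_HWPOS;
    return (c->features & need) == need;
}
```

Older guests that don't know the bits simply never set them, which is
exactly the headroom described above.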


