

Subject: Re: [virtio-dev] [PATCH] snd: Add virtio sound device specification


On 03.12.2019 10:00, Gerd Hoffmann wrote:

...snip...

PCM_MSG -- I would drop the feature bit and make that mandatory, so we
have a common baseline on which all drivers and devices can agree.

Then we need to inform the device which "transport" will be in use (I
assumed it would be feature negotiation).

Whenever other transports (i.e. via shared memory) are supported: yes,
that should be a feature bit.

Not sure about choosing the transport.  If both msg (i.e. via virtqueue)
and shared memory are available, does it make sense to allow the driver
to choose the transport each time it starts a stream?

A shared memory based transport will in any case require some additional
actions. In the HOST_MEM case the driver will need to get access to the
host buffer somehow. In the GUEST_MEM case the driver will need to
provide a buffer for the host.
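
Just to illustrate the feature-negotiation side mentioned above (the bit
names below are made up, not taken from any spec draft):

   /* Hypothetical feature bits, for illustration only. The message-based
    * (virtqueue) transport is assumed to be mandatory and needs no bit.
    */
   #define VIRTIO_SND_F_HOST_MEM   0  /* device can expose a host-side buffer */
   #define VIRTIO_SND_F_GUEST_MEM  1  /* device can use a guest-provided buffer */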

At first sight, we could extend the set_params request with a
transport_type field and some additional information.

Or have a per-transport set_params request command.

Or, since we have now decided to make the message-based transport the
default one, for the moment we can go without explicit transport
selection. Additional extensions could be made in the future, once the
buffer sharing mechanism has stabilized.
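
If such a selection is added later, the extended request could look
roughly like this (a sketch only, all names below are hypothetical):

   /* Hypothetical extension of the set_params request, for illustration
    * only. transport_type would pick one of the negotiated transports,
    * and transport-specific data (e.g. a buffer sg-list for GUEST_MEM)
    * would follow the fixed part of the request.
    */
   struct virtio_snd_pcm_set_params_ext {
       /* ... existing set_params fields (format, rate, channels, ...) */
       le32 transport_type;  /* hypothetical: which transport to use */
       /* transport-specific payload follows */
   };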


For example, in the case of GUEST_MEM the request could be followed by
a buffer sg-list.

I'm not convinced guest_mem is that useful.  host_mem allows giving the
guest access to the buffers used by the host's sound hardware, which is
probably what you need if the MSG transport can't handle the latency
requirements you have.

Actually, it might be pretty useful.

If a device is not capable of sharing host memory with a guest but is
still capable of using guest shared memory, then there's a good use case
for that: when a buffer is mapped into a user space application. It
assumes that the driver is not involved in frame transfer at all (thus,
it can not queue buffers into a virtqueue and send notifications to the
device). But if that memory is shared with the device (along with some
piece of memory containing the application's position), then it's
possible to implement a quite simple poller in the device. And it would
be well aligned with a common software mixer workflow (as in PulseAudio
or QEMU), where the mixer invokes the client's callbacks for
reading/writing the next piece of data. The device would only need to
check the application's position and directly read from/write to the
shared buffer.
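
As a rough sketch of what such a shared region could carry (layout and
names here are purely illustrative, not a proposal for the spec):

   /* Hypothetical layout of a guest-provided shared memory region.
    * The userspace application updates app_position, and the device-side
    * poller reads/writes the ring buffer relative to that position,
    * without any per-buffer virtqueue traffic.
    */
   struct virtio_snd_guest_mem_region {
       le32 app_position;  /* application read/write position in bytes */
       le32 buffer_bytes;  /* size of the data area that follows */
       /* u8 data[];          the actual audio ring buffer */
   };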


...snip...

If we are going to introduce any buffer constraints, they must be set
by the device in the stream configuration.

If we want to allow the device to specify the min/max period_bytes it
can handle, then yes, that should go into the stream configuration.

Or we use negotiation: the driver asks for period_bytes in set-params,
the device picks the closest period_bytes value it can handle and
returns that.

As I said before, periods are not used everywhere. Also, even in ALSA
such negotiation may be non-trivial. I would propose to leave the choice
of the period_bytes value up to the driver. We could add one more
mandatory field to the set_params request: the driver's buffer size. (If
the driver wants to use the period notification feature, then
buffer_bytes % period_bytes must be 0.) If the device has its own
intermediate buffer of any kind, it's possible to adjust it according to
the buffer_bytes value (e.g. making it no smaller than the specified
size and so on). This way we could resolve the original concerns
regarding possibly different buffer sizes.
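
So the request could look roughly like this (hypothetical layout and
names, just to make the proposal concrete):

   /* Hypothetical set_params layout with a mandatory buffer size --
    * an illustration of the proposal above, not an actual spec struct.
    */
   struct virtio_snd_pcm_set_params {
       /* ... format, rate, channels, features ... */
       le32 buffer_bytes;  /* driver's buffer size, mandatory */
       le32 period_bytes;  /* used only with period notifications */
   };

   /* Illustrative device-side check: with period notifications enabled,
    * the buffer must hold a whole number of periods.
    */
   static bool buffer_size_valid(const struct virtio_snd_pcm_set_params *p,
                                 bool period_notifications)
   {
       if (period_notifications &&
           (!p->period_bytes || p->buffer_bytes % p->period_bytes != 0))
           return false;
       return true;
   }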


Also, the capture stream is a special case. Right now we don't state
explicitly whether a read request is blockable or not.

The concept of "blockable" doesn't exist at that level.  The driver
submits buffers to the device; the device fills them and notifies the
driver when the buffer is full.  It simply doesn't work like a read(2)
syscall.

But you described exactly the "blockable" case: an I/O request is
completed not immediately but upon some condition (the buffer is full).
In the case of message-based transport, both the device and the driver
will have their own buffers.

Well, no.  The device doesn't need any buffers; it can use the buffers
submitted by the driver.  Typical workflow:

   (1) The driver puts a bunch of empty buffers into the rx (record/read)
       virtqueue (each being period_bytes in size).
   (2) The driver starts recording.
   (3) The device fills the first buffer with recorded sound data.
   (4) When the buffer is full the device returns it to the driver,
       takes the next from the virtqueue to continue recording.
   (5) The driver takes the filled buffer and does whatever it wants to
       do with the data (typically pass it on to the userspace app).
   (6) The driver submits a new empty buffer to the virtqueue to make
       sure the device doesn't run out of buffers.

So, it's not a "here is a buffer, fill it please", "here is the next,
..." ping pong game between driver and device.  There is a queue with
multiple buffers instead, and the device fills them one by one.
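
In pseudo-C that loop looks roughly like the sketch below (all helper
names are hypothetical; this is not code from any existing driver):

   /* Illustrative capture loop following the workflow above. */
   static void capture_loop(struct stream *s, size_t period_bytes)
   {
       void *buf;
       int i;

       /* (1) pre-queue a bunch of empty period_bytes-sized buffers */
       for (i = 0; i < NUM_RX_BUFFERS; i++)
           queue_rx_buffer(s, alloc_buffer(period_bytes));

       start_stream(s);                       /* (2) start recording */

       while (stream_running(s)) {
           buf = wait_for_used_buffer(s);     /* (3)-(4) device filled one */
           pass_to_userspace(s, buf);         /* (5) hand the data to the app */
           queue_rx_buffer(s, alloc_buffer(period_bytes));  /* (6) refill */
       }
   }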

Then, should we make this pattern mandatory?


...snip...


And for capturing, these buffers might be filled at different speeds.
For example, in order to improve latency, the device could complete
requests immediately and fill the buffers with whatever it has at the
moment.

Latency obviously depends on period_bytes.  If the driver cares about
latency it should simply work with lots of small buffers instead of a
few big ones.

Well, smaller buffers would mean a higher rate of hypercalls/traps but
better latency. And larger buffers would mean fewer hypercalls/traps but
worse latency. There's always a trade-off, and an implementer might
choose whatever fits better. I mean, this idea was behind the previous
design version (where we could use the actual_length field to support
all possible cases).
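
For concreteness (simple arithmetic, assuming 48 kHz 16-bit stereo,
i.e. 4 bytes per frame, or 192000 bytes per second):

   period_bytes =   960  ->   5 ms per buffer, ~200 completions/second
   period_bytes = 19200  -> 100 ms per buffer,  ~10 completions/second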


--
Anton Yakovlev
Senior Software Engineer

OpenSynergy GmbH
Rotherstr. 20, 10245 Berlin

Phone: +49 30 60 98 54 0
E-Mail: anton.yakovlev@opensynergy.com


