

Subject: Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification


On 27.04.23 15:16, Alexandre Courbot wrote:
On Thu, Apr 27, 2023 at 12:11 AM Alexander Gordeev
<alexander.gordeev@opensynergy.com> wrote:

On 21.04.23 06:02, Alexandre Courbot wrote:
Hi Alexander,

On Mon, Apr 17, 2023 at 9:52 PM Alexander Gordeev
<alexander.gordeev@opensynergy.com> wrote:

Hi Alexandre,

Thanks for your letter! Sorry it took me some time to write a reply.

First of all I'd like to describe my perspective a little, because it
seems that in many cases we (and the other people providing feedback)
simply have very different priorities and backgrounds.

OpenSynergy, the company I work for, develops a proprietary hypervisor
called COQOS, mainly for the automotive and aerospace domains. We have
our proprietary device implementations, but overall our goal is to
bring open standards into these quite closed domains, and we're betting
big on virtio. The idea is to run safety-critical functions like a
cockpit controller alongside multimedia stuff in different VMs on the
same physical board. Right now these run on separate physical devices,
so they already have maximum isolation, and we're trying to make this
equally safe on a single board. The benefits are reduced cost and some
additional features. Of course we also need features here, but at the
same time security and ease of certification are at the top of our
priorities. Nobody wants cars or planes to have security problems,
right? Also, nobody really needs DVB and even more exotic devices in
cars and planes AFAIK.

For the above-mentioned reasons our COQOS hypervisor runs on bare
metal. Also, memory management for the guests is mostly static. It is
possible to set up, in advance, a shared memory region between a device
and a driver that is managed by the device. But definitely no mapping
of random host pages on the fly is supported.

AFAIU crosvm is about making Chrome OS more secure by putting every app
in its own virtualized environment, right?

Not really, but for the discussion here you can assume that it is a
VMM similar to QEMU with KVM enabled.

Thanks for the clarification. If my idea about your use-case is not
totally correct, then it would be very helpful if you can provide more
details about it.

It's nothing fancy; a Linux host and Linux (e.g. Android) guests.

Thanks for the explanation.

But
virtio being a standard, we should focus on making something that is
usable by everyone instead of individual use-cases.

In an ideal world this would be possible. But in practice we have
various constraints and priorities. So sometimes it makes sense to
achieve more by digging in different directions instead of arguing
endlessly.

Also, this mode of operation is not supported in our hypervisor, for
the reasons mentioned above. So in our case this PoC doesn't prove
anything yet, unfortunately.

I did not have your use-case in mind while writing the PoC, its
purpose was to demonstrate the suitability of V4L2 as a protocol for
virtualizing video.

Now if your hypervisor does static memory management and pre-allocates
memory for guest buffers, then the V4L2 MMAP memory type actually
looks like the best fit for the job. There are no tokens like virtio
object UUIDs to manage, and the MMAP request can be as simple as
returning the pre-mapped address of the buffer in the guest PAS.

If instead it carves some predefined amount of memory out for the
whole guest and expects it to allocate buffer memory from there, then
the USERPTR memory type (which works like the guest pages of
virtio-video) is what you want to use.

It doesn't look like a good idea to us. This means preconfiguring memory
regions in the hypervisor config. It is hard to predict the amount of
memory that is necessary. If we allocate too much, memory is wasted; if
we allocate too little, it won't be enough. Then we don't yet know how
to make V4L2 allocate from that memory. Then this memory has to be
managed on the host side. And memory management is exactly the thing
that causes most security issues, right? So overall this is very
tedious, potentially wasteful and inflexible.

My last paragraph mentions that you can also let the guest manage the
buffer memory from its own RAM. Or maybe I am missing how memory is
managed on your hypervisor, but if that's the case elaborate on where
you want the buffer memory to come from.

Hmm, I think what you describe is indeed no different from using guest
memory as it is done in virtio-video. This maps to USERPTR indeed.
Well, it is nothing special: the hypervisor allocates memory for the
guest, and then the guest is free to manage it. It can also give this
memory to the host.
It is possible to statically create another memory region that is
managed by the device side and is accessible by the guest. But as I
wrote, this is not what we want, for various reasons.

Static allocation of shared memory regions is possible.
But then we have to tell V4L2 to allocate buffers there, and we'd need
a region per virtual device. This is just very tedious and inflexible.
That's why we're mainly interested in having guest-page sharing in
the virtio-video spec.

I'll be happy to update the PoC and make it able to use guest pages as
buffer backing memory. It just wasn't the priority to demonstrate the
global approach.

Great, thank you. If you have a concrete plan already, I think it could
be beneficial to discuss it now. Otherwise I'd prefer to keep working on
the current approach until I see something concrete.

Just give me a couple more weeks and I think I can produce the code.
But I'm afraid you have already made up your mind anyway.

I had indeed. So IMO it is pointless right now. I simply wanted to have
a detailed plan just in case, and to make sure that we both understand
it in the same way. If in the end we're forced to compromise, then this
has to be implemented (or reimplemented?).

      b. For modern cameras the V4L2 interface is not enough anyway. This
was already discussed AFAIR. There is a separate virtio-camera
specification that indeed is based on the V4L2 UAPI, as you said. But
combining these two specs is certainly not future-proof, right? So I
think it is best to let the virtio-camera spec be developed
independently.
I don't know if virtio-camera has made progress that they have not
published yet, but from what I have seen virtio-v4l2 can cover
everything that the currently published driver does (I could not find
a specification, but please point me to it if it exists), so there
would be no conflict to resolve.

V4L2 with requests support should be capable of handling complex
camera configurations, but the effort indeed seems to have switched to
KCAM when it comes to supporting complex cameras natively. That
being said:

* KCAM is not merged yet, is probably not going to be for some time
(https://lwn.net/Articles/904776/), and we don't know how we can
handle virtualization with it,
* The fact that the camera is complex on the host does not mean that
all that complexity needs to be exposed to the guest. I don't know how
the camera folks want to manage this, but one can imagine that the
host could expose a simpler model for the virtual camera, with only
the required knobs, while the host takes care of doing all the complex
configuration.
* The counter argument can be made that simple camera devices do not
need a complex virtualization solution, so one can also invoke
simplicity here to advocate for virtio-v4l2.

My point is not to say that all other camera virtualization efforts
should be abandoned - if indeed there is a need for something more
specific, then nothing prevents us from having a virtio-camera
specification added. However, we are nowhere close to this at the
moment, and right now there is no official solution for camera
virtualization, so I see no reason to deny the opportunity to support
simple camera devices since its cost would just be to add "and cameras
device" in the paragraph of the spec that explains what devices are
supported.

Well, for the reasons described above it still seems perfectly fine to
me to have separate devices. OK, the argument that this approach also
seems more future-proof is not a strong one.

Please elaborate on its weaknesses then.

Well, as you said, basically. The weakness of the argument is that
virtio-camera is not yet published and KCAM is not merged yet, so yeah,
the future is not clear actually.

BTW I just thought of one more case that is already real: sharing
camera streams with PipeWire. I think PipeWire doesn't provide a V4L2
UAPI interface, right?

I believe it does: https://archlinux.org/packages/extra/x86_64/pipewire-v4l2/

Oh, indeed. Thanks for the link!

> This package contains an LD_PRELOAD library that redirects v4l2
applications to PipeWire.

Well, it is cool that they did this for compatibility. But it doesn't
look clean and straightforward enough to use for new development.

But in any case, that's irrelevant to the guest-host interface, and I
think a big part of the disagreement stems from the misconception that
V4L2 absolutely needs to be used on either the guest or the host,
which is absolutely not the case.

I understand this, of course. I'm arguing that it is harder to
implement, get right, and then maintain over the years. Also it brings
limitations that can sometimes be worked around in the virtio spec, but
this always comes at the cost of decreased readability and increased
complexity. Overall it clearly looks like a downgrade compared to
virtio-video for our use case. And I believe it would be the same for
every developer that has to actually implement the spec, not just do
pass-through. So if we think of V4L2 UAPI pass-through as a
compatibility device (which I believe it is), then it is fine to have
both and keep improving virtio-video, including taking the best ideas
from V4L2 and overall using it as a reference to make writing the
driver simpler.

2. There is no flexibility to choose whatever way of memory management
host and guest would like to use. Now the guest user-space application
selects this.

Errr no. The guest user-space chooses a type of memory from what the
guest kernel exposes, which depends on what the host itself decides to
expose.

I don't agree. If an already-written user-space app supports only MMAP,
then there is no way to force it to use USERPTR, right? Please correct
me if I'm wrong.

The memory types exposed by the guest kernel do not need to match
those exposed by the hypervisor or that the guest kernel chooses to
use.

For instance, imagine that the hypervisor does not support allocating
buffer memory - i.e. it does not support the MMAP memory type. The
guest will then have to use its own memory for buffer allocation, and
send them to the host with the USERPTR memory type.

Now if a guest user-space application only supports MMAP, that's not a
problem at all. Most V4L2 drivers allocate MMAP buffers from regular
memory. So when the application requests MMAP buffers, the guest
kernel can honor the request by allocating some memory itself, and
change the type to USERPTR when passing the request to the hypervisor,
so the host knows that guest memory is in use.

I am responsible for this misconception since I insisted on using the
same memory types (MMAP, USERPTR, DMABUF) as V4L2 for guest/host
communication, which is misleading. It would probably be less
confusing to define new types (HOST, GUEST and VIRTIO_OBJ) just for
virtio-v4l2 and forbid the use of the kernel/user space memory types.

With these new names, I think it is clear that we have the exact
feature set of virtio-video (guest memory and virtio objects) covered,
plus another one where we allow the host to perform the allocation
itself (which may be useful if the video device has its own dedicated
memory). Again, this is only for host/guest communication. Guest
kernel/userspace is a different thing and can be implemented in
different ways depending on what the host supports.

Thank you very much! This is basically the same thing that I called
"rewriting the flow of V4L2 UAPI calls on the fly" earlier. You can find
the quote in this email below. As I wrote in the quote, this already
seems hackish to me.

Ensuring consistency everywhere means always remembering in which
context a particular VIDIOC_SOMETHING command is used. Is this command
used in host-to-guest interaction or in kernel-to-user-space
interaction? You're going to have problems grepping the code. Well,
maybe it can be clearly separated from each other and from the
pass-through case. I'm sure it will still create a lot of confusion.

You went further and suggested redefining the memory types for
host-guest interactions. I agree completely! This is absolutely the
right thing to do. Mapping the original V4L2 memory types would be
awkward. At the same time, you now have to remember that there are two
definitions of enum v4l2_memory: one in the spec and one in
buffer.html. And you have to remember that the one from the spec has
priority for host-guest interactions, while the one from buffer.html
has priority for kernel-to-user-space interactions. That's why I call
it a workaround.

This reminds me of another benchmark for code quality: the number of
WTFs per minute. :)

The latter makes the solution much less flexible IMO. For example, this
won't work well with our hypervisor. There might be other special needs
in other use cases, like sharing these object UUIDs. Probably this can
be handled by mapping, for example, V4L2_MEMORY_USERPTR to guest-page
sharing and V4L2_MEMORY_DMABUF to the UUIDs (which is not quite correct
IMHO).

Please elaborate on why this is not correct.

Because IMHO UUIDs pointing to memory allocated by virtio-gpu are quite
different from dmabufs created in the guest with udmabuf, for example.
This can be confusing.

True, and that's another reason to define our own memory types to
remove that confusion.

Yes, and I'm very happy we agree on this.

So this already means querying the device for supported sharing
methods, rewriting the flow of V4L2 UAPI calls on the fly, ensuring
consistency, etc. This already looks hackish to me. Do you have a better
plan?

How do you support different kinds of memory without querying? Or do
you suggest we stick to a single one?

I am also not quite sure what you mean by "rewriting the flow of V4L2
UAPI calls on the fly". There is no "rewriting" - V4L2 structures are
just used to communicate with the host instead of virtio-video
structures.

I'd like to know your ideas, or better a concrete plan, for enabling
user-space apps that only support MMAP to work on top of a device that
supports only guest-page sharing.

Hopefully my explanation above clears that.

Yes, thank you.

--
Alexander Gordeev
Senior Software Engineer

OpenSynergy GmbH
Rotherstr. 20, 10245 Berlin

Phone: +49 30 60 98 54 0 - 88
Fax: +49 (30) 60 98 54 0 - 99
EMail: alexander.gordeev@opensynergy.com

www.opensynergy.com

Handelsregister/Commercial Registry: Amtsgericht Charlottenburg, HRB 108616B
Geschäftsführer/Managing Director: Régis Adjamah

Please mind our privacy notice<https://www.opensynergy.com/datenschutzerklaerung/privacy-notice-for-business-partners-pursuant-to-article-13-of-the-general-data-protection-regulation-gdpr/> pursuant to Art. 13 GDPR. // Unsere Hinweise zum Datenschutz gem. Art. 13 DSGVO finden Sie hier.<https://www.opensynergy.com/de/datenschutzerklaerung/datenschutzhinweise-fuer-geschaeftspartner-gem-art-13-dsgvo/>

