
virtio-dev message


Subject: Re: [virtio-dev] Memory sharing device

On Wed, Feb 6, 2019 at 12:14 PM Dr. David Alan Gilbert <dgilbert@redhat.com> wrote:
* Roman Kiryanov (rkir@google.com) wrote:
> Hi Dave,
> > In virtio-fs we have two separate stages:
> >    a) A shared arena is setup (and that's what the spec Stefan pointed to is about) -
> >       it's statically allocated at device creation and corresponds to a chunk
> >       of guest physical address space
> We do exactly the same:
> https://android.googlesource.com/platform/external/qemu/+/emu-master-dev/hw/pci/goldfish_address_space.c#659
> >    b) During operation the guest kernel asks for files to be mapped into
> >       part of that arena dynamically, using commands sent over the queue
> >       - our queue carries FUSE commands, and we've added two new FUSE
> >       commands to perform the map/unmap. They talk in terms of offsets
> >       within the shared arena, rather than GPAs.
> In our case we have no files to map, only pointers returned from
> OpenGL or Vulkan.
> Do you have the approach to share for this use case?

I should say that the spec I'm talking about is my first virtio spec
change; so take my ideas with a large pinch of salt!

> > How do you transmit the glMapBufferRange command from QEMU driver to
> > host?
> In December we did this by passing these bits over our guest-host
> channel (another driver, goldfish_pipe). Frank is currently working
> on moving this into our memory mapping device as "something changed
> in the memory you shared".
> Do you think it is possible to have a virtio-pipe where we could send
> arbitrary blobs between guest and host? We want to move all our
> drivers into userspace so we could share memory using the device you
> are currently working on, and use this virtio-pipe to pass MMIOs and
> IRQs to control our devices, avoiding kernel drivers altogether.

It sounds to me like you want something like a virtio-pipe, with
a shared arena (like specified using the spec change I suggested)
but with either a separate queue, or commands in the queue to do the
mapping/unmapping of your GL pointers from your arena.

This sounds close to what we want, but the current suggestions to use virtio-serial/virtio-vsock are difficult to deal with: they add the requirement of console forwarding, impose hard limits on the number of queues, or couple the device to Unix sockets on the host.

What about this:

A new spec, called "virtio-pipe". It only sends control messages and is meant to work in tandem with the current virtio host memory proposal. It's not specialized to any particular device, and it doesn't use sockets on the host either; instead, the host uses dlopen/dlsym to load a library implementing the wanted userspace devices, together with a minimal ioctl in the guest to capture everything:

There is one ioctl:

u64 offset (to the virtio host memory object)
u64 size
u64 metadata (driver-dependent data)
u32 wait (whether the guest is waiting for the host to be done with something)

These are sent over virtqueue.

On the host, these pings arrive and call some dlsym'ed functions:

u32 on_context_create - when guest userspace open()'s the virtio-pipe, this returns a new id.
on_context_destroy(u32 id) - on last close of the pipe
on_ioctl_ping(u32 id, u64 physaddr, u64 size, u64 metadata, u32 wait) - called when the guest ioctl pings.

There would need to be some kind of IRQ-like mechanism (either done with actual virtual irqs, or polling, or a mprotect/mwait-like mechanism) that tells the guest the host is done with something.

This would be the absolute minimum and most general way to send anything to/from the host with explicit control messages; any device can be defined on top of this with no changes to virtio or qemu.


> Thank you.
> Regards,
> Roman.
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
