On Tue, Feb 19, 2019 at 7:46 PM Michael S. Tsirkin <email@example.com> wrote:
On Tue, Feb 19, 2019 at 07:54:04AM -0800, Frank Yang wrote:
> To update driver/device functionality, ideally we want to ship two small shared
> libraries: one to guest userspace and one to plug into the host VMM.
I don't think we want to support that last in QEMU. Generally you want
process isolation, not shared library plugins - definitely not host
side - the VMM is just too sensitive to allow random plugins -
and maybe not guest side either.
Yeah, that's a good point.
I haven't thought it through in much detail, since for upstreaming
purposes I've been planning around, if not a shared library plugin,
then non-shared code living in the VMM - though that makes things
more complex for us, since we'd also rather avoid further
customizing QEMU where possible.
IPC, though, seems like it would add quite a bit of overhead,
unless there's some generally accepted portable way to communicate via shared memory
that doesn't involve busy-waiting in a way that burns up the CPU.
Then we could maybe define a new transport that works through that, or something.
Well, regardless of the IPC mechanism, we would also need to solve a compatibility issue:
on most host OSes, we can't just take an arbitrary host pointer and map it into another process
(and then have the hypervisor map that pointer into the guest).
The Vulkan use case, for example, ironically seems to work well only after remapping through a hypervisor;
such pointers can't be shared over IPC unless the driver happens to support
that flavor of external memory (cross-process shareable + host visible).
I'm at a conference tomorrow but I hope to complete review
of the proposal and respond by end of week.
Thanks very much, Michael, for taking the time to review it.