

Subject: Re: [virtio-dev] Backend libraries for VirtIO device emulation


Dr. David Alan Gilbert <dgilbert@redhat.com> writes:

> * Alex Bennée (alex.bennee@linaro.org) wrote:
>> Hi,
>> 
>> So the context of my question is what sort of common software layer is
>> required to implement a virtio backend entirely in userspace?
>> 
>> Currently most virtio backends are embedded directly in various VMMs
>> which emulate a number of devices as well as deal with handling devices
>> that are vhost aware and link with the host kernel. However there seems
>> to be a growing interest in having backends implemented in separate
>> processes, potentially even hosted in other guest VMs.
>> 
>> As far as I can tell there is a lot of duplicated effort in handling the
>> low level navigation of virt queues and buffers. QEMU has code in
>> hw/virtio as well as contrib/libvhost-user which is used by the recent
>> virtiofsd daemon. kvm-tool has a virtio subdirectory that implements a
>> similar set of functionality for its emulation. The Rust-vmm project
>> has libraries for implementing the device traits.
>> 
>> Another aspect to this is the growing interest in carrying virtio over
>> other hypervisors. I'm wondering if there is enough abstraction possible
>> to have a common library that is hypervisor agnostic? Can a device
>> backend be emulated purely with some shared memory and some sockets for
>> passing messages/kicks from/to the VMM which then deals with the hypervisor
>> specifics of the virtio-transport?
>
> It's a little tricky because it has to interface tightly with the way
> that the memory-mapping works for the hypervisor, so that the external
> process can access the memory of the queues.

I suspect the problem space can at least be reduced to a POSIX-like
environment - if that makes things simpler. The setting up of
memory-mappings should be the problem of the VMM, which would possibly
be hypervisor specific. After that it is simply(?) a question of sharing
the appropriate bit of memory between the VMM and the device process.
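
To make that concrete, I imagine the device-process side needs little
more than this (a rough sketch in C - the message layout is invented,
but the fd-passing is the same SCM_RIGHTS dance vhost-user does for
its memory table):

  /* Sketch: receive a guest-memory fd from the VMM over a Unix
   * socket and map it.  struct mem_region_msg is a made-up wire
   * format standing in for vhost-user's SET_MEM_TABLE payload. */
  #include <stdint.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  struct mem_region_msg {
      uint64_t guest_phys_addr;
      uint64_t size;
      uint64_t mmap_offset;
  };

  static void *map_guest_region(int sock, struct mem_region_msg *msg)
  {
      char cbuf[CMSG_SPACE(sizeof(int))];
      struct iovec iov = { .iov_base = msg, .iov_len = sizeof(*msg) };
      struct msghdr mh = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
      };
      struct cmsghdr *cm;
      int fd;

      if (recvmsg(sock, &mh, 0) <= 0)
          return NULL;
      cm = CMSG_FIRSTHDR(&mh);
      if (!cm || cm->cmsg_type != SCM_RIGHTS)
          return NULL;
      memcpy(&fd, CMSG_DATA(cm), sizeof(fd));

      /* Nothing above is hypervisor specific - plain POSIX. */
      return mmap(NULL, msg->size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, msg->mmap_offset);
  }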

The other model would be the device process runs inside another guest -
most likely a Linux VM. Here the guest kernel can be told an area of
memory is special in some way and provide a device node that can be
mmap'ed in more or less the same way. In this configuration the device
process can't even be aware of what the underlying hypervisor is - just a
block of memory and a way to receive message queue events.
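
i.e. from inside the guest it would look something like this (sketch
only - the device node name and window size are invented, standing in
for whatever the guest kernel chooses to expose):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      /* Hypothetical node the guest kernel exports for the shared
       * region; assume a fixed 16 MiB window for the sketch. */
      int fd = open("/dev/virtio-shm0", O_RDWR);
      size_t size = 16 << 20;
      void *shm;

      if (fd < 0) {
          perror("open");
          return 1;
      }
      shm = mmap(NULL, size, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, 0);
      if (shm == MAP_FAILED) {
          perror("mmap");
          return 1;
      }
      /* From here the backend only ever sees this block of memory
       * plus whatever event mechanism delivers kicks; the
       * hypervisor underneath stays invisible. */
      return 0;
  }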

> QEMU's vhost-user has a fair amount of code for handling the mappings,
> dirty logging for migration, iommu's and things like reset (which is
> pretty hairy, and probably needs more work).

I suspect all of these multi-process models just hand wave away details
like migration because that really does benefit from a single process
with total awareness of the state of the system. That said I wonder how
robust a guest can be if the device emulation may go away at any time?

I guess in virtio, if you never signal the consumption of a virtqueue
entry it will still be sitting there, so a restarted emulation process
can pick up from where it left off?
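
At least with the split ring that state all lives in guest memory, so
a restarted backend could recover it by comparing the free-running
ring indices (ring layout as in the virtio spec; the remapping and
notification plumbing around this is hand-waved):

  #include <stdint.h>

  /* Split virtqueue ring headers, per the virtio spec (le16/le32
   * fields shown as plain integers for the sketch). */
  struct vring_avail { uint16_t flags, idx; uint16_t ring[]; };
  struct vring_used_elem { uint32_t id, len; };
  struct vring_used {
      uint16_t flags, idx;
      struct vring_used_elem ring[];
  };

  /* Anything the driver queued that we never marked used is still
   * sitting between used->idx and avail->idx after a restart. */
  static uint16_t pending_buffers(const struct vring_avail *avail,
                                  const struct vring_used *used)
  {
      return (uint16_t)(avail->idx - used->idx); /* mod 2^16 */
  }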

>
> Dave
>
>> Thoughts?
>> 
>> -- 
>> Alex Bennée
>> 


-- 
Alex Bennée

