Subject: Re: VIRTIO adoption in other hypervisors


On 28.02.20 17:47, Alex Bennée wrote:

Jan Kiszka <jan.kiszka@siemens.com> writes:

On 28.02.20 11:30, Jan Kiszka wrote:
On 28.02.20 11:16, Alex Bennée wrote:
Hi,

<snip>
I believe there has been some development work on supporting VIRTIO on
Xen, although it seems to have stalled according to:

    https://wiki.xenproject.org/wiki/Virtio_On_Xen

Recently at KVM Forum there was Jan's talk about Inter-VM shared memory,
which proposed ivshmem v2 as a VIRTIO transport:

    https://events19.linuxfoundation.org/events/kvm-forum-2019/program/schedule/


As I understood it, this would give Xen (and other hypervisors) a simple
way to carry virtio traffic between a guest and an end point.

And to clarify the scope of this effort: virtio-over-ivshmem is not the
fastest option for offering virtio to a guest (it uses a static "DMA"
window), but it is the simplest one from the hypervisor's point of view
and thus likely also the easiest one to reason about when it comes to
security and safety.
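
To make the static-window point concrete, here is a minimal frontend-side
sketch. It is not taken from the ivshmem-v2 spec; the window size and the
bump allocation are invented for illustration. The hypervisor only has to
map one fixed region into both VMs (which is why it stays simple), and the
frontend has to bounce every payload into that region (which is where the
performance goes):

/* Hedged sketch, not from the ivshmem-v2 spec: why a static "DMA" window
 * is simple for the hypervisor but costs the frontend an extra copy. The
 * hypervisor maps one fixed region into both VMs; the frontend must bounce
 * every payload into it because the backend can see nothing else. The
 * window size and the bump allocator are made up for this example.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SHM_WINDOW_SIZE (1u << 20)          /* illustrative 1 MiB window */

static uint8_t shm_window[SHM_WINDOW_SIZE]; /* stands in for the mapped region */
static size_t  shm_used;

/* Copy a buffer into the window; return its offset (valid in both VMs) or -1. */
static long bounce_into_window(const void *buf, size_t len)
{
    if (len > SHM_WINDOW_SIZE - shm_used)
        return -1;                           /* window full: apply back-pressure */

    memcpy(shm_window + shm_used, buf, len); /* the copy a vIOMMU would avoid */
    long off = (long)shm_used;
    shm_used += len;
    return off;
}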

So to drill down on this: is this a particular problem with type-1
hypervisors?

Well, this type-1/type-2 classification doesn't help here (it rarely does).
There are KVM-based setups that are stripped down and hardened to the point
where other folks would rather think of "type 1". I just had a discussion
about such a model for a cloud scenario that runs on KVM.


It seems to me any KVM-like run loop trivially supports a range of
virtio devices by virtue of trapping accesses to the signalling area of
a virtqueue and allowing the VMM to handle the transaction whichever
way it sees fit.
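
As a rough illustration of that run-loop idea, here is a sketch assuming a
virtio-mmio style doorbell at a made-up guest physical address and a
hypothetical handle_queue_notify() hook in the VMM; only the
KVM_RUN/KVM_EXIT_MMIO plumbing is the real interface:

/* Rough sketch, not code from any real VMM: the hypervisor's only job for
 * the doorbell is to bounce the trapped MMIO write back to the VMM, which
 * then serves the virtqueue however it likes.  QUEUE_NOTIFY_ADDR and
 * handle_queue_notify() are invented for the example.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define QUEUE_NOTIFY_ADDR 0x10000050ULL   /* made-up GPA of the QueueNotify register */

extern void handle_queue_notify(uint32_t queue_idx);  /* hypothetical VMM backend hook */

void vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
            return;                         /* error handling elided */

        if (run->exit_reason == KVM_EXIT_MMIO &&
            run->mmio.is_write &&
            run->mmio.phys_addr == QUEUE_NOTIFY_ADDR) {
            uint32_t queue_idx = 0;
            memcpy(&queue_idx, run->mmio.data, sizeof(queue_idx));
            handle_queue_notify(queue_idx); /* VMM serves the ring as it sees fit */
        }
        /* all other exit reasons elided */
    }
}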

I've not quite understood the way Xen interfaces to QEMU, aside from the
fact that it's different from everything else. Moreover, it seems the
type-1 hypervisors are more interested in providing better isolation
between segments of a system, whereas VIRTIO currently assumes either the
VMM or the hypervisor has full access to the entire guest address space.
I've seen quite a lot of slides that want to isolate sections of device
emulation into separate processes or even separate guest VMs.

The point is in fact not only whether to trap I/O accesses or to ask the
guest to target something like ivshmem instead (that is, in fact, where the
use cases I have in mind deviated from those of that cloud operator). It is
specifically the question of how the backend should be able to transfer
data to/from the frontend. If you want to isolate the two from each other
(driver VMs/domains/etc.), you either need a complex virtual IOMMU (or
"grant tables") or a static DMA window (like ivshmem). The former is more
efficient for large transfers; the latter is much simpler and therefore
more robust.
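
A minimal backend-side sketch of that trade-off follows; the window size,
the fake guest RAM and viommu_map()/viommu_unmap() are all invented
stand-ins rather than any real hypervisor interface:

/* Hedged sketch of the trade-off above, seen from the backend.  With a
 * static window the backend only dereferences offsets into one region that
 * was mapped once; with a virtual IOMMU or grant tables it has to map and
 * unmap guest pages per request.  Everything here is made up for
 * illustration.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static uint8_t shm_window[1 << 20];   /* static window, mapped once at setup */
static uint8_t guest_ram[1 << 20];    /* pretend guest memory for the stubs */

/* Stubbed per-request mapping, standing in for a real vIOMMU/grant interface. */
static void *viommu_map(uint64_t gpa, size_t len)
{
    return (gpa + len <= sizeof(guest_ram)) ? guest_ram + gpa : NULL;
}
static void viommu_unmap(void *va, size_t len) { (void)va; (void)len; }

/* Static window path: trivial and robust, but the frontend already paid for
 * an extra copy into the window. */
static void consume_static(size_t off, size_t len, uint8_t *out)
{
    memcpy(out, shm_window + off, len);
}

/* vIOMMU/grant path: no bounce copy by the frontend, but per-request mapping
 * logic that the hypervisor (and any safety argument) has to get right. */
static int consume_mapped(uint64_t gpa, size_t len, uint8_t *out)
{
    void *va = viommu_map(gpa, len);
    if (!va)
        return -1;
    memcpy(out, va, len);   /* a real backend would use the mapping in place */
    viommu_unmap(va, len);
    return 0;
}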

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

