
virtio-dev message


Subject: Re: [virtio-dev] Memory sharing device


> > For communication between guest processes within the same VM I don't
> > really see a need to involve the hypervisor ...
> >
> Right, once the host memory is set up we can rely purely on guest-side
> mechanisms to map sub-regions of it.

Or just use guest ram ...

> > > Yes, also, other devices of the same VM.
> >
> > So why involve the hypervisor here?  The guest can handle that on its
> > own.  Passing an image data buffer from the usb webcam to the intel gpu
> > for display (on bare metal) isn't fundamentally different from passing a
> > buffer from virtio-camera to virtio-gpu (in a VM).  Linux guests will
> > use dma-bufs for that, other OSes probably something else.
> It's true that this can be handled purely in the guest layers,
> provided there is an existing interface in the guest
> for passing the proposed host memory ids / offsets / sizes
> between devices.

Note:  I think using a PCI memory BAR (i.e. host memory mapped into the
guest) as backing storage for dma-bufs isn't going to work.

> However, for the proposed host memory sharing spec,
> would there be a standard way to share the host memory across
> different virtio devices without relying on Linux dmabufs?

I think with the current draft each device (virtio-fs, virtio-gpu,
...) has its own device-specific memory, and there is no mechanism to
exchange buffers between devices.


I'm also not convinced that explicitly avoiding dma-bufs is a good idea
here.  That would put virtio into its own universe, and sharing buffers
with non-virtio devices would not work.  Think about an Intel vGPU as
display device, or a USB camera attached to the guest via USB pass-through.

Experience shows that virtualization-specific features /
optimizations / short-cuts often turn out to have drawbacks in the long
run, even if they looked like a good idea initially.  Just look at the
mess we had with virtio-pci DMA after IOMMU emulation landed in qemu.
And that is only one example; we have more like it ...

