Subject: Re: [RFC PATCH v2 1/2] virtio-gpu: add resource create blob


> v3 only allows VIRTIO_GPU_BLOB_MEM_GUEST for dumb blob resources,
> which implies we won't really use TRANSFER_BLOB.
> 
> Currently, the path seems to be:
> 
> SET_SCANOUT
> TRANSFER_TO_HOST_2D --> copies from iovecs to host private resource
> RESOURCE_FLUSH_BLOB --> copies from host private resource to framebuffer

Yes.
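
For reference, a minimal guest-side sketch of that sequence could look
roughly like the following.  The structs are trimmed stand-ins rather
than the real virtio_gpu.h definitions, the command names follow the
quoted proposal, and submit_ctrl() is a hypothetical stand-in for
queuing a command on the control virtqueue.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

enum ctrl_type {                  /* real numeric values live in the spec */
    CMD_SET_SCANOUT,
    CMD_TRANSFER_TO_HOST_2D,
    CMD_RESOURCE_FLUSH_BLOB,
};

struct rect { uint32_t x, y, width, height; };

struct set_scanout {
    enum ctrl_type type;
    struct rect r;
    uint32_t scanout_id;
    uint32_t resource_id;
};

struct transfer_to_host_2d {
    enum ctrl_type type;
    struct rect r;
    uint64_t offset;
    uint32_t resource_id;
};

struct resource_flush {
    enum ctrl_type type;
    struct rect r;
    uint32_t resource_id;
};

/* Hypothetical stand-in: queue one command on the control virtqueue. */
static void submit_ctrl(const void *cmd, size_t len)
{
    (void)cmd;
    printf("queued %zu byte command\n", len);
}

/* Dumb (non-blob) framebuffer update, as quoted above. */
static void flush_dumb_fb(uint32_t scanout_id, uint32_t res_id,
                          struct rect damage)
{
    /* SET_SCANOUT: bind the resource to the scanout (typically once). */
    struct set_scanout ss = {
        .type = CMD_SET_SCANOUT, .r = damage,
        .scanout_id = scanout_id, .resource_id = res_id,
    };
    submit_ctrl(&ss, sizeof(ss));

    /* TRANSFER_TO_HOST_2D: host copies from the guest iovecs into its
     * private copy of the resource. */
    struct transfer_to_host_2d xfer = {
        .type = CMD_TRANSFER_TO_HOST_2D, .r = damage,
        .offset = 0, .resource_id = res_id,
    };
    submit_ctrl(&xfer, sizeof(xfer));

    /* RESOURCE_FLUSH_BLOB (per the quoted proposal): host copies from
     * its private resource to the visible framebuffer. */
    struct resource_flush fl = {
        .type = CMD_RESOURCE_FLUSH_BLOB, .r = damage,
        .resource_id = res_id,
    };
    submit_ctrl(&fl, sizeof(fl));
}

int main(void)
{
    struct rect full = { 0, 0, 1024, 768 };
    flush_dumb_fb(0, 1, full);
    return 0;
}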

> A theoretical display update path for dumb BOs without shared
> mappings would be:
> 
> SET_SCANOUT_BLOB
> RESOURCE_FLUSH_BLOB --> copies from iovecs to framebuffer
> 
> That should work with crosvm; you might want to verify whether it's doable with QEMU.

I think qemu can handle it.  With udmabuf qemu can simply mmap() the
resource, create a pixman image, and run with it.  Without udmabuf
it'll be a bit more complicated because we can't offload most of the
work to pixman then, but it should still be doable.  Qemu could also
choose to allow blob resources only when udmabuf is available, to
avoid handling that case.  So no blockers here.
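
To make that concrete, a rough sketch of the udmabuf case (not qemu's
actual code, and the helper name is made up; width, height, stride and
the pixel format are just placeholders): given a dma-buf fd covering
the guest pages of the blob resource, qemu maps it once and wraps the
mapping in a pixman image.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>
#include <pixman.h>

/* Given a dma-buf fd covering the blob resource's guest pages (qemu
 * would create it with the UDMABUF_CREATE ioctl on /dev/udmabuf), map
 * it and hand the mapping to pixman like any other surface. */
static pixman_image_t *wrap_blob_as_pixman(int dmabuf_fd, size_t size,
                                           int width, int height,
                                           int stride_bytes)
{
    /* Shared mapping of guest memory: guest writes are visible to the
     * host directly, so no TRANSFER step is needed before flushing. */
    void *ptr = mmap(NULL, size, PROT_READ, MAP_SHARED, dmabuf_fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap dmabuf");
        return NULL;
    }

    /* Placeholder format; the real one would come with the scanout
     * setup, e.g. SET_SCANOUT_BLOB. */
    return pixman_image_create_bits(PIXMAN_x8r8g8b8, width, height,
                                    (uint32_t *)ptr, stride_bytes);
}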

> One possibility is to only use dumb blob resources in virtio-gpu kms
> when shared guest is available rather than refactoring that code,

I think this makes sense.

> > same goes for dma-buf imports (inside the guest).
> 
> dma-buf import from another virtio driver is very interesting, but
> we'll probably need some import UUID hypercall for that?

No, just dma-buf import from somewhere else: say a gpu passed to the
guest via pci pass-through renders to a dma-buf, and that dma-buf gets
imported into virtio-gpu for display in a host window.

That requires BLOB resources to work, because we have to create a
guest bo (and virtio resource) without knowing what format the guest
wants to use to scan out the thing.

We may likewise allow that only when shared guest is available.
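
To illustrate why (simplified structs, not the real virtio_gpu.h
layout): RESOURCE_CREATE_BLOB only describes backing memory, while
format, strides and geometry are supplied later, at SET_SCANOUT_BLOB
time, once the guest knows how it wants to scan the buffer out.

#include <stdint.h>

/* One entry of guest backing memory (e.g. a page of the imported
 * dma-buf). */
struct mem_entry {
    uint64_t addr;
    uint32_t length;
    uint32_t padding;
};

/* RESOURCE_CREATE_BLOB: no format, width or height here - the command
 * only names the backing memory, so the resource can be created
 * before anyone knows how it will be scanned out. */
struct resource_create_blob {
    uint32_t resource_id;
    uint32_t blob_mem;      /* e.g. VIRTIO_GPU_BLOB_MEM_GUEST */
    uint32_t blob_flags;
    uint32_t nr_entries;    /* followed by nr_entries mem_entry structs */
    uint64_t blob_id;
    uint64_t size;
};

/* SET_SCANOUT_BLOB: format, strides and geometry show up only at
 * scanout time, once the guest has decided how to display the buffer. */
struct set_scanout_blob {
    uint32_t scanout_id;
    uint32_t resource_id;
    uint32_t width, height;
    uint32_t format;
    uint32_t strides[4];
    uint32_t offsets[4];
};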

> Though TRANSFER_BLOB can be useful.  I can see the following use cases:
> 
> - implementing guest kernel dma-buf mmap and synchronization
> (begin_cpu_access/end_cpu_access).  Currently, it doesn't work but no
> guest user-space relies on it, so no one has noticed/complained.
> - emulated coherent memory
> 
> It sounds like you think TRANSFER_BLOB is worth the maintenance vs.
> future proofing tradeoff, which is fair.

We can leave it out for now and see if we'll need it at some point in
the future.  One point of adding blob resources is to allow shared
mappings, so maybe it wouldn't be used.  There is also the option to
fall back to traditional resources ...

cheers,
  Gerd


