virtio-dev message

Subject: Re: virtio-gpu dedicated heap


$ ./scripts/get_maintainer.pl -f ./drivers/gpu/drm/virtio/

David Airlie <airlied@linux.ie> (maintainer:VIRTIO GPU DRIVER)
Gerd Hoffmann <kraxel@redhat.com> (maintainer:VIRTIO GPU DRIVER)
Daniel Vetter <daniel@ffwll.ch> (maintainer:DRM DRIVERS)
dri-devel@lists.freedesktop.org (open list:VIRTIO GPU DRIVER)
virtualization@lists.linux-foundation.org (open list:VIRTIO GPU DRIVER)
linux-kernel@vger.kernel.org (open list)

You might want to CC these people.

On Thu, Mar 03, 2022 at 08:07:03PM -0800, Gurchetan Singh wrote:
> +iommu@lists.linux-foundation.org not iommu-request
> 
> On Thu, Mar 3, 2022 at 8:05 PM Gurchetan Singh <gurchetansingh@chromium.org>
> wrote:
> 
>     Hi everyone,
> 
>     With the current virtio setup, all of guest memory is shared with host
>     devices.  There has been interest in changing this, to improve isolation of
>     guest memory and increase confidentiality.  
> 
>     The recently introduced restricted DMA mechanism makes excellent progress
>     in this area:
> 
>     https://patchwork.kernel.org/project/xen-devel/cover/20210624155526.2775863-1-tientzu@chromium.org/
> 
>     Devices without an IOMMU (traditional virtio devices for example) would
>     allocate from a specially designated region.  Swiotlb bouncing is done for
>     all DMA transfers.  This is controlled by the VIRTIO_F_ACCESS_PLATFORM
>     feature bit.
> 
>     https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/3064198
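> 
>     As a rough sketch of the guest driver flow we have in mind (illustrative
>     only; the helper name vgpu_map_buf and the error handling are made up,
>     this is not actual virtio-gpu code):
> 
>         #include <linux/virtio_config.h>
>         #include <linux/dma-mapping.h>
> 
>         /*
>          * Sketch: once VIRTIO_F_ACCESS_PLATFORM is negotiated, buffers go
>          * through the DMA API, so swiotlb bounces them through the
>          * restricted pool attached to the virtio device's parent.
>          */
>         static int vgpu_map_buf(struct virtio_device *vdev, void *buf,
>                                 size_t len, dma_addr_t *dma)
>         {
>                 struct device *dev = vdev->dev.parent;
> 
>                 if (!virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM))
>                         return -EINVAL;
> 
>                 *dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
>                 return dma_mapping_error(dev, *dma) ? -ENOMEM : 0;
>         }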
> 
>     This mechanism works great for the devices it was designed for, such as
>     virtio-net.  However, when trying to adapt it for other devices, there
>     are some limitations.
> 
>     It would be great to have a dedicated heap for virtio-gpu rather than
>     allocating from guest memory.  
> 
>     We would like to use dma_alloc_noncontiguous on the restricted DMA pool,
>     ideally with page-level granularity.  Contiguous buffers are definitely
>     going out of fashion.
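> 
>     Roughly what we'd like to be able to do (sketch only; the helper name
>     vgpu_alloc_object is made up, and dev/size stand in for the virtio-gpu
>     parent device and the object size):
> 
>         #include <linux/dma-mapping.h>
> 
>         /*
>          * Sketch: allocate a non-contiguous buffer through the DMA API;
>          * the returned sg_table's pages would back the GPU object, with
>          * an optional kernel mapping on top.
>          */
>         static void *vgpu_alloc_object(struct device *dev, size_t size,
>                                        struct sg_table **sgt)
>         {
>                 void *vaddr;
> 
>                 *sgt = dma_alloc_noncontiguous(dev, size, DMA_BIDIRECTIONAL,
>                                                GFP_KERNEL, 0);
>                 if (!*sgt)
>                         return NULL;
> 
>                 vaddr = dma_vmap_noncontiguous(dev, size, *sgt);
>                 if (!vaddr)
>                         dma_free_noncontiguous(dev, size, *sgt,
>                                                DMA_BIDIRECTIONAL);
>                 return vaddr;
>         }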
> 
>     There are two considerations when using it with the restricted DMA
>     approach:
> 
>     1) No bouncing (aka memcpy)
> 
>     Bouncing is expensive with graphics buffers, since guest user space
>     designates which graphics buffers are shareable with the host.  We plan
>     to use DMA_ATTR_SKIP_CPU_SYNC when doing any DMA transactions with GPU
>     buffers.
> 
>     Bounce buffering will still be used for virtio commands, as with the
>     other virtio devices that use the restricted DMA mechanism.
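> 
>     In code, that split would look roughly like this (sketch only; the
>     helper name vgpu_map_payload is made up, dev and sgt are placeholders):
> 
>         /*
>          * Sketch: GPU payload buffers get DMA_ATTR_SKIP_CPU_SYNC so the
>          * CPU copies (bouncing) are skipped; virtio command buffers keep
>          * using the normal, bounced path.
>          */
>         static int vgpu_map_payload(struct device *dev, struct sg_table *sgt)
>         {
>                 return dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL,
>                                        DMA_ATTR_SKIP_CPU_SYNC);
>         }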
> 
>     2) IO_TLB_SEGSIZE is too small for graphics buffers
> 
>     This issue was hit before here too:
> 
>     https://www.spinics.net/lists/kernel/msg4154086.html
> 
>     The suggestion was to use a shared-dma-pool rather than restricted DMA.
>     But we're not sure a single device can have both a restricted DMA pool
>     (for VIRTIO_F_ACCESS_PLATFORM) and a shared-dma-pool (for larger buffers)
>     at the same time.  Does anyone know?
> 
>     If not, it sounds like "splitting the allocation into
>     dma_max_mapping_size() chunks" for restricted DMA is also possible.
>     What is the preferred method?
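> 
>     For reference, the chunked variant would look something like this
>     (sketch only; the helper name vgpu_map_chunked is made up, and dev,
>     buf and total are placeholders):
> 
>         /*
>          * Sketch: split one large mapping into dma_max_mapping_size()
>          * sized pieces so each piece fits the swiotlb segment limit.
>          * Error unwinding of already-mapped pieces is omitted.
>          */
>         static int vgpu_map_chunked(struct device *dev, void *buf,
>                                     size_t total)
>         {
>                 size_t max = dma_max_mapping_size(dev);
>                 size_t off = 0;
> 
>                 while (off < total) {
>                         size_t len = min(total - off, max);
>                         dma_addr_t addr = dma_map_single(dev, buf + off, len,
>                                                          DMA_BIDIRECTIONAL);
> 
>                         if (dma_mapping_error(dev, addr))
>                                 return -ENOMEM; /* real code would unmap
>                                                  * the earlier chunks */
>                         off += len;
>                 }
>                 return 0;
>         }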
> 
>     More generally, we would love more feedback on the proposed design or
>     consider alternatives!
> 


