OASIS Mailing List Archives
View the OASIS mailing list archive below or browse/search using MarkMail.



virtio-comment message


Subject: Re: [virtio-dev] [PATCH v2] virtio-tee: Reserve device ID 46 for TEE device

On Tue, 3 Oct 2023 at 19:06, Arnd Bergmann <arnd@arndb.de> wrote:
> On Thu, Sep 28, 2023, at 16:48, Sumit Garg wrote:
> > On Wed, 27 Sept 2023 at 21:39, Arnd Bergmann <arnd@linaro.org> wrote:
> >> On Wed, 27 Sept 2023 at 16:09, Sumit Garg <sumit.garg@linaro.org> wrote:
> >>
> >> I don't know if there is a limit on the size of the virtio shared
> >> memory area other than the PCI MMIO space size, could
> >> this just be made (much) larger?
> >
> > Actually it's a double-edged sword. You wouldn't want to block too
> > much host memory that then sits unused, but on the other hand you
> > can't reserve so little that it can't serve guest application needs.
> > That's the reason we should provide an option to dynamically share
> > guest memory with a TEE.
> To clarify: the address space in the virtio-shmem segment
> does not have to be permanently backed by host memory, it
> just provides a part of the guest-physical mmio space and
> could be populated as needed. So you might have a 1TB
> shmem region in the device itself but only use a single 4KB
> page in it.

Thanks for the clarification. It makes sense to use this approach for
TEE shared memory allocation purposes. The guest client application
should be able to mmap() the buffer allocated from virtio-shmem space.

> The restriction here is that the address of the shmem segment
> itself is fixed on the PCI bus (or whichever virtio transport
> is used), rather than decided by the guest from its memory.

I can understand this restriction.

> >> > The TEE communication protocol has to support INOUT shared memory
> >> > buffers, so it will be quite tricky to support them via TX-only
> >> > and RX-only buffers (many more buffer copies).
> >>
> >> I don't remember if you can just list the same address in
> >> virtio for both directions, but that could solve this problem.
> >
> > Okay in that case I think following approach can work:
> >
> > - Issue VIRTIO_TEE_CMD_REGISTER_MEM to register an additional buffer
> > passed through virtqueue.
> > - Instruct TEE to perform operations on it via VIRTIO_TEE_CMD_INVOKE_FUNC.
> > - Issue VIRTIO_TEE_CMD_UNREGISTER_MEM to unregister that additional
> > buffer passed through virtqueue.
> >
> > I suppose we can pass either a guest user-space or a kernel buffer
> > reference in that virtqueue. Does this approach make sense to you?
> No, I'm not sure what you are suggesting here, i.e. which address
> space VIRTIO_TEE_CMD_REGISTER_MEM would operate on. This would
> normally be guest physical memory, which might correspond to pinned
> userspace pages, but I don't think that makes sense in the context
> of virtio, as we discussed before.

I suppose the situation is similar for DMA buffers too, correct?
Is it not allowed to share DMA buffers backed by guest physical memory
with virtio devices?

Otherwise, in the case of virtio-tee we will be forced to maintain
shadow/bounce buffers in virtio-shmem space, which will be inefficient.

> If you would use the virtio
> shmem backing, VIRTIO_TEE_CMD_REGISTER_MEM could refer to an offset
> within the device shmem address space, which would then become
> backed by host memory.

Yeah that sounds like a sensible approach for buffers allocated from
virtio-shmem space.

> IIRC the way we had discussed this before was that the virtio-tee
> driver would not register memory at all, but instead pass all
> requests through virtqueues. When the device (host) passes the
> request up to the actual TEE, it could transparently register
> the buffer before the transaction and unregister it afterwards,
> or copy the data to a pre-registered area.

This refers to temporary shared memory in TEE terms. It is the least
efficient approach: it may turn a single VIRTIO_TEE_CMD_INVOKE_FUNC
into as many as 9 underlying transactions between the host and the
TEE, i.e. 8 register/unregister shared-memory invocations and 1 actual
invoke command. This could be reduced to a single transaction between
the host and the TEE if we can make the approaches discussed above work.


>     Arnd
