Subject: Re: [virtio-dev] [PATCH v2] virtio-tee: Reserve device ID 46 for TEE device


On Wed, 27 Sept 2023 at 21:39, Arnd Bergmann <arnd@linaro.org> wrote:
>
> On Wed, 27 Sept 2023 at 16:09, Sumit Garg <sumit.garg@linaro.org> wrote:
> >
>
> It looks like I accidentally dropped the entire Cc list in my earlier reply,
> adding them all back.

No worries.

>
> > On Wed, 27 Sept 2023 at 01:44, Arnd Bergmann <arnd@linaro.org> wrote:
> > >
> > > On Tue, 26 Sept 2023 at 08:44, Sumit Garg <sumit.garg@linaro.org> wrote:
> > > > On Tue, 26 Sept 2023 at 08:16, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> > > > > On Tue, Sep 26, 2023 at 8:00 AM Sumit Garg <sumit.garg@linaro.org> wrote:
> > > > > >
> > > > > > How about shared memory support? We would like to register guest pages with the trusted OS.
> > > > >
> > > > > Coincidentally, Arnd and I (among others) discussed this in person
> > > > > last week, and the conclusion was that only temporary shared memory
> > > > > is possible with virtio. So the shared memory has to be set up and
> > > > > torn down by the host during each operation, typically open-session
> > > > > or invoke-func.
> > > >
> > > > Agreed, as I was part of those discussions. But I would like to
> > > > understand the reasoning behind it. Is there any restriction in the
> > > > VIRTIO specification that prevents us from registering guest page PAs
> > > > with a device (a TEE in our case) to allow for zero-copy transfers?
> > > >
> > > > Alex mentioned some references to virtio GPU device. I suppose I need
> > > > to dive into its implementation to see if there are any similarities
> > > > to our use-case.
> > > >
> > > > > That might not be optimal if trying to maximize
> > > > > performance, but it is portable.
> > > >
> > > > IMO, the ABI should be flexible enough to support a TEE with optimum
> > > > performance.
> > >
> > > As we discussed last week, I can see two possible ways to implement
> > > a TEE device within the constraints of the virtio specification:
> > >
> > > a) Allocate a shared memory area in the device (host) and export it
> > >    to the driver (guest) via a virtio shared memory area. This shared
> > >    memory can be shared with userspace using mmap() if necessary.
> > >    A tee command in this case would be sent using a normal virtqueue
> > >    to the device with a pair of one transmit and one receive buffer.
> > >    Any arguments that refer to memory blocks in this case are
> > >    offsets into the shared memory area. Using this preallocated buffer
> > >    is similar to earlier TEE implementations but has some restrictions.
> > >    The command in this case has to be copied in and out by the
> > >    hypervisor implementation.
> >
> > Yeah, that was the initial approach we used for OP-TEE, but it was
> > limited by the fixed size of the shared memory area. A guest client
> > application may want to share a larger data buffer with a TA, which
> > can't be supported via this approach.
>
> I don't know if there is a limit on the size of the virtio shared
> memory area other than the PCI MMIO space size; could
> this just be made (much) larger?

Actually it's a double-edged sword. You don't want to block too much
host memory that then sits unused, but on the other hand not so little
that it can't serve guest application needs. That's the reason we
should provide an option to dynamically share guest memory with a TEE.
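
(Just to make the fixed-size limitation concrete: with option (a) the
guest driver would map whatever region the host chose to export, roughly
as in the sketch below. This is only a sketch of the Linux guest side;
VIRTIO_TEE_SHM_ID is made up and not from any spec.)

#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/io.h>

#define VIRTIO_TEE_SHM_ID 0	/* hypothetical shmem region id */

/* Map the host-provided shared memory region. Its size was fixed by the
 * host when the device was created, which is exactly the sizing problem
 * described above. */
static void __iomem *virtio_tee_map_shm(struct virtio_device *vdev,
					size_t *len)
{
	struct virtio_shm_region region;

	if (!virtio_get_shm_region(vdev, &region, VIRTIO_TEE_SHM_ID))
		return NULL;	/* host did not export a region */

	*len = region.len;
	return ioremap_wc(region.addr, region.len);
}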

>
> > > b) Send all data through the virtqueue itself, pointing into normal
> > >    guest memory. The first buffer sent to the device is the request,
> > >    while the receive buffer is the result. Instead of pointers to
> > >    shared memory, this means that all data transfers would be
> > >    done in additional buffers on the same virtio transaction, and
> > >    the host would have to register the guest memory dynamically
> > >    as part of the command before forwarding it to a TEE that
> > >    relies on registering shared memory, and unmap it afterwards,
> > >    since the guest might reuse the buffers for other data later
> > >    that it does not want to share with the TEE.
> > >
> >
> > The TEE communication protocol has to support INOUT shared memory
> > buffers, so it will be quite tricky to support that with TX-only and
> > RX-only buffers (many more buffer copies).
>
> I don't remember if you can just list the same address in
> virtio for both directions, but that could solve this problem.
>

Okay, in that case I think the following approach can work:

- Issue VIRTIO_TEE_CMD_REGISTER_MEM to register an additional buffer
passed through the virtqueue.
- Instruct the TEE to perform operations on it via VIRTIO_TEE_CMD_INVOKE_FUNC.
- Issue VIRTIO_TEE_CMD_UNREGISTER_MEM to unregister that additional
buffer passed through the virtqueue.

I suppose we can pass either a guest user-space or a kernel memory
reference in that virtqueue. Does this approach make sense to you?
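
To make it a bit more concrete, here is a very rough sketch of the
request layout I have in mind (all struct names and command numbers
below are placeholders, nothing from any spec yet):

#include <stdint.h>

/* Placeholder command numbers, not from any spec. */
#define VIRTIO_TEE_CMD_REGISTER_MEM	3
#define VIRTIO_TEE_CMD_INVOKE_FUNC	4
#define VIRTIO_TEE_CMD_UNREGISTER_MEM	5

/* Common header placed in the driver-to-device (TX) buffer. */
struct virtio_tee_req {
	uint32_t cmd;		/* VIRTIO_TEE_CMD_* */
	uint32_t session;	/* handle from open-session */
};

/* REGISTER_MEM: the buffer to be shared rides along as an extra
 * descriptor in the same virtqueue request; the device registers it
 * with the TEE and hands back a handle in the RX buffer. */
struct virtio_tee_register_mem {
	struct virtio_tee_req hdr;
	uint64_t len;		/* length of the attached buffer */
};

struct virtio_tee_register_mem_resp {
	uint32_t status;
	uint32_t shm_handle;	/* referenced later by INVOKE_FUNC params */
};

/* UNREGISTER_MEM just carries the handle to drop. */
struct virtio_tee_unregister_mem {
	struct virtio_tee_req hdr;
	uint32_t shm_handle;
};

The buffer itself would be supplied as an extra descriptor in the same
request, so the device can register it with the TEE before
VIRTIO_TEE_CMD_INVOKE_FUNC refers to it by shm_handle, and
VIRTIO_TEE_CMD_UNREGISTER_MEM drops the handle once the client is done
with it.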

-Sumit

> > > Registering guest memory to the TEE permanently would be
> > > a layering violation since that makes invalid assumptions about
> > > the type of virtio transport that do not make sense to a virtio
> > > driver.
> >
> > AFAICS, VIRTIO is just a transport to relay information between
> > guest kernel drivers and host-emulated devices. The registration of
> > guest memory with the TEE won't be permanent but rather has a limited
> > lifetime: it stays valid until the guest client application closes
> > the context with the TEE. Once the TEE context is closed by the guest
> > client application, all the corresponding registered memory will be
> > freed.
>
> Right, so while in theory you can implement random non-virtio
> semantics by passing other commands through a virtqueue, I would
> no longer consider it a virtio driver at that point, since it makes
> assumptions about the host system implementation beyond what
> is abstracted in virtio.
>
> > > As far as the driver is concerned, the virtqueue is a
> > > socket type interface that does transactions on input and
> > > output data in place but has no concept of guest memory.
> >
> > That's true. As part of virtio-tee, we will pass page pointers in that
> > virtio input/output data in place. I suppose it's better to discuss
> > the implementation details once the AMD folks put the virtio-tee
> > specification out for review. They also have implementations of the
> > virtio-tee frontend and backend which they will put up for public
> > review.
> >
> > There certainly can be some essential details of the virtio spec that
> > I am missing here since I only started exploring it a month back. If
> > you have specific pointers to the spec, I will be happy to read them
> > carefully.
>
> I have not found an explicit wording that forbids you from
> referencing physical memory addresses indirectly in buffers
> that are passed through virtqueues, but the basic definition of
> a virtqueue in section 2.6 [1] describes the way that buffers
> are passed, and if a TEE driver wants to pass commands
> and their arguments in something that is not a virtqueue or
> a virtio-shmem area, I think you are clearly outside of the intended
> model.
>
>      Arnd
>
> [1] https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-270006

