virtio-dev message



Subject: Re: [virtio-dev] Virtio-loopback: PoC of a new Hardware Abstraction Layer for non-Hypervisor environments based on Virtio


Hello Xuan,


The main differences between virtio-loopback and VDUSE are:

1) the data sharing mechanism

2) the set of Virtio/vhost-user devices supported by each solution


In particular, Virtio-loopback implements a zero-copy memory mapping

mechanism: the data are directly accessible from user-space. It currently

supports vhost-user-blk, vhost-user-input, and vhost-user-rng.


To the best of my knowledge, VDUSE is based on a bounce-buffer

mechanism, which does not follow the zero-copy principle. In addition,

it supports only vhost-user-blk and vhost-user-net.


Kind regards,

Timos



On Tue, Apr 18, 2023 at 11:01 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
On Thu, 13 Apr 2023 16:35:59 +0300, Timos Ampelikiotis <t.ampelikiotis@virtualopensystems.com> wrote:
> Dear virtio-dev community,
>
> I would like to introduce to you Virtio-loopback, a Proof of Concept (PoC) that
> we have been working on at Virtual Open Systems in the context of the
> Automotive Grade Linux community (Virtualization & Containers expert
> group - EG-VIRT).
>
> We consider this work a PoC and we are not currently planning to
> upstream it. However, if the zero-copy mechanism or any other aspect of
> this work is interesting for other Virtio implementations, we would be
> glad to discuss further.

What is the difference between this and vduse?

Thanks.

>
> Overview:
> ---------
>
> Virtio-loopback is a new hardware abstraction layer for non-hypervisor
> environments based on virtio. The main objective is to enable applications
> to communicate with vhost-user devices in a non-hypervisor environment.
>
> In more detail, Virtio-loopback's design consists of a new transport
> (Virtio-loopback), a user-space daemon (the adapter), and a vhost-user
> device. The data path has been implemented following the "zero-copy"
> principle: vhost-user devices access the virtqueues directly in kernel
> space. This first implementation supports multiple queues, requires no
> virtio protocol changes, and applies only minor modifications to the
> vhost-user library. The vhost-user devices supported today are
> vhost-user-rng (both the Rust and C versions), vhost-user-input, and
> vhost-user-blk.
>
> Motivation & requirements:
> -------------------------
>
> 1. Enable the usage of the same user-space driver in both virtualized and
>    non-virtualized environments.
>
> 2. Maximize performance with zero-copy design principles.
>
> 3. Keep applications using such drivers unchanged, so that they run
>    transparently in both virtualized and non-virtualized environments.
>
> Design description:
> -------------------
>
> a) Component description:
> --------------------------
>
> The Virtio-loopback architecture consists of the three main components
> described below:
>
> 1) Driver: In order to route the VIRTIO communication to user-space, the
>    virtio-loopback driver was implemented. It consists of:
>    - A new transport layer, based on virtio-mmio, which is responsible
>      for routing the read/write accesses of the virtio device to the
>      adapter binary.
>    - A character device which works as an intermediate layer between the
>      user-space components and the transport layer. The character device
>      lets the adapter provide all the required information and initialize
>      the transport, and at the same time provides direct access to the
>      vrings from user-space. Access to the vrings is based on a memory
>      mapping mechanism which allows the vhost-user device to read and
>      write data directly in kernel memory without any copy.
>
> 2) Adapter: Takes over the role that QEMU plays in the corresponding
>    virtualized scenario. Specifically, it combines the functionality of
>    two main QEMU components, the virtio-mmio transport emulation and the
>    vhost-user backend, in order to work as a bridge between the transport
>    and the vhost-user device. The two main parts of the adapter are:
>    - A vhost-user backend, which is the main communication point with the
>      vhost-user device.
>    - A virtio emulation layer, which handles the messages coming from the
>      driver and translates them into vhost-user messages/actions.
>
> 3) Vhost-user device: This component required only minimal modifications
>    to make the vrings directly accessible in kernel memory.
>
> b) Communication between the virtio-loopback components:
> -------------------------------------------------------
>
> After describing the role of each component, a few details should be given
> about how they interact with each other and the mechanisms used.
>
> 1) Transport & Adapter:
>    - The two components share a communication data structure which
>      describes the current read/write operation requested by the
>      transport.
>    - Once this data structure has been filled with all the required
>      information, the transport triggers an EventFD and waits. The
>      adapter wakes up, takes the corresponding actions, and finally
>      notifies and unlocks the transport by calling an IOCTL system call.
>    - Compared to the virtualized scenario, the adapter issues an IOCTL
>      system call to the driver in place of an interrupt.
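[The handshake above can be sketched, from the adapter's side, roughly as follows. This is only an illustration: the layout of the shared structure, the ioctl number, and the function names are hypothetical, not the PoC's actual interface.]

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

/* Shared descriptor the transport fills in before signalling the
 * adapter (hypothetical layout). */
struct loopback_op {
    uint64_t addr;     /* register/memory address accessed */
    uint32_t size;     /* access width in bytes */
    uint32_t is_write; /* 1 = write, 0 = read */
    uint64_t value;    /* data written, or the result of a read */
};

/* Hypothetical ioctl: notify and unlock the sleeping transport. */
#define LOOPBACK_IOCTL_DONE _IOW('l', 1, struct loopback_op)

/* One iteration of the adapter loop: block on the eventfd the transport
 * triggers, emulate the access, then wake the transport via ioctl. */
static int handle_one(int efd, int drv_fd, struct loopback_op *op)
{
    uint64_t ev;

    if (read(efd, &ev, sizeof(ev)) != sizeof(ev))
        return -1;
    if (!op->is_write)
        op->value = 0xdead; /* emulate reading a device register */
    /* In the real flow this ioctl unblocks the transport; against any
     * other fd it simply fails and is ignored here. */
    ioctl(drv_fd, LOOPBACK_IOCTL_DONE, op);
    return 0;
}
```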
>
> 2) Adapter & Vhost-user device:
>    - The mechanisms used between these two components are the same as in
>      the virtualized case:
>      a) A UNIX socket is in place to exchange the VHOST-USER messages.
>      b) EventFDs are used to trigger VIRTIO kick/call requests.
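[The kick/call path above reduces to plain eventfd signalling, once the fds have been passed over the vhost-user UNIX socket via SCM_RIGHTS. A minimal sketch, with illustrative function names only:]

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Adapter side: notify the device that new buffers are available by
 * bumping the kick eventfd's counter. */
static int send_kick(int kickfd)
{
    uint64_t one = 1;
    return write(kickfd, &one, sizeof(one)) == sizeof(one) ? 0 : -1;
}

/* Device side: block until a kick arrives; read() returns the
 * accumulated counter and resets it to zero. */
static uint64_t wait_kick(int kickfd)
{
    uint64_t n = 0;

    if (read(kickfd, &n, sizeof(n)) != sizeof(n))
        return 0;
    return n;
}
```

The "call" direction (device interrupting the driver side) works the same way with the roles of reader and writer swapped.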
>
> 3) Transport & Vhost-user device:
>    - Since the vrings are allocated in kernel memory, the vhost-user
>      device needs to request access to them from the virtio-loopback
>      driver. This requirement is served by the MMAP and IOCTL system
>      calls implemented in the driver.
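[From the vhost-user device's side, that access could look roughly like the sketch below. The device node path, ioctl number, and offset convention are hypothetical, not the driver's actual interface.]

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical ioctl: ask the driver where the vring lives. */
struct vring_info {
    uint64_t offset; /* mmap offset the driver assigned to the vring */
    uint64_t size;   /* length of the vring region */
};
#define LOOPBACK_IOCTL_VRING_INFO _IOR('l', 2, struct vring_info)

static void *map_vring(const char *devnode)
{
    struct vring_info info;
    void *ring;
    int fd = open(devnode, O_RDWR);

    if (fd < 0)
        return NULL;
    if (ioctl(fd, LOOPBACK_IOCTL_VRING_INFO, &info) < 0) {
        close(fd);
        return NULL;
    }
    /* MAP_SHARED: reads and writes land directly in the kernel pages
     * that hold the vring -- no bounce buffer, no copy. */
    ring = mmap(NULL, info.size, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, info.offset);
    close(fd);
    return ring == MAP_FAILED ? NULL : ring;
}
```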
>
> c) Vrings & Zero copy memory mapping mechanism:
> ----------------------------------------------
>
> The vrings are allocated by the virtio driver in kernel memory space. To
> make them accessible from user-space, and in particular by the vhost-user
> device, a new memory mapping mechanism was created in the virtio-loopback
> driver. The mechanism is based on a page-fault handler which maps the
> accessed pages on demand.
>
> Known issues & room for improvement:
> -----------------------------------
>
> A known limitation of the current implementation:
> - The memory mapping mechanism needs improvement; as it stands, the
>   device can potentially access the whole of kernel memory. A more
>   fine-grained mapping could be enforced by the kernel by narrowing
>   down the shared memory block.
>
> Possible next development targets:
> - Security checks for the memory shared with user-space (the vhost-user
>   device)
> - Parallel device handling in the virtio-loopback transport and adapter
> - Support for more vhost-user devices
>
> More information:
> ----------------
>
> The full description of the technology can be found in the links below:
> - Virtio-loopback design document
> <https://git.virtualopensystems.com/virtio-loopback/docs/-/blob/virtio-loopback-rfc/design_docs/EG-VIRT_VOSYS_virtio_loopback_design_v1.4_2023_04_03.pdf>
> - How to test the technology
> <https://git.virtualopensystems.com/virtio-loopback/docs/-/blob/virtio-loopback-rfc/README.md>
>
> Links for all the key components of the design can be found below:
> 1) Virtio-loopback-transport
> <https://git.virtualopensystems.com/virtio-loopback/loopback_driver/-/tree/virtio-loopback-rfc>
> 2) Adapter
> <https://git.virtualopensystems.com/virtio-loopback/adapter_app/-/tree/virtio-loopback-rfc>
> 3) Vhost-user devices in Qemu
> <https://git.virtualopensystems.com/virtio-loopback/qemu/-/tree/virtio-loopback-rfc>
>
> Virtio-loopback has been tested on an RCAR-M3 board (AGL Needlefish) and
> on x86 systems (Fedora 37). In the virtio-blk case, the results have been
> found comparable with the VDUSE technology:
> - Automotive Grade Linux All Member Meeting Spring (8-9/03/2023) -
>   Presentation
>   <https://static.sched.com/hosted_files/aglammspring2023/44/vosys_virtio-loopback-berlin_2023-03-08.pdf>
>   + Activity done in the context of the AERO EU project (grant agreement No
>   101092850)
>
> Thank you for taking the time to review this PoC,
> I would appreciate your feedback and suggestions for improvements.
>
> Best regards,
> Timos Ampelikiotis
>

