virtio-dev message

Subject: Re: [RFC] Virtio RDMA


On Tue, Feb 15, 2022 at 4:15 PM Junji Wei <weijunji@bytedance.com> wrote:
>
> Hi all,
>
> This RFC aims to introduce our recent work on VirtIO-RDMA.
>
> We have finished a draft of the VirtIO-RDMA specification and a
> vhost-user RDMA demo based on the spec. The demo currently works with
> CM/Socket and UD/RC QPs.
>
> NOTE that this spec currently focuses only on emulating a soft
> RoCE (RDMA over Converged Ethernet) device on top of a normal Network
> Interface Card (without RDMA capability). So most InfiniBand (IB) specific
> features such as Subnet Manager (SM), Local Identifier (LID) and Automatic
> Path Migration (APM) are not covered in this specification.
>
> There are four parts of our work:
>
> 1. VirtIO-RDMA driver in the Linux kernel:
> https://github.com/weijunji/linux/tree/virtio-rdma-patch
>
> 2. VirtIO-RDMA userspace provider in rdma-core:
> https://github.com/weijunji/rdma-core/tree/virtio-rdma
>
> 3. VHost-User RDMA backend in QEMU:
> https://github.com/weijunji/qemu/tree/vhost-user-rdma
>
> 4. VHost-User RDMA demo implemented with DPDK:
> https://github.com/weijunji/dpdk-rdma
>
>
> To test with our demo:
>
> 1. Build Linux kernel with config INFINIBAND_VIRTIO_RDMA
>
> 2. Build QEMU with config VHOST_USER_RDMA
>
> 3. Build rdma-core and install it into the VM image
>
> 4. Build and install DPDK (NOTE that we have only tested with DPDK 20.11.3)
>
> 5. Build dpdk-rdma:
>     $ cd dpdk-rdma
>     $ meson build
>     $ cd build
>     $ ninja
>
> 6. Run dpdk-rdma:
>     $ sudo ./dpdk-rdma --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' \
>       --vdev 'net_tap0' --lcore '1-3'
>     $ sudo brctl addif virbr0 dtap0
>
> 7. Boot the kernel in QEMU via libvirt, with the following extra args:
> <qemu:commandline>
>     <qemu:arg value='-chardev'/>
>     <qemu:arg value='socket,path=/tmp/sock0,id=vunet'/>
>     <qemu:arg value='-netdev'/>
>     <qemu:arg value='vhost-user,id=net1,chardev=vunet,vhostforce,queues=1'/>
>     <qemu:arg value='-device'/>
>     <qemu:arg value='virtio-net-pci,netdev=net1,bus=pci.0,multifunction=on,addr=0x2'/>
>     <qemu:arg value='-chardev'/>
>     <qemu:arg value='socket,path=/tmp/vhost-rdma0,id=vurdma'/>
>     <qemu:arg value='-device'/>
>     <qemu:arg value='vhost-user-rdma-pci,page-per-vq,disable-legacy=on,addr=2.1,chardev=vurdma'/>
> </qemu:commandline>
>
> NOTE that virtio-net-pci and vhost-user-rdma-pci MUST be at the same
> PCI address (the same slot, as with addr=0x2 and addr=2.1 above).
>
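For reference, steps 1, 3 and 4 above name what to build but not the exact
commands. Below is a minimal sketch of one way to do it; it is untested, and
the CONFIG_ symbol spelling, the build.sh entry point and the install steps
are assumptions based on the usual kernel, rdma-core and DPDK build flows
rather than anything stated in the mail above.

    # Step 1: enable the driver in the virtio-rdma-patch kernel tree
    # (assuming the Kconfig symbol is CONFIG_INFINIBAND_VIRTIO_RDMA)
    $ cd linux
    $ make defconfig
    $ scripts/config --enable CONFIG_INFINIBAND \
                     --enable CONFIG_INFINIBAND_VIRTIO_RDMA
    $ make olddefconfig && make -j$(nproc)

    # Step 3: build rdma-core from the virtio-rdma branch; the resulting
    # libraries/providers then have to be installed into the VM image
    # (e.g. via "ninja install" from the build/ directory)
    $ cd rdma-core
    $ bash build.sh

    # Step 4: build and install DPDK 20.11.3 (upstream meson/ninja flow)
    $ cd dpdk
    $ meson build
    $ ninja -C build
    $ sudo ninja -C build install && sudo ldconfig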
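Once the guest boots, the stock rdma-core tools should be able to see the
emulated device. None of this is part of the recipe above, and the device
name the driver registers is not known here, so treat it only as a rough
smoke test.

    # Inside the guest: list RDMA devices and dump their attributes
    $ ibv_devices
    $ ibv_devinfo
    # CM and RC QPs are reported to work, so an rping round trip between
    # two guests (or guest and host) is a reasonable check; <server-ip>
    # is a placeholder
    $ rping -s -v -C 4 &
    $ rping -c -a <server-ip> -v -C 4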

A silly question: if RoCE is the focus, why not extend virtio-net instead?

Thanks


