
virtio-comment message



Subject: Re: Re: [virtio-comment] About adding a new device type virtio-nvme


On Tue, 17 Jan 2023 19:19:59 +0200, Max Gurtovoy wrote:

>On 17/01/2023 4:04, Leo Hou wrote:

>> On Wed, 11 Jan 2023 10:16:55 -0500, Stefan Hajnoczi wrote:

>>

>>

>>>> On Wed, Jan 11, 2023 at 11:21:35AM +0800, Leo Hou wrote:

>>>> As we know, NVMe has more features than virtio-blk. For example, with the development of virtualization I/O offloading to hardware, both virtio-blk and NVMe-oF hardware offload are developing rapidly. So if virtio and NVMe were combined into virtio-nvme, would it be worth adding a virtio-nvme device type?

>>

>>> Hi,

>>> In theory, yes, virtio-nvme can be done. The question is why do it?

>>

>>

>>> NVMe already provides a PCI hardware spec for software and hardware

>>> implementations to follow. An NVMe PCI device can be exposed to the

>>> guest and modern operating systems recognize it without requiring new

>>> drivers.

>>

>>> The value of VIRTIO here is probably in the deep integration into the

>>> virtualization stack with vDPA, vhost, etc. A virtio-nvme device can use

>>> all these things whereas a PCI device needs to do everything from

>>> scratch.

>>

>> The NVMe technology and ecosystem are mature. However, in virtualization scenarios, NVMe devices can only be used via PCIe pass-through. If NVMe and virtio are combined and connected to the vDPA ecosystem, live migration becomes possible.

>>

>>

>>> Let's not forget that virtio-blk is widely used and new commands are

>>> being added as needed. Which NVMe features are you missing in

>>> virtio-blk?

>> With the introduction of the DPU concept, many vendors are offloading virtual devices to hardware. The virtio-blk back-end does not support remote storage, so a virtio-nvme-of device could combine the advantages of remote storage with virtio live migration.

>>

>>

>>

>>> I guess this is why virtio-nvme hasn't been done before: people who want

>>> NVMe can already do NVMe PCI, people who want VIRTIO can use virtio-blk,

>>> and so there hasn't been a great need to combine VIRTIO and NVMe yet.

>>

>>> What advantages do you see in having virtio-nvme?

>>

>>

>> virtio-nvme advantages:

>> 1) live migration

>

>

>This is WIP and will use the VFIO live migration framework.

Yes, the VFIO live migration framework is WIP, but I still think vDPA is a friendlier framework.


>

>

>> 2) support remote storage

>

>

>There are solutions today that can use remote storage as an NVMe

>namespace. For example, DPU-based NVMe devices such as NVIDIA's NVMe SNAP

>device.



Yes, you're right. NVMe has a built-in advantage over virtio-blk for hardware offloading.
The reason I propose virtio-nvme is to combine NVMe and virtio, so that NVMe
can fit into the virtio ecosystem through the virtio interface specifications, such as vDPA.



--
Leo Hou/houyingle


