
Subject: Re: [virtio-comment] About adding a new device type virtio-nvme


Thu, 19 Jan 2023 12:33:54 +0200, Max Gurtovoy wrote:
>On 19/01/2023 12:19, äèä wrote:

>> Wed, 18 Jan 2023 12:09:59 +0200, Max Gurtovoy wrote:

>>> On 18/01/2023 5:23, äèä wrote:

>>>> On Tue, 17 Jan 2023 19:19:59 +0200, Max Gurtovoy wrote:

>>>>> On 17/01/2023 4:04, äèä wrote:

>>>>>> On Wed, 11 Jan 2023 10:16:55 -0500, Stefan wrote:

>>>>>>>> On Wed, Jan 11, 2023 at 11:21:35AM +0800, äèä wrote:

>>>>>>>> As we know, NVMe has more features than virtio-blk. For example, with the development of virtualization I/O offloading to hardware, both virtio-blk and NVMe-oF hardware offload are developing rapidly. So, if virtio and NVMe are combined into virtio-nvme, is it necessary to add a device type virtio-nvme?


>>>>>>> Hi,

>>>>>>> In theory, yes, virtio-nvme can be done. The question is why do it?

>>>>>>> NVMe already provides a PCI hardware spec for software and hardware
>>>>>>> implementations to follow. An NVMe PCI device can be exposed to the
>>>>>>> guest and modern operating systems recognize it without requiring new
>>>>>>> drivers.

>>>>>>> The value of VIRTIO here is probably in the deep integration into the
>>>>>>> virtualization stack with vDPA, vhost, etc. A virtio-nvme device can use
>>>>>>> all these things whereas a PCI device needs to do everything from
>>>>>>> scratch.

>>>>>> The NVMe technology and ecosystem are mature. However, in virtualization scenarios, NVMe devices can only be used via PCIe pass-through. When NVMe and virtio are combined and connected to the vDPA ecosystem, live migration is supported.
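For context, here is a minimal, non-authoritative C sketch of how userspace reaches a vDPA-backed device through the Linux vhost-vdpa character device; the /dev/vhost-vdpa-0 node name is an assumption (it depends on how the parent device was bound) and error handling is trimmed.

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
    /* Assumes the parent device was bound to the vhost_vdpa bus driver,
     * which creates a /dev/vhost-vdpa-N node. */
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    uint32_t dev_id = 0;
    uint64_t features = 0;

    /* Virtio device type exposed by the parent driver (e.g. 2 == block). */
    if (ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &dev_id) == 0)
        printf("virtio device id: %u\n", dev_id);

    /* Device feature bits, negotiated as for any other virtio device. */
    if (ioctl(fd, VHOST_GET_FEATURES, &features) == 0)
        printf("device features: 0x%llx\n", (unsigned long long)features);

    close(fd);
    return 0;
}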

>>>>>>> Let's not forget that virtio-blk is widely used and new commands are

>>>>>>> being added as needed. Which NVMe features are you missing in

>>>>>>> virtio-blk?

>>>>>> With the introduction of the DPU concept, a large number of vendors are offloading virtual devices to hardware. The virtio-blk back-end does not support remote storage. Therefore, virtio-nvme-of could combine the advantages of remote storage with virtio live migration.
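As background for the feature comparison in this thread, below is a rough C sketch contrasting virtio-blk's fixed request header with NVMe's 64-byte submission queue entry (layouts paraphrased from the virtio and NVMe base specifications; field names are simplified, not verbatim). The namespace ID and the command-specific dwords are where much of NVMe's extra expressiveness lives.

#include <stdint.h>

/* virtio-blk request header; the header is followed by the data buffers
 * and a one-byte status in the same descriptor chain. */
struct virtio_blk_req_hdr {
    uint32_t type;      /* VIRTIO_BLK_T_IN / _OUT / _FLUSH / _DISCARD / ... */
    uint32_t reserved;
    uint64_t sector;    /* offset in 512-byte sectors                       */
};

/* NVMe submission queue entry (64 bytes), simplified. */
struct nvme_sqe {
    uint8_t  opcode;
    uint8_t  flags;                   /* fusing, PRP vs. SGL selection      */
    uint16_t command_id;
    uint32_t nsid;                    /* namespace ID; no virtio-blk analog */
    uint32_t cdw2, cdw3;
    uint64_t metadata;                /* metadata pointer                   */
    uint64_t prp1, prp2;              /* data pointers (or SGL descriptor)  */
    uint32_t cdw10, cdw11, cdw12,
             cdw13, cdw14, cdw15;     /* command-specific dwords            */
};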

>>>>>>> I guess this is why virtio-nvme hasn't been done before: people who want
>>>>>>> NVMe can already do NVMe PCI, people who want VIRTIO can use virtio-blk,
>>>>>>> and so there hasn't been a great need to combine VIRTIO and NVMe yet.

>>>>>>> What advantages do you see in having virtio-nvme?

>>>>>> virtio-nvme advantages:

>>>>>> 1) live migration

>>>>> This is WIP and will use the VFIO live migration framework.

>>>> Yes, the VFIO live migration framework is WIP, but I still think vDPA is a friendlier framework.

>>> Not sure what you consider friendly?

>> My personal opinion: VFIO live migration places design requirements on the device itself.
>> With vDPA-based live migration, the software-abstracted vDPA device in the vDPA
>> framework can record some of the state, so the design requirements for virtio devices
>> that are offloaded to hardware may be lower.
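To make the "state recording" point a bit more concrete, here is a simplified sketch, loosely modeled on the split-ring virtqueue state that the Linux vDPA framework moves through its get_vq_state()/set_vq_state() callbacks; the struct and field names below are illustrative, not the kernel ABI.

#include <stdint.h>

/* Illustrative only: minimal per-virtqueue state a vDPA parent driver must
 * be able to save on the source and restore on the destination.  Real
 * devices also have to deal with in-flight descriptors and device-specific
 * state beyond the ring indices. */
struct vq_migration_state {
    uint16_t avail_index;   /* next available-ring entry the device will read */
    uint16_t used_index;    /* next used-ring entry the device will write     */
};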

>>> The community agreed that for SR-IOV, VF migration is done via the PF interface.

>>> Any device-specific migration (e.g. vDPA/virtio) is not as generic as
>>> VFIO migration. It will also be maintained by a smaller group of engineers.

>>> If you would like to use vDPA, I suggest using virtio-blk rather than
>>> inventing a virtio-nvme device that will surely have a smaller feature set
>>> than pure NVMe.

>>> In case you're missing some feature in virtio-blk that exists in NVMe,
>>> you're welcome to submit a proposal to the technical group for that
>>> feature.

>> Yes, this is good advice.

>> I wonder whether it would be feasible for virtio-blk to add Fabrics-related commands
>> so that it can support virtio-blk-of (virtio-blk over Fabrics).

> I'm totally confused.
> I thought you are trying to build some virtualized environment and
> you're looking for storage devices that support live migration.
> How will virtio-blk-of assist here?



I would like to push virtio storage in hardware-offloading scenarios and enable
open-source solutions that support remote storage access.
That is why we are discussing the need for virtio-nvme and virtio-blk-of.
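To make the virtio-blk-of idea slightly more concrete, here is a purely hypothetical C sketch of what a Fabrics-style "connect" request added to virtio-blk might carry. Nothing like this exists in the virtio specification today; the request type value, struct, and field names are invented for discussion.

#include <stdint.h>

/* Hypothetical request type, alongside VIRTIO_BLK_T_IN/OUT/FLUSH/...;
 * the value is made up for illustration. */
#define VIRTIO_BLK_T_FABRICS_CONNECT  0x1000

/* Hypothetical payload: enough for the device to attach its back-end to a
 * remote target, roughly mirroring an NVMe-oF connect command. */
struct virtio_blk_fabrics_connect {
    uint8_t  transport;         /* e.g. 0 = TCP, 1 = RDMA (illustrative)   */
    uint8_t  reserved[7];
    char     target_addr[64];   /* remote target address                   */
    char     target_port[8];
    char     subsystem[224];    /* subsystem / volume identifier, NQN-like */
};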



> And how will it be better than using existing over-fabrics solutions that
> can be the back-end of the storage device (iSCSI, NVMe-oF, etc.)?




>>> NVIDIA also has a DPU-based physical virtio-blk device (NVIDIA's
>>> virtio-blk SNAP) that supports SR-IOV and remote storage access.

>> For remote storage access, how is the physical virtio-blk device's back-end implemented?
>> What protocol is used?
>> Is it an open-source solution?

>>> A live migration specification is WIP in both the NVMe and Virtio working
>>> groups. I can't say which will be merged first.

>> Yes, I agree with you on that point.


>>>>>> 2) support remote storage

>>>>> There are solutions today that can use remote storage as an NVMe
>>>>> namespace, for example a DPU-based NVMe device such as NVIDIA's NVMe SNAP
>>>>> device.

>>>> Yes, you're right. NVMe has a built-in advantage over virtio-blk in hardware offloading.
>>>> The reason I propose virtio-nvme is to combine NVMe and virtio, so that NVMe
>>>> can fit into the virtio ecosystem through the virtio interface specifications, such as vDPA.
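Purely as an illustration of what such a combination could mean at the device level (hypothetical, not anything specified by the virtio or NVMe working groups): a virtio-nvme device could keep NVMe's command set but carry submission and completion entries over virtqueues, with a small virtio config space advertising queue limits, for example:

#include <stdint.h>

/* Hypothetical, for discussion only: a possible virtio-nvme device
 * configuration layout.  None of this exists in the virtio spec; the
 * device type and field names are illustrative. */
struct virtio_nvme_config {
    uint32_t max_io_queues;      /* virtqueues usable as NVMe I/O queues    */
    uint32_t max_queue_entries;  /* max SQ/CQ entries per I/O queue         */
    uint32_t nn;                 /* number of namespaces (as in Identify)   */
    uint32_t mdts;               /* max data transfer size, NVMe-style      */
};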






