

Subject: Re: Re: [virtio-comment] About adding a new device type virtio-nvme


On Thu, Jan 19, 2023 at 07:02:00PM +0800, äèä wrote:
> Thu, 19 Jan 2023 12:33:54 +0200, Max Gurtovoy wrote:
> >On 19/01/2023 12:19, äèä wrote:
> 
> >> Wed, 18 Jan 2023 12:09:59 +0200, Max Gurtovoy wrote:
> 
> >>> On 18/01/2023 5:23, äèä wrote:
> 
> >>>> On Tue, 17 Jan 2023 19:19:59 +0200, Max Gurtovoy wrote:
> 
> >>>>> On 17/01/2023 4:04, äèä wrote:
> 
> >>>>>> On Wed, 11 Jan 2023 10:16:55 -0500, Stefan wrote:
> 
> >>>>>>> On Wed, Jan 11, 2023 at 11:21:35AM +0800, äèä wrote:
> 
> >>>>>>>> As we know, NVMe has more features than virtio-blk. For example, with the development of virtualization I/O offloading to hardware, virtio-blk and NVMe-oF offloading to hardware are developing rapidly. So if virtio and NVMe are combined into virtio-nvme, is it necessary to add a new device type virtio-nvme?
> 
> >>>>>>
> 
> >>>>>>> Hi,
> 
> >>>>>>> In theory, yes, virtio-nvme can be done. The question is why do it?
> 
> >>>>>>> NVMe already provides a PCI hardware spec for software and hardware
> >>>>>>> implementations to follow. An NVMe PCI device can be exposed to the
> >>>>>>> guest and modern operating systems recognize it without requiring new
> >>>>>>> drivers.
> 
> >>>>>>> The value of VIRTIO here is probably in the deep integration into the
> >>>>>>> virtualization stack with vDPA, vhost, etc. A virtio-nvme device can use
> >>>>>>> all these things whereas a PCI device needs to do everything from
> >>>>>>> scratch.
> 
> >>>>>> The NVMe technology and ecosystem are mature. However, in virtualization scenarios, NVMe devices can only use PCIe pass-through. When NVMe and virtio are combined to connect to the vDPA ecosystem, live migration is supported.
> 
> >>>>>>> Let's not forget that virtio-blk is widely used and new commands are
> >>>>>>> being added as needed. Which NVMe features are you missing in
> >>>>>>> virtio-blk?
> 
> >>>>>> With the introduction of the DPU concept, a large number of vendors are offloading virtual devices to hardware. The back-end of virtio-blk does not support remote storage. Therefore, virtio-nvme-of could combine the advantages of remote storage and virtio live migration.
> 
> >>>>>>> I guess this is why virtio-nvme hasn't been done before: people who want
> >>>>>>> NVMe can already do NVMe PCI, people who want VIRTIO can use virtio-blk,
> >>>>>>> and so there hasn't been a great need to combine VIRTIO and NVMe yet.
> 
> >>>>>>> What advantages do you see in having virtio-nvme?
> 
> >>>>>> virtio-nvme advantages:
> 
> >>>>>> 1) live migration
> 
> >>>>>
> 
> >>>>> This is WIP and will use the VFIO live migration framework.
> 
> >>>> Yes, the VFIO live migration framework is WIP, but I still think vDPA is a friendlier framework.
> 
> >>
> 
> >>
> 
> >>> Not sure what you consider friendly?
> 
> >> My personal opinion: VFIO live migration imposes design requirements on the device.
> >> But with vDPA-based live migration, the software-abstracted vDPA device in the vDPA
> >> framework can do some of the state recording, so the design requirements for virtio
> >> devices that are offloaded to hardware may be lower.
> 
> >>
> 
> >>
> 
> >>
> 
> >>
> 
> >>> The community agreed that in SR-IOV, VF migration is done via the PF interface.
> 
> >>
> 
> >>
> 
> >>> Any device-specific migration (e.g. vdpa/virtio) is not as generic as
> >>> VFIO migration. Also, it will be maintained by a smaller group of engineers.
> 
> >>
> 
> >>> If you would like to use vdpa - I suggest using virtio-blk and not
> >>> inventing a virtio-nvme device that will surely have a smaller feature set
> >>> than pure NVMe.
> 
> >>
> 
> >>
> 
> >>> In case you're missing some feature in virtio-blk that exists in NVMe,
> >>> you're welcome to submit a proposal to the technical group with that
> >>> feature.
> 
> >>
> 
> >> Yes, this is good advice.
> 
> >> I wonder if it is feasible for virtio-blk to add Fabrics-related commands so
> >> that it can support virtio-blk-of (over Fabrics).
> 
> >
> 
> >I'm totally confused.
> >
> >I thought you're trying to build some virtualized environment and you're
> >looking for storage devices that support live migration.
> >How will virtio-blk-of assist here?
> 
> 
> 
> I would like to push virtio storage in hardware offloading scenarios, 
> enabling open source solutions that support remote storage access.
> So we're talking about the need for virtio-nvme and virtio-blk-of.

If I understand correctly, you're saying the guest driver needs to speak
the same protocol as the remote storage?

That's a good idea for local storage because it avoids extra layers of
software that parses/translates commands.

However, I don't understand why it matters for remote storage because
commands need to be parsed by the DPU and sent as messages over a
fabric anyway. Whether you go virtio-blk<->NVMeoF,
virtio-blk<->virtio-blk-of, or nvme-pci<->NVMeoF, it's still the same
path. None of them presents a significant optimization opportunity.
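
To make that concrete, here is a rough sketch of the kind of per-command
translation a DPU backend performs on the virtio-blk<->NVMeoF path anyway. It
is not taken from any existing implementation: the names virtio_blk_req_hdr,
nvme_rw_cmd, and translate_blk_to_nvme are illustrative only, and a 512-byte
LBA format is assumed so virtio sectors map 1:1 onto NVMe LBAs.

#include <stdint.h>

#define VIRTIO_BLK_T_IN   0u   /* read  */
#define VIRTIO_BLK_T_OUT  1u   /* write */

struct virtio_blk_req_hdr {      /* header descriptor of a virtio-blk request */
    uint32_t type;
    uint32_t reserved;
    uint64_t sector;             /* offset in 512-byte sectors */
};

struct nvme_rw_cmd {             /* simplified NVM read/write SQE fields */
    uint8_t  opcode;             /* 0x02 = Read, 0x01 = Write */
    uint32_t nsid;
    uint64_t slba;               /* CDW10/11: starting LBA */
    uint16_t nlb;                /* CDW12[15:0]: number of LBAs, zero-based */
};

/* Map one virtio-blk read/write header onto an NVMe I/O command. */
static int translate_blk_to_nvme(const struct virtio_blk_req_hdr *req,
                                 uint32_t data_len, uint32_t nsid,
                                 struct nvme_rw_cmd *cmd)
{
    if (req->type != VIRTIO_BLK_T_IN && req->type != VIRTIO_BLK_T_OUT)
        return -1;               /* flush, discard, etc. need their own mapping */

    cmd->opcode = (req->type == VIRTIO_BLK_T_IN) ? 0x02 : 0x01;
    cmd->nsid   = nsid;
    cmd->slba   = req->sector;   /* 1:1 because 512-byte LBAs are assumed */
    cmd->nlb    = (uint16_t)(data_len / 512) - 1;
    return 0;
}

Whichever front-end protocol the guest speaks, the backend ends up doing
roughly this much work per command before the request hits the fabric.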

The main optimization is to configure some sort of RDMA to avoid copying
around I/O buffers, but the buffers only contain data and are not
protocol-specific so virtio-blk<->NVMeoF should work.
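
As an illustration, here is a minimal libibverbs sketch; the wrapper name
register_io_buffer is made up, and queue-pair setup and error handling are
omitted. The point is that the registration step only ever sees a raw buffer
and its length, never a virtio or NVMe command structure.

#include <stddef.h>
#include <infiniband/verbs.h>

/* Register a guest I/O buffer for RDMA. Nothing here is protocol-specific:
 * the HCA is only told where the data lives and how it may be accessed, so
 * the same memory region serves a virtio-blk front end and an NVMe-oF (RDMA
 * transport) back end alike. */
static struct ibv_mr *register_io_buffer(struct ibv_pd *pd,
                                         void *buf, size_t len)
{
    return ibv_reg_mr(pd, buf, len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}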

Can you explain what you wrote in a bit more detail? I don't understand
why virtio-blk-of is needed.

Stefan
