
virtio-comment message



Subject: Re: [virtio-comment] About adding a new device type virtio-nvme



On 2023/1/18 22:14, Stefan Hajnoczi wrote:
On Wed, Jan 18, 2023 at 10:15:12AM +0800, äèä wrote:
On Tue, 17 Jan 2023 10:34:09 -0500, Stefan wrote:
On Tue, Jan 17, 2023 at 05:41:57PM +0800, äèä wrote:
On Tue, 17 Jan 2023 09:32:05 +0100, David wrote:
On 17.01.23 03:04, äèä wrote:
The two diagrams are quite similar. Did you want to highlight a
difference between the two approaches in the diagram?
The biggest difference is between the VFIO and vDPA frameworks. The vDPA (virtio data path acceleration) kernel framework
is a pillar of the end-to-end vDPA solution; it enables NIC vendors to integrate their vDPA NIC kernel
drivers into the framework as part of their productization efforts.
For more details, see: https://www.redhat.com/en/blog/introduction-vdpa-kernel-framework
For the sake of the argument, let's assume VFIO can't be used in your
situation so vDPA is required. The part I don't understand is which
specific NVMe features you need that virtio-blk lacks?


I can think of one:

Avoiding the need to migrate guest applications from NVMe to virtio-blk?

Thanks



Stefan


