Subject: Re: Re: [virtio-comment] About adding a new device type virtio-nvme
On Wed, Jan 18, 2023 at 10:15:12AM +0800, äèä wrote:
> On Tue, 17 Jan 2023 10:34:09 -0500, Stefan wrote:
> >On Tue, Jan 17, 2023 at 05:41:57PM +0800, äèä wrote:
> >> On Tue, 17 Jan 2023 09:32:05 +0100, David wrote:
> >> >On 17.01.23 03:04, äèä wrote:
> >> >> virtio-nvme advantages:
> >> >> 1) live migration
> >> >> 2) support remote storage
> >> >
> >> >At least 1) is an implementation detail in the NVME implementation in
> >> >the hypervisor. I suspect 2) in a similar way, or is there a fundamental
> >> >issue with that?
> >> >
> >> >One problematic thing about the NVME implementation in QEMU is that it
> >> >will pin (via vfio) all guest RAM. Could that be avoided using
> >> >virtio-NVME, or what exactly would be the difference between virtio-nvme
> >> >and ordinary NVME?
> >>
> >> In the virtualization scenario where devices are offloaded to hardware:
> >>
> >> NVME:
> >> [ASCII diagram, summarized: inside the VM, a guest userspace application
> >>  (SPDK) accesses memory by gVA; the guest kernel's VFIO stack (vfio-pci,
> >>  vfio_iommu_type1) and QEMU's NVME instance plus vIOMMU translate gIOVA
> >>  to gPA; the host kernel's VFIO stack (vfio-pci, vfio_iommu_type1) passes
> >>  the device through; in hardware, the DPU's NVME function issues DMA with
> >>  gIOVA, the physical IOMMU translates gIOVA to hPA in physical memory,
> >>  and the DPU forwards I/O via NVME-oF over TCP (RDMA, and so on) to
> >>  remote network storage.]
> >>
> >> It is difficult to implement PCIe passthrough live migration.
> >
> >Linux commit 115dcec65f61d53e25e1bed5e380468b30f98b14 ("vfio: Define
> >device migration protocol v2") defines the VFIO migration API and it's
> >implemented by several drivers in the kernel.
>
> Yes, this commit supports VFIO live migration, but the feature is a work
> in progress. A recent submission:
> https://lore.kernel.org/all/20230116141135.12021-10-avihaih@nvidia.com/
>
> >Can you explain the difficulty of implementing PCIe passthrough live
> >migration in more detail?
>
> VFIO live migration requires the IOMMU to support dirty page tracking.
> Currently, no IOMMU device supports this feature, so VFIO live migration
> will take a long time. Detailed information:
> https://www.qemu.org/docs/master/devel/vfio-migration.html

Can physical devices do their own dirty page tracking in the meantime, since
they know which pages are being written to?

I have CCed Alex Williamson regarding VFIO.

Stefan
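[Editor's note: for readers following the migration discussion, below is a
minimal sketch of how userspace drives the VFIO migration v2 uAPI defined by
the commit cited above. It assumes <linux/vfio.h> from a kernel with the v2
interface (5.18 or later) and an already-open VFIO device fd bound to a
driver that implements migration; error handling and the full state machine
are abbreviated. It is an illustration, not QEMU's actual implementation.]

    /*
     * Sketch of the VFIO migration v2 uAPI. Assumptions: `device_fd` is an
     * open VFIO device file descriptor and the bound driver implements the
     * migration feature.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Query whether the driver supports migration and which modes. */
    static int query_migration(int device_fd)
    {
        __u64 buf[(sizeof(struct vfio_device_feature) +
                   sizeof(struct vfio_device_feature_migration) + 7) / 8];
        struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
        struct vfio_device_feature_migration *mig =
            (struct vfio_device_feature_migration *)feature->data;

        memset(buf, 0, sizeof(buf));
        feature->argsz = sizeof(buf);
        feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;

        if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature) < 0) {
            perror("VFIO_DEVICE_FEATURE(MIGRATION)");
            return -1;
        }

        printf("migration flags: 0x%llx (STOP_COPY=%d, P2P=%d)\n",
               (unsigned long long)mig->flags,
               !!(mig->flags & VFIO_MIGRATION_STOP_COPY),
               !!(mig->flags & VFIO_MIGRATION_P2P));
        return 0;
    }

    /*
     * Move the device through the v2 state machine, e.g. to
     * VFIO_DEVICE_STATE_STOP_COPY on the source. For arcs that produce a
     * migration data stream, the kernel returns a data_fd; reading it until
     * EOF yields the device state to send to the destination (which writes
     * it into its own data_fd while in RESUMING).
     */
    static int set_mig_state(int device_fd, __u32 new_state)
    {
        __u64 buf[(sizeof(struct vfio_device_feature) +
                   sizeof(struct vfio_device_feature_mig_state) + 7) / 8];
        struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
        struct vfio_device_feature_mig_state *mig =
            (struct vfio_device_feature_mig_state *)feature->data;

        memset(buf, 0, sizeof(buf));
        feature->argsz = sizeof(buf);
        feature->flags = VFIO_DEVICE_FEATURE_SET |
                         VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
        mig->device_state = new_state;

        if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature) < 0) {
            perror("VFIO_DEVICE_FEATURE(MIG_DEVICE_STATE)");
            return -1;
        }
        return mig->data_fd; /* only meaningful for stream-producing arcs */
    }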
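[Editor's note: on the dirty-tracking point, the sketch below shows the
container-level interface that already exposes dirty page tracking to
userspace (VFIO_IOMMU_DIRTY_PAGES on a type1 container, present since
roughly kernel 5.8). Without IOMMU or device assistance the kernel can only
report pinned pages as dirty, which is the limitation discussed in the
thread; device-side tracking, as Stefan asks about, is exposed through
separate device features and is not shown here. Assumptions: `container_fd`
is a type1 container that already has DMA mappings, and `iova`/`size`
describe one mapped range; `pgsize` is typically 4096.]

    /*
     * Sketch of container-level dirty page tracking via
     * VFIO_IOMMU_DIRTY_PAGES. Assumptions as noted above.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Enable dirty tracking on the container. */
    static int start_dirty_tracking(int container_fd)
    {
        struct vfio_iommu_type1_dirty_bitmap dirty = {
            .argsz = sizeof(dirty),
            .flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
        };
        return ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);
    }

    /* Fetch the dirty bitmap for one mapped IOVA range. */
    static int get_dirty_bitmap(int container_fd, __u64 iova, __u64 size,
                                __u64 pgsize)
    {
        __u64 bits = size / pgsize;              /* one bit per page */
        __u64 bitmap_bytes = ((bits + 63) / 64) * 8;
        __u64 *bitmap = calloc(1, bitmap_bytes);

        /* The GET_BITMAP payload follows the header in one flat buffer. */
        __u8 buf[sizeof(struct vfio_iommu_type1_dirty_bitmap) +
                 sizeof(struct vfio_iommu_type1_dirty_bitmap_get)]
            __attribute__((aligned(8))) = {0};
        struct vfio_iommu_type1_dirty_bitmap *hdr = (void *)buf;
        struct vfio_iommu_type1_dirty_bitmap_get *range = (void *)hdr->data;

        hdr->argsz = sizeof(buf);
        hdr->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
        range->iova = iova;
        range->size = size;
        range->bitmap.pgsize = pgsize;
        range->bitmap.size = bitmap_bytes;
        range->bitmap.data = bitmap;

        if (ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, hdr) < 0) {
            perror("VFIO_IOMMU_DIRTY_PAGES(GET_BITMAP)");
            free(bitmap);
            return -1;
        }

        /* A set bit means that page was written while tracking was enabled. */
        printf("first bitmap word: 0x%llx\n", (unsigned long long)bitmap[0]);
        free(bitmap);
        return 0;
    }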