virtio-comment message



Subject: RE: [virtio-comment] Re: [PATCH v3 6/8] admin: Add theory of operation for write recording commands



> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Friday, November 17, 2023 6:07 PM
> To: Parav Pandit <parav@nvidia.com>
> 
> On Fri, Nov 17, 2023 at 12:15:54PM +0000, Parav Pandit wrote:
> > Hi Alex, Jason,
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: Friday, November 17, 2023 5:20 PM
> > > To: Parav Pandit <parav@nvidia.com>
> >
> > > > > Allocating resources on outgoing migration is a very bad idea.
> > > > > It is common to migrate precisely because you are out of resources.
> > > > > Incoming is a different story, less of a problem.
> > > > >
> > > > The resource allocated may not be on the same system.
> > > > Also, the resource is allocated while the VM is running, so I
> > > > don't see a problem.
> > >
> > > > Additionally, this is not what the Linux kernel maintainers of the
> > > > iommu subsystem told us either.
> > > > Let me know if you check with Alex W and Jason, who built this
> > > > interface.
> > >
> > > VFIO guys have their own ideas; if they want to talk to virtio guys,
> > > they can come here and do that.
> >
> > Since one of the use cases would have accepted letting dirty tracking
> > fail, I don't see a problem.
> > This is not the only command on the source side that can fail.
> > So I anticipate that QEMU and libvirt or any VFIO user would build the
> > orchestration around the possible failure, because the UAPI is well
> > defined.
> >
> > When there is a hypervisor that must have zero failures on the source
> > side, such a kernel + device can reserve everything upfront.
> >
> > Are you saying QEMU has zero memory allocations on the source side for
> > migration?
> > That would be interesting to know.
> 
> More or less, yes. More precisely, while in theory the allocations it
> does can fail, in practice that happens rarely enough that QEMU does not
> even bother checking and will immediately crash if they do. The reason is
> that it is using virtual memory, so it scales to a huge number of VMs.
> Migrating a single VM at a time is not even worth discussing.

Wow, crashing the running VM is even worse than failing the migration.
I have live migrated VMs one by one and have seen customers migrate them on hyperconverged systems. Of course, it was not QEMU.
Single-VM migration is real and used by cloud operators.
Why would you ignore it?
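
To make the orchestration point above concrete, here is a minimal sketch of how a source-side orchestrator could treat the dirty-tracking start as a fallible step and fail the migration cleanly instead of crashing the running VM. All helper names are hypothetical placeholders, not the actual QEMU/libvirt/VFIO calls:

    /*
     * Sketch only: the device_* and migrate_* helpers below are
     * hypothetical placeholders standing in for the real, well-defined
     * UAPI calls (e.g. the device's write-recording start command).
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool device_start_dirty_tracking(void)
    {
        /* In reality this issues the start command and may fail (e.g. -ENOMEM). */
        return false;               /* simulate the failure path */
    }

    static void device_stop_dirty_tracking(void)
    {
    }

    static bool migrate_with_device_dirty_log(void)
    {
        /* Placeholder for the precopy loop driven by the device dirty log. */
        return true;
    }

    static bool migrate_source(void)
    {
        if (!device_start_dirty_tracking()) {
            /*
             * The VM keeps running; the orchestrator reports a failed
             * migration attempt and may retry later. Nothing has to crash.
             */
            fprintf(stderr, "dirty tracking start failed, aborting migration\n");
            return false;
        }

        bool ok = migrate_with_device_dirty_log();
        device_stop_dirty_tracking();
        return ok;
    }

    int main(void)
    {
        return migrate_source() ? 0 : 1;
    }

The point is simply that a well-defined failure return lets the caller abort or retry the migration without touching the running VM.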

