
virtio-comment message


Subject: RE: [virtio-comment] [PATCH v1 1/8] admin: Add theory of operation for device migration

> From: Jason Wang <jasowang@redhat.com>
> Sent: Wednesday, October 18, 2023 6:23 AM

> Why can suspend go directly from guest to device then?
Because all the virtio registers are treated equally by the live migration driver.
So why not?
As explained, the device synchronizes all the operations that are not mediated by the VMM.

If you claim that such synchronization is possible _only_ in software across the various mediation layers,
and that it is impossible to do in a single place in the device, then I do not agree.
V2 listed most of the synchronization points of the device.
> > When such device reset is done, it does not reset the device context, nor does it
> clear the dirty page records, because those are done by the controlling function.
> >
> > > I'm not convinced that the scalability is broken by just having 2 or
> > > 3 more registers. We all know MSIX requires much more than this.
> > >
> > Saying there is some other high resource consumer method exists, so lets
> consume more in new interface we do now, is not good approach.
> This is self-contradictory and double standard. You allow #MSI-X vectors to
> grow but not config?

> > MSI-X on its v2 is underway. Hopefully it will be finished this year, which is
> already cutting down O(N) resources.
> Why couldn't such a method be applied to config registers and others?
Because config registers are not of a de-duplicating type.
Meaning each config register is unique in nature. For such work, a queue/DMA approach is taken.
So such a method is already applied to config registers and others, so that they are not placed as _always_available_ registers.
We just need to do that in virtio too.
All the recent work on flow filters and counters no longer relies on config registers, as we both agreed in discussion [1], with your comment:

"Adding cvq is much easier than inventing(duplicating) the work of a transport."

[1] https://lore.kernel.org/virtio-comment/CACGkMEseZeT4VX8Ut-7KraxLKNOMKOgFDNxqKofXSFT8yHfg-w@mail.gmail.com/#t
> >
> > > You can't solve all the issues in one series. As stated many times,
> > I am not solving all issues in one series. This series builds the infrastructure.
> It's you that is raising the scalability issue, and what you've ignored is that such
> an "issue" has existed for many years and various hardware has been built on
> top of that.
That does not mean one should continue with such an issue.
Such hardware consumes more power and memory, which results in overall device inefficiency.
By that logic, you should have objected to the IMS patches in the Linux kernel, and you should also object to the new MSI-X proposal and say to just use registers.

> Again, we should make sure the function is correct before we can talk about
> others, otherwise it would be an endless discussion.
And using registers is not the way to make it correct.
Let's first make sure that the basic functionality for the member device is correct.

> >
> > > if you really
> > > care about the scalability, the correct way is to behave like a real
> > > transport through virtqueue instead of trying to duplicate the
> > > functionality of transport virtqueue slowly.
> > >
> > This will be impossible because no device will transport driver notifications
> using a virtqueue.
> > Therefore, virtqueue is not some generic transport that does everything - as
> simple as that. hence there is no transport virtqueue.
> You won't get such a wrong conclusion if you read that proposal.
I have read those 4 or 5 patches posted by Lingshan and showed you at the time that driver notifications do not come via a virtqueue.
And if I missed it, and they do come via a virtqueue, that does not meet the performance and "efficiency principle" from the paper you pointed to.

> >
> > And virtqueue for bulk data transfer exists so no need to invent yet another
> thing without a good reason.
> I don't understand why this is related to transport virtqueue anyhow, it's also a
> queue interface, no?
Transport virtqueue is a diversion into an unrelated topic here.

A guest VM driver must be able to talk to the member device for all queue configuration, etc., through its own channel, not mediated by the hypervisor.
Otherwise such plumbing does not work for any confidential-compute workload. Hence, I wouldn't discuss transport virtqueue for now.
