
Subject: RE: [PATCH V2 6/6] virtio-pci: implement dirty page tracking


> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Tuesday, November 7, 2023 3:55 PM
> 
> On Tue, Nov 07, 2023 at 06:01:27PM +0800, Zhu, Lingshan wrote:
> >
> >
> > On 11/6/2023 7:13 PM, Parav Pandit wrote:
> > > > From: Zhu, Lingshan <lingshan.zhu@intel.com>
> > > > Sent: Monday, November 6, 2023 3:04 PM
> > > >
> > > > So, please no pass-through discussion anymore.
> > > If you comment like this, nothing can progress.
> > >
> > > What you are implying with above language is:
> > > "hey a virtio can do live migration ONLY by creating a vdpa device on top of an
> ALREADY existing virtio device, and by running through 3
> layers of stack you get a virtio device on the other side!".
> > I never said that, right? I keep explaining how pass-through and "trap
> > and emulate" work, I even explained how PASID works.
> 
> Parav, Lingshan can we please stop the "what is pass through" arguments?
> 
I am not discussing it at all anymore.
The use case is well defined, and we are seeing how we can/cannot have one proposal.

> I think that the term is vague, as
> (almost?) no hypervisor passes all accesses through without exception.
> And the fact you have been speaking past each other on this subject for how long
> now? seems to demonstrate I'm right.
> 
In v3 I acknowledged both use cases in the commit log, unlike the other side.

> Describing migration in the spec as opposed to leaving it up to hypervisors
> seems valuable at least to me since historically hypervisors did such a bad job
> of it. 
It is done in v3.

> So I personally feel it's nice if it's there, and the SUSPEND bit only works
> after DRIVER_OK. So that's an example argument that makes sense to me.  
As discussed, suspend is useful and should be controlled by the guest anyway for power management.
The hypervisor is not supposed to use it during LM.
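
To make that ordering concrete, here is a minimal sketch of a guest driver setting such a SUSPEND bit only after DRIVER_OK. The SUSPEND bit value and the status accessors below are placeholders for illustration, not the encoding from the actual proposal.

#include <stdint.h>

#define VIRTIO_STATUS_DRIVER_OK  0x04   /* defined by the virtio spec */
#define VIRTIO_STATUS_SUSPEND    0x40   /* placeholder value for this sketch */

/* Transport-specific accessors for the device_status field; assumed here. */
extern uint8_t read_device_status(void);
extern void    write_device_status(uint8_t status);

/* Guest-driven suspend for power management: only attempt SUSPEND once
 * DRIVER_OK has already been set, per the argument above. */
static int guest_suspend_device(void)
{
    uint8_t status = read_device_status();

    if (!(status & VIRTIO_STATUS_DRIVER_OK))
        return -1;                 /* SUSPEND is meaningless before DRIVER_OK */

    write_device_status(status | VIRTIO_STATUS_SUSPEND);

    /* Poll until the device acknowledges the suspend request. */
    while (!(read_device_status() & VIRTIO_STATUS_SUSPEND))
        ;
    return 0;
}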

> But the number of layers involved in the control path seems completely
> irrelevant to most people. *That* is an nvidia thing, something very
> specific about vfio and vdpa
> and whatnot.
> Nothing to do with the spec, wrong list for this.
> 
Certainly, it is not an Nvidia thing.
Multiple device vendors would like to do this, and so would users.
So, I disagree, and I don't want to debate it.

Didn't you ask at the start of this email to stop debating what passthrough is?

> 
> > >
> > > Then for sure, I disagree with it 100% for such a single-minded design.
> > >
> > > At least I am trying to propose a solution that can work for generic
> passthrough where the least amount of hypervisor mediation is done.
> > >
> > > And an extension where the hypervisor has the choice to add more mediation layers as
> it finds suitable.
> > > And if there are technical issues, maybe two different interfaces or more
> admin commands are needed for the two modes.
> > > The idea is to attempt to converge and discuss those details, not the
> opposite.
> > >
> > > Your above comment shows a clear sign of non-collaboration to make both
> modes work.
> > Well, I see you are emotional; please take a deep breath and calm
> > down, be professional, give yourself a break, and there is really no
> > need to be mad at me.
> >
> > As you know, I am just a Junior Engineer at Intel, unlike you, a
> > Senior Principal Engineer who has spent many years and developed
> > knowledge in this area. So I am quite technically focused; these have
> > all been technical discussions till now.
> 
> It looks more like a passive-aggressive flamewar from the side.
> So maybe try to see the other's point of view. I asked what the advantage of
> the admin vq thing is for migration and you said "it's an nvidia thing".
Huh, do you really have to say this?
There are sign-offs from two TC members on the patches.

Since I don't have the link to the previously listed advantages, I have to repeat them here.

1. The admin vq is a must (it is not about an advantage) to support device passthrough to the guest.

Passthrough definition: the following are not trapped by the hypervisor in this use case (a VFIO mapping sketch follows this list as one illustration).
(a) virtio common and device config space
(b) CVQs for the 6 or more device types that have them
(c) the hypervisor is not involved in mixing PCI-specific FLRs with virtio-specific logic.
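
As one illustration of (a), here is a minimal sketch of how a VMM using VFIO could map a device BAR (holding the virtio common and device configuration) straight into its address space instead of trapping accesses to it. The function name is made up for this example and error handling is trimmed; it is a sketch of the idea, not code from any of the proposals.

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/* Map one BAR of an already-opened VFIO device fd for direct access. */
void *map_bar_for_passthrough(int device_fd, unsigned int bar_index)
{
    struct vfio_region_info info = {
        .argsz = sizeof(info),
        .index = bar_index,
    };

    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
        return NULL;
    if (!(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
        return NULL;   /* this BAR would have to be trapped instead */

    /* Direct mapping: guest accesses to this BAR do not exit to the
     * hypervisor, which is what "not trapped" means in (a) above. */
    return mmap(NULL, info.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                device_fd, info.offset);
}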

> And when people try to point them out to you, you go "well, tough".
> Maybe, but we are wasting time here.
The only thing Lingshan pointed out is some QoS concern on the AQ.
He never responded on dirty page tracking, or why he cannot use it.
He never responded on why device context cannot be used.
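
For context, the general shape of the dirty page tracking being discussed is: the device (or platform) marks the guest pages it writes in a bitmap, and the migration software reads and clears that bitmap each pre-copy round. The sketch below is only this generic idea; the actual interface in the series (admin commands, page granularity, reporting format) is not reproduced here.

#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT 12

/* One bit per guest page; a bit is set when the device writes that page. */
struct dirty_bitmap {
    uint64_t *bits;
    uint64_t  npages;
};

static inline void mark_dirty(struct dirty_bitmap *bm, uint64_t gpa)
{
    uint64_t pfn = gpa >> PAGE_SHIFT;

    if (pfn < bm->npages)
        bm->bits[pfn / 64] |= 1ULL << (pfn % 64);
}

/* Migration side: copy and clear in one cycle, so pages dirtied after the
 * read are caught in the next pre-copy iteration. */
static void read_and_clear(struct dirty_bitmap *bm, uint64_t *out)
{
    size_t words = (bm->npages + 63) / 64;

    memcpy(out, bm->bits, words * sizeof(uint64_t));
    memset(bm->bits, 0, words * sizeof(uint64_t));
}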

> 
> > We always welcome collaboration, remember Jason has proposed a
> > solution to build admin vq based on these basic facilities, and I am
> > fully agree on his proposal.
> 
> I didn't see anything specific frankly, I can easily see how Parav could get mad if
> he posts a reasonably fleshed out patchset (which admittedly, needs work with
> wording etc) and instead of review gets back "rework this on top of these basic
> facilities which we don't yet know how they will work but maybe will". We'll be
> stuck in this loop for how long?
> 
The series from Lingshan clearly does not address the requirements listed in v3.
And he is not open to converging it either.

My humble input is:
1. Accept the two use cases listed, vfio and vdpa, as practical to support existing stacks.
2. Try to converge the two cases; see if there is a common virtio spec framework they can use.
3. If they can, great, let's use it.
4. If not, the two use cases need different infrastructure, so build two.

Do you have any better suggestions to support both use cases?

