Subject: Re: [PATCH v3 6/8] admin: Add theory of operation for write recording commands


On Tue, Nov 07, 2023 at 12:04:29PM +0800, Jason Wang wrote:
> > > > Each virtio and non-virtio device that wants to report its dirty
> > > > pages will do so in its own way.
> > > >
> > > > > 3) inventing it in the virtio layer will be deprecated in the future
> > > > > for sure, as platform will provide much rich features for logging
> > > > > e.g it can do it per PASID etc, I don't see any reason virtio need
> > > > > to compete with the features that will be provided by the platform
> > > > Can you bring the CPU vendors and their commitment to the virtio
> > > > TC, with timelines, so that the virtio TC can omit this?
> > >
> > > Why do we need to bring CPU vendors into the virtio TC? Virtio needs to be
> > > built on top of a transport or platform. There's no need to duplicate their
> > > job, especially considering that virtio can't do better than them.
> > >
> > I wanted to see a strong commitment from the CPU vendors to support dirty page tracking.
> 
> The RFC of IOMMUFD support goes back to early 2022. Intel, AMD and
> ARM all support it now.
> 
> > And the work seems to have started for some platforms.
> 
> Let me quote from the above link:
> 
> """
> Today, AMD Milan (or more recent) supports it while ARM SMMUv3.2
> alongside VT-D rev3.x also do support.
> """
> 
> > Without such a platform commitment, it would not work for virtio to skip it either.
> 
> Is the above sufficient? I'm a little bit more familiar with VT-d; the
> hw feature has been there for years.


Repeating myself - I'm not sure that will work well for all workloads.
KVM definitely did not scan PTEs. It used page faults with a bit per
page and later, as VM sizes grew, switched to PML. This interface is
analogous to PML; what Lingshan proposed is analogous to a bit per
page - the problem, unfortunately, is that you can't easily set a bit
by DMA.

So I think this dirty tracking is a good option to have.
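
To make the bitmap-vs-log contrast concrete, here is a minimal C sketch
(illustrative only - the names and sizes are made up, this is not from
any spec or driver): a bit-per-page bitmap needs an atomic
read-modify-write per dirtied page, which a CPU fault handler can do
cheaply but a DMA write cannot, while a PML-style log only needs
appends, at the cost of draining and deduplication.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define NPAGES     1024

/* Scheme 1: bit per page. One atomic OR per write fault; fine for a
 * CPU, but a DMA engine can't easily do this read-modify-write. */
static uint64_t dirty_bitmap[NPAGES / 64];

static void bitmap_mark_dirty(uint64_t gpa)
{
	uint64_t pfn = gpa >> PAGE_SHIFT;

	__atomic_fetch_or(&dirty_bitmap[pfn / 64], 1ULL << (pfn % 64),
			  __ATOMIC_RELAXED);
}

/* Scheme 2: PML-style log. The tracker only appends page numbers,
 * which maps naturally onto plain (DMA-like) writes; the cost moves
 * to the hypervisor draining and deduplicating the log. */
static uint64_t dirty_log[256];
static unsigned int log_head;

static int log_mark_dirty(uint64_t gpa)
{
	if (log_head == sizeof(dirty_log) / sizeof(dirty_log[0]))
		return -1;	/* log full: must be drained first */
	dirty_log[log_head++] = gpa >> PAGE_SHIFT;
	return 0;
}

int main(void)
{
	bitmap_mark_dirty(0x3000);
	log_mark_dirty(0x3000);
	printf("bitmap word 0 = %#llx, log[0] = pfn %llu\n",
	       (unsigned long long)dirty_bitmap[0],
	       (unsigned long long)dirty_log[0]);
	return 0;
}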



> >
> > > > i.e. within the year 2024?
> > >
> > > Why does it matter in 2024?
> > Because users need to use it now.
> >
> > >
> > > > If not, we are better off offering this, and when/if platform support
> > > > arrives, sure, this feature can be disabled/not used/not enabled.
> > > >
> > > > > 4) if the platform support is missing, we can use software or
> > > > > leverage transport for assistance like PRI
> > > > All of these are theoretical.
> > > > Our experiment shows PRI performance is 21x slower than the page
> > > > fault rate handled by the CPU.
> > > > It does not even pass a simple 10 Gbps test.
> > >
> > > If you stick to the wire speed during migration, it can converge.
> > Do you have perf data for this?
> 
> No, but it's not hard to imagine the worst case: write a small program
> that dirties every page via a NIC.
> 
> > In the internal tests we don't see this happening.
> 
> downtime = dirty_rate * PAGE_SIZE / migration_speed
> 
> So if we get very high dirty rates (e.g. from a high speed NIC), we can't
> satisfy the downtime requirement. Or, if you do see it converge, you might
> be getting help from the auto-converge support in hypervisors like KVM,
> which throttles the vCPUs - and then you can't reach wire speed.

Will only work for some device types.
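
To put rough, illustrative numbers on the formula above (assuming 4 KiB
pages and a NIC writing each packet to a distinct page): a 10 Gbps NIC
dirties about (10^10 / 8) / 4096 ~= 305,000 pages/s, i.e. ~1.25 GB/s of
guest memory. If the migration link is also 10 Gbps, dirtying matches
copying and pre-copy never converges - and auto-converge can't fix it,
because it throttles the vCPUs while the writes come from the device.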



> >
> > >
> > > > There is no requirement for mandating PRI either.
> > > > So it is unusable.
> > >
> > > It's not about mandating, it's about doing things in the correct layer. If PRI is
> > > slow, PCI can evolve for sure.
> > You should try.
> 
> Not my duty; I just want to make sure things are done in the correct
> layer, and, if it needs to be done in virtio, that there's nothing
> obviously wrong.

Yeah, but vague questions alone don't help to make sure either way.


> > In the current state, it is mandatory.
> > And if you think PRI is the only way,
> 
> I don't; it's just an example of where virtio can leverage either the
> transport or the platform. Or, if it's a fault in virtio that slows
> down PRI, then that is something we can fix.
> 
> > then you should propose, in the dirty page tracking series you listed
> > above, not to do dirty page tracking but rather depend on PRI, right?
> 
> No, the point is to not duplicate work, especially considering that
> virtio can't do better than the platform or transport.

If someone says they tried, the platform's migration support does not
work for them, and they want to build a solution in virtio, then what
exactly is the objection? Virtio is here in the first place because
emulating devices didn't work well.

-- 
MST


