virtio-comment message



Subject: Re: [virtio-comment] Re: [PATCH v3 6/8] admin: Add theory of operation for write recording commands


On Wed, Nov 22, 2023 at 12:29 AM Parav Pandit <parav@nvidia.com> wrote:
>
>
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Tuesday, November 21, 2023 10:47 AM
> >
> > On Fri, Nov 17, 2023 at 8:51 PM Parav Pandit <parav@nvidia.com> wrote:
> > >
> > >
> > > > From: virtio-comment@lists.oasis-open.org
> > > > <virtio-comment@lists.oasis- open.org> On Behalf Of Michael S.
> > > > Tsirkin
> > > > Sent: Friday, November 17, 2023 6:11 PM
> > > >
> > > > On Fri, Nov 17, 2023 at 12:22:59PM +0000, Parav Pandit wrote:
> > > > >
> > > > >
> > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Sent: Friday, November 17, 2023 5:03 PM
> > > > > > To: Parav Pandit <parav@nvidia.com>
> > > > >
> > > > > > > Somehow the claim of shadow vq is great without sharing any
> > > > > > > performance
> > > > > > numbers is what I don't agree with.
> > > > > >
> > > > > > It's upstream in QEMU. Test it yourself.
> > > > > >
> > > > > We did, a few minutes back.
> > > > > It results in a call trace:
> > > > > vhost_vdpa_setup_vq_irq crashes on list corruption on net-next.
> > > >
> > > > Wrong list for this bug report.
> > > >
> > > > > We are stopping any shadow vq tests on unstable stuff.
> > > >
> > > > If you don't want to benchmark against alternatives how are you
> > > > going to prove your stuff is worth everyone's time?
> > >
> > > Comparing the performance of functional things is what counts.
> > > You suggest shadow vq; frankly, you should post the numbers for
> > shadow vq.
> >
> > We need an apples-to-apples comparison. Otherwise you may argue with
> > that, no?
> >
> When the requirements are met, the solutions can be compared.
> And I don't see that the basic requirements match for the two different use cases.
> So there is no point in discussing one OS-specific implementation as a reference point.

Shadow virtqueue is not OS-specific; it's a common method. If you
disagree, please explain why.
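For readers outside the thread, the shadow virtqueue idea being debated can be sketched in a few lines. This is a conceptual model only, not QEMU's implementation; all names, sizes, and addresses below are illustrative. The hypervisor interposes on each descriptor the guest posts, forwards it to the device-visible (shadow) queue, and records which guest pages the device may write, so dirty-page tracking for migration needs no device support:

```python
# Conceptual sketch of a shadow virtqueue (not QEMU code).
PAGE_SIZE = 4096

def svq_relay(desc, shadow_queue, dirty_pages):
    """Forward one guest descriptor to the device-visible queue and
    record the guest pages the device may write into."""
    shadow_queue.append(desc)          # what the device actually sees
    if desc["device_writable"]:
        first = desc["addr"] // PAGE_SIZE
        last = (desc["addr"] + desc["len"] - 1) // PAGE_SIZE
        dirty_pages.update(range(first, last + 1))

shadow_queue, dirty_pages = [], set()
# A device-writable buffer straddling pages 2 and 3.
svq_relay({"addr": 2 * PAGE_SIZE + 100, "len": PAGE_SIZE,
           "device_writable": True},
          shadow_queue, dirty_pages)
print(sorted(dirty_pages))  # → [2, 3]
```

The cost being argued about in the thread is exactly this interposition: every descriptor takes an extra software hop, which is why performance numbers matter for the comparison.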

> Otherwise I will end up adding the vfio link in the commit log in the next version, as you are asking for similar things here and not being neutral in your ask.

When doing a benchmark, you need to describe your setup, no? So any
benchmark is setup-specific; nothing wrong with that.

It looks to me like you claim your method is better, but refuse to give proof.

>
> Anyway, please bring the perf data for whatever you want to compare in another forum. It is not the criterion anyway.

So how can you prove your method is the best one? You have posted the
series for months, and so far I still don't see any rationale for
why you chose to go that way.

This is very odd as we've gone through several methods one or two
years ago when discussing vDPA live migration.

>
> > >
> > > It is really not my role to report bugs in unstable stuff and compare perf
> > against it.
> >
> > Qemu/KVM is highly relevant here no? And it's the way to develop the
> > community. The shadow vq code is handy.
> It is relevant for direct mapped devices.

Let's focus on the function then discuss the use cases. If you can't
prove your proposal has a proper function, what's the point of
discussing the use cases?

> There is absolutely no point in converting a virtio device to another virtualization layer, running it again, and getting another virtio device.
> So for the direct mapping use case shadow vq is not relevant.

It is needed because shadow virtqueue is the baseline. Most of the
issues don't exist in the case of shadow virtqueue.

We don't want to end up with a solution that

1) can't outperform shadow virtqueue
2) has more issues than shadow virtqueue
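For contrast with the shadow virtqueue baseline above, the approach this series proposes has the device itself record the pages it writes, which the owner driver then drains through an admin command. A rough software model of that flow follows; all class and method names are hypothetical, the real interface is whatever the patch series defines:

```python
# Hypothetical model of device-side write recording (names illustrative).
class ModelDevice:
    def __init__(self):
        self._write_log = set()       # pages written since the last drain

    def dma_write(self, page):
        """Device writes guest memory and records the page number."""
        self._write_log.add(page)

    def admin_read_and_clear(self):
        """Model of a 'read write records' admin command: return the
        logged pages and reset the log atomically."""
        pages, self._write_log = sorted(self._write_log), set()
        return pages

dev = ModelDevice()
for page in (7, 3, 7):
    dev.dma_write(page)
print(dev.admin_read_and_clear())  # → [3, 7]
print(dev.admin_read_and_clear())  # → []
```

The trade-off the thread is circling is that this moves the tracking cost from the software datapath (as in the shadow virtqueue sketch) into the device, at the price of requiring device support.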

> For other use cases, please continue.
>
> >
> > Just an email to Qemu should be fine, we're not asking you to fix the bug.
> >
> > Btw, how do you define stable? E.g do you think the Linus tree is stable?
> >
> A basic test with iperf is not working. It crashes.

As a kernel developer, dealing with crashes at any layer is pretty common, no?

Thanks


> All of this is completely unrelated discussion on this series that slows down the work.
> I don't see any value.
> Michael asked us to do the test; we did, and it does not work. There is no comparing against functionally broken code.
>
> > Thanks
> >
> > >
> > > We proposed device context and provided the numbers you asked for. Mostly we
> > won't be able to go further than this.
> > >
> > > This publicly archived list offers a means to provide input to the
> > > OASIS Virtual I/O Device (VIRTIO) TC.
> > >
> > > In order to verify user consent to the Feedback License terms and to
> > > minimize spam in the list archive, subscription is required before
> > > posting.
> > >
> > > Subscribe: virtio-comment-subscribe@lists.oasis-open.org
> > > Unsubscribe: virtio-comment-unsubscribe@lists.oasis-open.org
> > > List help: virtio-comment-help@lists.oasis-open.org
> > > List archive: https://lists.oasis-open.org/archives/virtio-comment/
> > > Feedback License:
> > > https://www.oasis-open.org/who/ipr/feedback_license.pdf
> > > List Guidelines:
> > > https://www.oasis-open.org/policies-guidelines/mailing-lists
> > > Committee: https://www.oasis-open.org/committees/virtio/
> > > Join OASIS: https://www.oasis-open.org/join/
> > >
>


