Subject: Re: [virtio-dev] [PATCH v4] content: Introduce VIRTIO_NET_F_STANDBY feature


On Mon, Dec 03, 2018 at 06:09:19PM -0800, si-wei liu wrote:
> > I agree. But a single flag is not much of an extension. We don't even
> > need it in netlink; it can live anywhere, e.g. in sysfs.
> I think a sysfs attribute is for exposing the capability, while you would
> still need to set up macvtap in some special mode via netlink. That way it
> doesn't break current behavior: when the VF's MAC filter is added, macvtap
> would need to react by removing its filter from the NIC, and add it back
> when the VF's MAC is removed.

All this will be up to the developers actually working on it. My
understanding is that Intel is going to just change the behaviour
unconditionally, and that's already the case for Mellanox.
That creates a critical mass large enough that maybe others
will simply have to conform.
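
To illustrate the kind of host kernel change I have in mind, here is a
rough sketch of reacting to the VF's filter going active via a netdev
notifier. Every failover_*() helper below is a made-up name for
illustration, not an existing API:

/* Sketch only: host-side "datapath switching" on VF state changes.
 * All failover_*() helpers are made-up names for illustration.
 */
static int failover_netdev_event(struct notifier_block *nb,
				 unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

	if (!failover_is_tracked_vf(dev))	/* made-up check */
		return NOTIFY_DONE;

	switch (event) {
	case NETDEV_UP:
		/* VF filter is active now: evacuate the macvtap filter */
		failover_drop_filter(dev);
		break;
	case NETDEV_DOWN:
		/* VF went away: reinstate the filter */
		failover_restore_filter(dev);
		break;
	}
	return NOTIFY_DONE;
}

static struct notifier_block failover_nb = {
	.notifier_call = failover_netdev_event,
};
/* registered once with register_netdevice_notifier(&failover_nb) */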

...


> > Meanwhile, what's missing, and has been missing all along, for the
> > change you seem to be advocating to get off the ground is people who
> > are ready to actually send e.g. spec, guest driver, and test patches.
> Partly because we hadn't converged on the best way to do it (even though
> the group ID mechanism with a PCI bridge can address our need, you don't
> seem to think it is valuable). The in-kernel approach is fine on the
> surface, but I personally don't believe changing every legacy driver is
> the way to go. It's a choice of implementation, and IMHO there is nothing
> wrong with what has been implemented in those drivers today.

It's not a question of being wrong as such.
A standard behaviour is clearly better than each driver doing its
own thing, which is the case now. As long as we are standardizing,
let's standardize on something that matches our needs.
But I really see no problem with also supporting other options,
as long as someone is prepared to actually put in the work.


> > 
> > > >    Still this assumes just creating a VF
> > > > doesn't yet program the on-card filter to cause packet drops.
> > > Supposing this behavior is fixable in legacy Intel NICs, you would
> > > still need to evacuate the filter previously programmed by macvtap
> > > when the VF's filter gets activated (typically when the VF's netdev
> > > is netif_running() in a Linux guest). That's what we and NetVSC call
> > > "datapath switching", and where this is handled (driver, net core,
> > > or userspace) is the core of the architectural design I spent much
> > > time on.
> > > 
> > > Having said that, I don't expect, nor would I wait on, one vendor to
> > > fix a legacy driver they aren't motivated to change; otherwise no
> > > work would get done on this at all.
> > Then that device can't be used with the mechanism in question.
> > Or if there are lots of drivers like this, maybe someone will be
> > motivated enough to post a better implementation with a new
> > feature bit. It's not that I'm arguing against that.
> > 
> > But given the choice between teaching management to play with the
> > netlink API in response to guest actions (and with VCPUs stopped),
> > and doing it all in host kernel drivers, I know I'd prefer the host
> > kernel changes.
> We have some internal patches that leverage management to respond to
> various guest actions. If you're interested we can post them. The thing
> is, no one wants to work on the libvirt changes, since internally we have
> our own orchestration software, which is not libvirt. But if you think
> it's fine, we can definitely share our QEMU patches while leaving out
> libvirt.
> 
> Thanks,
> -Siwei

Sure, why not.

The following is generally necessary for any virtio project to happen:
- guest patches
- qemu patches
- spec documentation

Some extras are sometimes a dependency, e.g. host kernel patches.


Typically at least two of these are enough for people to
be able to figure out how things work.
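
For this feature the guest part is genuinely small. As a rough sketch
(VIRTIO_NET_F_STANDBY is bit 62, matching the Linux uapi header;
virtnet_register_failover() is a made-up placeholder for whatever the
driver actually does):

/* Guest-side sketch: check whether the device offers the standby bit.
 * virtnet_register_failover() is a made-up placeholder.
 */
#include <linux/virtio_config.h>
#include <linux/virtio_net.h>

static int virtnet_maybe_enable_standby(struct virtio_device *vdev)
{
	if (!virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY))
		return 0;	/* not offered: nothing to do */

	/* a VF with a matching MAC may get paired with this device;
	 * set up whatever failover machinery the driver uses */
	return virtnet_register_failover(vdev);
}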

> > 
> > > If you'd go that way, please make sure Intel can change their
> > > driver first.
> > We'll see what happens with that. It's Sridhar from Intel who
> > implemented the guest changes after all, so I expect he's motivated
> > to make them work well.
> > 
> > 
> > > >    Let's
> > > > assume drivers are fixed to do that. How does userspace know
> > > > that's the case? We might need some kind of attribute so
> > > > userspace can detect it.
> > > Where do you envision the new attribute would live? Presumably it'd
> > > be exposed by the kernel, which constitutes a new API or an API
> > > change.
> > > 
> > > 
> > > Thanks,
> > > -Siwei
> > People add e.g. new attributes in sysfs left and right.  It's unlikely
> > to be a matter of serious contention.
> > 
> > > > > > Question is, how does userspace know the driver isn't broken in this respect?
> > > > > > Let's add a "vf failover" flag somewhere so this can be probed?
> > > > > > 
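
And to be concrete about the probing flag: exposing one via sysfs really
is trivial. A sketch, where the vf_failover name is invented for
illustration, not a proposal:

/* Sketch: read-only sysfs attribute advertising the capability.
 * The attribute name vf_failover is made up for illustration.
 */
static ssize_t vf_failover_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "1\n");
}
static DEVICE_ATTR_RO(vf_failover);
/* hooked up via the driver's attribute group or device_create_file() */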

