[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Re: [virtio-dev] [RFC PATCH net-next v2 1/2] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
On 1/22/2018 4:02 PM, Stephen Hemminger wrote:
>>>> In the case of SwitchDev it should be possible for the port
>>>> representors and the switch to provide data on which interfaces
>>>> are bonded on the host side and which aren't. With that data it
>>>> would be pretty easy to put together a list of addresses that
>>>> would prefer to go the para-virtual route instead of being
>>>> transmitted through the physical hardware. In addition, a bridge
>>>> implies much more overhead, since normally a bridge can receive a
>>>> packet on one interface and transmit it on another. We don't
>>>> really need that. This is more of a VEPA-type setup and doesn't
>>>> need to be anything all that complex. You could probably even
>>>> handle the Tx queue selection via a simple eBPF program and map,
>>>> since the input for whatever is used to select Tx should be
>>>> pretty simple (destination MAC, source NUMA node, etc.) and the
>>>> data set shouldn't be too large.
>>>
>>> That sounds interesting. A separate device might make this kind of
>>> setup a bit easier. Sridhar, did you look into creating a separate
>>> device for the virtual bond device at all? It does not have to be
>>> in a separate module (that kind of refactoring can come later),
>>> but once we commit to using the same single device as virtio, we
>>> can't change that.
>>
>> No, I haven't looked into creating a separate device. If we are
>> going to create a new device, I guess it has to be a new device
>> type with its own driver. As we are using virtio_net to control and
>> manage the VF data path, it is not clear to me what the advantage
>> of creating a new device would be over extending virtio_net to
>> manage the VF datapath via a transparent bond mechanism.
>>
>> Thanks
>> Sridhar
>
> The requirement with Azure accelerated networking was that a stock
> distribution image from the store must be able to run unmodified and
> get accelerated networking. Not sure if other environments need to
> work the same way, but it would be nice.
>
> That meant no additional setup scripts (aka no bonding), and it also
> had to work transparently with hot-plug. There is also a diverse set
> of environments: OpenStack, cloud-init, NetworkManager, and systemd.
> The solution had to not depend on any one of them, but also not
> break any of them.

Yes. Cloud Service Providers using KVM as the hypervisor have a
similar requirement: to provide accelerated networking with VM images
that support virtio_net.

Thanks
Sridhar