virtio-comment message



Subject: Re: [virtio-comment] [RFC PATCH] admin-queue: bind the group member to the device


On Wed, Jun 28, 2023 at 02:06:32PM +0800, Xuan Zhuo wrote:
> On Wed, 28 Jun 2023 10:49:45 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Jun 27, 2023 at 6:54 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Tue, 27 Jun 2023 17:00:06 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Jun 27, 2023 at 4:28 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > >
> > > > > Thanks Parav for pointing it out. We may have some gaps on the case.
> > > > >
> > > > > Let me introduce our case, which I think is simple and should be easy to
> > > > > understand.
> > > > >
> > > > > First, the user (customer) purchased a bare metal machine.
> > > > >
> > > > > ## Bare metal machine
> > > > >
> > > > > Let me briefly explain the characteristics of a bare metal machine. It is
> > > > > not a virtual machine but a physical machine; what distinguishes it from an
> > > > > ordinary physical machine is that a DPU-like device is attached to its PCI
> > > > > bus. This DPU exposes devices such as virtio-blk/net to the host through
> > > > > PCI. These devices are managed by the vendor, and must be created and
> > > > > purchased on the vendor's management platform.
> > > > >
> > > > > ## DPU
> > > > >
> > > > > The DPU runs a software implementation that responds to PCI operations. But
> > > > > as mentioned above, resources such as network cards must be purchased and
> > > > > created before they exist. So a user can create a VF, which is just a
> > > > > PCI-level operation, but there may be no corresponding backend.
> > > > >
> > > > > ## Management Platform
> > > > >
> > > > > Devices are created and configured on the management platform.
> > > > >
> > > > > After the user completes a purchase on the management platform (this is an
> > > > > independent platform provided by the vendor and has nothing to do with
> > > > > virtio), a corresponding device implementation appears in the DPU. This
> > > > > includes the user's configuration, available bandwidth resources and other
> > > > > information.
> > > > >
> > > > > ## Usage
> > > > >
> > > > > Since the user is directly on the host, they can create VMs and pass the PF
> > > > > or VFs through into the VMs. Or they can create a large number of Docker
> > > > > containers, each of which uses a separate virtio-net device for performance.
> > > > >
> > > > > The reason users use VFs is that we need a large number of virtio-net
> > > > > devices. This number reaches 1k+.
> > > > >
> > > > > Based on this scenario, we need to bind the VF to the backend device,
> > > > > because we cannot automatically complete the creation of the virtio-net
> > > > > backend device when the user creates a VF.
> > > > >
> > > > > ## Migration
> > > > >
> > > > > In addition, let's consider another scenario: migration. If a VM is migrated
> > > > > from another host, its corresponding virtio device is of course also
> > > > > migrated to the DPU. At this point, our newly created VF can only be used by
> > > > > the VM after it is bound to the migrated device. We do not want this VF to
> > > > > be a brand-new device.
> > > > >
> > > > > ## Abstraction
> > > > >
> > > > > So, this is how I understand the process of creating a VF:
> > > > >
> > > > > 1. Create a PCI VF. At this point there may be no backend virtio device, or
> > > > >     there is only a default backend; it does not fully meet our expectations.
> > > > > 2. Create the device, or migrate a device.
> > > > > 3. Bind the backend virtio device to the VF.
> > > >
> > > > 3) should come before 2)?
> > > >
> > > > Who is going to do 3), btw? Is it the user? If yes, for example, if a
> > > > user wants another 4-queue virtio-net device, after purchase, how
> > > > does the user know its id?
> > >
> > > Got the id from the management platform.
> >
> > So it can do the binding via that management platform, which then
> > becomes a cloud-vendor-specific interface.
> 
> In our scenario, the user does the binding in the OS, using this id and the VF id.
> 
> >
> > >
> > > >
> > > > >
> > > > > In most scenarios, the first step may be enough. We can do some fine-tuning
> > > > > on this default device, such as modifying its MAC. In the future, we can use
> > > > > the admin queue to modify its MSI-X vectors and other configuration.
> > > > >
> > > > > But we should allow binding a backend virtio device to a particular VF. This
> > > > > is useful for live migration and for virtio devices with special configurations.
> > > >
> > > > All of this could be addressed if a dynamic provisioning model were
> > > > implemented (SIOV or a transport virtqueue). Trying to work around it
> > > > in SR-IOV might be tricky.
> > >
> > >
> > > An SR-IOV VF is a native PCI device; this is the advantage.
> >
> > The problem is that it doesn't support flexible provisioning, e.g.
> > creating and destroying a single VF.
> 
> YES. ^_^!!

So sure, create it. Once you have created it, you can
use the VF# to talk to it.
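For concreteness: on Linux, creating the VF really is just a PCI-level
operation through the PF's sysfs entry. A minimal sketch, where the PF
address 0000:3b:00.0 is made up for illustration:

	/* Ask the PF driver to instantiate 4 VFs by writing to its
	 * sriov_numvfs sysfs attribute. The PF address below is a
	 * made-up example; run as root. */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen(
			"/sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fprintf(f, "4\n");	/* virtfn0..virtfn3 appear under the PF */
		fclose(f);
		return 0;
	}

None of this touches the device backend; it only makes the VFs exist
on the PCI bus, which is exactly the gap being discussed.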


I *suspect* that what this ID does is replace provisioning commands.

So instead of saying "create VF#3 with MAC 0xABC and 1000 VQs",
you would have management say "ID 0xFACE refers to MAC 0xABC and 1000 VQs",
and later you would say "bind VF#3 to ID 0xFACE", and that would
set it up.

Is that it?

But why is it important to do it in two steps like this,
as opposed to in one step? I have no idea.
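To make the two-step model concrete, here is a sketch of what the two
commands might carry. Everything below is hypothetical, invented for
illustration; neither structure exists in the spec:

	/* Step 1, on the management platform: define what an ID means.
	 * All names and fields here are hypothetical. */
	struct provision_id {
		le64 id;	/* e.g. 0xFACE */
		u8 mac[6];	/* e.g. the MAC above */
		le16 num_vqs;	/* e.g. 1000 */
	};

	/* Step 2, from the host: attach that definition to a VF. */
	struct bind_id {
		le64 vf;	/* e.g. VF#3 */
		le64 id;	/* e.g. 0xFACE */
	};

The question above still stands: if management can already describe
the device, it could just as well provision VF#3 directly in one step.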

> 
> >
> > >
> > >
> > > >
> > > > >
> > > > > The design of virtio itself has two layers, and virtio naturally allows
> > > > > switching the transport layer. This is our advantage.
> > > >
> > > > Isn't this not about switching the transport layer, but about
> > > > binding/unbinding virtio devices to VFs?
> > >
> > > YES.
> > >
> > > >
> > > > Is a new capability or similar admin cmd sufficient in this case?
> > >
> > > All is ok.
> > >
> > >
> > > >
> > > > struct virtio_pci_bind_cap {
> > > >         struct virtio_pci_cap cap;
> > > >         u16 bind; // virtio_device_id
> > > >         u16 unbind; // virtio_device_id
> > > > };
> > >
> > > You mean that the "bind" and "unbind" fields are writeable?
> 
> This is a good idea.
> 
> Thanks.

So: stealing valuable memory from the limited PCI config space, no error
handling, no filtering... Ugh. Let's not force a square peg into a round
hole.

For management I think we should use admin commands. They were built for
the management use case.
Config space (PCI and virtio) is better for the driver slow path.
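As a sketch of what that would look like: a bind operation could be
carried in the generic admin command layout from the admin virtqueue
proposal. The VIRTIO_ADMIN_CMD_BIND opcode and the command-specific
data below are hypothetical:

	struct virtio_admin_cmd {
		/* Device-readable part */
		le16 opcode;		/* hypothetical VIRTIO_ADMIN_CMD_BIND */
		le16 group_type;	/* the SR-IOV group type */
		u8 reserved1[12];
		le64 group_member_id;		/* the VF to bind */
		le64 command_specific_data[];	/* e.g. the backend device id */
		/* Device-writable part */
		le16 status;
		le16 status_qualifier;
		u8 reserved2[4];
		u8 command_specific_result[];
	};

Unlike a capability, this costs no config space, and the status /
status_qualifier fields provide the error handling that was missing
above.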

-- 
MST


