virtio-comment message



Subject: Re: [virtio-comment] Re: [PATCH V2 1/2] virtio: introduce virtqueue state as basic facility



On 2021/7/7 8:03 PM, Max Gurtovoy wrote:

On 7/7/2021 5:50 AM, Jason Wang wrote:

On 2021/7/7 7:49 AM, Max Gurtovoy wrote:

On 7/6/2021 10:08 PM, Michael S. Tsirkin wrote:
On Tue, Jul 06, 2021 at 07:09:10PM +0200, Eugenio Perez Martin wrote:
On Tue, Jul 6, 2021 at 11:32 AM Michael S. Tsirkin <mst@redhat.com> wrote:
On Tue, Jul 06, 2021 at 12:33:33PM +0800, Jason Wang wrote:
This patch adds new device facility to save and restore virtqueue
state. The virtqueue state is split into two parts:

- The available state: The state that is used to read the next
  available buffer.
- The used state: The state that is used for making buffers used.

Note that there could be devices that are required to set and get the requests that are being processed by the device. I leave such an API to
be device specific.

This facility could be used by both migration and device diagnostic.

Signed-off-by: Jason Wang <jasowang@redhat.com>
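
For concreteness, here is a minimal sketch of what the two-part state
could look like for a split virtqueue; the struct and field names are
illustrative only, not taken from the patch:

    struct virtq_split_state {
            /* Available state: index of the next entry the device
             * will read from the available ring. */
            le16 last_avail_idx;
            /* Used state: index of the next entry the device will
             * write to the used ring. */
            le16 used_idx;
    };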
Hi Jason!
I feel that for use-cases such as SR-IOV,
the facility to save/restore vq state should be part of a PF;
that is, there needs to be a way for one virtio device to
address the state of another one.

Hi!

In my opinion we should go the other way around: make features as
orthogonal/independent as possible, and just make them work together
if we have to. In this particular case, I think it should be easier to
decide how to report status, its needs, etc. for a VF, and then open
the possibility for the PF to query or set them, reusing format,
behavior, etc. as much as possible.

I think that the most controversial point about doing it the non-SR-IOV
way is the exposing of these features/fields to the guest using
specific transport facilities, like PCI common config. However, I think
it should not be hard for the hypervisor to intercept them and even to
expose them conditionally. Please correct me if this guess is not
right and you have other concerns.

Possibly. I'd like to see some guidance on how this all will work
in practice then. Maybe make it all part of a non-normative section
for now.
I think that the feature itself is not very useful outside of
migration so we don't really gain much by adding it as is
without all the other missing pieces.
I would say let's see more of the whole picture before we commit.

I agree here. I also can't see the whole picture for the SR-IOV case.


Again, it's not related to SR-IOV at all. It tries to introduce a basic facility at the virtio level which can work for all types of virtio devices.

Transports such as PCI need to implement their own way to access this state. It's not hard to implement that simply via a capability.
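
As an illustration only, such a capability could follow the existing
struct virtio_pci_cap pattern, with a new cfg_type (the name and value
below are hypothetical, not assigned by the spec) whose bar/offset/length
locate a region holding the per-virtqueue state:

    /* Hypothetical capability type, for illustration only. */
    #define VIRTIO_PCI_CAP_STATE_CFG 100

    struct virtio_pci_state_cap {
            /* cap.cfg_type = VIRTIO_PCI_CAP_STATE_CFG;
             * cap.bar/cap.offset/cap.length locate an array of
             * per-virtqueue state structures in a device BAR. */
            struct virtio_pci_cap cap;
    };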

It works like other basic facilities such as device status, features, etc.

For SR-IOV, it doesn't prevent you from implementing that via the admin virtqueue.
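
For example (purely a sketch: the admin virtqueue was a separate
in-flight proposal, and the opcode and layout below are invented for
illustration), the PF's admin virtqueue could carry commands like:

    struct virtio_admin_vq_state_cmd {
            le16 opcode;   /* hypothetical GET_VQ_STATE / SET_VQ_STATE */
            le16 vf_id;    /* target VF */
            le16 vq_index; /* virtqueue index within that VF */
            /* For SET, followed by the state to load; for GET, the
             * device returns the current state in the response. */
    };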



I'll try to combine the admin control queue suggested in a previous patch set with my proposal of the PF managing VF migration.


Note that the admin virtqueue should be transport independent when we try to introduce it.



Feature negotiation is part of virtio device-driver communication and not part of the migration software that should manage the migration process.

For me, it seems like queue state is something that should be internal and not exposed to guest drivers as a new feature.


This is not true; we have the case of nested virtualization. As mentioned in another thread, it's the hypervisor that needs to choose between hiding and shadowing the internal virtqueue state.

Thanks

In the nested environment, do you mean that Level 1 has the real PF with X VFs, and in Level 2 the X VFs are seen as PFs in the guests and expose another Y VFs?


I meant the PF is managed in L0, and the VF is assigned to the L2 guest. In this case, we can expose the virtqueue state feature to the L1 guest for migrating the L2 guest.



If so, the guest PF will manage the migration for its Y VFs.


Does this mean you want to pass the PF to the L1 guest?

Thanks


