virtio-comment message

[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]


Subject: Re: [VIRTIO GPU PATCH v3 0/1] Add new feature flag VIRTIO_GPU_F_FREEZE_S3


Hi,

On 2023/8/2 15:13, Parav Pandit wrote:
> 
>> From: Chen, Jiqian <Jiqian.Chen@amd.com>
>> Sent: Wednesday, August 2, 2023 11:28 AM
>>
>> On 2023/8/2 12:49, Parav Pandit wrote:
>>>
>>>
>>>> From: virtio-dev@lists.oasis-open.org
>>>> <virtio-dev@lists.oasis-open.org> On Behalf Of Chen, Jiqian
>>>> Sent: Wednesday, August 2, 2023 8:51 AM Hi all,
>>>>
>>>> Do you have any other comments on the modification of virtio-gpu S3?
>>>> Looking forward to your reply and comments.
>>>>
>>
>> Hi Parav Pandit,
>> Thank you for your reply. Let me try to answer your question.
>>
>>> I am not familiar with the GPU, so a dumb question is, why is the S3 state gpu
>>> specific?
>> The S3 state is not gpu specific. I think different virtio devices may have different
>> actions/problems when entering S3.
> I am making the assumption that the gpu device is pci. :)
> If so, can you please use the transport-specific notification from the gpu guest driver to notify qemu?
In my existing implementation, qemu is already notified that way (through a gpu guest driver queue).
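For context, a rough userspace model of what such a control-queue notification could look like is below. This is a sketch only: the command name `VIRTIO_GPU_CMD_STATUS_FREEZING`, the numeric value, and the struct layout are illustrative assumptions, not the actual kernel patch or the spec text under review.

```c
/* Hypothetical sketch of how a guest driver might tell the host it is
 * entering S3 via the virtio-gpu control queue.  The command name,
 * numeric value and struct layout below are assumptions for
 * illustration, not the actual patch. */
#include <stdint.h>
#include <string.h>

/* Assumed command type for the freeze notification. */
#define VIRTIO_GPU_CMD_STATUS_FREEZING 0x0400

/* Common header carried by every virtio-gpu control-queue command
 * (simplified; the real header layout is defined by the spec). */
struct virtio_gpu_ctrl_hdr {
    uint32_t type;
    uint32_t flags;
    uint64_t fence_id;
    uint32_t ctx_id;
    uint32_t padding;
};

/* Freeze notification the guest would place on the control virtqueue. */
struct virtio_gpu_status_freezing {
    struct virtio_gpu_ctrl_hdr hdr;
    uint32_t freezing;   /* 1 = entering S3, 0 = resuming */
    uint32_t padding;
};

/* Build the command; the driver would then queue it and kick the host. */
static struct virtio_gpu_status_freezing make_freeze_cmd(uint32_t freezing)
{
    struct virtio_gpu_status_freezing cmd;
    memset(&cmd, 0, sizeof(cmd));
    cmd.hdr.type = VIRTIO_GPU_CMD_STATUS_FREEZING;
    cmd.freezing = freezing;
    return cmd;
}
```

Because the message rides on virtio-gpu's own control queue, only this device type sees it, which is the point Parav raises below about other device types not benefiting.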

> 
>> When I do S3 on Xen, I found the guest's display can't come back, and the root cause
>> is in the virtio-gpu backend in QEMU.
>> So, to solve that problem, I changed the code to let the guest notify QEMU of virtio-gpu's
>> suspend state, so that QEMU will not destroy the resources used for the display.
>> Please see the attached kernel and QEMU patch links.
>> For the above reason, Gerd suggested that I add a new feature flag specifically for
>> virtio-gpu, so that guest and host can negotiate whether to enable the above
>> mechanism.
>>
>>> Can a transport-specific suspend state be used and applied to all virtio devices?
>> Based on my limited knowledge, different virtio devices have different virtio
>> queues; my modification lets the guest notify QEMU by using virtio-gpu's control
>> queue.
>> Other virtio devices can't get that notification unless you traverse all virtio
>> devices and notify them one by one, or use some other global method.
>> But for now, this patch adds a new feature flag used only for virtio-gpu.
>>
> If this is done at the pci transport level, all virtio devices benefit from it without reinventing it in each device type.
I don't think this needs to be moved to the pci transport level, because this feature is a compromise solution:
once resources are destroyed on the qemu side, the guest does not have enough data to re-create them,
so we choose to keep them across suspension. That is a virtio-gpu specific scenario.
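As a sketch of why the new feature flag gates this behaviour, the device-side decision could be modeled as below. The bit position and the function name are assumptions for illustration; only the feature-flag name comes from the patch.

```c
/* Minimal model of the negotiation gating the new behaviour: only when
 * both sides accepted the feature bit, and the guest has signalled that
 * it is freezing, does the device keep display resources across S3.
 * The bit position (4) and helper name are assumptions, not the spec. */
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_GPU_F_FREEZE_S3  (1ULL << 4)   /* assumed bit position */

/* Decide at suspend time whether the backend preserves resources. */
static bool keep_resources_on_suspend(uint64_t negotiated_features,
                                      bool guest_signalled_freeze)
{
    return (negotiated_features & VIRTIO_GPU_F_FREEZE_S3) &&
           guest_signalled_freeze;
}
```

If the flag was not negotiated, the backend behaves as today and destroys resources on reset, so old guests and old hosts keep working unchanged.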

What's your opinion, Gerd Hoffmann and Robert Beckett? Am I right?

> 
>>> And can you please add both the rationale to the commit message?
>> Sure, I will expand the description of my commit message and add them.
>>
>>>
>>>> On 2023/7/20 20:18, Jiqian Chen wrote:
>>>>> v3:
>>>>>
>>>>> Hi all,
>>>>> Thanks for Gerd Hoffmann's advice. V3 makes below changes:
>>>>> * Use an enum for the freeze mode, so this can be extended with more
>>>>>   modes in the future.
>>>>> * Rename functions and parameters with an "_S3" suffix.
>>>>> * Explain in more detail.
>>>>>
>>>>> And latest version on QEMU and Linux kernel side:
>>>>> 	QEMU: https://lore.kernel.org/qemu-devel/20230720120816.8751-1-Jiqian.Chen@amd.com
>>>>> 	Kernel: https://lore.kernel.org/lkml/20230720115805.8206-1-Jiqian.Chen@amd.com/T/#t
>>>>>
>>>>> Best regards,
>>>>> Jiqian Chen.
>>>>>
>>>>>
>>>>> v2:
>>>>> link,
>>>>> https://lists.oasis-open.org/archives/virtio-comment/202307/msg00160.html
>>>>>
>>>>> Hi all,
>>>>> Thanks to Gerd Hoffmann for his suggestions. V2 makes below changes:
>>>>> * Elaborate on the types of resources.
>>>>> * Add some descriptions for S3 and S4.
>>>>>
>>>>>
>>>>> v1:
>>>>> link,
>>>>> https://lists.oasis-open.org/archives/virtio-comment/202306/msg00595.html
>>>>>
>>>>> Hi all,
>>>>> I am working to implement the virtgpu S3 function on Xen.
>>>>>
>>>>> Currently on Xen, if we start a guest through Qemu with virtgpu
>>>>> enabled, and then suspend and S3-resume the guest, we find that the
>>>>> guest kernel comes back, but the display doesn't. It just shows a
>>>>> black screen.
>>>>>
>>>>> That is because, while the guest was suspending, it called into
>>>>> Qemu, and Qemu destroyed all resources and reset the renderer. This
>>>>> made the display disappear after the guest resumed.
>>>>>
>>>>> So, I added a mechanism: when the guest is suspending, it notifies
>>>>> Qemu, and then Qemu does not destroy resources. That helps the
>>>>> guest's display come back.
>>>>>
>>>>> As discussed with and suggested by Robert Beckett and Gerd Hoffmann
>>>>> on the v1 thread on qemu's mailing list, this mechanism needs
>>>>> cooperation between guest and host. What's more, as virtio drivers
>>>>> are by design paravirtualized drivers, it is reasonable for the
>>>>> guest to accept some cooperation with the host to manage
>>>>> suspend/resume. So I request to add a new feature flag, so that
>>>>> guest and host can negotiate whether freezing is supported or not.
>>>>>
>>>>> Jiqian Chen (1):
>>>>>   virtio-gpu: Add new feature flag VIRTIO_GPU_F_FREEZE_S3
>>>>>
>>>>>  device-types/gpu/description.tex | 42 ++++++++++++++++++++++++++++++++
>>>>>  1 file changed, 42 insertions(+)
>>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Jiqian Chen.
>>
>> --
>> Best regards,
>> Jiqian Chen.

-- 
Best regards,
Jiqian Chen.

