virtio-comment message



Subject: Re: [PATCH v1] virtio-gpu: Document new fence-passing feature


On 2/1/24 19:59, Gurchetan Singh wrote:
> On Mon, Jan 29, 2024 at 9:21 PM Dmitry Osipenko <
> dmitry.osipenko@collabora.com> wrote:
> 
>> On 1/30/24 04:34, Gurchetan Singh wrote:
>>> On Sun, Jan 28, 2024 at 11:12 AM Dmitry Osipenko <
>>> dmitry.osipenko@collabora.com> wrote:
>>>
>>>> Document the new virtio-gpu protocol fence-passing feature that extends
>>>> the 3d submit command with support for in-fences, which are passed from
>>>> guest to host for waiting. This feature allows offloading the waiting
>>>> for in-fences from guest to host, where it can be done more efficiently.
>>>>
>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>> ---
>>>>  device-types/gpu/description.tex | 22 +++++++++++++++++++++-
>>>>  1 file changed, 21 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/device-types/gpu/description.tex b/device-types/gpu/description.tex
>>>> index 443524851a05..e5e23b5c9072 100644
>>>> --- a/device-types/gpu/description.tex
>>>> +++ b/device-types/gpu/description.tex
>>>> @@ -37,6 +37,8 @@ \subsection{Feature bits}\label{sec:Device Types / GPU Device / Feature bits}
>>>>    resources is supported.
>>>>  \item[VIRTIO_GPU_F_CONTEXT_INIT (4)] multiple context types and
>>>>    synchronization timelines supported.  Requires VIRTIO_GPU_F_VIRGL.
>>>> +\item[VIRTIO_GPU_F_FENCE_PASSING (5)] passing fence IDs from guest to host
>>>> +  for waiting supported.  Requires VIRTIO_GPU_F_VIRGL.
>>>>  \end{description}
>>>>
>>>>  \subsection{Device configuration layout}\label{sec:Device Types / GPU Device / Device configuration layout}
>>>> @@ -746,7 +748,25 @@ \subsubsection{Device Operation: controlq (3d)}\label{sec:Device Types / GPU Dev
>>>>
>>>>  \item[VIRTIO_GPU_CMD_SUBMIT_3D]
>>>>    Submit an opaque command stream.  The type of the command stream is
>>>> -  determined when creating a context.
>>>> +  determined when creating a context.  Request data is
>>>> +  \field{struct virtio_gpu_cmd_submit}.  The \field{size} field describes
>>>> +  the size of the \field{cmd_data} array in bytes.  The \field{num_in_fences}
>>>> +  field is active only if VIRTIO_GPU_F_FENCE_PASSING is enabled, otherwise
>>>> +  \field{num_in_fences} is treated as zero.  The array of \field{in_fences}
>>>> +  IDs is placed in between the \field{num_in_fences} field and the
>>>> +  \field{cmd_data} array.  The \field{in_fences} array contains virtio-gpu
>>>> +  \field{fence_id}'s corresponding to the out-fence IDs of previous
>>>> +  3d submits.
>>>> +
>>>> +\begin{lstlisting}
>>>> +struct virtio_gpu_cmd_submit {
>>>> +        struct virtio_gpu_ctrl_hdr hdr;
>>>> +        le32 size;
>>>> +        le32 num_in_fences;
>>>> +        le64 in_fences[num_in_fences];
>>>> +        u32 cmd_data[size/4];
>>>> +};
>>>> +\end{lstlisting}
>>>>
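
To illustrate the layout, here is a minimal guest-side sketch of how such a
request could be serialized, with the in_fences array sitting between
num_in_fences and cmd_data. This is not part of the patch: the helper name
build_submit_3d, the stand-in header struct, and the omission of
little-endian conversion are all assumptions made for the example.

/*
 * Illustrative only: serializing a VIRTIO_GPU_CMD_SUBMIT_3D request with
 * the proposed layout (hdr, size, num_in_fences, in_fences[], cmd_data[]).
 * The header struct is a stand-in and little-endian conversion of the
 * fields is omitted for brevity.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct gpu_ctrl_hdr_example {
        uint32_t type;
        uint32_t flags;
        uint64_t fence_id;
        uint32_t ctx_id;
        uint8_t  ring_idx;
        uint8_t  padding[3];
};

/* Returns a newly allocated request buffer, or NULL; *out_len gets its size. */
static void *build_submit_3d(const struct gpu_ctrl_hdr_example *hdr,
                             const uint64_t *in_fences, uint32_t num_in_fences,
                             const void *cmd_data, uint32_t cmd_size,
                             size_t *out_len)
{
        size_t fences_bytes = (size_t)num_in_fences * sizeof(uint64_t);
        size_t len = sizeof(*hdr) + 2 * sizeof(uint32_t) + fences_bytes + cmd_size;
        uint8_t *buf = malloc(len);
        uint8_t *p = buf;

        if (!buf)
                return NULL;

        memcpy(p, hdr, sizeof(*hdr));                     p += sizeof(*hdr);
        memcpy(p, &cmd_size, sizeof(cmd_size));           p += sizeof(cmd_size);       /* size */
        memcpy(p, &num_in_fences, sizeof(num_in_fences)); p += sizeof(num_in_fences);  /* num_in_fences */
        memcpy(p, in_fences, fences_bytes);               p += fences_bytes;           /* in_fences[] */
        memcpy(p, cmd_data, cmd_size);                                                 /* cmd_data[] */

        *out_len = len;
        return buf;
}
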
>>>
>>> Don't you need VIRTIO_GPU_FLAG_FENCE_SHAREABLE somewhere too?
>>
>> We haven't made use of this flag in the experimental/prototype version
>> that was made for the native contexts. The flag is unused by both Qemu
>> and CrosVM.
>>
>> We added the Linux kernel UAPI flag VIRTGPU_EXECBUF_SHARED_FENCE and it's
>> deemed to be enough. If the UAPI flag isn't set for a job's out-fence,
>> then the guest uses guest-side synchronization for this fence when it's
>> passed as an in-fence for further jobs.
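
In other words, the guest-side policy looks roughly like the sketch below.
Only fences whose producing job was submitted with the proposed
VIRTGPU_EXECBUF_SHARED_FENCE flag get forwarded to the host by fence ID;
everything else is waited on in the guest first. The guest_fence struct and
guest_wait_fence() helper are purely hypothetical bookkeeping for the
example.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical guest-side bookkeeping for a virtio-gpu fence. */
struct guest_fence {
        uint64_t fence_id;        /* host-visible fence ID */
        bool     host_shareable;  /* producing job used the shared-fence flag */
};

void guest_wait_fence(const struct guest_fence *f);  /* hypothetical guest-side wait */

/*
 * Split a job's dependencies: shareable fences are collected into ids_out
 * (to be placed in the in_fences[] array of the submit request), all other
 * fences are synchronized on the guest side before submission.
 */
static void collect_in_fences(const struct guest_fence *const *deps, unsigned count,
                              uint64_t *ids_out, uint32_t *num_ids_out)
{
        uint32_t n = 0;

        for (unsigned i = 0; i < count; i++) {
                if (deps[i]->host_shareable)
                        ids_out[n++] = deps[i]->fence_id;
                else
                        guest_wait_fence(deps[i]);
        }
        *num_ids_out = n;
}
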
>>
>> The FENCE_SHAREABLE flag isn't needed by the nctx/virgl/venus protocols.
>> All fences are shareable by default. If the VMM fails to export a fence,
>> it is treated as if the fence was already signaled. In the case of Venus,
>> it's up to Venus to decide internally which fences can be made shareable,
>> i.e. both the guest and virglrenderer Venus implementations are
>> responsible for deciding whether a fence should be made exportable. Venus
>> won't make the out-fence exportable for internal intermediate jobs and
>> will encode this export info within the Venus protocol.
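
A rough host-side sketch of that fallback is below; vmm_export_fence() and
vmm_wait_fence() are hypothetical stand-ins, not actual Qemu/CrosVM code. A
fence_id that cannot be resolved or exported is simply skipped, i.e.
treated as already signaled.

#include <stdint.h>

struct host_fence;                                       /* opaque host fence */
struct host_fence *vmm_export_fence(uint64_t fence_id);  /* hypothetical */
void vmm_wait_fence(struct host_fence *f);               /* hypothetical */

/* Wait for the in-fences of a 3d submit before running the command stream. */
static void wait_in_fences(const uint64_t *in_fences, uint32_t num_in_fences)
{
        for (uint32_t i = 0; i < num_in_fences; i++) {
                struct host_fence *f = vmm_export_fence(in_fences[i]);

                if (!f)
                        continue;  /* export failed: treat as already signaled */

                vmm_wait_fence(f);
        }
}
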
>>
>> Whether the FENCE_SHAREABLE virtio-gpu flag will be needed by gfxstream
>> is questionable. Gfxstream should be similar to Venus, i.e. you should be
>> able to add the flag to gfxstream. We'll need to be certain that the new
>> common flag is definitely needed, by making use of it, before specifying
>> it in the stable virtio-gpu protocol.
>>
> 
> I don't think you've gotten fence passing working with display
> integration.  Although passing external fences between two GPU instances
> does happen, the much more common and important case is passing a fence
> from the virtual GPU to the virtual display.
> 
> Let me describe the use case for you:
> 
>    1. GPU creates a fence
>    2. rutabaga exports the fence on creation via stream_renderer_export_fence
>    or virglrenderer_export_fence --> creates a [fence_id --> descriptor]
>    mapping when FLAG_SHAREABLE is set
>    3. SUBMIT_CMD + CROSS_DOMAIN_SEND --> takes the fence_ids and sends a
>    fence to the compositor
> 
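Purely as an illustration of that flow, a sketch is below. Nothing in it
exists today; every name is a hypothetical stand-in for the rutabaga /
cross-domain pieces, made up just to show the shape of steps 2 and 3.

#include <stdint.h>

struct compositor_conn;                                           /* hypothetical */
int export_fence_to_fd(uint64_t fence_id);                        /* hypothetical: fence_id -> descriptor */
void cross_domain_send(struct compositor_conn *c, int fence_fd);  /* hypothetical */

/* GPU-to-display hand-off: export the out-fence of the rendering job and
 * forward the resulting descriptor to the host compositor. */
static void present_with_fence(struct compositor_conn *c, uint64_t render_fence_id)
{
        int fd = export_fence_to_fd(render_fence_id);   /* step 2 of the list above */

        if (fd < 0)
                return;                                 /* no descriptor: nothing to pass */

        cross_domain_send(c, fd);                       /* step 3: send to the compositor */
}
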
> You also refer to
> https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/1138#note_1996999
> .
> 
> Once you have a working prototype that shares an out_fence with the
> display, then we can talk about whether FLAG_SHAREABLE is needed or not.
> So I suggest you focus on that before upstreaming UAPIs + virtio protocols.

AMD nctx uses fence-passing for multi-GPU job syncing. I don't think
Pierre-Eric uses it for display syncing on Qemu; Pierre, do you?

The point is that the negated variant of the flag (i.e.
FLAG_NOT_SHAREABLE) could be added later on if it turns out to be needed.
To my knowledge, there is a lot of work to be done before we'll be able to
test the display path on CrOS. Hence, IMO, it's preferable not to block
enabling the basic variant of the feature, given that we can extend it in
the future.

-- 
Best regards,
Dmitry


