virtio-comment message



Subject: Re: [PATCH v1] virtio-gpu: Document new fence-passing feature




On Mon, Feb 12, 2024 at 6:15 PM Rob Clark <robdclark@gmail.com> wrote:
On Mon, Feb 12, 2024 at 4:19 PM Gurchetan Singh
<gurchetansingh@chromium.org> wrote:
>
>
>
> On Fri, Feb 9, 2024 at 1:56 PM Rob Clark <robdclark@gmail.com> wrote:
>>
>> tbh, I don't think VIRTGPU_EXECBUF_SHARED_FENCE is a good idea either.
>> A non-shareable fence fd is kind of an oxymoron.
>
>
> We can call it C and VIRTIO_GPU_FLAG_FENCE_HOST_SHAREABLE to denote the difference. Currently, no fences are shareable across host contexts/virtio-devices/displays. With the new feature bit, they are.
>

sure, but the point I'm getting at is that existing fence fd APIs
(whether uapi, wsi, vk, or egl) don't differentiate between guest and
host shareability, so in practice we are just going to have to mark
everything as host-shareable if the producing and consuming ctx both
support host-shareable.. but if you actually need to make this
differentiation there is something I don't understand about your
use-case or you don't understand about the design.

I'll send you a CL that makes use of VIRTIO_GPU_FLAG_FENCE_HOST_SHAREABLE so we can move this discussion to code.
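(For concreteness, a minimal sketch of how the bit could sit next to the existing virtio_gpu_ctrl_hdr flags; the bit value and the helper below are purely illustrative, nothing here is spec'd or in any UAPI:)

#include <stdbool.h>
#include <stdint.h>

/* Existing flag bits in struct virtio_gpu_ctrl_hdr (virtio-gpu spec). */
#define VIRTIO_GPU_FLAG_FENCE                (1 << 0)
#define VIRTIO_GPU_FLAG_INFO_RING_IDX        (1 << 1)
/* Proposed in this thread; exact bit value TBD. */
#define VIRTIO_GPU_FLAG_FENCE_HOST_SHAREABLE (1 << 2)

/*
 * Guest-side helper: request a fence, and mark it host-shareable when
 * both the producing and consuming context advertise support, per the
 * point above that existing fence fd APIs don't distinguish guest vs.
 * host shareability.
 */
static uint32_t fence_flags(bool producer_host_shareable,
                            bool consumer_host_shareable)
{
    uint32_t flags = VIRTIO_GPU_FLAG_FENCE;

    if (producer_host_shareable && consumer_host_shareable)
        flags |= VIRTIO_GPU_FLAG_FENCE_HOST_SHAREABLE;

    return flags;
}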

>>
>>
>> > So in summary with my review comments:
>> >    - Can we get a FLAG_SHAREABLE in v2?
>>
>> We _can_.. I don't think it is a good idea, but a useless bit in the
>> protocol isn't the end of the world. I prefer that non-shareable
>> fences be handled in context specific protocol. And I suspect that
>> sooner or later any fence that is visible at the virtgpu level will
>> need to be shareable.
>
>
> We will definitely use the bit. We might build some type of fence LRU cache or something using the flag to avoid the last signaled fence trick.

if you need to avoid the "trick", I have concerns about your fence
implementation.
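(To make the "fence LRU cache" idea above concrete, a rough and entirely hypothetical sketch; every name and size below is made up, and a simple ring is used for eviction rather than true LRU. The point is that fences stay resolvable by id while cached, and the last-signaled substitute is only needed after eviction:)

#include <stdint.h>

#define FENCE_CACHE_SIZE 64

struct fence_entry {
    uint64_t fence_id;  /* virtio-gpu fence id (seqno) */
    int      host_fd;   /* host-side sync fd, -1 if unused */
};

struct fence_cache {
    struct fence_entry entries[FENCE_CACHE_SIZE];
    uint32_t next;             /* ring index for eviction */
    int      last_signaled_fd; /* fallback, always safe to hand out */
};

/* Record a newly created fence, evicting the oldest slot. */
static void fence_cache_add(struct fence_cache *c, uint64_t fence_id,
                            int host_fd)
{
    c->entries[c->next].fence_id = fence_id;
    c->entries[c->next].host_fd  = host_fd;
    c->next = (c->next + 1) % FENCE_CACHE_SIZE;
}

/*
 * Look up a host fence by id; on a miss (entry already evicted), fall
 * back to the last signaled fence, which is a safe substitute because
 * a later signaled fence implies the earlier one signaled too.
 */
static int fence_cache_lookup(struct fence_cache *c, uint64_t fence_id)
{
    for (uint32_t i = 0; i < FENCE_CACHE_SIZE; i++) {
        if (c->entries[i].host_fd >= 0 &&
            c->entries[i].fence_id == fence_id)
            return c->entries[i].host_fd;
    }
    return c->last_signaled_fd;
}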

>
>> than within a particular renderer instance.. and in that case it can
>> be handled with context specific protocol.
>>
>> >    - Can we get a RESOURCE_FLUSH(in_fence) for KMS integration in v2?
>>
>> Nak, and nak for DESTROY_FENCE(S)
>
>
> I think +Kaiyi Li +Colin Downs-Razouk have a use case for RESOURCE_FLUSH with an acquire fence. Hopefully, they can describe it here and the timeline (I think Q2 .. not far?).
>
> I think all Android VMs "in a box" (not ARCVM-style) would find such an API useful.

If there is an independent reason for RESOURCE_FLUSH, then sure, we
can discuss it on its merits.. or even a related reason. I'm not too
concerned about the timeline, but want someone to describe a use-case
where the existing proposal doesn't fit. It should always be "safe"
to substitute a later signaled fence for an earlier one (possibly
modulo CONTEXT_LOST, but then, you know, everything is undefined)
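(Said another way, assuming the in-order, seqno-based fence model this thread is built on, the check is just a comparison; this helper is only illustrative:)

#include <stdbool.h>
#include <stdint.h>

/*
 * With fences as plain seqnos signaled in order on a given ring,
 * knowing that seqno `last_signaled` has signaled implies every fence
 * with a smaller or equal id has signaled too, which is why handing
 * out a later signaled fence in place of an earlier one is safe.
 */
static bool fence_is_signaled(uint64_t fence_id, uint64_t last_signaled)
{
    return fence_id <= last_signaled;
}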

Yes, I'll let others comment on RESOURCE_FLUSH + acquire fence since I'm not the only gfxstreamist out there...

BR,
-R

>
>>
>> Lack of fence lifetime built into the protocol isn't a deficiency,
>> it's a feature. It keeps the design as simple and lightweight and
>> flexible as possible.
>>
>> BR,
>> -R
>>
>> >    - Regarding transiency:
>> >       - Good idea, but can we add it to the spec level rather than being implementation defined?
>> >       - For it to be transient everywhere, we'll need even virtio-video to take an in-fence at least?
>> >       - I recommend more research on the transient subject, it *could* work
>> >
>> > [1] https://lists.nongnu.org/archive/html/qemu-devel/2023-05/msg00595.html
>> >
>> >>
>> >> The last_signaled_fence may be a workaround that leads to weird edge
>> >> cases.. but over-complicating transient resource lifetime in the
>> >> protocol is not the answer.
>> >>
>> >> BR,
>> >> -R
>> >>
>> >> > We can probably survive these oddities, but we can probably avoid them too, so that's why it would be nice for the guest to provide the information it has.
>> >> >
>> >> >>
>> >> >> > Though, the guest kernel already tracks fence lifetimes through dma_fence. What if we add:
>> >> >> >
>> >> >> > - DESTROY_FENCES(fence_ids, num_fences)
>> >> >> > - a virtio-config ("fence_destroy_threshold"), which controls num_fences in DESTROY_FENCES
>> >> >> >
>> >> >> > When fence_destroy_threshold == 0, this would be the current proposed solution (fence lifetimes implementation defined). However, a user can make "fence_destroy_threshold" 100 or something like that, to cause a VM-exit every time 100 fences have been destroyed.
>> >> >> >
>> >> >> > This attempts a tunable compromise between API purity and performance concerns. WDYT?
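(Purely to make the quoted proposal concrete, a hypothetical wire layout; nothing below exists in the spec, the field names are made up, and the command itself was nak'd above:)

#include <stdint.h>

/* Standard request header from the virtio-gpu spec (le32/le64 fields
 * on the wire; plain integer types used here for brevity). */
struct virtio_gpu_ctrl_hdr {
    uint32_t type;
    uint32_t flags;
    uint64_t fence_id;
    uint32_t ctx_id;
    uint8_t  ring_idx;
    uint8_t  padding[3];
};

/* Hypothetical DESTROY_FENCES command from the quoted proposal;
 * layout is illustrative only. */
struct virtio_gpu_destroy_fences {
    struct virtio_gpu_ctrl_hdr hdr;
    uint32_t num_fences;  /* bounded by the fence_destroy_threshold config */
    uint32_t padding;
    uint64_t fence_ids[]; /* num_fences fence ids follow */
};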
>> >> >>
>> >> >> Tbh, I think the current approach is cleaner from the PoV of what a
>> >> >> fence is.. it is just a seqno, it should not have a lifetime.
>> >> >>
>> >> >> BR,
>> >> >> -R
>> >> >>
>> >> >> >
>> >> >> >>
>> >> >> >>
>> >> >> >> But this is all implementation detail
>> >> >> >>
>> >> >> >> BR,
>> >> >> >> -R
>> >> >> >>
>> >> >> >> >
>> >> >> >> > Essentially, let's do our due diligence and verify the most important use case (gpu --> display) actually works.
>> >> >> >> >
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> BR,
>> >> >> >> >> -R
>> >> >> >> >>
>> >> >> >> >> >> --
>> >> >> >> >> >> Best regards,
>> >> >> >> >> >> Dmitry
>> >> >> >> >> >>

