virtio-comment message



Subject: Re: [PATCH v1] virtio-gpu: Document new fence-passing feature




On Fri, Feb 9, 2024 at 1:56 PM Rob Clark <robdclark@gmail.com> wrote:
tbh, I don't think VIRTGPU_EXECBUF_SHARED_FENCE is a good idea either.
A non-shareable fence fd is kind of an oxymoron.

We can call it VIRTGPU_EXECBUF_HOST_SHAREABLE_FENCE and VIRTIO_GPU_FLAG_FENCE_HOST_SHAREABLE to denote the difference. Currently, no fences are shareable across host contexts/virtio-devices/displays. With the new feature bit, they are.
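
To make the naming concrete, here is a rough guest-side sketch of how such a flag pair could be wired up. The two *_HOST_SHAREABLE names and their bit values are placeholders from this thread, not merged UAPI or spec:

#include <stdint.h>

/* Real flag from the current spec. */
#define VIRTIO_GPU_FLAG_FENCE                 (1 << 0)

/* Placeholder values for the names above -- not merged UAPI or spec. */
#define VIRTGPU_EXECBUF_HOST_SHAREABLE_FENCE  0x08      /* hypothetical guest UAPI bit */
#define VIRTIO_GPU_FLAG_FENCE_HOST_SHAREABLE  (1 << 2)  /* hypothetical ctrl_hdr flag */

/* Sketch: on submit, the guest driver translates the userspace request for a
 * host-shareable fence into the extra ctrl-header flag, so the device knows
 * this fence may be waited on by other contexts, devices, or the display. */
static uint32_t fence_flags_for_submit(uint32_t execbuf_flags)
{
    uint32_t hdr_flags = VIRTIO_GPU_FLAG_FENCE;

    if (execbuf_flags & VIRTGPU_EXECBUF_HOST_SHAREABLE_FENCE)
        hdr_flags |= VIRTIO_GPU_FLAG_FENCE_HOST_SHAREABLE;

    return hdr_flags;
}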

> So in summary with my review comments:
>    - Can we get a FLAG_SHAREABLE in v2?

We _can_.. I don't think it is a good idea, but a useless bit in the
protocol isn't the end of the world. I prefer that non-shareable
fences be handled in context-specific protocol. And I suspect that
sooner or later any fence that is visible at the virtgpu level will
need to be shareable.

We will definitely use the bit. We might build some type of fence LRU cache or something using the flag, to avoid the last-signalled-fence trick.
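
Purely as a speculative illustration of the kind of bounded fence cache alluded to here: keep the N most recently used shareable fence IDs alive so that a late lookup usually hits instead of falling back to the last-signalled fence. All names and sizes below are invented:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define FENCE_CACHE_SIZE 64  /* arbitrary bound for the sketch */

struct fence_slot {
    uint64_t fence_id;  /* 0 = empty slot */
    uint64_t last_use;  /* use counter driving LRU eviction */
};

struct fence_cache {
    struct fence_slot slots[FENCE_CACHE_SIZE];
    uint64_t tick;
};

/* Remember a shareable fence, evicting the least recently used slot if full. */
static void fence_cache_put(struct fence_cache *c, uint64_t fence_id)
{
    struct fence_slot *victim = &c->slots[0];

    for (size_t i = 0; i < FENCE_CACHE_SIZE; i++) {
        if (c->slots[i].fence_id == 0 || c->slots[i].fence_id == fence_id) {
            victim = &c->slots[i];
            break;
        }
        if (c->slots[i].last_use < victim->last_use)
            victim = &c->slots[i];
    }
    victim->fence_id = fence_id;
    victim->last_use = ++c->tick;
}

/* A miss here is what would otherwise force the last-signalled-fence fallback. */
static bool fence_cache_lookup(struct fence_cache *c, uint64_t fence_id)
{
    for (size_t i = 0; i < FENCE_CACHE_SIZE; i++) {
        if (c->slots[i].fence_id == fence_id) {
            c->slots[i].last_use = ++c->tick;
            return true;
        }
    }
    return false;
}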

than within a particular renderer instance.. and in that case it can
be handled with context-specific protocol.

>    - Can we get a RESOURCE_FLUSH(in_fence) for KMS integration in v2?

Nak, and nak for DESTROY_FENCE(S)

I think +Kaiyi Li +Colin Downs-Razouk have a use case for RESOURCE_FLUSH with an acquire fence. Hopefully, they can describe it here, along with the timeline (I think Q2 .. not far?).

I think all Android VMs "in a box" (not ARCVM-style) would find such an API useful.
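
To make the ask concrete, here is a hypothetical wire layout for a fenced RESOURCE_FLUSH, loosely following the existing struct virtio_gpu_resource_flush from the spec. The command variant and the in_fence_id field are invented for this discussion:

/* Hypothetical layout, using the spec's le32/le64 notation. The existing
 * virtio_gpu_resource_flush carries hdr, rect, resource_id and padding;
 * the fence field below is invented for this discussion. */
struct virtio_gpu_resource_flush_fenced {
    struct virtio_gpu_ctrl_hdr hdr;  /* type = hypothetical RESOURCE_FLUSH variant */
    struct virtio_gpu_rect r;        /* region to flush, as today */
    le32 resource_id;
    le32 padding;
    le64 in_fence_id;                /* fence the device waits on before sampling/scanout */
};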


Lack of fence lifetime built into the protocol isn't a deficiency;
it's a feature. It keeps the design as simple and lightweight and
flexible as possible.

BR,
-R

>    - Regarding transiency:
>        - Good idea, but can we add it at the spec level rather than leaving it implementation-defined?
>        - For it to be transient everywhere, we'll need even virtio-video to take an in-fence, at least?
>        - I recommend more research on the transient subject; it *could* work
>
> [1] https://lists.nongnu.org/archive/html/qemu-devel/2023-05/msg00595.html
>
>>
>> The last_signaled_fence may be a workaround that leads to weird edge
>> cases.. but over-complicating transient resource lifetime in the
>> protocol is not the answer.
>>
>> BR,
>> -R
>>
>> > We can probably survive these oddities, but we can probably avoid them too, so that's why it would be nice for the guest to provide the information it has.
>> >
>> >>
>> >> > Though, the guest kernel already tracks fence lifetimes through dma_fence. What if we add:
>> >> >
>> >> > - DESTROY_FENCES(fence_ids, num_fences)
>> >> > - a virtio-config ("fence_destroy_threshold"), which controls num_fences in DESTROY_FENCES
>> >> >
>> >> > When fence_destroy_threshold == 0, this would be the current proposed solution (fence lifetimes implementation-defined). However, a user can set "fence_destroy_threshold" to 100 or something like that, to cause a VM-exit every time 100 fences have been destroyed.
>> >> >
>> >> > This attempts a tunable compromise between API purity and performance concerns. WDYT?
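
For reference, a minimal sketch of the shape such a command and config field might take; the names below are invented and nothing like this exists in the spec or in this patch:

/* Invented wire format for the DESTROY_FENCES idea, in the spec's notation. */
struct virtio_gpu_destroy_fences {
    struct virtio_gpu_ctrl_hdr hdr;  /* type = hypothetical CMD_DESTROY_FENCES */
    le32 num_fences;                 /* number of entries in fence_ids[] */
    le32 padding;
    le64 fence_ids[];                /* fences the guest has finished with */
};

/* Hypothetical config field: 0 keeps fence lifetimes implementation-defined;
 * N > 0 asks the guest to batch destroys and send DESTROY_FENCES once N
 * fences have been released. */
struct virtio_gpu_config_extra {
    le32 fence_destroy_threshold;
};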
>> >>
>> >> Tbh, I think the current approach is cleaner from the PoV of what a
>> >> fence is.. it is just a seqno, it should not have a lifetime.
>> >>
>> >> BR,
>> >> -R
>> >>
>> >> >
>> >> >>
>> >> >>
>> >> >> But this is all implementation detail
>> >> >>
>> >> >> BR,
>> >> >> -R
>> >> >>
>> >> >> >
>> >> >> > Essentially, let's do our due diligence and verify the most important use case (gpu --> display) actually works.
>> >> >> >
>> >> >> >>
>> >> >> >>
>> >> >> >> BR,
>> >> >> >> -R
>> >> >> >>
>> >> >> >> >> --
>> >> >> >> >> Best regards,
>> >> >> >> >> Dmitry
>> >> >> >> >>

