Subject: Re: [PATCH V7 2/2] virtio-gpio: Add support for interrupts


On Thu, Jul 29, 2021 at 10:45 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:

> > Fair enough, I don't care too much about this one. I was just thinking
> > we'd expect it the other way round, since that is more consistent with
> > the continued operation, where the interrupts remain enabled and we go
> > back and forth requeuing new eventq requests without turning them off
> > again.
>
> Coming back to the confusing part here: I think we should make the
> interrupt controller (the device) dumb and leave all the intelligence
> to the user (the driver).
>
> What about defining the sequence of events like this:
>
> For edge interrupts:
>
> - driver sets the trigger type to EDGE (falling/rising).
> - device enables the interrupt line and latches a value only after
>   this point.
> - driver queues a buffer over eventq (here, or before setting the
>   trigger type).
> - Once a buffer is available over eventq, the device returns it and
>   latches the next interrupt as soon as it happens.
> - The driver, on receiving the buffer, re-queues it and handles the
>   interrupt by calling the client's handler. Queueing the buffer
>   after handling the request instead won't make a huge difference
>   (and may make for simpler driver code), as the next interrupt is
>   latched by the device anyway and can only be processed after this
>   one is done at the driver (some locking will be needed to make sure
>   only one interrupt gets processed per line at a time).
> - Note that in this sequence we don't mask/unmask the interrupt at
>   all. That's what the kernel also does in handle_edge_irq(), if I
>   understood it correctly.

Maybe this is where we misunderstood each other: I was thinking
of it in terms of handle_fasteoi_irq() control flow, not handle_edge_irq().

For the edge interrupt, it doesn't make much of a difference, but
for the level case, I think we really want handle_fasteoi_irq(),
and as I understand it, the driver side should work the same way
here.
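
As a rough illustration of the driver side of that edge flow, here is
a minimal sketch of an eventq callback, loosely based on the Linux
virtio API. The vgpio_* names are invented for illustration only;
virtqueue_get_buf() and generic_handle_irq() are real kernel API:

#include <linux/irq.h>
#include <linux/virtio.h>

static void vgpio_eventq_cb(struct virtqueue *vq)
{
	struct vgpio_event *ev;		/* hypothetical per-line event buffer */
	unsigned int len;

	/* Drain all completed event buffers. */
	while ((ev = virtqueue_get_buf(vq, &len)) != NULL) {
		int irq = vgpio_event_to_irq(vq, ev);	/* hypothetical lookup */

		/*
		 * Re-queue first; the device has already latched the next
		 * edge, so the ordering here is only a matter of taste.
		 */
		vgpio_requeue_event(vq, ev);	/* hypothetical helper */
		generic_handle_irq(irq);	/* runs the client's handler */
	}
}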

> For level interrupts:
>
> - driver sets the trigger type to LEVEL (high/low).
> - device enables the interrupt line and latches a value only after
>   this point.
> - driver queues a buffer over eventq (here, or before setting the
>   trigger type).
> - Once a buffer is available over eventq, the device returns it and
>   latches the next interrupt as soon as it happens. That's okay.
> - The driver, on receiving the buffer, passes control to the irq
>   core, which masks the interrupt by calling the driver to set the
>   trigger type to NONE.
> - The device masks the interrupt and discards any latched value.
> - The irq core handles the interrupt by calling the client's handler.
> - The irq core unmasks the interrupt by calling the driver to set the
>   trigger type back to LEVEL.
> - The device unmasks the interrupt and latches any new value from
>   here on.
> - The driver queues the buffer again.
>
> How does that sound? This removes any logical dependency between the
> availability of the buffer and the interrupt being masked or
> unmasked.
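
To make the cost of those mask/unmask steps concrete, here is a sketch
of what they could map to as Linux irq_chip callbacks. The vgpio_*
names and the vgpio_set_irq_type() helper (a blocking request over the
command virtqueue) are invented for illustration; struct irq_chip and
irq_data_get_irq_chip_data() are real kernel API, and the
VIRTIO_GPIO_IRQ_TYPE_* constants follow the naming proposed in this
series:

#include <linux/irq.h>

static void vgpio_irq_mask(struct irq_data *d)
{
	struct vgpio_line *line = irq_data_get_irq_chip_data(d);

	/*
	 * Blocking request over the virtqueue; the device discards any
	 * latched value once the trigger type is NONE.
	 */
	vgpio_set_irq_type(line, VIRTIO_GPIO_IRQ_TYPE_NONE);
}

static void vgpio_irq_unmask(struct irq_data *d)
{
	struct vgpio_line *line = irq_data_get_irq_chip_data(d);

	/* Another blocking request; the device latches again from here on. */
	vgpio_set_irq_type(line, line->saved_type);	/* e.g. LEVEL_HIGH */
}

static struct irq_chip vgpio_irq_chip = {
	.name		= "virtio-gpio",
	.irq_mask	= vgpio_irq_mask,
	.irq_unmask	= vgpio_irq_unmask,
};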

I think having to mask/unmask the level interrupt explicitly adds
way too much complexity here, in particular when these are both
blocking operations. With the fasteoi flow, the irq core never has
to mask it; it only has to atomically requeue the buffer at the end.
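
For comparison, with handle_fasteoi_irq() the only per-interrupt chip
operation is the EOI, which just hands the buffer back. Again a sketch
with invented vgpio_* names; sg_init_one(), virtqueue_add_inbuf() and
virtqueue_kick() are real virtio API and none of them needs to sleep
(virtqueue locking is elided here for brevity):

#include <linux/irq.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

static void vgpio_irq_eoi(struct irq_data *d)
{
	struct vgpio_line *line = irq_data_get_irq_chip_data(d);
	struct scatterlist sg;

	/*
	 * Atomically hand the buffer back to the device; no mask/unmask
	 * round trips over the virtqueue are needed.
	 */
	sg_init_one(&sg, &line->event, sizeof(line->event));
	virtqueue_add_inbuf(line->eventq, &sg, 1, line, GFP_ATOMIC);
	virtqueue_kick(line->eventq);
}

/* Installed with, e.g.:
 *	irq_set_chip_and_handler(irq, &vgpio_irq_chip, handle_fasteoi_irq);
 */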

       Arnd

