Re: [PATCH v3 1/3] pps: pps-gpio: split IRQ handler into hardirq and threaded parts
From: Michael Byczkowski
Date: Fri Apr 10 2026 - 06:01:39 EST
Dear David,
The patch uses IRQF_ONESHOT, which keeps the IRQ line masked until
the threaded handler completes. The next hardirq cannot fire until
pps_gpio_irq_thread() returns, so info->ts cannot be overwritten
and the "buses arriving at once" scenario cannot occur.
Additionally, PPS is by definition a 1 Hz signal, so the threaded handler
has ~1 second to complete its work (a few gpiod_get_value() calls and
pps_event()), which takes microseconds at most.
Best regards,
Michael
> On 9. Apr 2026, at 20:52, David Laight <david.laight.linux@xxxxxxxxx> wrote:
>
> On Thu, 9 Apr 2026 17:27:21 +0200
> Michael Byczkowski <by@xxxxxxxxxxxx> wrote:
>
>> On PREEMPT_RT, all IRQ handlers are force-threaded. The current
>> pps_gpio_irq_handler captures the PPS timestamp via pps_get_ts()
>> inside the handler, but on RT this runs in thread context — after
>> a scheduling delay that adds variable latency (jitter) to the
>> timestamp.
>>
>> Split the handler into a hardirq primary (pps_gpio_irq_hardirq)
>> that only captures the timestamp, and a threaded handler
>> (pps_gpio_irq_thread) that processes the event. With
>> request_threaded_irq(), the primary handler runs in hardirq context
>> even on PREEMPT_RT, preserving nanosecond timestamp precision.
>
> What happens if the threaded irq handler doesn't run until after
> the next (or more than one) hard irq?
>
> Threaded irqs really don't work well for timer interrupts at all.
> They end up like buses - none come for ages and then they all
> arrive at once.
>
> David
>
>>
>> On non-RT kernels, request_threaded_irq with an explicit primary
>> handler behaves identically to the previous request_irq call.