Re: [patch V2 11/15] completion: Use simple wait queues

From: Davidlohr Bueso
Date: Fri Mar 20 2020 - 05:02:16 EST


On Wed, 18 Mar 2020, Thomas Gleixner wrote:

From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>

completion uses a wait_queue_head_t to enqueue waiters.

wait_queue_head_t contains a spinlock_t to protect the list of waiters,
which excludes it from being used in truly atomic context on a PREEMPT_RT
enabled kernel.
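
For reference, the pre-conversion layout is roughly the following (sketched
from memory of include/linux/completion.h and include/linux/wait.h, not
quoted from the patch):

    struct completion {
            unsigned int        done;   /* count of posted completions */
            wait_queue_head_t   wait;   /* waiter list */
    };

    struct wait_queue_head {
            spinlock_t          lock;   /* spinlock_t sleeps on PREEMPT_RT */
            struct list_head    head;
    };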

The spinlock in the wait queue head cannot be replaced by a raw_spinlock
because:

- wait queues can have custom wakeup callbacks, which acquire other
spinlock_t locks and have potentially long execution times (see the
sketch after this list)

- wake_up() walks an unbounded number of list entries during the wakeup
and may wake an unbounded number of waiters.
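
The callback in question is the func member of each waiter entry; roughly
(field names as in post-v4.13 include/linux/wait.h, from memory):

    struct wait_queue_entry {
            unsigned int        flags;
            void                *private;
            wait_queue_func_t   func;     /* custom wakeup callback; may take
                                             spinlock_t locks and run long */
            struct list_head    entry;
    };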

For simplicity and performance reasons, complete() should be usable on
PREEMPT_RT enabled kernels.

Completions do not use custom wakeup callbacks and usually have a single
waiter, except for a few corner cases.

Replace the wait queue in the completion with a simple wait queue (swait),
which uses a raw_spinlock_t for protecting the waiter list and therefore is
safe to use inside truly atomic regions on PREEMPT_RT.
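
The swait counterpart is deliberately minimal; a sketch of the relevant
structures after the conversion (again from memory, not quoted from the
patch):

    struct swait_queue_head {
            raw_spinlock_t      lock;        /* raw lock: usable in truly
                                                atomic context on RT */
            struct list_head    task_list;
    };

    struct completion {
            unsigned int            done;
            struct swait_queue_head wait;
    };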

There is no semantic or functional change:

- completions use the exclusive wait mode, which is what swait provides

- complete() wakes one exclusive waiter

- complete_all() wakes all waiters while holding the lock, which protects
the wait queue against newly incoming waiters. The conversion to swait
preserves this behaviour (see the sketch after this list).
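
A rough sketch of how the converted wakeup paths end up looking, assuming
the swake_up_all_locked() helper added earlier in this series (illustrative,
not a verbatim quote of the patch):

    void complete(struct completion *x)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&x->wait.lock, flags);
            if (x->done != UINT_MAX)
                    x->done++;
            /* Wake exactly one exclusive waiter */
            swake_up_locked(&x->wait);
            raw_spin_unlock_irqrestore(&x->wait.lock, flags);
    }

    void complete_all(struct completion *x)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&x->wait.lock, flags);
            x->done = UINT_MAX;
            /* Wake all waiters; holding the lock fends off new arrivals */
            swake_up_all_locked(&x->wait);
            raw_spin_unlock_irqrestore(&x->wait.lock, flags);
    }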

complete_all() might cause unbounded latencies with a large number of waiters
being woken at once, but most complete_all() usage sites are either in
testing or initialization code or have only a really small number of
concurrent waiters, which for now does not cause a latency problem. Keep it
simple for now.

The fixup of the warning check in the USB gadget driver is just a
straightforward conversion of the lockless waiter check from one waitqueue
type to the other.
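
That is, a one-line swap of the lockless helper, along these lines (an
illustration; the ep0req_completion field name is recalled from f_fs.c
rather than copied from the hunk):

    -	waitqueue_active(&ffs->ep0req_completion.wait)
    +	swait_active(&ffs->ep0req_completion.wait)

Both helpers do an unlocked check whether the waiter list is empty, so the
semantics of the check are unchanged.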

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>

Reviewed-by: Davidlohr Bueso <dbueso@xxxxxxx>