On 2018-03-05 09:08:11 [-0600], Corey Minyard wrote:
> Starting with the change…
>
> 8a64547a07980f9d25e962a78c2e10ee82bdb742 fs/dcache: use swait_queue instead
> of waitqueue
> The following change is the obvious reason:
>
> --- a/kernel/sched/swait.c
> +++ b/kernel/sched/swait.c
> @@ -69,6 +69,7 @@ void swake_up_all(struct swait_queue_head *q)
> 	struct swait_queue *curr;
> 	LIST_HEAD(tmp);
> +	WARN_ON(irqs_disabled());
> 	raw_spin_lock_irq(&q->lock);
> 	list_splice_init(&q->task_list, &tmp);
> 	while (!list_empty(&tmp)) {
>
> I've done a little bit of analysis here: percpu_ref_kill_and_confirm()
> does spin_lock_irqsave() and then does a percpu_ref_put(). If the
> refcount reaches zero, the release function of the refcount is
> called. In this case, the block code has set this to
> blk_queue_usage_counter_release(), which calls swake_up_all().
>
> It seems like a bad idea to call percpu_ref_put() with interrupts
> disabled. This problem actually doesn't appear to be RT-related;
> there's just no warning call if the RT tree isn't used.

Yeah, but vanilla uses wake_up(), which does spin_lock_irqsave(), so it is
not an issue there.
The odd part here is that percpu_ref_kill_and_confirm() does _irqsave(),
which suggests that it might be called from any context, and then it does
wait_event_lock_irq(), which enables interrupts again while it waits. So
it can't actually be used from any context.
> I'm not sure if it's best to just do the put outside the lock, or
> have a modified put function that returns a bool to know if a release
> is required, so the release function can be called outside the
> lock. I can do patches and test, but I'm hoping for a little
> guidance here.

swake_up_all() does raw_spin_lock_irq() because it should be called from
non-IRQ context. And it drops the lock (and re-enables IRQs) between
wake-ups in case we need_resched() because we woke a high-priority waiter.
The list_splice() is there because we wanted to drop the lock (and have
IRQs enabled) during the entire wake-up process, but finish_swait() may
happen during the wake-up, so we must hold the lock while the list item
is removed from the queue head.
I have no idea what the wisest thing to do here is. The obvious fix would
be to use the _irqsave() variant here and not drop the lock between
wake-ups. That is essentially what swake_up_all_locked() does, which I
need for the completions (and based on some testing, most users have one
waiter, except during PM and some crypto code).

It is probably nothing compared to wake_up_q() (which does multiple
wake-ups without a context switch in between), but then that is how we
did it before. Preferably we would have a proper list_splice() and some
magic in the "early" dequeue part that works.
> I'm also wondering why we don't have a warning like this in the
> *_spin_lock_irq() macros, perhaps turned on with a debug
> option. That would catch things like this sooner.

Ideally you would add lockdep_assert_irqs_enabled() to
local_irq_disable(), so you would have it hidden behind lockdep with a
recursion check and everything. But this needs a lot of headers, like
task_struct, so…
I once had WARN_ON_ONCE(irqs_disabled()) added to test-drive it and got a
few false positives in early boot and in constructs like
__run_hrtimer(). I didn't look into it further…
> Thanks,
> -corey

Sebastian