On Wed, Aug 03, 2016 at 02:51:23PM -0700, Bart Van Assche wrote:
So I started testing the patch below that should fix the same hang but
without triggering any wait list corruption.
diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index f15d6b6..4e3f651 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -282,7 +282,7 @@ void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
 	spin_lock_irqsave(&q->lock, flags);
 	if (!list_empty(&wait->task_list))
 		list_del_init(&wait->task_list);
-	else if (waitqueue_active(q))
+	if (waitqueue_active(q))
 		__wake_up_locked_key(q, mode, key);
 	spin_unlock_irqrestore(&q->lock, flags);
 }
So the problem with this patch is that it violates the nr_exclusive
semantics: it can result in too many wakeups -- which is a much less
severe (typically harmless) issue.
We now always wake up the next waiter, even when there was no actual
wakeup we raced against. And if a real wakeup then also arrives, we can
end up with two woken tasks (instead of nr_exclusive == 1).
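A rough sketch of that interleaving (my annotation, not part of the
patch; tasks A through D are hypothetical):

	/*
	 *   Task A (aborting)                  Task B (waker)
	 *   -----------------                  --------------
	 *   abort_exclusive_wait()
	 *     spin_lock_irqsave(&q->lock)
	 *     list_del_init(&wait->task_list)  <- A was still queued, so
	 *                                         no wakeup was consumed
	 *     __wake_up_locked_key(q, ...)     <- wakes exclusive waiter C
	 *     spin_unlock_irqrestore(&q->lock)
	 *                                      __wake_up(q, mode, 1, key)
	 *                                      <- wakes exclusive waiter D
	 *
	 * Both C and D are now runnable, although nr_exclusive == 1.
	 */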
Now, since wait loops must all deal with spurious wakeups, this ends up
as harmless overhead.
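For reference, a minimal sketch (mine, not from the patch) of the
canonical exclusive wait loop, showing why a spurious wakeup is
absorbed: the woken task re-checks its condition and, if it is still
false, re-queues itself and sleeps again. The signal path is simplified
here; the real ___wait_event() macro calls abort_exclusive_wait() there
for exclusive waiters, which is exactly the function being patched
above:

	#include <linux/errno.h>
	#include <linux/sched.h>
	#include <linux/wait.h>

	static int wait_for_cond(wait_queue_head_t *q, bool (*cond)(void))
	{
		DEFINE_WAIT(wait);
		int ret = 0;

		for (;;) {
			prepare_to_wait_exclusive(q, &wait, TASK_INTERRUPTIBLE);
			if (cond())	/* re-check after every wakeup */
				break;
			if (signal_pending(current)) {
				ret = -ERESTARTSYS;
				break;
			}
			/*
			 * A spuriously woken task returns from schedule(),
			 * loops around, finds cond() still false and goes
			 * back to sleep.
			 */
			schedule();
		}
		finish_wait(q, &wait);
		return ret;
	}

So with the patched abort_exclusive_wait(), the worst case is that a
loop like this wakes once too often, finds cond() false and sleeps
again.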
But I'd still like to understand where we lose the wakeup.