On Sun, Dec 24, 2000 at 04:17:10PM +1100, Andrew Morton wrote:
> I was talking about a different scenario:
>
> 	add_wait_queue_exclusive(&q->wait_for_request, &wait);
> 	for (;;) {
> 		__set_current_state(TASK_UNINTERRUPTIBLE);
> 		/* WINDOW */
> 		spin_lock_irq(&io_request_lock);
> 		rq = get_request(q, rw);
> 		spin_unlock_irq(&io_request_lock);
> 		if (rq)
> 			break;
> 		generic_unplug_device(q);
> 		schedule();
> 	}
> 	remove_wait_queue(&q->wait_for_request, &wait);
>
> Suppose there are two tasks sleeping in the schedule().
>
> A wakeup comes. One task wakes. It loops around and reaches
> the window. At this point in time, another wakeup gets sent
> to the waitqueue. It gets directed to the task which just
> woke up![..]
Ok, this is a very minor window compared to the current one, but yes, that
could happen in test4 too.
> I assume this is because this waitqueue gets lots of wakeups sent to it.
It only gets the strictly necessary number of wakeups.
> Linus suggested at one point that we clear the waitqueue's
> WQ_FLAG_EXCLUSIVE bit when we wake it up, [..]
.. and then set it again after checking whether a new request is available,
just before schedule(). That would avoid the above race (and the one
I mentioned in a previous email), but it doesn't address the lost wakeups,
for example when USE_RW_WAIT_QUEUE_SPINLOCK is set to 1.
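
On the sleeper side that would look something like this (just a sketch of
the idea, not tested code; the waker in __wake_up() is assumed to clear
WQ_FLAG_EXCLUSIVE on the task it picks):

	DECLARE_WAITQUEUE(wait, current);
	struct request *rq;

	add_wait_queue_exclusive(&q->wait_for_request, &wait);
	for (;;) {
		__set_current_state(TASK_UNINTERRUPTIBLE);
		spin_lock_irq(&io_request_lock);
		rq = get_request(q, rw);
		spin_unlock_irq(&io_request_lock);
		if (rq)
			break;
		/*
		 * Re-arm the exclusive bit only here, after the check:
		 * a wakeup arriving in the WINDOW sees the bit clear,
		 * so waking us (a no-op, we're already running) doesn't
		 * count as the one exclusive wakeup, and a real sleeper
		 * gets woken as well.
		 */
		wait.flags |= WQ_FLAG_EXCLUSIVE;
		generic_unplug_device(q);
		schedule();
	}
	remove_wait_queue(&q->wait_for_request, &wait);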
Counting as wakeups only the ones that actually move the task to the
runqueue will get rid of the races altogether, and it looks right
conceptually, so I prefer it.
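
I.e. the exclusive-wakeup loop would become something like this (a rough
sketch, names from memory, waitqueue locking omitted; try_to_wake_up() is
assumed to return nonzero only when it really moved the task to the
runqueue, which is the change):

	static void __wake_up_sketch(wait_queue_head_t *q, unsigned int mode,
				     int nr_exclusive)
	{
		struct list_head *tmp;
		wait_queue_t *curr;
		struct task_struct *p;

		list_for_each(tmp, &q->task_list) {
			curr = list_entry(tmp, wait_queue_t, task_list);
			p = curr->task;
			if (!(p->state & mode))
				continue;
			/*
			 * A task sitting in the WINDOW is already running,
			 * so try_to_wake_up() returns 0 for it and it does
			 * not consume the exclusive wakeup: we go on and
			 * wake the next sleeper on the waitqueue instead.
			 */
			if (try_to_wake_up(p) &&
			    (curr->flags & WQ_FLAG_EXCLUSIVE) &&
			    !--nr_exclusive)
				break;
		}
	}

That way a wakeup can't be eaten by a task that didn't really need it,
which is why it also covers the lost-wakeup case above.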
Andrea