Re: [PATCH 03/11] locking, rwsem: introduce basis for down_write_killable

From: Michal Hocko
Date: Wed May 11 2016 - 03:24:07 EST


On Tue 10-05-16 14:38:06, Peter Zijlstra wrote:
> On Tue, May 10, 2016 at 01:53:20PM +0200, Michal Hocko wrote:
> > On Tue 10-05-16 19:43:20, Tetsuo Handa wrote:
> > > I hit the "allowing the OOM killer to select the same thread again" problem
> > > ( http://lkml.kernel.org/r/20160408113425.GF29820@xxxxxxxxxxxxxx ), but
> > > I think that there is a bug in the down_write_killable() series (at least
> > > in the "locking, rwsem: introduce basis for down_write_killable" patch).
> > >
> > > Complete log is at http://I-love.SAKURA.ne.jp/tmp/serial-20160510-sem.txt.xz .
> > [...]
> > > Two threads (PIDs 1314 and 1443) are sleeping in rwsem_down_read_failed()
> > > but no thread is sleeping in rwsem_down_write_failed_killable().
> > > If there is no thread waiting for the write lock, the threads waiting for
> > > the read lock must be able to run. This suggests that one of the threads
> > > which was waiting for the write lock forgot to wake up the reader threads.
> >
> > Or that the write lock holder is still holding the lock. I do not
> > see such a process in your list though. Is it possible that
> > debug_show_all_locks() would just miss it because it is not sleeping?
> >
> > > Looking at rwsem_down_read_failed(), reader threads waiting for the
> > > writer thread to release the lock are waiting on the sem->wait_list
> > > list. Looking at __rwsem_down_write_failed_common(), when the writer
> > > thread escapes the
> > >
> > > 		/* Block until there are no active lockers. */
> > > 		do {
> > > 			if (signal_pending_state(state, current)) {
> > > 				raw_spin_lock_irq(&sem->wait_lock);
> > > 				ret = ERR_PTR(-EINTR);
> > > 				goto out;
> > > 			}
> > > 			schedule();
> > > 			set_current_state(state);
> > > 		} while ((count = sem->count) & RWSEM_ACTIVE_MASK);
> > >
> > > loop due to SIGKILL, I think the writer thread needs to check for
> > > remaining threads on the sem->wait_list list and wake up the reader
> > > threads before rwsem_down_write_failed_killable() returns -EINTR.
> >
> > I am not sure I understand. The rwsem counter is not write locked while
> > the thread is sleeping, nor when we fail on the pending signal, so readers
> > should be able to proceed, no?
> >
> > Or are you suggesting that the failure path should call rwsem_wake? I
> > do not see __mutex_lock_common doing anything like that for a killable
> > wait, and rwsem_wake is explicitly documented to be called after the
> > lock state has already been updated. Now I might be missing something
> > subtle here, but I guess the code is correct and it is more likely that
> > the holder of the lock wasn't killed but is rather holding the lock
> > and doing something else.
>
> Mutex is much simpler; it doesn't have to do the reader-vs-writer
> fairness thing.
>
> However, at the time I was thinking that if we have:
>
> reader (owner)
> writer (pending)
> reader (blocked on writer)
>
> and the writer would get cancelled, the up_read() would do a wakeup and kick
> the blocked reader.
>
> But yes, immediately kicking further pending waiters might be better.

OK, that makes sense. We shouldn't be waiting for the first reader to
do up_read.
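
Something along those lines on the bail-out path should do it. Below is
only an untested sketch (reusing the helpers already present in
rwsem-xadd.c and assuming the wait_lock handling around the current
"goto out" exit is reshuffled to match), not a tested fix:

	/*
	 * Sketch: when the writer bails out with -EINTR, remove it from the
	 * wait list and either drop the waiting bias (it was the last
	 * waiter) or wake whoever is still queued, instead of relying on a
	 * later up_read()/up_write() to do it.
	 */
	__set_current_state(TASK_RUNNING);
	raw_spin_lock_irq(&sem->wait_lock);
	list_del(&waiter.list);
	if (list_empty(&sem->wait_list))
		rwsem_atomic_update(-RWSEM_WAITING_BIAS, sem);
	else
		__rwsem_do_wake(sem, RWSEM_WAKE_ANY);
	raw_spin_unlock_irq(&sem->wait_lock);

	return ERR_PTR(-EINTR);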

> Also, looking at it again, I think we're forgetting to re-adjust the
> BIAS for the cancelled writer.

Hmm, __rwsem_down_write_failed_common does

	/* undo write bias from down_write operation, stop active locking */
	count = rwsem_atomic_update(-RWSEM_ACTIVE_WRITE_BIAS, sem);

which should remove the bias AFAIU. Later we do

	if (waiting) {
		count = READ_ONCE(sem->count);

		/*
		 * If there were already threads queued before us and there are
		 * no active writers, the lock must be read owned; so we try to
		 * wake any read locks that were queued ahead of us.
		 */
		if (count > RWSEM_WAITING_BIAS)
			sem = __rwsem_do_wake(sem, RWSEM_WAKE_READERS);

	} else
		count = rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);

and that might set RWSEM_WAITING_BIAS, but the current holder of the lock
should handle that correctly and wake the waiting tasks IIUC. I will go
and check the code more closely. It is quite easy to get this subtle code
wrong...
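
FWIW, to convince myself about the counter transitions, this is how I
would expect sem->count to evolve on x86-64 (RWSEM_ACTIVE_READ_BIAS = 1,
RWSEM_WAITING_BIAS = 0xffffffff00000000, RWSEM_ACTIVE_WRITE_BIAS =
0xffffffff00000001) for the reader-writer-reader scenario above. I might
be misreading the fast paths, so take the exact steps with a grain of
salt:

	0x0000000000000001	reader A takes the lock (fast path)
	0xffffffff00000002	writer adds RWSEM_ACTIVE_WRITE_BIAS, fast path fails
	0x0000000000000001	write slow path undoes the active write bias
	0xffffffff00000001	writer queues as the first waiter (+RWSEM_WAITING_BIAS)
	0xffffffff00000002	reader B adds RWSEM_ACTIVE_READ_BIAS, fast path fails
	0xffffffff00000001	reader B queues and undoes its read bias
	0xffffffff00000001	writer gets SIGKILL, returns -EINTR, count untouched
	0xffffffff00000000	reader A's up_read() hits RWSEM_WAITING_BIAS -> rwsem_wake

So in this ordering the queued reader really only gets going once the
original reader drops the lock, which is exactly the "waiting for the
first reader to do up_read" behaviour above.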
--
Michal Hocko
SUSE Labs