Re: Filesystem lockup with CONFIG_PREEMPT_RT
From: Austin Schuh
Date: Fri Jun 27 2014 - 21:19:03 EST
On Fri, Jun 27, 2014 at 11:19 AM, Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
> On Fri, 27 Jun 2014 20:07:54 +0200
> Mike Galbraith <umgwanakikbuti@xxxxxxxxx> wrote:
>
>> > Why do we need the wakeup? the owner of the lock should wake it up
>> > shouldn't it?
>>
>> True, but that can take ages.
>
> Can it? If the workqueue is of some higher priority, it should boost
> the process that owns the lock. Otherwise it just waits like anything
> else does.
>
> I much rather keep the paradigm of the mainline kernel than to add a
> bunch of hacks that can cause more unforeseen side effects that may
> cause other issues.
>
> Remember, this would only be for spinlocks converted into a rtmutex,
> not for normal mutex or other sleeps. In mainline, the wake up still
> would not happen so why are we waking it up here?
>
> This seems similar to the BKL crap we had to deal with as well. If we
> were going to sleep because we were blocked on a spinlock converted
> rtmutex we could not release and retake the BKL because we would end up
> blocked on two locks. Instead, we made sure that the spinlock would not
> release or take the BKL. It kept with the paradigm of mainline and
> worked. Sucked, but it worked.
>
> -- Steve
Sounds like you are arguing that we should disable preemption (or
whatever the right mechanism is) while holding the pool lock?
Workqueues spin up more threads when the work they are executing
blocks. This is done through hooks in the scheduler. That means
that when a work item blocks on a lock, we have to acquire the pool
lock in order to see whether there is more work pending and whether
or not we need to spin up a new thread.
It would mean more context switches, but I wonder if we could kick
the workqueue logic completely out of the scheduler and into a
thread: have the scheduler increment/decrement an atomic pool
counter and wake a monitoring thread, which spawns new workers when
needed. That would get rid of the recursive pool lock problem, and
should reduce scheduler latency when a new thread does need to be
spawned.
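A rough userspace sketch of that idea, with invented names
(pool_blocked(), monitor_main()) and a condition variable standing
in for a real kernel wakeup; this is only the shape of the
proposal, not a patch:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int blocked_workers;  /* bumped from the "scheduler" side */
static pthread_mutex_t mon_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t mon_wake = PTHREAD_COND_INITIALIZER;
static int threads_spawned;
static int monitor_done;

/* Scheduler side: no pool lock, just an atomic bump and a wakeup. */
static void pool_blocked(void)
{
    atomic_fetch_add(&blocked_workers, 1);
    pthread_mutex_lock(&mon_lock);
    pthread_cond_signal(&mon_wake);
    pthread_mutex_unlock(&mon_lock);
}

/* Monitoring thread: does the heavyweight work outside the
 * scheduler, so the pool lock is never taken recursively. */
static void *monitor_main(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mon_lock);
    while (!monitor_done) {
        while (atomic_load(&blocked_workers) > 0) {
            atomic_fetch_sub(&blocked_workers, 1);
            threads_spawned++;  /* stand-in for spawning a worker */
        }
        if (monitor_done)
            break;
        pthread_cond_wait(&mon_wake, &mon_lock);
    }
    pthread_mutex_unlock(&mon_lock);
    return NULL;
}
```

The scheduler-side path is then lock-free apart from the wakeup,
at the cost of an extra context switch to the monitor.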
Austin