Re: [PATCH RFC] rtmutex: Permit rt_mutex_unlock() to be invoked with irqs disabled

From: Paul E. McKenney
Date: Sun Jul 24 2011 - 11:57:17 EST


On Sun, Jul 24, 2011 at 11:00:41AM +0200, Thomas Gleixner wrote:
> On Sat, 23 Jul 2011, Paul E. McKenney wrote:
> > On Sun, Jul 24, 2011 at 02:05:13AM +0200, Thomas Gleixner wrote:
> > > On Sun, 24 Jul 2011, Thomas Gleixner wrote:
> > > > > > Thomas, I'm inclined to merge this, any objections?
> > > > >
> > > > > FWIW, it has been passing tests here.
> > > >
> > > > If it's only the unlock path, I'm fine with that change.
> > > >
> > > > Acked-by-me
> > >
> > > Hrmpft. That's requiring all places to take the lock irq safe. Not
> > > really amused. For -RT that's a hotpath and we can really do without
> > > the irq fiddling there. That needs a bit more thought.
> >
> > Indeed... If I make only some of the lock acquisitions irq safe, lockdep
> > will yell at me. And rightfully so, as that could result in deadlock.
> >
> > So, what did you have in mind?
>
> Have no real good idea yet for this. Could you grab rt and check
> whether you can observe any impact when the patch is applied?

Hmmm, wait a minute... There might be a way to do this with zero
impact on the fastpath, given that I am allocating an rt_mutex on
the stack that is used only by RCU priority boosting, and that only
rt_mutex_init_proxy_locked(), rt_mutex_lock(), and rt_mutex_unlock()
are used.
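
Just to make that constraint explicit, the usage pattern is roughly the
following (a condensed sketch, not the actual rcutree_plugin.h code;
"t" is the reader being boosted and ->rcu_boost_mutex is the per-task
pointer back to the on-stack mutex):

	struct rt_mutex mtx;	/* On stack, lives only for this boost. */

	rt_mutex_init_proxy_locked(&mtx, t);	/* Reader t "owns" it. */
	t->rcu_boost_mutex = &mtx;
	rt_mutex_lock(&mtx);	/* PI-boosts t, blocks until t unlocks. */
	rt_mutex_unlock(&mtx);	/* Keep lockdep happy. */

	/* ...and on the reader side, in rcu_read_unlock_special(): */
	rt_mutex_unlock(t->rcu_boost_mutex);

So the boost mutex never sees rt_mutex_trylock() or the timed variants.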

So I could do the following:

o	Use lockdep_set_class_and_name() to make the ->wait_lock
	field of my rt_mutex have a separate lockdep class (see the
	sketch after this list).  I guess I should allocate a global
	variable for the lock_class_key rather than allocating it on
	the stack.  ;-)

o	Make all calls from RCU priority boosting to rt_mutex_lock()
	and rt_mutex_unlock() have irqs disabled.

o	Make __rt_mutex_slowlock() do the following when sleeping:

		raw_spin_unlock(&lock->wait_lock);

		debug_rt_mutex_print_deadlock(waiter);

		{
			int was_disabled = irqs_disabled();

			/*
			 * Re-enable irqs across the sleep if the caller
			 * had them disabled (the RCU-boost case).
			 */
			if (was_disabled)
				local_irq_enable();

			schedule_rt_mutex(lock);

			/* Restore the caller's irq-disabled state. */
			if (was_disabled)
				local_irq_disable();
		}

		raw_spin_lock(&lock->wait_lock);
		set_current_state(state);

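For the first two items, I am thinking of something along these lines
(untested sketch; the key and function names are made up for
illustration, and it ignores details such as the might_sleep() check
at the top of rt_mutex_lock()):

	/* Global key so that lockdep does not see an on-stack key. */
	static struct lock_class_key rcu_boost_wait_lock_key;

	static void rcu_boost_sketch(struct task_struct *t)
	{
		struct rt_mutex mtx;
		unsigned long flags;

		rt_mutex_init_proxy_locked(&mtx, t);
		lockdep_set_class_and_name(&mtx.wait_lock,
					   &rcu_boost_wait_lock_key,
					   "rcu_boost_mtx.wait_lock");

		/* All boost-side calls then run with irqs disabled. */
		local_irq_save(flags);
		rt_mutex_lock(&mtx);	/* Sleeps with irqs enabled, per the
					   __rt_mutex_slowlock() change above. */
		rt_mutex_unlock(&mtx);
		local_irq_restore(flags);
	}
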
Would that work reasonably?

Thanx, Paul