Re: [RFC] RT-patch update to remove the global pi_lock
From: Steven Rostedt
Date: Mon Aug 22 2005 - 20:33:58 EST
On Mon, 2005-08-22 at 17:51 -0700, Daniel Walker wrote:
> On Mon, 2005-08-22 at 20:26 -0400, Steven Rostedt wrote:
>
> >
> > How would you add to a lock with just holding a lock for a task? When
> > you are grabbing a lock, you must first grab a raw lock associated to
> > the lock being grabbed. Although, I'm starting to look into this idea,
> > and I'm going to first see if the current wait_lock could suffice. I
> > may also need to add an additional lock to the task to follow the lock
> > -> task -> lock route. The tasks' order should be the same as the locks'
> > when they are bound to (holding) a lock, since the task won't be able to
> > release it without holding the raw lock of the lock it is releasing.
> > (boy this gets confusing to talk about, since you need to talk about
> > locks and the locking method within the lock!)
>
> You might need to explain that one more time. I'm sure it needs more
> thought, but the pi_lock just protects another CPU from entering
> pi_setprio(). What we really want is to protect only the specific
> structures modified inside pi_setprio(). Or that's my understanding.
> Are you thinking of something else?
Nope, nothing else. That's pretty much it.
>
> I think you would at least need to lock the wait_lock for each lock that
> is looped over inside pi_setprio(), because you access the wait_list
> inside the loop.
That's what I was thinking. Each lock's wait_lock could be what
replaces the pi_lock.
>
> There is also a pi_waiters list that is per task. You would need to make
> a lock for that, I think, or protect it somehow.
>
Yeah, this is sort of what I was thinking of when I mentioned
following the lock->task->lock route.
Here's a visual:
X=> : blocked on
-> : owned by
P1 X=> L1 -+
       ^   |
P2 X===+   |
           |
           +->P4 X=> L3 -> P5
           |
P3 X=> L2 -+
So P4 owns locks L1 and L2, with tasks P1 and P2 blocked on L1 and P3
blocked on L2. P4 itself is blocked on lock L3, which is owned by P5.
This has the following lists:
L1->wait_list => P1 => P2
L2->wait_list => P3
L3->wait_list => P4
P4->pi_waiters => P1 => P2 => P3
P5->pi_waiters => P4
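
Just to make those lists concrete, here is roughly the shape of the
structures I have in mind. This is only a sketch with made-up names
(rt_lock, task, blocked_on, etc.), not the actual -rt structures, and
in -rt these would be raw spinlocks:

#include <linux/list.h>
#include <linux/spinlock.h>

struct task;

struct rt_lock {
	spinlock_t		wait_lock;	/* protects wait_list */
	struct list_head	wait_list;	/* e.g. L1: P1, P2 */
	struct task		*owner;		/* e.g. L1->owner == P4 */
};

struct task {
	spinlock_t		pi_lock;	/* proposed: protects pi_waiters */
	struct list_head	pi_waiters;	/* e.g. P4: P1, P2, P3 */
	struct list_head	pi_entry;	/* our node on an owner's pi_waiters */
	struct list_head	wait_entry;	/* our node on a lock's wait_list */
	struct rt_lock		*blocked_on;	/* e.g. P4->blocked_on == L3 */
	int			prio;		/* lower value == higher priority */
};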
OK, we add another lock to the task that protects its pi_waiters list.
Always grab this lock _after_ grabbing all the necessary wait_locks.
So when P5 needs to update its pi_waiters, it first grabs the
L3->wait_lock, and then the P5->pi_lock.
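
In code, that would look something like the sketch below (same made-up
structures as above; wakeups, requeueing and deadlock checks are all
left out). The point is just the order: wait_lock first, pi_lock last.

/* Sketch: "waiter" (P4) blocks on "lock" (L3), owned by P5. */
static void block_on(struct rt_lock *lock, struct task *waiter)
{
	struct task *owner;

	spin_lock(&lock->wait_lock);
	owner = lock->owner;			/* P5 in the example */
	list_add_tail(&waiter->wait_entry, &lock->wait_list);

	/* the owner's pi_lock only after the needed wait_lock is held */
	spin_lock(&owner->pi_lock);
	list_add_tail(&waiter->pi_entry, &owner->pi_waiters);
	if (waiter->prio < owner->prio)
		owner->prio = waiter->prio;	/* owner inherits the boost */
	spin_unlock(&owner->pi_lock);

	spin_unlock(&lock->wait_lock);
}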
So when blocking on a lock, you would grab the wait_locks of all the
locks in the chain you are blocking behind. When P2 blocks on L1, it
would grab (in this order) L1->wait_lock, L3->wait_lock, P4->pi_lock.
Actually, for simplicity, since the pi_lock is associated with the
task, we may be able to safely grab the task's pi_lock before a
wait_lock. Say we also have P5 blocked on L4, which is owned by P6.
When updating P4->pi_waiters (say when P2 blocks on L1), we could do
this: grab L1->wait_lock, L3->wait_lock, P4->pi_lock, L4->wait_lock.
This is safe since we should never already be holding L4->wait_lock
(or anything later in the chain) when grabbing P4->pi_lock.
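
A very rough sketch of walking the chain with that second ordering
(same made-up names; no deadlock detection, and the requeueing of the
boosted owner on the next lock's wait_list is only hinted at in the
comments):

static void boost_chain(struct rt_lock *lock, int prio)
{
	struct task *owner;
	struct rt_lock *next;

	spin_lock(&lock->wait_lock);		/* e.g. L1->wait_lock */
	for (;;) {
		owner = lock->owner;		/* e.g. P4 */
		next = owner->blocked_on;	/* e.g. L3; NULL if not blocked */

		if (next)
			spin_lock(&next->wait_lock);
		/* wait_locks of this lock and the next one are held here */
		spin_lock(&owner->pi_lock);

		if (prio >= owner->prio) {	/* owner already high enough */
			spin_unlock(&owner->pi_lock);
			if (next)
				spin_unlock(&next->wait_lock);
			spin_unlock(&lock->wait_lock);
			return;
		}
		/* boost; would also update owner->pi_waiters and requeue
		 * owner on next->wait_list here */
		owner->prio = prio;

		spin_unlock(&owner->pi_lock);
		spin_unlock(&lock->wait_lock);

		if (!next)
			return;
		lock = next;			/* walk down: L3, then L4, ... */
	}
}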
God, I think a thesis could be made out of this. Well, let me start
coding, since I'm one of those people who write code better than they
design. I'm a Spiral type of guy, not a Waterfall one ;-)
Code crap, write about it, recode it as gold!
-- Steve