Re: [PATCH] cpuset semaphore depth check optimize

From: Roman Zippel
Date: Fri Sep 16 2005 - 21:08:25 EST


Hi,

On Thu, 15 Sep 2005, Paul Jackson wrote:

> > > if (atomic_read(&cs->count) == 1 && notify_on_release(cs)) {
>
> if per chance the cs->count was one, then for an instant no other task
> was using this cpuset, and it had no children. But you can still get
> to the cpuset via its /dev/cpuset path.
>
> So by the time we get to the second half of this line where we check
> for "notify_on_release(cs)", all hell could have broken loose, and
> there might be 17 tasks using this self same cpuset, and 19 child
> cpusets of this cpuset. These interlopers could have arrived by
> accessing the cpuset using its path below /dev/cpuset.

Define "using", as long as the count is different from the cpuset is
active and the possible actions on it are limited.
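
Roughly, the window in question looks like this -- a simplified
userspace sketch with made-up types, not the real kernel code:

	#include <stdatomic.h>
	#include <stdbool.h>

	struct cpuset {
		atomic_int count;	/* tasks and children referencing us */
		bool notify_on_release;
	};

	/* Racy: nothing stops new users arriving between the two checks. */
	static bool looks_releasable(struct cpuset *cs)
	{
		if (atomic_load(&cs->count) == 1 && cs->notify_on_release) {
			/*
			 * count may have been 1 for an instant, but by now
			 * other tasks can have attached via the /dev/cpuset
			 * path, so acting on this answer without holding
			 * cpuset_sem can miss (or spuriously trigger) the
			 * release notification.
			 */
			return true;
		}
		return false;
	}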

> The flip side is just as plausible. We cannot, in any case, execute an
> unguarded atomic_dec on cpuset->count, if that cpuset has been marked
> notify_on_release, and if that cpuset is accessible by any of the
> above three possible ways, due to the risk the decrement will put the
> count to zero, and we'd miss issuing a release notifier.

No, as long as you own the cpuset (i.e. you have increased its count),
no one else can take it away from you (with proper locking). So once
you get the locking right, the rest will fall into place. :-)
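
For illustration, "getting the locking right" here means doing the
decrement and the release test as one step under the lock. A simplified
userspace sketch (pthread mutex standing in for cpuset_sem or the new
spinlock, hypothetical cpuset_put helper):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct cpuset {
		atomic_int count;
		bool notify_on_release;
		pthread_mutex_t lock;	/* stand-in for cpuset_sem / spinlock */
	};

	/*
	 * Drop one reference.  The decrement and the release test happen
	 * under the lock, so nobody can slip in between them and the
	 * "count just hit zero" decision is made exactly once.
	 */
	static void cpuset_put(struct cpuset *cs)
	{
		bool release = false;

		pthread_mutex_lock(&cs->lock);
		if (atomic_fetch_sub(&cs->count, 1) == 1 && cs->notify_on_release)
			release = true;
		pthread_mutex_unlock(&cs->lock);

		if (release)
			printf("count hit zero: would invoke the release agent here\n");
	}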

> Putting that part aside, why would you make a point of stating that
> "that clearing tsk->cpuset doesn't require the spinlock"? I don't
> take cpuset_sem when I clear tsk->cpuset, so why would you think I'd
> take this new spinlock instead?

But you take task_lock currently, which is replaced by the new spinlock.
In general _any_ access to tsk->cpuset is protected by the spinlock.
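
For illustration only, a simplified userspace sketch of that rule
(hypothetical cpuset_task_lock and task_cpuset_clear, with a pthread
mutex standing in for the kernel spinlock):

	#include <pthread.h>
	#include <stddef.h>

	struct cpuset;

	struct task {
		struct cpuset *cpuset;
	};

	/* Hypothetical replacement for task_lock in this role. */
	static pthread_mutex_t cpuset_task_lock = PTHREAD_MUTEX_INITIALIZER;

	/*
	 * Every reader and writer of tsk->cpuset takes the same lock, so
	 * clearing the pointer on exit cannot race with an attach coming
	 * in through the /dev/cpuset side.
	 */
	static struct cpuset *task_cpuset_clear(struct task *tsk)
	{
		struct cpuset *old;

		pthread_mutex_lock(&cpuset_task_lock);
		old = tsk->cpuset;
		tsk->cpuset = NULL;
		pthread_mutex_unlock(&cpuset_task_lock);

		return old;	/* caller drops the reference on old */
	}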

bye, Roman