Re: [GIT PULL] percpu fixes for 2.6.32-rc6

From: Linus Torvalds
Date: Tue Nov 10 2009 - 13:55:42 EST




On Wed, 11 Nov 2009, Tejun Heo wrote:
>
> If I'm missing something, I'm sure you'll hammer it into me.

Here's from the comments on that function:

* RETURNS:
* 0 if noop, 1 if successfully extended, -errno on failure.

and here's from one of the main callers:

	list_for_each_entry(chunk, &pcpu_slot[slot], list) {
		...
		switch (pcpu_extend_area_map(chunk, &flags)) {
		case 0:
			break;
		case 1:
			goto restart;	/* pcpu_lock dropped, restart */

where that '&pcpu_slot[slot]' list is protected by the pcpu_lock, and
nothing else. At least according to all the _other_ comments in that file.
Including the one that very much tries to _explain_ the locking at the
top, quote:

"The latter is a spinlock and protects the index data structures - chunk
slots, chunks and area maps in chunks."

So as far as I can tell, either the comments are all crap (and the whole
restart code is pointless, and in fact the whole spinlock is almost
entirely pointless to begin with, since pcpu_alloc_mutex would be the only
thing that matters), or the code is buggy.
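
To spell out why: while pcpu_extend_area_map() has pcpu_lock dropped to do
its allocation, another CPU can modify the pcpu_slot lists, so the list
walk in the caller is only valid if a '0' return really guarantees the
lock was never dropped. Paraphrasing the structure (not the actual
mm/percpu.c code, just the shape of it):

	spin_lock_irqsave(&pcpu_lock, flags);
restart:
	list_for_each_entry(chunk, &pcpu_slot[slot], list) {
		switch (pcpu_extend_area_map(chunk, &flags)) {
		case 0:
			/* only safe if the lock provably was never dropped */
			break;
		case 1:
			/* the callee dropped and re-took pcpu_lock: the list
			 * may have changed under us, so start the walk over */
			goto restart;
		default:
			/* error - and did _this_ path drop the lock or not? */
			...
		}
	}

Every arm of that switch depends on knowing what the callee did with a
lock the caller took.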

Also, quite frankly, even if the code _isn't_ buggy, it's still wrong to
release a lock that somebody else took. It's a sure-fire way to write
unmaintainable code with bugs almost guaranteed in the future. Yes, it
happens, and sometimes it's the only sane way to do it, but in this case
that really isn't true as far as I can tell.
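
The maintainable pattern is for the lock and unlock to pair up in the same
function. What the current code does instead is essentially this (a
sketch, not the literal code):

	/* caller holds pcpu_lock on entry */
	static int extend_helper(unsigned long *flags)
	{
		spin_unlock_irqrestore(&pcpu_lock, *flags);	/* dropping a lock we never took */
		/* ... allocate a bigger area map, possibly sleeping ... */
		spin_lock_irqsave(&pcpu_lock, *flags);		/* re-take before returning */
		return 1;	/* and now the caller has to re-validate everything */
	}

and every caller has to know about that hidden unlock/lock dance in order
to stay correct.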

From my (admittedly fairly quick) look, my suggested split-up really would
make the code _more_ readable (no need for that subtle "negative, zero or
positive all mean different things" logic), and it would hopefully avoid
the whole "drop the lock that somebody else took" problem, because we
could drop the lock in the caller where it was taken.
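
Concretely, the split could look something like this (hypothetical names
and a changed pcpu_extend_area_map() signature - an untested sketch, not
a patch):

	/* hypothetical helper: runs entirely under pcpu_lock, never drops it;
	 * returns 0 if the area map is big enough, else the size needed */
	new_alloc = pcpu_need_to_extend(chunk);
	if (new_alloc) {
		/* drop the lock _here_, in the function that took it */
		spin_unlock_irqrestore(&pcpu_lock, flags);
		if (pcpu_extend_area_map(chunk, new_alloc) < 0)
			goto fail;	/* no lock games on the error path */
		spin_lock_irqsave(&pcpu_lock, flags);
		goto restart;	/* the lists may have changed while unlocked */
	}

Then pcpu_extend_area_map() can use a plain 0/-errno convention, and
nobody has to guess whether pcpu_lock was dropped.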

So it all boils down to: the code is unquestionably ugly and almost
certainly broken. And if it isn't broken, then _all_ the comments are
total crap.

Your choice.

Linus