> PSET - Processor Sets for the Linux kernel
> http://isunix.it.ilstu.edu/~thockin/pset/
[...]
A few comments:
First, please don't include the SGI headers/documentation; they're
not free and are copyrighted by SGI.
And from your patch:
+++ linux-2.1.115/arch/i386/kernel/entry.S Sat Aug 15 22:12:30 1998
@@ -562,9 +562,10 @@
.long SYMBOL_NAME(sys_capset) /* 185 */
.long SYMBOL_NAME(sys_sigaltstack)
.long SYMBOL_NAME(sys_sendfile)
+ .long SYMBOL_NAME(sys_sysmp)
.long SYMBOL_NAME(sys_ni_syscall) /* streams1 */
.long SYMBOL_NAME(sys_ni_syscall) /* streams2 */
This is plain wrong: it shifts the fixed-allocation LiS system call
slots down and replaces one of them with sysmp().
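A sketch of what the hunk should look like instead: append sys_sysmp after
the reserved streams slots rather than displacing them. The /* 190 */ slot
number is my arithmetic from the quoted /* 185 */ comment, not from the
patch, and NR_syscalls would need bumping to match:

```diff
 	.long SYMBOL_NAME(sys_capset)		/* 185 */
 	.long SYMBOL_NAME(sys_sigaltstack)
 	.long SYMBOL_NAME(sys_sendfile)
 	.long SYMBOL_NAME(sys_ni_syscall)	/* streams1 */
 	.long SYMBOL_NAME(sys_ni_syscall)	/* streams2 */
+	.long SYMBOL_NAME(sys_sysmp)		/* 190 */
```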
In pset_del_pse():
+ /* use a write lock, because we don't have upgradeable readers */
+ write_lock_irqsave(&pset_lock, flags);
+
+ /* can't delete pset which has active processes */
+ if (pset->refcount)
+ return -EBUSY;
+
This can exit with irqs still disabled and the spinlock still held.
In pset_add_cpumask:
+ /* does the processor set exist? if not, return error code */
+ if ((pset = pset_find_pset(pset_id)) == NULL)
+ return -EINVAL;
+
+ write_lock_irqsave(&pset_lock, flags);
+
+ pset->cpumask |= cpumask;
+
+ write_unlock_irqrestore(&pset_lock, flags);
+
Race here: pset may go away between pset_find_pset() and the lock
acquisition.
The same race exists in pset_del_cpumask(). You should probably require
all callers of pset_find_pset() to hold the pset_lock already (Linux
spinlocks are not recursive). pset_assign_pid_to_pset() has this race
too, but be careful with the fix there, because taking pset_lock while
holding tasklist_lock is deadlock country.
Also, the tasklist_lock locking doesn't look safe: you need to make sure
the lock is held the whole time between searching for the task and
actually manipulating it.
Probably other races too, I didn't look further.
-Andi