On Tue, 2002-09-17 at 14:57, Ingo Molnar wrote:
> i'd do (a). current->state is to be used anyway, and the default-untaken
> first branch should be cheap. Plus by moving things down the splitup of
> the function would create more code duplication than necessary i think.
Note that by moving it down, the only gain over keeping it at the top is
not having to check for the BKL...
Anyhow, I would appreciate it if you could give this a try (with kernel
preemption enabled)... any comments are welcome.
(Note you need a 2.5.35-bk release to get the dump_stack(). Otherwise
use show_trace(0).)
Robert Love
diff -urN linux-2.5.35/kernel/sched.c linux/kernel/sched.c
--- linux-2.5.35/kernel/sched.c Sun Sep 15 22:18:24 2002
+++ linux/kernel/sched.c Tue Sep 17 15:24:08 2002
@@ -940,9 +940,6 @@
struct list_head *queue;
int idx;
- if (unlikely(in_atomic()))
- BUG();
-
#if CONFIG_DEBUG_HIGHMEM
check_highmem_ptes();
#endif
@@ -950,8 +947,20 @@
preempt_disable();
prev = current;
rq = this_rq();
-
release_kernel_lock(prev);
+
+ /*
+ * Test if we are atomic. Since do_exit() needs to call into
+ * schedule() atomically, we ignore that for now. Otherwise,
+ * whine if we are scheduling when we should not be.
+ */
+ if (likely(current->state != TASK_ZOMBIE)) {
+ if (unlikely((preempt_count() & ~PREEMPT_ACTIVE) != 1)) {
+ printk(KERN_ERR "scheduling while atomic!\n");
+ dump_stack();
+ }
+ }
+
prev->sleep_timestamp = jiffies;
spin_lock_irq(&rq->lock);
@@ -959,7 +968,7 @@
* if entering off of a kernel preemption go straight
* to picking the next task.
*/
- if (unlikely(preempt_count() & PREEMPT_ACTIVE))
+ if (unlikely(preempt_count() == PREEMPT_ACTIVE))
goto pick_next_task;
switch (prev->state) {
-
This archive was generated by hypermail 2b29 : Mon Sep 23 2002 - 22:00:20 EST