>Move the != SCHED_OTHER test before the goodness test,
>it will avoid clobbering p->policy, fouling up the
>actual reschedule.
Yes, this morning hasn't been very productive ;). Excuse me.
Using goodness() was also buggy because I could race with the end of
sys_sched_yield() and remove the SCHED_YIELD state before it could be
processed by the real schedule().
I've now also seen the way Linus fixed the jiffy wrap bug in cpu_idle().
But it's not precise, because that way, if schedule() returns after one
hour, the CPU will go into hard_idle even if we will reuse the CPU a bit
afterwards. I fixed the APM case following Linus's style. And there's no
need to set SCHED_YIELD inside the loop, since goodness() is never run
on the idle task.
Index: linux/arch/i386/kernel/process.c
diff -u linux/arch/i386/kernel/process.c:1.1.1.3 linux/arch/i386/kernel/process.c:1.1.1.1.2.10
--- linux/arch/i386/kernel/process.c:1.1.1.3 Thu Dec 3 12:51:05 1998
+++ linux/arch/i386/kernel/process.c Fri Dec 4 01:17:47 1998
@@ -108,6 +108,7 @@
unsigned long start_idle = jiffies;
/* endless idle loop with no priority at all */
+ current->policy = SCHED_YIELD;
for (;;) {
if (jiffies - start_idle > HARD_IDLE_TIMEOUT)
hard_idle();
@@ -115,10 +116,12 @@
if (boot_cpu_data.hlt_works_ok && !hlt_counter && !current->need_resched)
__asm__("hlt");
}
- if (current->need_resched)
+ if (current->need_resched)
+ {
+ schedule();
start_idle = jiffies;
- current->policy = SCHED_YIELD;
- schedule();
+ } else
+ schedule();
check_pgt_cache();
}
}
@@ -131,12 +134,11 @@
int cpu_idle(void *unused)
{
-
/* endless idle loop with no priority at all */
+ current->policy = SCHED_YIELD;
while(1) {
if (current_cpu_data.hlt_works_ok && !hlt_counter && !current->need_resched)
__asm__("hlt");
- current->policy = SCHED_YIELD;
schedule();
check_pgt_cache();
}
This patch instead tries to achieve better interactive performance from
reschedule_idle() when a nice+19 process is running in the background. It
should reduce the number of schedule() calls that don't end in a real task
switch...
Index: linux/kernel/sched.c
diff -u linux/kernel/sched.c:1.1.1.2 linux/kernel/sched.c:1.1.1.1.2.24
--- linux/kernel/sched.c:1.1.1.2 Fri Nov 27 11:19:09 1998
+++ linux/kernel/sched.c Fri Dec 4 00:24:26 1998
@@ -93,7 +96,7 @@
static inline void reschedule_idle(struct task_struct * p)
{
-
+ struct task_struct * this = current;
/*
* For SMP, we try to see if the CPU the task used
* to run on is idle..
@@ -130,8 +133,9 @@
}
#endif
#endif
- if (p->policy != SCHED_OTHER || p->counter > current->counter + 3)
- current->need_resched = 1;
+ if (p->policy != SCHED_OTHER ||
+ p->counter + p->priority > this->counter + this->priority + 3)
+ current->need_resched = 1;
}
/*
Comments?
Andrea Arcangeli