Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q7

From: Ingo Molnar
Date: Thu Sep 02 2004 - 00:34:03 EST



* Mark_H_Johnson@xxxxxxxxxxxx <Mark_H_Johnson@xxxxxxxxxxxx> wrote:

> One place where we may need to consider more mcount() calls is in the
> scheduler. I got another 500+ msec trace going from dequeue_task to
> __switch_to.

(you mean 500+ usec, correct?)

there's no way the scheduler can have 500 usecs of overhead going from
dequeue_task() to __switch_to(): we have all interrupts disabled and
take zero locks! This is almost certainly some hardware effect (I
described some possibilities and tests a couple of mails earlier).

In any case, please enable nmi_watchdog=1 so that we can see (in -Q7)
what happens on the other CPUs during such long delays.
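For reference, the usual way to do this (a generic sketch, not anything
specific to -Q7): pass the parameter on the kernel command line at boot and
then check that the per-CPU NMI counters are ticking:

```shell
# Append to the kernel command line in your boot loader, e.g.:
#   kernel /vmlinuz-2.6.9-rc1 ro root=/dev/hda1 nmi_watchdog=1
# After rebooting, verify that the NMI counters increase on each CPU:
grep NMI /proc/interrupts
```

If the NMI column stays at zero, the watchdog is not active and the traces
will not show the other CPUs.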

> I also looked briefly at find_first_bit since it appears in a number
> of traces. Just curious, but the coding for the i386 version is MUCH
> different in style from several other architectures (e.g., PPC64,
> SPARC). Is there some reason why it is recursive on the x86 and a loop
> in the others?

What do you mean by recursive? It uses the SCAS (scan string) x86
instruction.

Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/