Re: [patch] voluntary-preempt-2.6.9-rc1-bk4-Q5

From: Ingo Molnar
Date: Tue Aug 31 2004 - 19:56:18 EST



* Lee Revell <rlrevell@xxxxxxxxxxx> wrote:

> 00000001 0.009ms (+0.000ms): generic_set_mtrr (set_mtrr)
> 00000001 0.009ms (+0.000ms): prepare_set (generic_set_mtrr)

this is the call to prepare_set() [implicit mcount()].

> 00000002 0.010ms (+0.000ms): prepare_set (generic_set_mtrr)

explicit mcount() #1,

> 00000002 0.010ms (+0.000ms): prepare_set (generic_set_mtrr)

#2,

> 00000002 0.375ms (+0.364ms): prepare_set (generic_set_mtrr)

#3. So the latency is this codepath:

+ mcount();
wbinvd();
+ mcount();

bingo ...

to continue:

> 00000002 0.375ms (+0.000ms): prepare_set (generic_set_mtrr)

mcount #4

> 00000002 0.526ms (+0.150ms): prepare_set (generic_set_mtrr)

#5. This means the following code had the latency:

write_cr0(cr0);
+ mcount();
wbinvd();
+ mcount();

the other wbinvd(). Since we didn't execute all that much in between, it
didn't take as much time as the first wbinvd() [the cache had just been
write-flushed, so less flushing had to be done the second time around].

plus:

00000002 0.548ms (+0.006ms): generic_set_mtrr (set_mtrr)
00000002 0.552ms (+0.004ms): post_set (generic_set_mtrr)
00000001 0.708ms (+0.155ms): set_mtrr (mtrr_add_page)
00000001 0.713ms (+0.005ms): sub_preempt_count (sys_ioctl)

proves that it's post_set() that took 155 usecs here, and it too does a
wbinvd().

so it's the invalidation of the cache that takes so long.

i believe that the invalidations are excessive. It is quite likely that
no invalidation has to be done at all. Does your box still start up X
fine if you comment out all those wbinvd() calls?
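i.e. something along these lines (illustrative only -- the file path and
line context are from memory of the 2.6 mtrr code, not a tested patch):

```diff
--- a/arch/i386/kernel/cpu/mtrr/generic.c
+++ b/arch/i386/kernel/cpu/mtrr/generic.c
@@ static void prepare_set(void)
-	wbinvd();
+	/* wbinvd(); */
 	cr0 = read_cr0() | 0x40000000;
 	write_cr0(cr0);
-	wbinvd();
+	/* wbinvd(); */
@@ static void post_set(void)
-	wbinvd();
+	/* wbinvd(); */
```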

Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/