Re: 2.6.19-rc1: x86_64 slowdown in lmbench's fork

From: Tim Chen
Date: Fri Nov 03 2006 - 16:14:57 EST


On Fri, 2006-11-03 at 18:47 +0100, Andi Kleen wrote:
> > So unless there is some other array that is sized by NR_IRQs
> > in the context switch path which could account for this in
> > other ways. It looks like you just got unlucky.
>
>
> TLB/cache profiling data might be useful?
> My bet would be more on cache effects.
>

The TLB miss, cache miss, and page walk profiles did not change when I
measured them.

I suspect that the overhead of pinning the parent and child
processes to specific CPUs has something to do with the
change in the observed time. Lmbench includes this overhead in
the fork time it reports, and I had chosen the lmbench option
that places the parent and child processes on specific CPUs.
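
Roughly, the measured path looks like this (just a sketch of the
pattern, not lmbench's actual code; lmbench's own binding logic is
more involved, and the CPU numbers here are arbitrary):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

/* Pin a task to one CPU; pid 0 means the calling task. */
static void pin_to_cpu(pid_t pid, int cpu)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	if (sched_setaffinity(pid, sizeof(mask), &mask) < 0)
		perror("sched_setaffinity");
}

int main(void)
{
	struct timeval start, end;
	int i, iters = 10000;
	double usecs;

	pin_to_cpu(0, 0);			/* parent on CPU 0 */

	gettimeofday(&start, NULL);
	for (i = 0; i < iters; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			pin_to_cpu(0, 1);	/* child pins itself to CPU 1 */
			_exit(0);
		}
		waitpid(pid, NULL, 0);
	}
	gettimeofday(&end, NULL);

	/* The per-iteration sched_setaffinity() call sits inside the
	 * timed loop, so its cost is folded into the reported fork time. */
	usecs = (end.tv_sec - start.tv_sec) * 1e6 +
		(end.tv_usec - start.tv_usec);
	printf("fork+pin: %.1f usec/iter\n", usecs / iters);
	return 0;
}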

When I skip this by picking the lmbench option that lets the scheduler
choose the placement of the parent and child processes, the fork time
stays unchanged between kernels. I wonder if the increase in time is
in sched_setaffinity.
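
A quick way to check that would be to time sched_setaffinity() on its
own, something like the sketch below (again just a sketch, and it
assumes at least two online CPUs):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t mask;
	struct timeval start, end;
	int i, iters = 100000;
	double usecs;

	gettimeofday(&start, NULL);
	for (i = 0; i < iters; i++) {
		CPU_ZERO(&mask);
		/* Alternate between CPU 0 and CPU 1 so the task
		 * actually migrates on each call. */
		CPU_SET(i & 1, &mask);
		sched_setaffinity(0, sizeof(mask), &mask);
	}
	gettimeofday(&end, NULL);

	usecs = (end.tv_sec - start.tv_sec) * 1e6 +
		(end.tv_usec - start.tv_usec);
	printf("sched_setaffinity: %.2f usec/call\n", usecs / iters);
	return 0;
}

If that number changes between 2.6.18 and 2.6.19-rc1, it would point
at sched_setaffinity rather than fork itself.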

Tim