Re: [RFC PATCH 0/4] Gang scheduling in CFS
From: Nikunj A Dadhania
Date: Mon Jan 02 2012 - 05:26:59 EST
On Mon, 02 Jan 2012 11:39:00 +0200, Avi Kivity <avi@xxxxxxxxxx> wrote:
> On 01/02/2012 06:20 AM, Nikunj A Dadhania wrote:
[...]
> > > non-PLE - Test Setup:
> > >
> > > dbench 8vm (degraded -30%)
> > > | dbench| 2.01 | 1.38 | -30 |
> >
> >
> > Baseline:
> > 57.75% init [kernel.kallsyms] [k] native_safe_halt
> > 40.88% swapper [kernel.kallsyms] [k] native_safe_halt
> >
> > Gang V2:
> > 56.25% init [kernel.kallsyms] [k] native_safe_halt
> > 42.84% swapper [kernel.kallsyms] [k] native_safe_halt
> >
> > Similar comparison here.
> >
>
> Weird, looks like a mismeasurement...
>
I am getting similar numbers across different runs/reboots with dbench.
> what happens if you add a bash
> busy loop?
>
Perf output for bash busy loops inside the guest:
9.93% sh libc-2.12.so [.] _int_free
8.37% sh libc-2.12.so [.] _int_malloc
6.14% sh libc-2.12.so [.] __GI___libc_malloc
6.03% sh bash [.] 0x480e6
loop.sh
----------------------------------
# Start 8 CPU-bound busy loops in the background, recording their pids.
for i in `seq 1 8`
do
    while :; do :; done &
    pid[$i]=$!
done

# Let them run for a minute, then kill them.
sleep 60
for i in `seq 1 8`
do
    kill -9 ${pid[$i]}
done
----------------------------------
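As an aside, the same launcher could be parameterized instead of hard-coding the job count and duration; this is only a sketch (the NJOBS/DURATION arguments are my addition, not part of the test above, and the defaults here are scaled down for a quick self-check -- the original run corresponds to "8 60"):

```shell
#!/bin/bash
# Hypothetical parameterized variant of loop.sh; NJOBS and DURATION are
# assumptions, not part of the original test. Defaults are deliberately
# small so the script can be exercised quickly.
NJOBS=${1:-2}
DURATION=${2:-1}

# Spawn NJOBS CPU-bound busy loops in the background.
for _ in $(seq 1 "$NJOBS"); do
    while :; do :; done &
done

sleep "$DURATION"

# jobs -p lists the PIDs of this shell's background jobs,
# so no explicit pid array is needed.
kill -9 $(jobs -p) 2>/dev/null
echo "stopped $NJOBS busy loops after ${DURATION}s"
```

Invoked as `./loop.sh 8 60`, this reproduces the load pattern used for the perf run above.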
I used the following command to capture the perf events inside the guest:
ssh root@xxxxxxxxxxxxxx 'perf record -a -o loop-perf.out -- /root/loop.sh'
Regards,
Nikunj
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/