Re: [TEST RESULT] massive_intr.c -- cfs/vanilla/sd-0.40

From: Satoru Takeuchi
Date: Mon Apr 16 2007 - 06:18:54 EST


At Mon, 16 Apr 2007 10:47:25 +0200,
Ingo Molnar wrote:
>
>
> * Satoru Takeuchi <takeuchi_satoru@xxxxxxxxxxxxxx> wrote:
>
> > > btw., other schedulers might work better with some more test-time:
> > > i'd suggest to use 60 seconds (./massive_intr 10 60) [or maybe more,
> > > using more threads] to see long-term fairness effects.
> >
> > I tested CFS with massive_intr. I did long term, many CPUs, and many
> > processes cases.
> >
> > Test environment
> > ================
> >
> > - kernel: 2.6.21-rc6-CFS
> > - run time: 300 secs
> > - # of CPU: 1 or 4
> > - # of processes: 200 or 800
> >
> > Result
> > ======
> >
> > +---------+-----------+-------+------+------+--------+
> > | # of | # of | avg | max | min | stdev |
> > | CPUs | processes | (*1) | (*2) | (*3) | (*4) |
> > +---------+-----------+-------+------+------+--------+
> > | 1(i386) | | 117.9 | 123 | 115 | 1.2 |
> > +---------| 200 +-------+------+------+--------+
> > | | | 750.2 | 767 | 735 | 10.6 |
> > | 4(ia64) +-----------+-------+------+------+--------+
> > | | 800(*5) | 187.3 | 189 | 186 | 0.8 |
> > +---------+-----------+-------+------+------+--------+
> >
> > *1) average number of loops among all processes
> > *2) maximum number of loops among all processes
> > *3) minimum number of loops among all processes
> > *4) standard deviation
> > *5) The # of processes per CPU is equal to that of the first test case.
> >
> > Pretty good! CFS seems to be fair in any situation.
>
> thanks for testing this! Indeed the min-max values and standard
> deviation look all pretty healthy. (They in fact seem to be better than
> the other patch of mine against upstream that you tested, correct?)

Do you mean the following?

patch:
http://lkml.org/lkml/2007/3/27/225

test result:
http://lkml.org/lkml/2007/3/31/41

If so, yes. That patch showed some dispersion in its results.
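
For reference, the avg/max/min/stdev columns (footnotes *1-*4 above) can
be derived from the per-process loop counts that massive_intr reports.
A minimal sketch with made-up counts; how massive_intr.c itself prints
and aggregates them may differ:

#include <math.h>
#include <stdio.h>

static void summarize(const long *counts, int n)
{
	long max = counts[0], min = counts[0];
	double sum = 0.0, sq = 0.0, avg;
	int i;

	for (i = 0; i < n; i++) {
		if (counts[i] > max)
			max = counts[i];
		if (counts[i] < min)
			min = counts[i];
		sum += counts[i];
	}
	avg = sum / n;
	for (i = 0; i < n; i++)
		sq += (counts[i] - avg) * (counts[i] - avg);

	printf("avg %.1f  max %ld  min %ld  stdev %.1f\n",
	       avg, max, min, sqrt(sq / n));
}

int main(void)
{
	/* made-up per-process loop counts, e.g. from 4 workers */
	long counts[] = { 187, 189, 186, 188 };

	summarize(counts, 4);
	return 0;
}

(Build with -lm.) A small stdev relative to avg, together with a narrow
min-max range, is what "fair" means in the tables here.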

>
> [ And there's also another nice little detail in your feedback: CFS
> actually builds, boots and works fine on ia64 too ;-) ]

You're welcome. I happened to have access to a larger machine as well,
so I just ran the test there too.

Test environment
================

- kernel: 2.6.21-rc6-CFS
- run time: 300 secs
- # of CPU: 12
- # of processes: 200 or 2400

Result
======

+----------+-----------+-------+------+------+--------+
| # of | # of | avg | max | min | stdev |
| CPUs | processes | | | | |
+----------+-----------+-------+------+------+--------+
| | 200 | 2250 | 2348 | 2204 | 64 |
| 12(ia64) +-----------+-------+------+------+--------+
| | 2400 | 187.5 | 197 | 176 | 4.3 |
+----------+-----------+-------+------+------+--------+

This looks good too.
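
For context, each massive_intr worker simulates an interactive task: it
burns CPU for a short burst, sleeps briefly, and counts how many such
cycles it completes before the run time expires. A minimal
single-process sketch; the 8ms/1ms figures and the fork/collection
plumbing of the real massive_intr.c are assumptions here:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define WORK_MS		8	/* assumed CPU burst per cycle */
#define SLEEP_MS	1	/* assumed sleep per cycle */
#define RUN_SECS	10	/* shortened run time for the sketch */

static long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

int main(void)
{
	long end = now_ms() + RUN_SECS * 1000L;
	long loops = 0;

	while (now_ms() < end) {
		long burst = now_ms() + WORK_MS;

		while (now_ms() < burst)
			;			/* busy-loop: eat CPU */
		usleep(SLEEP_MS * 1000);	/* then yield, like an
						   interactive task */
		loops++;
	}
	printf("pid %d: %ld loops\n", (int)getpid(), loops);
	return 0;
}

Run enough copies in parallel (the message quoted above invokes it as
./massive_intr <nproc> <runtime>) and compare the loop counts: under a
fair scheduler they should come out nearly equal, which is what the
tables above quantify.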


BTW, I have a question. This problem is fixed in both CFS and SD. However,
they are mostly written from scratch and aren't suitable for a stable
series such as 2.6.20.X. Could your other patch be a compromise for the
stable series? Although that patch isn't perfect, I think it's preferable
to leaving the problem unfixed.

Satoru