On Friday 27 April 2007 08:00, Bill Davidsen wrote:

It was more intended to give immediate feedback on gross behavior. On some old schedulers (2.4.x) it visibly ran one xterm after the other, while on 2.6.2[01] that behavior is gone and all the schedulers give equal time as seen by the eye. Looking at the behavior with line and jump scroll, under load or not, X nice or nasty, allows a quick check on where the bad cases are, if any exist.
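(For anyone wanting an X-free analogue of that eyeball check, the sketch below is my own illustration, not the actual test script from this thread: it forks a few identical CPU hogs, lets them run for a fixed time, and compares how far each one got. Under a fair scheduler the iteration counts should come out roughly equal.)

/* My own rough, X-free analogue of the eyeball test above (not the
 * actual script from the thread): fork a few identical CPU hogs, let
 * them run for a fixed time, and compare how far each one got.
 *
 * Build: cc -O2 -o hogs hogs.c
 */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NHOGS   4
#define SECONDS 10

static volatile sig_atomic_t stop;

static void on_alarm(int sig)
{
    (void)sig;
    stop = 1;
}

int main(void)
{
    for (int i = 0; i < NHOGS; i++) {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: burn CPU until the alarm */
            signal(SIGALRM, on_alarm);
            alarm(SECONDS);
            unsigned long iters = 0;
            while (!stop)
                iters++;
            printf("hog %d (pid %d): %lu iterations\n",
                   i, (int)getpid(), iters);
            _exit(0);
        } else if (pid < 0) {
            perror("fork");
            return 1;
        }
    }
    while (wait(NULL) > 0)              /* parent: wait for all hogs */
        ;
    return 0;
}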
Ingo Molnar wrote:
* Ed Tomlinson <edt@xxxxxx> wrote:

Several points on this...

    SD 0.46          1-2 FPS
    cfs v5 nice -10  60-65 FPS
    cfs v5 nice -19  219-233 FPS
    cfs v5 nice 0    1000-1996 FPS

the problem is, the glxgears portion of this test is an _inverse_
testcase.
The reason? glxgears on true 3D hardware will _not_ use X, it will
directly use the 3D driver of the kernel. So by renicing X to -19 you
give the xterms more chance to show stuff - the performance of the
glxgears will 'degrade' - but that is what you asked for: glxgears is
'just another CPU hog' that competes with X, it's not a "true" X client.
if you are after glxgears performance in this test then you'll get the
best performance out of this by renicing X to +19 or even SCHED_BATCH.
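(For reference, "renicing X to +19 or even SCHED_BATCH" boils down to something like the sketch below, assuming you know the X server's pid. This is my own illustration, not a tool from the thread; renice(8) does the first half from the shell.)

/* Sketch (mine, not from the thread) of what "renice X to +19 or even
 * SCHED_BATCH" means at the syscall level, given the X server's pid.
 * Needs root (or ownership of the target) to succeed.
 *
 * Build: cc -o xdemote xdemote.c
 */
#define _GNU_SOURCE             /* for SCHED_BATCH in glibc's <sched.h> */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid-of-X-server>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    /* nice +19: weakest priority among normal (SCHED_OTHER) tasks */
    if (setpriority(PRIO_PROCESS, pid, 19) < 0)
        perror("setpriority");

    /* ... or go further and mark it as a batch, non-interactive task */
    struct sched_param sp = { .sched_priority = 0 };
    if (sched_setscheduler(pid, SCHED_BATCH, &sp) < 0)
        perror("sched_setscheduler(SCHED_BATCH)");

    return 0;
}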
First, I don't think this is accelerated in the way you mean; the
machine is a test server with motherboard video using the 945G video
driver. Given the limitations of the support in that setup, I don't
think it qualifies as "true 3D hardware," although I guess I could try
using the vesafb version as a test.
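(A quick way to settle whether glxgears is really getting a direct-rendering context on the 945G, rather than going through X, is to ask GLX itself; "glxinfo | grep direct" reports the same thing. The small program below is just an illustrative sketch of mine, not something posted in the thread.)

/* Illustrative sketch (mine): create a GLX context the same basic way
 * glxgears does and ask whether it is a direct-rendering (DRI) context,
 * i.e. whether drawing bypasses the X server.
 *
 * Build: cc -o dricheck dricheck.c -lGL -lX11
 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        fprintf(stderr, "no suitable GLX visual\n");
        return 1;
    }

    /* last argument True = ask for a direct context if possible */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    if (!ctx) {
        fprintf(stderr, "cannot create GLX context\n");
        return 1;
    }
    printf("direct rendering: %s\n",
           glXIsDirect(dpy, ctx) ? "yes (bypasses X)" : "no (goes through X)");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}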
The second thing I note is that on FC6 this scheduler seems to confuse
'top' to some degree: glxgears is shown as taking 51% of the CPU (one
core), while the state breakdown shows about 73% in idle, waitio, and
int. Image attached.
top by itself certainly cannot be trusted to give a true representation of CPU usage, I'm afraid. It's not as convoluted as, say, trying to track the memory usage of an application, but top's resolution being tied to HZ accounting makes it unreliable in that regard.
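(To make the HZ point concrete: per-task CPU time is accounted in clock ticks and exported through /proc/<pid>/stat, so anything top reports is quantized to that tick granularity. The sketch below is my own rough illustration of the same arithmetic, not top's actual code.)

/* Rough sketch (mine, not top's code) of where the numbers come from:
 * per-task CPU time lives in fields 14 (utime) and 15 (stime) of
 * /proc/<pid>/stat, counted in clock ticks of 1/_SC_CLK_TCK second.
 *
 * Build: cc -o pcpu pcpu.c ; run: ./pcpu <pid>
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static long task_ticks(pid_t pid)
{
    char path[64];
    unsigned long utime, stime;

    snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    /* skip pid, comm, state and ten more fields, then read utime/stime */
    int n = fscanf(f, "%*d %*s %*s %*d %*d %*d %*d %*d "
                      "%*u %*u %*u %*u %*u %lu %lu", &utime, &stime);
    fclose(f);
    return (n == 2) ? (long)(utime + stime) : -1;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    long hz = sysconf(_SC_CLK_TCK);     /* ticks per second, typically 100 */

    long t0 = task_ticks(pid);
    sleep(5);
    long t1 = task_ticks(pid);
    if (t0 < 0 || t1 < 0) {
        fprintf(stderr, "cannot read /proc/%d/stat\n", (int)pid);
        return 1;
    }

    printf("~%.1f%% CPU over 5s, counted in %ld Hz ticks\n",
           100.0 * (double)(t1 - t0) / (double)(hz * 5), hz);
    return 0;
}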
After I upgrade the kernel and cfs to the absolute latest I'll repeat
this, test with vesafb as well, and do my planned run under heavy load.
I have a problem with your test case, Bill. Its behaviour would depend on how GPU-bound vs CPU-bound, and accelerated vs unaccelerated, your graphics card is. I get completely different results from the other testers because of my different hardware configuration, and I don't think my results are valuable. My problem with this testcase is: what would you define as "perfect" behaviour for it? It seems far too arbitrary.