Re: Auto-Adaptive scheduler - Final chapter ( the numbers ) ...

From: Davide Libenzi (davidel@maticad.it)
Date: Mon Jan 31 2000 - 03:58:16 EST


From: Richard Gooch <rgooch@ras.ucalgary.ca>
To: Davide Libenzi <davidel@maticad.it>
Cc: <linux-kernel@vger.rutgers.edu>
Sent: Sunday, January 30, 2000 10:40 PM

Hi Richard,

Thanks for your answer.
With this message I don't want to restart the flames the patch has generated, but only to give a response to Richard.
As I've said before, I only proposed the patch for discussion (and what a discussion!).

> Davide Libenzi writes:
> > Hi Richard,
> >
> > > Yeah, I'm reading it (barely, I've been deleting most of my email
> > > because I've been on holidays). The PMC (Performance Monitoring
> > > Counter) patch I wrote is available at:
> > > http://www.atnf.csiro.au/~rgooch/linux/
> >
> > can you give me a hint on the best way to use your PMC code?
>
> The page with the kernel patches also has a link to the user-space
> API and command-line utilities. That should give you all you need.

OK, thanks!

> > How much noise does it induce onto the measurement?
>
> What do you mean?

I've looked at the code; it's clear now.
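
For readers who don't have Richard's PMC patch at hand: what gets measured here are cycle counts over very short code paths. A rough user-space sketch of the same idea (plain x86 rdtsc, not the PMC patch's API, which I'm not reproducing here) shows where the noise comes from: interrupts, cache misses and the cost of the read itself all land inside the measured window.

/* Sketch only: generic rdtsc timing, NOT the PMC patch's API. */
#include <stdio.h>

static inline unsigned long long rdtsc(void)
{
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
        unsigned long long start, end, best = ~0ULL;
        int i;

        /* Sample many times and keep the minimum to filter the noise. */
        for (i = 0; i < 1000; i++) {
                start = rdtsc();
                /* ... code under test would go here ... */
                end = rdtsc();
                if (end - start < best)
                        best = end - start;
        }
        printf("back-to-back rdtsc: %llu cycles\n", best);
        return 0;
}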

> > What is your impression of the patch (if you've seen it)?
>
> I've not seen it. I've had to delete most of my email this month as
> I've been on holidays. I either had to delete or go mad. I chose
> sanity. For the same reason, this reply is late. Sorry.
>
> > It also has the "side effect" of keeping RT tasks on the higher
> > cluster for better RT latency.
>
> From what I've seen in the discussion, your patch enables your new
> scheduler after many processes have been on the run queue for a while,
> correct? That's not the same as the case that interested me when
> I was doing my RT run queue work.
>
> The case that interests me is when you have a number of SCHED_OTHER
> processes on the run queue (usually due to some event waking up
> several processes), and then a new event wakes up a RT process (either
> an interrupt or another RT process).
>
> With a large run queue, there is a significant slowdown in waking the
> RT process. What I did was to create a separate RT run queue so that
> RT context switch times were (more) isolated from SCHED_OTHER.
> The (real) application where this mattered was an embedded system that
> controlled hardware, plus had a serial interface to display status. The
> processes that controlled hardware (via device drivers) were RT
> whereas the display processes were SCHED_OTHER.
>
> So it doesn't look like your patch does anything for the case that I
> care about, because there won't be a large number of processes on the
> run queue for long times. That doesn't mean your patch is bad, it just
> doesn't help *me*.

The boot point and the delay can be made software-tunable, but for a better RT response it's better to keep the cluster scheduler running.
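
To make the "cluster" idea a bit more concrete, here is only a rough sketch (hypothetical names, not the patch code itself) of why a separate list for RT tasks keeps RT wakeup cost independent of the SCHED_OTHER run queue length:

#include <stdio.h>

/* Sketch only: runnable RT tasks live on their own short list, so
 * picking one never requires scanning the (possibly long)
 * SCHED_OTHER run queue. */
struct task {
        struct task *next;
        const char *name;
        int goodness;           /* weight used for SCHED_OTHER selection */
};

static struct task *rt_runqueue;      /* short: only runnable RT tasks */
static struct task *other_runqueue;   /* may grow large under load */

static struct task *pick_next_task(void)
{
        struct task *p, *best = NULL;

        /* RT "cluster" first: cost independent of other_runqueue length. */
        if (rt_runqueue)
                return rt_runqueue;

        /* Otherwise the usual goodness-style scan of SCHED_OTHER tasks. */
        for (p = other_runqueue; p; p = p->next)
                if (!best || p->goodness > best->goodness)
                        best = p;
        return best;
}

int main(void)
{
        struct task display = { NULL, "display", 5 };
        struct task hwctl   = { NULL, "hw-ctl",  0 };

        other_runqueue = &display;
        rt_runqueue = &hwctl;
        printf("next task: %s\n", pick_next_task()->name);  /* hw-ctl */
        return 0;
}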

> However, without having looked at your patch (but having glanced at
> the general response on the list), I'm skeptical about your patch, for
> these reasons:
>
> - any change to the scheduler to optimise a particular usage pattern
> is likely to cost us in performance for the general case

The cost is < 0.01% for RQ = 2 (it uses the same code).

> - you've said your patch only adds cost when the system usage pattern
> is the one you want to improve. The onus is on you to clearly
> demonstrate this (even if you're right, you need to shout it over
> and over to be heard over the screams of panic and fear)
>
> - even if your patch costs the general case a mere 0.01 percent, you
> have to demonstrate that this (negligible) performance loss is justified

A cost < 0.01%, IMVHO, can be tolerable if we get a performance boost with longer RQs.
But if we state that longer RQs make no sense for us, or rather that longer RQs imply problems other than the scheduler, I can only agree 100% here.

> - and thus you need to demonstrate that your patch fixes a real
> problem, and doesn't just provide a crutch for silly applications.

As many of my messages state, I've _only_ reported benchmarks or benchmark-like apps, and I've expressed my doubt that we will see such RQs in "nature".

> And it's the final point where I have my concerns. While I'm willing
> to sit back and acknowledge that you can keep costs to a level which
> is sufficiently low that I won't care, I don't feel comfortable with
> providing crutches for silly applications.

I'm one of the biggest fans of code purism and perfection, so I agree with you here.
My personal experience looking inside code (especially corporate code with XXXK lines) is that there is more bad code than good.

> And I really feel that the pipeline processing model, where you pass a
> big chunk of data between various processes, is fundamentally flawed.
> It's fine for things done at the command line, and for prototyping.
> But having a production application that is constantly shuffling big
> data between many processes is just plain silly.
>
> I know why people do it, of course: modularity. And modularity is
> good, no question of it. But it can cost you (not always, a good
> design can give you modularity with no cost or negligible cost,
> whereas a dumb (or just "obvious") design can give high cost). The
> programmer has to decide whether the costs outweigh the benefits.
>
> But if you opt for modularity, then don't expect me to be happy about
> losing even 0.01% of my performance for your design. Especially when I
> know that the cache polluting effects of shuffling data around will
> swamp whatever performance gains you hope to achieve.
>
> [I can see I'm going into rant mode here]
> Maybe it's just me, but whenever I see people going down the pipeline
> processing path, I just shake my head in sadness, especially when I
> see them using it in production systems. Yeah, sure, it's easy to code
> up, and boy does it look cute when displayed with a nice GUI and you can
> see the data trickling between processing nodes. Spiffy!
>
> But the operative word is "trickling". If you're lucky, each node will
> process long enough that cache refill times are lost in the noise. In
> which case, scheduler overheads are lost in the noise of cache
> refills.
>
> But I still think it's a bad design. It looks neat on paper (and
> really pretty on that GUI), but it sucks in real life and reflects
> laziness on the part of the programmer.
>
> I'm convinced you can modularise without the hideous cost of pipeline
> processing. Therefore any change that gives the illusion that pipeline
> processing is not that inefficient should be strongly discouraged.

This is a big misunderstanding born inside a previous thread ;)
I have no pipeline app to redesign; it was only an example born inside a discussion between me and Larry about the scheduler, which then migrated to SMP.
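
For anyone who missed that thread, the "pipeline" example was nothing more exotic than the classic fork-and-pipe pattern, roughly like the sketch below (illustration only, not an app I'm defending): every chunk gets copied into and out of the kernel at each stage, which is exactly the cache-trashing cost Richard points at.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define CHUNK (64 * 1024)

int main(void)
{
        int fd[2];
        char *buf = malloc(CHUNK);
        int i;

        if (!buf || pipe(fd) < 0) {
                perror("setup");
                return 1;
        }

        if (fork() == 0) {              /* consumer stage */
                close(fd[1]);
                while (read(fd[0], buf, CHUNK) > 0)
                        ;               /* "process" the chunk */
                _exit(0);
        }

        close(fd[0]);                   /* producer stage */
        memset(buf, 'x', CHUNK);
        for (i = 0; i < 100; i++)
                write(fd[1], buf, CHUNK);
        close(fd[1]);
        wait(NULL);
        free(buf);
        return 0;
}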

> [More off-topic ranting: sorry people]
> For the same reason, I'm still doing my real work in C and not C++.
> While C++ can make some (good) things much easier, it's just *too*
> seductive. Too much of a good thing is bad for you. Man, it feels good
> at the time, but it's stealing your soul. A trick here, a feature
> there, and suddenly your object code no longer fits on a CD-ROM. But
> hey! It was quicker to write. And it's object-oriented! Wow!
>
> You can lead a horse to water, but you can't force it to drink.
>
> And if people say that Java (or another flavour-of-the-month language)
> forces large run queues, and that application programmers can't help
> it, I'll just say Java is broken. I don't care how big it is, who
> wrote it, or who says it's so wonderful. If it spawns processes like
> the proverbial rabbit, it's flawed. Fix Java.
> BTW: don't flame me if Java doesn't do this. I'm just going from what
> I've seen on the list. I know bugger-all about Java, and that's just
> fine by me.

<OT>
I've used C since 1985 and it's my favourite language (along with C++ and Assembler).
I choose one of these depending on what I want to do: for code-oriented projects I use C, for data-oriented projects I prefer C++, and when I want 100% control over code generation I use Assembler.
For projects that are not easy to classify, I use C.
But probably this is a bit off topic here ;)
</OT>

Cheers,
Davide.

--
All this stuff is IMVHO



