Re: [PATCH] - Reduce overhead of calc_load
From: Andrew Morton
Date: Sat Mar 18 2006 - 00:26:19 EST
Nick Piggin <nickpiggin@xxxxxxxxxxxx> wrote:
>
> Andrew Morton wrote:
> > Nick Piggin <nickpiggin@xxxxxxxxxxxx> wrote:
>
> >>Is there a need? Do they (except calc_load) use multiple values at
> >>the same time?
> >
> >
> > Don't know. It might happen in the future. And the additional cost is
> > practically zero.
> >
>
> Unless it happens to hit another cacheline (cachelines for all other
> CPUs but our own will most likely be invalid on this cpu), in which
> case the cost could double quite easily.
>
That would be an inefficient implementation. Let's not implement it
inefficiently.
> I think it might be better to leave it for the moment. If something comes
> up we can always take a look at it then (it isn't particularly tricky code).
What we're seeing here is a proliferation of little functions, all of which
do the same thing, some of them in different ways.
Take a look at (for example) nr_iowait(). We forget to spill the count
out of the departing CPU's runqueue, so we have to sum it across all
possible CPUs, and we don't bother accounting for the possibility of
the sum going negative because we happen to dink with the runqueue of a
now-possibly-downed CPU. It's inefficient, it's inconsistent, and some
of it is, or will become, incorrect. The other counters there probably
have various combinations of these problems, but I can't be bothered
checking them all because they're all implemented differently.
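For reference, the pattern in question is roughly this (paraphrased
from memory, not a verbatim copy of kernel/sched.c):

	unsigned long nr_iowait(void)
	{
		unsigned long i, sum = 0;

		/* walks every *possible* CPU, because nothing folds
		 * the count back into a live runqueue when a CPU
		 * goes down */
		for_each_possible_cpu(i)
			sum += atomic_read(&cpu_rq(i)->nr_iowait);

		return sum;
	}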
Better to do them all in the one place and do them all the same way. I'd
suggest a cacheline-aligned struct of local_t's which can be queried into a
struct of ulongs.
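To make that concrete, a minimal sketch - the struct and field names
are made up for illustration, this is not an actual patch:

	#include <linux/percpu.h>
	#include <linux/cache.h>
	#include <linux/cpumask.h>
	#include <linux/string.h>
	#include <asm/local.h>

	/* one cacheline-aligned block of counters per CPU, only ever
	 * modified by its owning CPU via local_* operations */
	struct cpu_load_counts {
		local_t nr_running;
		local_t nr_uninterruptible;
		local_t nr_iowait;
	} ____cacheline_aligned_in_smp;

	static DEFINE_PER_CPU(struct cpu_load_counts, cpu_load_counts);

	/* plain-ulong snapshot handed back to callers */
	struct load_counts {
		unsigned long nr_running;
		unsigned long nr_uninterruptible;
		unsigned long nr_iowait;
	};

The owning CPU bumps its own counters with local_inc()/local_dec(),
which on architectures with cheap local atomics costs little more than
a plain add.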
That query should only look at online CPUs, which becomes rather necessary
if we're to allocate runqueues only for online CPUs (desirable - the thing
is huge).
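The read side is then a single loop over online CPUs, continuing the
sketch above (again, illustrative names only):

	static void read_load_counts(struct load_counts *out)
	{
		int cpu;

		memset(out, 0, sizeof(*out));
		for_each_online_cpu(cpu) {
			struct cpu_load_counts *c =
				&per_cpu(cpu_load_counts, cpu);

			out->nr_running += local_read(&c->nr_running);
			out->nr_uninterruptible +=
				local_read(&c->nr_uninterruptible);
			out->nr_iowait += local_read(&c->nr_iowait);
		}
	}

The hotplug path would still need to fold the departing CPU's counts
into a surviving CPU (or a global remainder) at CPU_DEAD time,
otherwise the online-only sum silently loses whatever the dead CPU had
accumulated.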