Re: statistics infrastructure (in -mm tree) review

From: Martin Peschke
Date: Wed Jun 14 2006 - 18:47:28 EST


Randy.Dunlap wrote:
On Tue, 13 Jun 2006 16:47:39 -0700 Greg KH wrote:

First cut at reviewing this code.

Initial impression is, "damn, that's a complex interface". I'd really
like to see some other, real-world usages of this. Like perhaps the
io-scheduler statistics? Some other /proc stats that have nothing to do
with processes?

Agreed with complexity.

Well, roughly 1500 lines of code are bound to look complex, even
after having been reviewed and cleaned up several times.

Please, let's try to break it down into the individual design decisions
that each add their measure of complexity. A flat "too complex" doesn't
help.

Could you ACK / NACK the following assumptions, so that we can
figure out how far our (dis)agreement goes and how to continue?


1) There are various kernel components that gather statistical data.
(kernel/profile.c, network stack, genhd, memory management, taskstats,
S390 DASD driver, zfcp driver, ...). Requirements for other statistics
aren't unusual.

2) Basically, they all implement similar things: SMP-safe and efficient
data structures used for data gathering, algorithms for on-the-fly
data preprocessing, delivery of data through some user interface, and
sometimes a switch for turning statistics on/off.

3) They all introduce their own macros / functions, resulting in code
duplication (bad), while each usually has its own unique way of showing
data to users (bad, too).

4) Possible ways to aggregate statistics data include plain counters,
histograms, a utilisation indicator (min, max, average etc.), and
potentially other algorithms people might come up with.

5) Statistics counters should be maintained in the kernel. That's cheapest.
No bursts of zillions of incremental updates relayed to user space.
(please see also other comment at bottom of message)

6) Some library routines would suffice to take over data gathering
and preprocessing. Avoids further code duplication, avoids bugs,
speeds up development and test.

7) With regard to the delivery of statistic data to user land,
a library maintaining statistic counters, histograms or whatever
on behalf of exploiters doesn't need any help from the exploiter.
We can avoid the usual callbacks and code bloat in exploiters
this way.

8) If some library functions are responsible for showing data, and the
exploiter is not, we can achieve a common format for statistics data.
For example, a histogram about block I/O has the same format as
a histogram about network I/O.
This provides ease of use and minimises the effort of writing
scripts that could do further processing (e.g. formatting as
spreadsheets or bar charts, comparison and summarisation of
statistics, ...)
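
To illustrate point 8, a common histogram format shared by all
exploiters might look something like this (a made-up example of the
idea, not actual output of the patch):

```
requests_by_size histogram
    0 <= x <    512 : 1204
  512 <= x <   4096 : 5873
 4096 <= x <  65536 :  312
```

A script that parses this layout for block I/O would then parse the
network I/O statistics for free.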

9) For performance reasons, per-cpu data and minimal locking
(local_irq_save/restore) should be used.
Adds to complexity, though.
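
The per-cpu scheme in point 9 can be approximated in plain userspace C
(an illustrative model, not the kernel code: the per-cpu slots become
an array, and in the kernel the update would additionally be bracketed
by local_irq_save/restore):

```c
#include <assert.h>

#define NR_CPUS 4

/* One counter slot per cpu; the update path touches only the local
 * slot, so no cross-cpu lock is needed. */
struct percpu_counter {
	unsigned long long slot[NR_CPUS];
};

static void counter_add(struct percpu_counter *c, int cpu,
			unsigned long long value)
{
	c->slot[cpu] += value;	/* cpu-local, no global lock */
}

/* Readers pay the cost instead: they sum over all per-cpu slots. */
static unsigned long long counter_read(const struct percpu_counter *c)
{
	unsigned long long sum = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		sum += c->slot[cpu];
	return sum;
}
```

This is the usual trade: updates (frequent) are cheap, reads (rare)
sum over all cpus.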

10) If data is per-cpu, we want to be very careful with regard to
memory footprint. That is why, memory is only allocated for online
cpus (requires cpu hot(un)plug handling, which adds to complexity),

11) At least for data processing modes more expensive than plain
counters, like histograms, an on/off state makes sense.

12) In order to minimise the memory footprint, a released/allocated
state makes sense.

13) Unconfigured/released/off/on states should be handled by a tiny
state machine and a single check on statistic updates.
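
The single-check idea from point 13 could be sketched like this (a
hedged userspace model; the state names and struct layout are
illustrative, not the actual patch code):

```c
#include <assert.h>

/* The four states from point 13. */
enum statistic_state {
	STATISTIC_UNCONFIGURED,
	STATISTIC_RELEASED,
	STATISTIC_OFF,
	STATISTIC_ON,
};

struct statistic {
	enum statistic_state state;
	unsigned long long counter;
};

/* The hot update path pays for exactly one comparison when the
 * statistic is not switched on. */
static void statistic_add(struct statistic *st, unsigned long long value)
{
	if (st->state != STATISTIC_ON)
		return;		/* single check, bail out cheaply */
	st->counter += value;
}
```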

14) Kernel code delivering statistics data through library routines
can, at best, guess whether a user wants incremental updates to be
aggregated in a single counter, a set of counters (histograms), or
in the form of other results. Users might want to change how much
detail is retained in aggregated statistic results.
Adds to complexity.

15) Nonetheless, exploiters are kindly requested to provide some
default settings that are a good starting point for general
purpose use.

16) Aggregated statistic results, in many cases, don't need to be
pushed to user space through a high-speed, high-volume interface.
Debugfs, for example, is fine for this purpose.

17) If the requirement for pushing data comes up anyway, we could,
for example, add relay-entries in debugfs anytime.
(For example, we could implement forwarding of incremental
updates to user space. Just another conceivable data processing
mode that fits into the current design.)

18) The programming interface of a statistics library can be roughly as
simple as statistic_create(), statistic_remove(), statistic_add().

19) Statistic_add() should come in different flavours:
statistic_add/inc() (just for convenience), and
statistic_*_nolock() (more efficient locking for a bundle of updates)

20) Statistic_add() takes a (X, Y) pair, with X being the main
characteristics of the statistics (e.g. a request size) and with
Y quantifying the update reported for a particular X (e.g. number
of observed requests of a particular request size).

21) Processing of (X, Y) according to abstract rules imposed by
counters, histograms etc. doesn't require any knowledge about the
semantics of X or Y.
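
Taken together, points 18-21 could boil down to something like the
following userspace sketch (the function names follow point 18, but the
bucket layout and signatures are illustrative stand-ins, not the actual
patch code):

```c
#include <stdlib.h>

#define NR_BUCKETS 8

struct statistic {
	unsigned long long bucket[NR_BUCKETS];	/* histogram slots over X */
};

static struct statistic *statistic_create(void)
{
	return calloc(1, sizeof(struct statistic));
}

/* X selects the slot (e.g. a request size class), Y is the amount
 * reported for that X (e.g. number of requests observed). As point 21
 * argues, the bookkeeping needs no knowledge of what X or Y mean. */
static void statistic_add(struct statistic *st, unsigned int x,
			  unsigned long long y)
{
	if (x >= NR_BUCKETS)
		x = NR_BUCKETS - 1;	/* overflow bucket */
	st->bucket[x] += y;
}

static void statistic_remove(struct statistic **st)
{
	free(*st);
	*st = NULL;
}
```

The same statistic_add(x, y) call serves a plain counter (ignore x), a
histogram (bucket by x), or a utilisation indicator (track min/max/sum
of y per x).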

22) There might be statistic counters that exploiters want to use and
maintain on their own, and which users still may want to have a look at
along with other statistics. Statistic_set() fits in here nicely.

And what does this mean for relayfs? Those developers tuned that code
to the nth degree to get speed and other goodness, and here you go just
ignoring that stuff and add yet another way to get stats out of the
kernel. Why should I use this instead of my own code with relayfs?
Good questions.

Relayfs is a nice feature, but not appropriate here.

For example, during a performance measurement I have seen
SCSI I/O related statistics being updated millions of times while
I was just having a short lunch break. Some of them just increased
a counter, which is pretty fast if done immediately in the kernel.
If all these updates had to be relayed to user space just to
increase a counter maintained there... urgh, that would surely be
more expensive and not the way to go.

And what if user space isn't interested at all? Would we keep
pumping zillions of unused updates into buffers instead of
discarding them right away?

Profile.c, taskstats, genhd and all the other statistics listed
above... they all maintain their counters in the kernel and
show aggregated statistics to users.

And is the need for the in-kernel parser really necessary? I know it
makes the userspace tools simpler (cat and echo), but should we be
telling the kernel how to filter and adjust the data? Shouldn't we just
dump it all to userspace and use tools there to manipulate it?

I agree again.

Assuming we can agree on in-kernel counters, histograms etc.,
allowing users to adjust attributes makes sense.

The parser stuff required for these attributes is implemented
using match_token() & friends, which should be acceptable.
But I think that the standard way of using match_token() and
strsep() needs improvement (strsep() is destructive to the strings
it parses, which is painful).
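
The strsep() pitfall can be shown in a few lines of userspace C (an
illustration of the problem, not the patch code; in the kernel the same
trick works with kstrdup()/kfree() before handing tokens to
match_token()):

```c
#define _DEFAULT_SOURCE		/* for strsep() and strdup() on glibc */
#include <stdlib.h>
#include <string.h>

/* strsep() writes NUL bytes into the string it parses. Working on a
 * duplicate keeps the caller's buffer intact. */
static int count_tokens(const char *attrs)
{
	char *copy = strdup(attrs);	/* parse a scratch copy */
	char *cursor = copy;
	char *token;
	int n = 0;

	while ((token = strsep(&cursor, ",")) != NULL)
		if (*token)
			n++;
	free(copy);
	return n;
}
```

Without the strdup(), the caller's attribute string would come back
chopped into pieces after the first parse.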

Thanks, Martin

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/