Re: [PATCH] perf, tools: Support spark lines in perf stat v3

From: Jiri Olsa
Date: Mon May 26 2014 - 13:28:31 EST


On Wed, Apr 16, 2014 at 11:41:18AM -0700, Andi Kleen wrote:
> From: Andi Kleen <ak@xxxxxxxxxxxxxxx>
>
> perf stat -rX prints the stddev for multiple measurements.
> Just looking at the stddev to judge the quality of the data
> is a bit dangerous. The simplest sanity check is to look
> at a simple plot. This patch adds a sparkline to the end
> of the measurements to make it easy to judge the data.
>
> The sparkline only uses UTF-8, so it should be readable
> in all modern tools and terminals.
>
> The sparkline is scaled between the minimum and maximum of the data,
> so it's mainly an indicator of variance. To keep the code
> simple and the output not too wide, only the first
> 8 values are printed. If there are more values, '..' is appended.
>
> The code is inspired by Zach Holman's spark shell script.
>
> Example output (view in a non-proportional font):
>
> Performance counter stats for 'true' (10 runs):
>
> 0.175672 task-clock (msec) # 0.555 CPUs utilized ( +- 1.77% ) ââââââââ..
> 0 context-switches # 0.000 K/sec
> 0 cpu-migrations # 0.000 K/sec
> 114 page-faults # 0.647 M/sec ( +- 0.14% ) ââââââââ..
> 520,798 cycles # 2.965 GHz ( +- 1.75% ) ââââââââ..
> 433,525 instructions # 0.83 insns per cycle ( +- 0.28% ) ââââââââ..
> 83,012 branches # 472.537 M/sec ( +- 0.31% ) ââââââââ..
> 3,157 branch-misses # 3.80% of all branches ( +- 2.55% ) ââââââââ..
>
> 0.000316660 seconds time elapsed ( +- 1.78% ) ââââââââ..
>
> As you can see, even in the simplest run there are quite interesting
> patterns. The elapsed-time sparkline suggests it would also be useful to have
> an option to throw the first measurement away.

hi,
sorry for the delay...

Could you please also update the documentation with some of the above info?
Other than that and one comment below, I'd like to take this patch.

thanks,
jirka


> diff --git a/tools/perf/util/spark.c b/tools/perf/util/spark.c
> new file mode 100644
> index 0000000..ac5b3a5
> --- /dev/null
> +++ b/tools/perf/util/spark.c
> @@ -0,0 +1,28 @@
> +#include <stdio.h>
> +#include <limits.h>
> +#include "spark.h"
> +
> +#define NUM_SPARKS 8
> +#define SPARK_SHIFT 8
> +
> +/* Print spark lines on outf for numval values in val. */
> +void print_spark(FILE *outf, unsigned long *val, int numval)
> +{
> + static const char *ticks[NUM_SPARKS] = {
> + "â", "â", "â", "â", "â", "â", "â", "â"
> + };
> + int i;
> + unsigned long min = ULONG_MAX, max = 0, f;
> +
> + for (i = 0; i < numval; i++) {
> + if (val[i] < min)
> + min = val[i];
> + if (val[i] > max)
> + max = val[i];
> + }
> + f = ((max - min) << SPARK_SHIFT) / (NUM_SPARKS - 1);
> + if (f < 1)
> + f = 1;
> + for (i = 0; i < numval; i++)
> + fputs(ticks[((val[i] - min) << SPARK_SHIFT) / f], outf);
> +}
> diff --git a/tools/perf/util/spark.h b/tools/perf/util/spark.h
> new file mode 100644
> index 0000000..f2d5ac5
> --- /dev/null
> +++ b/tools/perf/util/spark.h
> @@ -0,0 +1,3 @@
> +#pragma once
> +void print_spark(FILE *outf, unsigned long *val, int numval);
> +

Google says this pragma is considered obsolete.. any reason for using it?
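
fwiw, the conventional include guard (plus the <stdio.h> include the header
needs for FILE) would look something like this; the guard name is just an
example, not a requirement:

#ifndef PERF_SPARK_H
#define PERF_SPARK_H

#include <stdio.h>

/* Print a sparkline on outf for the numval values in val. */
void print_spark(FILE *outf, unsigned long *val, int numval);

#endif /* PERF_SPARK_H */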

SNIP

> +
> +void print_stat_spark(FILE *f, struct stats *stat)
> +{
> + int n = stat->n, len;
> +
> + if (n <= 1)
> + return;
> +
> + len = n;
> + if (len > NUM_SPARK_VALS)
> + len = NUM_SPARK_VALS;
> + if (all_the_same(stat->svals, len))
> + return;

I still don't understand why the 'n' variable is needed in here,
but I can live with that ;-)
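
In case anyone wants to poke at the helper in isolation, here is a minimal
standalone sketch of how the min/max scaling maps values onto the eight
ticks; the sample numbers below are made up, not real measurements:

/* Build together with spark.c from the patch; the data is hypothetical. */
#include <stdio.h>
#include "spark.h"

int main(void)
{
	/* e.g. per-run cycle counts from a -r 8 run (made-up numbers) */
	unsigned long cycles[] = {
		520798, 515210, 530441, 512004,
		518773, 542330, 516902, 514555
	};

	/*
	 * Each value is scaled relative to the min/max of the array and
	 * printed as one of the eight tick characters.
	 */
	print_spark(stdout, cycles, 8);
	putchar('\n');
	return 0;
}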