Re: [tip:perf/core] perf test shell: Install shell tests
From: Michael Petlan
Date: Tue Aug 15 2017 - 18:32:05 EST
On Tue, 15 Aug 2017, Arnaldo Carvalho de Melo wrote:
[...]
> > > Perhaps its time, yes. Some questions:
>
> > > Do these tests assume that perf was built in some particular way, i.e.
> > > as it is packaged for RHEL?
> >
> > Of course I run the testsuite most often on RHEL, but it should be
> > distro-agnostic, worked on Debian with their perf as well as with
> > vanilla kernel/perf build from Linus' repo...
>
> Right, but I mean more generally, i.e. the only report so far of these
> new tests failing came from Kim Phillips, and his setup didn't have the
> devel packages needed to build 'perf probe', which is enabled, AFAIK, in
> all general purpose distro 'perf' (linux-tools, etc.) packages.
So basically this point is OK. My suite should be generic enough to
cover basic general purpose distro 'perf' packages and if we find out
that it fails on some specific configuration, we can always fix it.
>
> > It somehow assumes having kernel-debuginfo available (but this does
> > not necessarily mean kernel-debuginfo RHEL package). It runs against
> > 'perf' from path or against $CMD_PERF if this variable is defined.
>
> Right, that is interesting, to be able to use a development version
> while having some other perf version installed.
So this should be also OK then.
>
> Yeah, what you have seems great for general purpose distros, while we
> have to go on adding tests and trying to have people trying it in more
> different environments to see if everything works as expected or at
> least we detect what is needed for each test and skip when the
> pre-requisites are not in place.
I can make it more "conditional" to avoid failures when something is not
supported. However, it has always served RHEL testing, where such failures
have been useful to warn RHEL QE that some always-available features
are broken. I think this is solvable.
>
> Right now it returns 2 for Skip, probably we need a way for the test to
> state what needs to be built-in or available from the processor/system
> to be able to perform some test.
It would be great to connect it with the way perf is built, so that the
tests could detect e.g. '--call-graph=dwarf' availability from the
build... However, my testsuite has been designed to be standalone, so
whatever I cannot detect from the environment or from perf itself, I
cannot detect at all, at least for now.
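For illustration, one standalone way to probe the build would be to parse
'perf version --build-options' if the perf under test supports that flag
(an assumption; older versions may not). A minimal sketch, demonstrated
against canned output so it runs without perf installed:

```shell
# Sketch: detect dwarf support from 'perf version --build-options' output.
# ASSUMPTION: that flag exists in the perf under test and prints lines
# like "            dwarf: [ on  ]  # HAVE_DWARF_SUPPORT".
has_build_option() {
	# $1: option name; stdin: build-options output
	grep -q "^ *$1: *\[ *on *\]"
}

# Demo against canned output; a real check would instead pipe in:
#   perf version --build-options 2>/dev/null
sample="            dwarf: [ on  ]  # HAVE_DWARF_SUPPORT
   dwarf_getlocations: [ off ]  # HAVE_DWARF_GETLOCATIONS_SUPPORT"
echo "$sample" | has_build_option dwarf && echo "dwarf: supported"
echo "$sample" | has_build_option dwarf_getlocations || echo "dwarf_getlocations: missing"
```

A test could then skip itself when the option it depends on is reported off.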
>
> > Anyway, it is easily fixable... The suite has a mechanism for skipping
> > particular tests. If there is a way to detect a feature support, it
> > is easy to use it as a condition. The dwarf support might be more
> > difficult, because afaik, there's no way to find out whether dwarf
> > support just does not work or is disabled on purpose...
>
> Well, I'm detecting this for the tests already in place, for instance,
> for:
>
> # perf test ping
> probe libc's inet_pton & backtrace it with ping: Ok
> #
>
> [acme@jouet linux]$ grep ^# tools/perf/tests/shell/trace+probe_libc_inet_pton.sh
> # probe libc's inet_pton & backtrace it with ping
> # Installs a probe on libc's inet_pton function, that will use uprobes,
> # then use 'perf trace' on a ping to localhost asking for just one packet
> # with a backtrace 3 levels deep, check that it is what we expect.
> # This needs no debuginfo package, all is done using the libc ELF symtab
> # and the CFI info in the binaries.
> # Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>, 2017
> [acme@jouet linux]$
>
> # rpm -q glibc-debuginfo iputils-debuginfo
> package glibc-debuginfo is not installed
> package iputils-debuginfo is not installed
> #
>
> But tests that require full DWARF support will be skipped because of
> this check:
>
> $ cat tools/perf/tests/shell/lib/probe_vfs_getname.sh
> <SNIP>
> skip_if_no_debuginfo() {
> 	add_probe_vfs_getname -v 2>&1 | egrep -q "^(Failed to find the path for kernel|Debuginfo-analysis is not supported)" && return 2
> 	return 1
> }
>
> So there are ways to figure out that a test fails because support for
> what it needs is not builtin.
>
I use a similar way to figure that out:
all the related tests share functions like the following:
check_kprobes_available()
{
	test -e /sys/kernel/debug/tracing/kprobe_events
}

check_uprobes_available()
{
	test -e /sys/kernel/debug/tracing/uprobe_events
}
[...]
And in a particular test, I check e.g. for uprobes support:

check_uprobes_available
if [ $? -ne 0 ]; then
	print_overall_skipped
	exit 0
fi
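That pattern could be wrapped in a single reusable helper. A sketch with
hypothetical names; the demo below checks a temporary stand-in file instead
of the real tracefs path, so it runs anywhere:

```shell
# Sketch of the skip pattern with a hypothetical helper name; real tests
# would pass /sys/kernel/debug/tracing/uprobe_events (or kprobe_events)
# instead of the stand-in file used here.
skip_unless_file() {
	if [ ! -e "$1" ]; then
		echo "SKIP: $2 not available"
		exit 0
	fi
}

standin=$(mktemp)	# stand-in for /sys/kernel/debug/tracing/uprobe_events
skip_unless_file "$standin" "uprobes"
echo "uprobes present, running test body"
rm -f "$standin"
```

When the file is missing, the helper prints the skip reason and exits 0,
matching the "skip, don't fail" behavior described above.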
> > > > A little problem might be different design, since the testsuite
> > > > has multiple levels of hierarchy of sub-sub-sub-tests, like:
>
> Right, having subdirectories in the tests dir to group tests per area
> should be no problem, and probably we can ask 'perf test' to test just
> some sub hierarchy, say, 'perf probe' tests.
>
It might be `perf test suite probe`, which would run the tests in the
"base_probe" directory. Or even `perf test suite probe listing`, which
would run base_probe/test_listing.sh ...
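A minimal sketch of how that dispatch could work (hypothetical layout and
names, demonstrated against a synthetic tree so it runs standalone):

```shell
# Hypothetical dispatch: 'probe' -> base_probe/, 'listing' -> test_listing.sh.
# Built against a synthetic tree; a real integration would use the
# installed tests directory instead of $root.
root=$(mktemp -d)
mkdir -p "$root/base_probe"
printf 'echo "probe listing: Ok"\n' > "$root/base_probe/test_listing.sh"

run_suite() {	# run_suite <suite> [<subtest>]
	dir="$root/base_$1"
	if [ -n "$2" ]; then
		sh "$dir/test_$2.sh"
	else
		for t in "$dir"/test_*.sh; do sh "$t"; done
	fi
}

run_suite probe listing		# runs only base_probe/test_listing.sh
run_suite probe			# runs every test under base_probe/
```

With no subtest argument the whole suite runs, so the existing hierarchy
maps naturally onto the command line.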
> So we should try to merge your tests, trying to make them emit test
> names and results like the other 'perf test' entries, and allowing for
> substring tests matching, i.e. the first line of the test should have
> a one line description used for the perf test indexed output, etc.
I don't actually understand the purpose of the substring matching
feature... It is a good idea, but the set of current perf-test names looks
a bit chaotic to me. As it is, this feature seems to group together tests
that aren't actually related to each other, except for having a common
word in their names...
# perf test cpu
3: detect openat syscall event on all cpus : Ok
39: Test cpu map synthesize : Ok
46: Test cpu map print : Ok
Also, `perf test list` prints a list of subtests, so 'list' is a special
word that is excluded from substring matching, but from the outside it is
not obvious that it is treated differently...
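The matching itself is simple: a test runs when the argument is a
case-insensitive substring of its description. A simplified sketch of the
idea (illustrative only, not perf's actual implementation):

```shell
# Simplified sketch of substring matching over test descriptions
# (illustrative only, not perf's actual code).
matches() {	# matches <description> <pattern>
	desc=$(printf '%s' "$1" | tr 'A-Z' 'a-z')
	pat=$(printf '%s' "$2" | tr 'A-Z' 'a-z')
	case "$desc" in
		*"$pat"*) return 0 ;;
		*) return 1 ;;
	esac
}

matches "Test cpu map synthesize" cpu && echo "runs"
matches "vmlinux symtab matches kallsyms" cpu || echo "skipped"
```

This is why 'cpu' selects tests 3, 39 and 46 above even though they
exercise unrelated code paths.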
>
> What we want is to add more tests that will not disrupt people already
> using 'perf test' to validate backports in distros, ports to new
> architectures, etc. All that these people will see is a growing number of
> tests that will -help- them to make sure 'perf' works well in their
> environments.
Sure.
>
> - Arnaldo
Cheers,
Michael
>