Re: [RFC][PATCH 3/9] perf: export registered pmus via sysfs
From: Paul Mundt
Date: Fri May 14 2010 - 03:05:52 EST
On Wed, May 12, 2010 at 10:37:23AM +0200, Peter Zijlstra wrote:
> On Wed, 2010-05-12 at 14:51 +0900, Paul Mundt wrote:
> > On Tue, May 11, 2010 at 11:48:42AM +0200, Peter Zijlstra wrote:
>
> > > No, all the CPUs would have the same event sources. I'm not sure if we
> > > can make sysfs understand that, though (added GregKH and Kay to Cc).
> > >
> > This is something I've been thinking about, too. On SH we have a
> > large set of perf counter events that are entirely dependent on the
> > configuration of the CPU they're on, with no requirement that these
> > configurations are identical on all CPUs in an SMP configuration.
> >
> > As an example, it's possible to halve the L1 dcache and use that part of
> > it as a small and fast memory which has completely different events
> > associated with it from the regular L1 dcache events. These events would
> > be invalid on a CPU that was running with all cache ways enabled but
> > might also be valid on other CPUs that bolt these events to an extra SRAM
> > outside of the cache topology completely.
> >
> > In any event, the events are at least consistent across all CPUs; it's
> > only which ones are valid on a given CPU at a given time that can change.
>
> So you're running with asymmetric SMP systems? I really hadn't
> considered that. Will this change at runtime or is it a system boot time
> thing?
At the moment it's a boot-time thing, but we're moving towards runtime
switching via CPU hotplug (which we currently use primarily for runtime
power management). This has been a recurring requirement from some of our
automotive customers, so it's gradually becoming more prevalent.
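
To make that concrete, here's a minimal sketch of how a PMU driver could
keep a per-CPU mask of valid events and refresh it when a CPU is brought
up or reconfigured. The names (valid_events, sh_pmu_event_init,
sh_probe_valid_event_mask) and the event encodings are invented for
illustration, not taken from the real SH perf code:

        /*
         * Illustrative sketch only: the symbols, helpers, and event
         * encodings here are invented, not from the actual SH code.
         */
        #include <linux/perf_event.h>
        #include <linux/percpu.h>
        #include <linux/bitops.h>
        #include <linux/errno.h>
        #include <linux/smp.h>

        /* Bitmap of event IDs currently usable on each CPU. */
        static DEFINE_PER_CPU(unsigned long, valid_events);

        /* Hypothetical helper probing a CPU's current cache/SRAM setup. */
        extern unsigned long sh_probe_valid_event_mask(unsigned int cpu);

        static int sh_pmu_event_init(struct perf_event *event)
        {
                u64 config = event->attr.config;

                if (config >= BITS_PER_LONG)
                        return -EINVAL;

                /*
                 * Reject events that the current CPU configuration
                 * (e.g. a cache way repurposed as on-chip RAM) does not
                 * expose. Sketch: a real driver would check the CPU the
                 * event is bound to, not the CPU running event_init.
                 */
                if (!test_bit(config,
                              &per_cpu(valid_events, raw_smp_processor_id())))
                        return -ENOENT;

                return 0;
        }

        /* Refresh the mask whenever a CPU is brought up or reconfigured. */
        static void sh_pmu_refresh_events(unsigned int cpu)
        {
                per_cpu(valid_events, cpu) = sh_probe_valid_event_mask(cpu);
        }

Hooking something like sh_pmu_refresh_events() into the CPU hotplug path
would cover the runtime-switching case; the awkward part for the sysfs
export is that event validity becomes a per-CPU, per-configuration
property rather than a static architecture-wide table.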
We also have the multi-core case where multiple architectures are
combined but we still have memory-mapped access to the slave CPUs'
performance counters. The SH-Mobile G series behaves this way: it has
both an ARM and an SH core, and it doesn't really matter which one runs
the primary Linux kernel; the slave may be running Linux, or it may be
running a fixed application that depends on control and input from the
primary Linux-running MPU. Presently we just tie in through the hardware
debug interfaces for monitoring and controlling the secondary counters,
but being able to make this sort of thing workload-granular via perf
would obviously be a huge benefit. Supporting these sorts of
configurations is going to take a bit of doing, though, especially on
the topology side.
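
As a rough illustration of what the memory-mapped path could look like
(the base address, register offsets, counter width, and helper names
below are invented, not the actual SH-Mobile G layout), a driver on the
primary core might map the slave's counter block and sample it directly:

        /*
         * Rough sketch of memory-mapped access to a slave core's
         * counters. Addresses and register layout are hypothetical.
         */
        #include <linux/io.h>
        #include <linux/errno.h>
        #include <linux/types.h>

        #define SLAVE_PMU_BASE          0xfe100000UL            /* hypothetical */
        #define SLAVE_PMU_CNT(n)        (0x10 + (n) * 4)        /* hypothetical */

        static void __iomem *slave_pmu_regs;

        static int slave_pmu_map(void)
        {
                slave_pmu_regs = ioremap(SLAVE_PMU_BASE, 0x100);
                return slave_pmu_regs ? 0 : -ENOMEM;
        }

        /* Raw read of one of the slave core's counter registers. */
        static u32 slave_pmu_read_counter(int idx)
        {
                return readl(slave_pmu_regs + SLAVE_PMU_CNT(idx));
        }

        /*
         * Accumulate the delta since the last read; this is what a
         * perf read/update path would feed into the event count.
         */
        static u64 slave_pmu_update(int idx, u32 *prev)
        {
                u32 now = slave_pmu_read_counter(idx);
                u64 delta = (u32)(now - *prev); /* assumes 32-bit counters */

                *prev = now;
                return delta;
        }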