Re: [PATCH] oprofile: check whether oprofile perf enabled in op_overflow_handler()

From: Robert Richter
Date: Thu Jan 16 2014 - 06:53:24 EST


(cc'ing Will)

Weng,

thanks for testing.

On 16.01.14 17:33:04, Weng Meiling wrote:
> Using the same test case, the problem also exists in the same kernel with the new patch applied:
>
>
> # opcontrol --start
>
> Using 2.6+ OProfile kernel interface.
> Using log file /var/lib/oprofile/samples/oprofiled.log
> Daemon started.
> [ 508.456878] INFO: rcu_sched self-detected stall on CPU { 0} (t=2100 jiffies g=685 c=684 q=83)
> [ 571.496856] INFO: rcu_sched self-detected stall on CPU { 0} (t=8404 jiffies g=685 c=684 q=83)
> [ 634.526855] INFO: rcu_sched self-detected stall on CPU { 0} (t=14707 jiffies g=685 c=684 q=83)

Yes, the patch does not prevent an interrupt storm. The same happened
on x86 and was solved there, too, by raising the minimum cycle period,
since the kernel was not able to rate-limit the interrupts.

> ARM: events: increase minimum cycle period to 100k

> -event:0xFF counters:0 um:zero minimum:500 name:CPU_CYCLES : CPU cycle
> +event:0xFF counters:0 um:zero minimum:100000 name:CPU_CYCLES : CPU cycle

However, an arbitrary hardcoded value might not fit all kinds of
CPUs, especially on ARM where the variety is high. It also looks like
there is no way other than patching the events file to force values
lower than the minimum on CPUs where this might be necessary.
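
(For illustration only: with a stock events file one would pick a
larger count on the command line, e.g.

# opcontrol --no-vmlinux --event=CPU_CYCLES:100000:0:1:1
# opcontrol --start

but opcontrol checks the count against the minimum in the events
file, so forcing a lower value really does require editing the file.
The 100000 above is an arbitrary example, not a recommendation.)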

The problem of sample periods that are too low could be solved on ARM
by using perf's interrupt throttling; you might play around with:

/proc/sys/kernel/perf_event_max_sample_rate:100000
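
Something like this should tell the kernel to throttle earlier (the
20000 below is just an arbitrary value to experiment with, not a
recommendation):

# cat /proc/sys/kernel/perf_event_max_sample_rate
100000
# echo 20000 > /proc/sys/kernel/perf_event_max_sample_rate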

I am not quite sure whether this works, especially for kernel
counters, or how userland can be notified about throttling.
Throttling could be worthwhile for operf too, not only for the
oprofile kernel driver.

From a quick look it seems there is also code on x86 that dynamically
adjusts the rate, which might be worth implementing for ARM too.
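
If I read that code right (I believe the generic side is
perf_sample_event_took() in kernel/events/core.c, called from the x86
handler), it lowers kernel.perf_event_max_sample_rate itself when the
sampling interrupts take too long, so the effect should be visible
from userland, roughly (untested):

# watch -n1 cat /proc/sys/kernel/perf_event_max_sample_rate
# dmesg | grep -i sample_rate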

-Robert