Re: [PATCH 3/4] tracing: Add stack_tracer_disable/enable() functions
From: Paul E. McKenney
Date: Thu Apr 06 2017 - 16:23:00 EST
On Thu, Apr 06, 2017 at 02:48:03PM -0400, Steven Rostedt wrote:
> On Thu, 6 Apr 2017 11:12:22 -0700
> "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
>
> > On Thu, Apr 06, 2017 at 12:42:40PM -0400, Steven Rostedt wrote:
> > > From: "Steven Rostedt (VMware)" <rostedt@xxxxxxxxxxx>
> > >
> > > There are certain parts of the kernel (namely in RCU) that cannot let
> > > stack tracing proceed, because the stack tracer uses RCU, and parts of the
> > > RCU internals cannot handle having RCU read-side locks taken.
> > >
> > > Add stack_tracer_disable() and stack_tracer_enable() functions to let RCU
> > > stop stack tracing on the current CPU as it is in those critical sections.
> >
> > s/as it is in/when it is in/?
> >
> > > Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
> >
> > One quibble above, one objection below.
> >
> > Thanx, Paul
> >
> > > ---
> > > include/linux/ftrace.h | 6 ++++++
> > > kernel/trace/trace_stack.c | 28 ++++++++++++++++++++++++++++
> > > 2 files changed, 34 insertions(+)
> > >
> > > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > > index ef7123219f14..40afee35565a 100644
> > > --- a/include/linux/ftrace.h
> > > +++ b/include/linux/ftrace.h
> > > @@ -286,6 +286,12 @@ int
> > > stack_trace_sysctl(struct ctl_table *table, int write,
> > > void __user *buffer, size_t *lenp,
> > > loff_t *ppos);
> > > +
> > > +void stack_tracer_disable(void);
> > > +void stack_tracer_enable(void);
> > > +#else
> > > +static inline void stack_tracer_disable(void) { }
> > > +static inline void stack_tracer_enable(void) { }
> > > #endif
> > >
> > > struct ftrace_func_command {
> > > diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
> > > index 05ad2b86461e..5adbb73ec2ec 100644
> > > --- a/kernel/trace/trace_stack.c
> > > +++ b/kernel/trace/trace_stack.c
> > > @@ -41,6 +41,34 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
> > > int stack_tracer_enabled;
> > > static int last_stack_tracer_enabled;
> > >
> > > +/**
> > > + * stack_tracer_disable - temporarily disable the stack tracer
> > > + *
> > > + * There are a few locations (namely in RCU) where stack tracing
> > > + * cannot be executed. This function is used to disable stack
> > > + * tracing during those critical sections.
> > > + *
> > > + * This function will disable preemption. stack_tracer_enable()
> > > + * must be called shortly after this is called.
> > > + */
> > > +void stack_tracer_disable(void)
> > > +{
> > > + preempt_disable_notrace();
> >
> > Interrupts are disabled at all current call points, so you don't really
> > need to disable preemption. I would normally not worry, given the
> > ease-of-use improvements, but some people get annoyed about even slight
> > increases in idle-entry overhead.
>
> My worry is that we add another caller that doesn't disable interrupts
> or preemption.
>
> I could add a __stack_tracer_disable() that skips the disabling of
> preemption, as the "__" usually denotes the call is "special".
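
For illustration, a minimal sketch of what such a "__" variant might look
like (the names __stack_tracer_disable()/__stack_tracer_enable() and their
placement next to the functions in kernel/trace/trace_stack.c are
assumptions based on this discussion, not part of the posted patch):

	/*
	 * Hypothetical sketch only: variants for callers that already run
	 * with preemption (or interrupts) disabled, so the
	 * preempt_disable_notrace()/preempt_enable_notrace() pair used by
	 * stack_tracer_disable()/stack_tracer_enable() can be skipped.
	 */
	void __stack_tracer_disable(void)
	{
		/* Caller guarantees preemption is already disabled. */
		this_cpu_inc(trace_active);
	}

	void __stack_tracer_enable(void)
	{
		this_cpu_dec(trace_active);
	}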
Given that interrupts are disabled at that point, and given also that
NMI skips stack tracing if growth is required, could we just leave
out the stack_tracer_disable() and stack_tracer_enable()?
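
(Purely illustrative, with a hypothetical call-site name: the kind of call
point under discussion, where interrupts are already disabled, might look
roughly like the following, which is why the extra
preempt_disable_notrace() buys little there:)

	/* Hypothetical call site: interrupts are already disabled here. */
	static void example_rcu_idle_entry(void)
	{
		WARN_ON_ONCE(!irqs_disabled());

		stack_tracer_disable();
		/* ... RCU-internal work that must not be stack traced ... */
		stack_tracer_enable();
	}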
Thanx, Paul
> -- Steve
>
> >
> > > + this_cpu_inc(trace_active);
> > > +}
> > > +
> > > +/**
> > > + * stack_tracer_enable - re-enable the stack tracer
> > > + *
> > > + * After stack_tracer_disable() is called, stack_tracer_enable()
> > > + * must be called shortly afterward.
> > > + */
> > > +void stack_tracer_enable(void)
> > > +{
> > > + this_cpu_dec(trace_active);
> > > + preempt_enable_notrace();
> >
> > Ditto...
> >
> > > +}
> > > +
> > > void stack_trace_print(void)
> > > {
> > > long i;
> > > --
> > > 2.10.2
> > >
> > >
>