Re: [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
From: Frederic Weisbecker
Date: Sun Sep 18 2016 - 09:47:59 EST
On Wed, Sep 07, 2016 at 09:59:13AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 02, 2016 at 04:03:02PM +0200, Frederic Weisbecker wrote:
>
> > diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> > index f111076..d4d12a9 100644
> > --- a/kernel/sched/cputime.c
> > +++ b/kernel/sched/cputime.c
> > @@ -78,30 +78,26 @@ EXPORT_SYMBOL_GPL(irqtime_account_irq);
> >  static cputime_t irqtime_account_hi_update(cputime_t maxtime)
> >  {
> >  	u64 *cpustat = kcpustat_this_cpu->cpustat;
> > -	unsigned long flags;
> >  	cputime_t irq_cputime;
> >
> +	lockdep_assert_irqs_disabled();
>
> > -	local_irq_save(flags);
> >  	irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
> >  		      cpustat[CPUTIME_IRQ];
> >  	irq_cputime = min(irq_cputime, maxtime);
> >  	cpustat[CPUTIME_IRQ] += irq_cputime;
> > -	local_irq_restore(flags);
> > +
> >  	return irq_cputime;
> >  }
> >
> >  static cputime_t irqtime_account_si_update(cputime_t maxtime)
> >  {
> >  	u64 *cpustat = kcpustat_this_cpu->cpustat;
> > -	unsigned long flags;
> >  	cputime_t softirq_cputime;
> >
> +	lockdep_assert_irqs_disabled();
>
> > -	local_irq_save(flags);
> >  	softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
> >  		      cpustat[CPUTIME_SOFTIRQ];
> >  	softirq_cputime = min(softirq_cputime, maxtime);
> >  	cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
> > -	local_irq_restore(flags);
> > +
> >  	return softirq_cputime;
> >  }
>
> ---
> Subject: locking/lockdep: Provide IRQ state assertion helpers
>
> Provide a cheap alternative to:
>
> 	WARN_ON(!irqs_disabled());
>
> This patch provides:
>
> 	lockdep_assert_irqs_disabled();
> 	lockdep_assert_softirqs_disabled();
>
> Which compile away for CONFIG_LOCKDEP=n kernels.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> ---
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -526,6 +526,23 @@ static inline void print_irqtrace_events
>  }
>  #endif
>
> +#if defined(CONFIG_LOCKDEP) && defined(CONFIG_TRACE_IRQFLAGS)
> +
> +#define lockdep_assert_irqs_disabled() do {			\
> +		WARN_ON(debug_locks && current->hardirqs_enabled);	\
> +	} while (0)
> +
> +#define lockdep_assert_softirqs_disabled() do {		\
> +		WARN_ON(debug_locks && current->softirqs_enabled);	\
> +	} while (0)
> +
> +#else
> +
> +#define lockdep_assert_irqs_disabled() do { } while (0)
> +#define lockdep_assert_softirqs_disabled() do { } while (0)
> +
> +#endif
> +
>  /*
>   * For trivial one-depth nesting of a lock-class, the following
>   * global define can be used. (Subsystems with multiple levels
I love that! I've seen so many cases where I wanted this runtime check without
its overhead on production kernels.
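For illustration, a hypothetical call site would then look something like this
(sketch only, the function name is made up; cpu_hardirq_time is from the patch
above):

	static u64 snapshot_hardirq_time(void)
	{
		/*
		 * Caller must have IRQs disabled. With CONFIG_LOCKDEP=n
		 * this check compiles to nothing, whereas
		 * WARN_ON(!irqs_disabled()) always pays for the
		 * irqs_disabled() read.
		 */
		lockdep_assert_irqs_disabled();

		return __this_cpu_read(cpu_hardirq_time);
	}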
I can take that patch. Now, since this is lockdep code and my series is
scheduler code that depends on it, how can we manage the dependency between
the two branches?
Perhaps I can add the lockdep patch to the series; there will likely be no
wicked conflicts against potential changes in the lockdep tree.
Another alternative is to use WARN_ON_ONCE(!irqs_disabled()) in my series, then
in the lockdep branch we can convert all the potential users, including the
current one. The lockdep branch would then depend on the others. That approach
looks better, as I can think of several sites to convert.
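Concretely, the series would carry something like this for now (sketch):

	/* Interim, open-coded until the lockdep helpers are merged: */
	WARN_ON_ONCE(!irqs_disabled());

and the lockdep branch would later convert this site, along with the other
candidates, to:

	lockdep_assert_irqs_disabled();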
Thanks.