Re: [PATCH] specific do_timer_cpu value for nohz off mode

From: Dimitri Sivanich
Date: Fri Dec 02 2011 - 15:14:55 EST


On Thu, Dec 01, 2011 at 02:56:23PM -0800, Andrew Morton wrote:
> On Thu, 1 Dec 2011 10:37:40 -0600
> Dimitri Sivanich <sivanich@xxxxxxx> wrote:
>
> > +static ssize_t sysfs_store_do_timer_cpu(struct sys_device *dev,
> > + struct sysdev_attribute *attr,
> > + const char *buf, size_t size)
> > +{
> > + struct sysdev_ext_attribute *ea = SYSDEV_TO_EXT_ATTR(attr);
> > + unsigned int new;
> > + int rv;
> > +
> > +#ifdef CONFIG_NO_HZ
> > + /* nohz mode not supported */
> > + if (tick_nohz_enabled)
> > + return -EINVAL;
> > +#endif
> > +
> > + rv = kstrtouint(buf, 0, &new);
> > + if (rv)
> > + return rv;
> > +
> > + /* Protect against cpu-hotplug */
> > + get_online_cpus();
> > +
> > + if (new >= nr_cpu_ids || !cpu_online(new)) {
> > + put_online_cpus();
> > + return -ERANGE;
> > + }
> > +
> > + *(unsigned int *)(ea->var) = new;
> > +
> > + put_online_cpus();
> > +
> > + return size;
> > +}
>
> OK, I think this fixes one race. We modify tick_do_timer_cpu inside
> get_online_cpus(). If that cpu goes offline then
> tick_handover_do_timer() will correctly hand the timer functions over
> to a new CPU, and tick_handover_do_timer() runs in the CPU hotplug
> handler which I assume is locked by get_online_cpus(). Please check
> all this.

Yes. _cpu_down() runs cpu_hotplug_begin(), which takes and holds the mutex
that get_online_cpus() needs in order to bump the refcount
(cpu_hotplug_begin() doesn't return until the refcount is 0).
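
For reference, the interlock looks roughly like this (paraphrased from
kernel/cpu.c from memory, so details may differ from the actual tree):

static struct {
    struct task_struct *active_writer;
    struct mutex lock;          /* protects refcount */
    int refcount;
} cpu_hotplug;

void get_online_cpus(void)
{
    might_sleep();
    if (cpu_hotplug.active_writer == current)
        return;
    mutex_lock(&cpu_hotplug.lock);
    cpu_hotplug.refcount++;     /* readers just bump the count */
    mutex_unlock(&cpu_hotplug.lock);
}

static void cpu_hotplug_begin(void)
{
    cpu_hotplug.active_writer = current;
    for (;;) {
        mutex_lock(&cpu_hotplug.lock);
        if (likely(!cpu_hotplug.refcount))
            break;              /* returns with the mutex held */
        __set_current_state(TASK_UNINTERRUPTIBLE);
        mutex_unlock(&cpu_hotplug.lock);
        schedule();
    }
}

So while the sysfs store holds the read side, cpu_hotplug_begin() (and
everything _cpu_down() does after it) has to wait.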

The notification that calls tick_handover_do_timer() is sent for both
CPU_DYING and CPU_DYING_FROZEN (i.e. CPU_DYING | CPU_TASKS_FROZEN), but I
believe it always comes from _cpu_down() in either case.
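
For the archives, the path as I recall it: the CPU_DYING/CPU_DYING_FROZEN
hotplug notifier in kernel/hrtimer.c sends a clockevents notification, and
tick_notify() in kernel/time/tick-common.c turns that into the handover.
Roughly (abridged from memory, other cases elided):

static int __cpuinit hrtimer_cpu_notify(struct notifier_block *self,
                                        unsigned long action, void *hcpu)
{
    int scpu = (long)hcpu;

    switch (action) {
    case CPU_DYING:
    case CPU_DYING_FROZEN:
        /* ends up in tick_notify() -> tick_handover_do_timer() */
        clockevents_notify(CLOCK_EVT_NOTIFY_CPU_DYING, &scpu);
        break;
    default:
        break;
    }
    return NOTIFY_OK;
}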

>
> Now, the above code can alter tick_do_timer_cpu while a timer interrupt
> is actually executing on another CPU. Will this disrupt anything? I
> think it might cause problems. If we take an interrupt on CPU 5 and
> that CPU enters tick_periodic() and another CPU alters
> tick_do_timer_cpu from 5 to 4 at exactly the correct time, tick_periodic()
> might fail to run do_timer(). Or it might run do_timer() on both CPUs 4 and
> 5 concurrently?
>

Well, we do have to take write_seqlock() in tick_periodic(), so there's
no danger of do_timer() actually running concurrently on two CPUs.

But yes, we may end up with two jiffies updates occurring close together
(when CPU 5 runs do_timer() while CPU 4 waits on the seqlock), or we might
miss a jiffies update for almost a full tick (when tick_do_timer_cpu changes
from 5 to 4 immediately after CPU 4 has done its 'tick_do_timer_cpu == cpu'
check).
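
For reference, periodic-mode tick handling is roughly this (paraphrased
from kernel/time/tick-common.c, again from memory):

static void tick_periodic(int cpu)
{
    if (tick_do_timer_cpu == cpu) {
        write_seqlock(&xtime_lock);

        /* Keep track of the next tick event */
        tick_next_period = ktime_add(tick_next_period, tick_period);

        do_timer(1);
        write_sequnlock(&xtime_lock);
    }

    update_process_times(user_mode(get_irq_regs()));
    profile_tick(CPU_PROFILING);
}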

So around the switchover we could be off by up to almost a full tick, in
either direction. The question is, how critical is that? When you offline
a cpu, the same sort of thing can already happen via
tick_handover_do_timer(), which itself does nothing more than change
tick_do_timer_cpu.
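
For completeness, the handover itself (same caveat, from memory) is just:

static void tick_handover_do_timer(int *cpup)
{
    if (*cpup == tick_do_timer_cpu) {
        int cpu = cpumask_first(cpu_online_mask);

        /* hand the duty over to the first cpu still online */
        tick_do_timer_cpu = (cpu < nr_cpu_ids) ? cpu :
            TICK_DO_TIMER_NONE;
    }
}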