Re: [RFC PATCH v4 1/9] CPU hotplug: Provide APIs to prevent CPU offline from atomic context
From: Oleg Nesterov
Date: Wed Dec 12 2012 - 16:12:30 EST
On 12/13, Srivatsa S. Bhat wrote:
>
> On 12/13/2012 01:06 AM, Oleg Nesterov wrote:
> >
> > But perhaps there is another reason to make it per-cpu...
Actually this is not the reason, please see below. But let me repeat:
I am not suggesting that we remove the "per-cpu" part.
> > It seems we can avoid cpu_hotplug.active_writer == current check in
> > get/put.
> >
> > take_cpu_down() can clear this_cpu(writer_signal) right after it takes
> > hotplug_rwlock for writing. It runs with irqs and preemption disabled,
> > nobody else will ever look at writer_signal on its CPU.
> >
>
> Hmm.. And then the get/put_ on that CPU will increment/decrement the per-cpu
> refcount, but we don't care.. because we only need to ensure that they don't
> deadlock by taking the rwlock for read.
Yes, but...
Probably it would be cleaner to simply do this_cpu_inc(reader_percpu_refcnt)
after write_lock(hotplug_rwlock). This has the same effect on get/put,
and we can still make writer_signal global (if we want).
And note that this will also simplify the lockdep annotations which we (imho)
should add later.
Ignoring all complications, get_online_cpus_atomic() does:

	if (this_cpu_read(reader_percpu_refcnt))
		this_cpu_inc(reader_percpu_refcnt);
	else if (!writer_signal)
		this_cpu_inc(reader_percpu_refcnt);	// same as above
	else
		read_lock(&hotplug_rwlock);
But for lockdep it should do:

	if (this_cpu_read(reader_percpu_refcnt))
		this_cpu_inc(reader_percpu_refcnt);
	else if (!writer_signal) {
		this_cpu_inc(reader_percpu_refcnt);
		// pretend we take hotplug_rwlock for lockdep
		rwlock_acquire_read(&hotplug_rwlock.dep_map, 0, 0);
	} else
		read_lock(&hotplug_rwlock);
And we need to ensure that rwlock_acquire_read() is not called under
write_lock(hotplug_rwlock).
If we use reader_percpu_refcnt to fool get/put, we should not worry.
Oleg.