Re: [PATCH V2 2/3] arch_topology: Avoid use-after-free for scale_freq_data

From: Viresh Kumar
Date: Wed Jun 16 2021 - 04:19:05 EST


On 16-06-21, 09:57, Greg Kroah-Hartman wrote:
> On Wed, Jun 16, 2021 at 12:18:08PM +0530, Viresh Kumar wrote:
> > Currently topology_scale_freq_tick() may end up using a pointer to
> > struct scale_freq_data, which was previously cleared by
> > topology_clear_scale_freq_source(), as there is no protection in place
> > here. The users of topology_clear_scale_freq_source(), though, need
> > a guarantee that the previous scale_freq_data isn't used anymore.
> >
> > Since topology_scale_freq_tick() is called from the scheduler tick, we
> > don't want to add locking in there. Use the RCU update mechanism instead
> > (which is already used by the scheduler's utilization update path) to
> > guarantee race free updates here.
> >
> > Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > Signed-off-by: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
>
> So this is a bugfix for problems in the current codebase? What commit
> does this fix? Should it go to the stable kernels?

There is only one user of topology_clear_scale_freq_source() (the
cppc-cpufreq driver, which has already been reverted in
pm/linux-next). So in the upcoming 5.13 kernel release there will be
no users of this API, and nothing will break.

That's why I skipped the Fixes tag; I can add it though.

> > ---
> > drivers/base/arch_topology.c | 27 +++++++++++++++++++++------
> > 1 file changed, 21 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> > index c1179edc0f3b..921312a8d957 100644
> > --- a/drivers/base/arch_topology.c
> > +++ b/drivers/base/arch_topology.c
> > @@ -18,10 +18,11 @@
> > #include <linux/cpumask.h>
> > #include <linux/init.h>
> > #include <linux/percpu.h>
> > +#include <linux/rcupdate.h>
> > #include <linux/sched.h>
> > #include <linux/smp.h>
> >
> > -static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
> > +static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data);
> > static struct cpumask scale_freq_counters_mask;
> > static bool scale_freq_invariant;
> >
> > @@ -66,16 +67,20 @@ void topology_set_scale_freq_source(struct scale_freq_data *data,
> > if (cpumask_empty(&scale_freq_counters_mask))
> > scale_freq_invariant = topology_scale_freq_invariant();
> >
> > + rcu_read_lock();
> > +
> > for_each_cpu(cpu, cpus) {
> > - sfd = per_cpu(sft_data, cpu);
> > + sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
> >
> > /* Use ARCH provided counters whenever possible */
> > if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) {
> > - per_cpu(sft_data, cpu) = data;
> > + rcu_assign_pointer(per_cpu(sft_data, cpu), data);
> > cpumask_set_cpu(cpu, &scale_freq_counters_mask);
> > }
> > }
> >
> > + rcu_read_unlock();
> > +
> > update_scale_freq_invariant(true);
> > }
> > EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
> > @@ -86,22 +91,32 @@ void topology_clear_scale_freq_source(enum scale_freq_source source,
> > struct scale_freq_data *sfd;
> > int cpu;
> >
> > + rcu_read_lock();
> > +
> > for_each_cpu(cpu, cpus) {
> > - sfd = per_cpu(sft_data, cpu);
> > + sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
> >
> > if (sfd && sfd->source == source) {
> > - per_cpu(sft_data, cpu) = NULL;
> > + rcu_assign_pointer(per_cpu(sft_data, cpu), NULL);
> > cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
> > }
> > }
> >
> > + rcu_read_unlock();
> > +
> > + /*
> > + * Make sure all references to previous sft_data are dropped to avoid
> > + * use-after-free races.
> > + */
> > + synchronize_rcu();
>
> What race is happening? How could the current code race? Only when a
> cpu is removed?

topology_scale_freq_tick() is called by the scheduler for each CPU
from scheduler_tick().
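
For reference, with this change the tick-path reader ends up looking
roughly like the sketch below. This is modeled on the code in this
series, not quoted from it; the fragment is not standalone-compilable
and helper names may differ slightly in the tree:

```c
/*
 * Reader side: called from the scheduler tick with preemption
 * disabled, which is itself an RCU-sched read-side critical section,
 * so no explicit rcu_read_lock() is needed here.
 */
void topology_scale_freq_tick(void)
{
	struct scale_freq_data *sfd =
		rcu_dereference_sched(*this_cpu_ptr(&sft_data));

	if (sfd)
		sfd->set_freq_scale();
}
```

Because the tick runs with preemption disabled, synchronize_rcu() in
topology_clear_scale_freq_source() is guaranteed to wait for any tick
that may have sampled the old pointer.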

It is possible that topology_scale_freq_tick() ends up using a stale
copy of the sft_data pointer while that pointer is being cleared by
topology_clear_scale_freq_source(), e.g. because a CPU or a cpufreq
driver went away, or during normal suspend/resume (where CPUs are
hot-unplugged).

synchronize_rcu() makes sure that all RCU read-side critical sections
that started before it was called will have finished before it
returns. So the callers of topology_clear_scale_freq_source() don't
need to worry about their callback getting called anymore.

--
viresh