Re: [PATCH 0/2] incorrect cpumask behavior with CPUMASK_OFFSTACK

From: Rusty Russell
Date: Fri Feb 27 2015 - 07:11:24 EST


green@xxxxxxxxxxxxxx writes:
> From: Oleg Drokin <green@xxxxxxxxxxxxxx>
>
> I just got a report today from Tyson Whitehead <twhitehead@xxxxxxxxx>
> that Lustre crashes when CPUMASK_OFFSTACK is enabled.
>
> A little investigation revealed that this code:
> cpumask_t mask;
> ...
> cpumask_copy(&mask, topology_thread_cpumask(0));
> weight = cpus_weight(mask);

Yes. cpumask_weight should have been used here. The old cpus_* are
deprecated.

> The second patch is one that I am not sure we want, but it seems to be
> useful until struct cpumask is fully dynamic: convert what look like
> whole-set operations, e.g. copies, namely:
> cpumask_setall, cpumask_clear, cpumask_copy to always operate on NR_CPUS
> bits to ensure there's no stale garbage left in the mask should the
> cpu count increase later.

You can't do this, because dynamically allocated cpumasks don't have
NR_CPUS bits.

Let's just kill all the cpus_ functions. This wasn't done originally
because archs which didn't care about offstack cpumasks didn't want the
churn. In particular, such code must not copy struct cpumask by
assignment, and fixing those assignments is a fair bit of churn.

The following is the minimal fix:

Cheers,
Rusty.

CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK.

Using these functions with offstack cpumasks is unsafe: they operate on
all NR_CPUS bits instead of nr_cpumask_bits.

Signed-off-by: Rusty Russell <rusty@xxxxxxxxxxxxxxx>

diff --git a/lib/Kconfig b/lib/Kconfig
index 87da53bb1fef..51b4210f3da9 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -398,8 +398,7 @@ config CPUMASK_OFFSTACK
stack overflow.

config DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
- bool "Disable obsolete cpumask functions" if DEBUG_PER_CPU_MAPS
- depends on BROKEN
+ bool "Disable obsolete cpumask functions" if CPUMASK_OFFSTACK

config CPU_RMAP
bool
--