[PATCH v2 1/2] sched/membarrier: Use per-CPU mutexes for targeted commands

From: Aniket Gattani

Date: Wed Apr 15 2026 - 19:22:04 EST


Currently, the membarrier system call uses a single global mutex
(`membarrier_ipi_mutex`) to serialize expedited commands. This causes
significant contention on large systems when multiple threads invoke
membarrier concurrently, even if they target different CPUs.

This contention becomes critical when combined with CFS bandwidth
throttling/unthrottling, during which interrupts can be disabled for
relatively long periods on target CPUs. If membarrier is waiting for a
response from such a CPU, it holds the global mutex, blocking all other
membarrier calls on the system. This cascade effect can lead to hard
lockups when thousands of threads stall waiting for the mutex.

Optimize `MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ` when a specific CPU is
targeted by introducing per-CPU mutexes. Broadcast commands and commands
without a specific CPU target continue to use the global mutex.

This prevents the cascade lockup scenario. As measured by the stress test
introduced in the subsequent patch, on an AMD Turin machine with 384 CPUs
(2 NUMA nodes, SMT=2), this optimization yields a ~200x improvement in
throughput.

Signed-off-by: Aniket Gattani <aniketgattani@xxxxxxxxxx>

---
Changes in v2:
- Use different mutex macros for global vs targeted-CPU membarrier (Mathieu).
- Use (unsigned int) cpu_id >= nr_cpu_ids (Peter).
---
kernel/sched/membarrier.c | 36 +++++++++++++++++++++++++-----------
1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 623445603725..7f995bd48280 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -165,7 +165,20 @@
| MEMBARRIER_CMD_GET_REGISTRATIONS)

static DEFINE_MUTEX(membarrier_ipi_mutex);
+static DEFINE_PER_CPU(struct mutex, membarrier_cpu_mutexes);
+
#define SERIALIZE_IPI() guard(mutex)(&membarrier_ipi_mutex)
+/* Fall back to the global mutex for broadcast commands (cpu_id < 0). */
+#define SERIALIZE_IPI_CPU(cpu_id) \
+ guard(mutex)((cpu_id) >= 0 ? &per_cpu(membarrier_cpu_mutexes, cpu_id) : &membarrier_ipi_mutex)
+
+static int __init membarrier_init(void)
+{
+ int i;
+
+ for_each_possible_cpu(i)
+ mutex_init(&per_cpu(membarrier_cpu_mutexes, i));
+ return 0;
+}
+core_initcall(membarrier_init);

static void ipi_mb(void *info)
{
@@ -358,14 +371,19 @@ static int membarrier_private_expedited(int flags, int cpu_id)
if (cpu_id < 0 && !zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
return -ENOMEM;

- SERIALIZE_IPI();
+ if (cpu_id >= 0 && ((unsigned int)cpu_id >= nr_cpu_ids || !cpu_possible(cpu_id)))
+ return 0;
+
+ SERIALIZE_IPI_CPU(cpu_id);
+
cpus_read_lock();

if (cpu_id >= 0) {
struct task_struct *p;

- if (cpu_id >= nr_cpu_ids || !cpu_online(cpu_id))
+ if (!cpu_online(cpu_id))
goto out;
+
rcu_read_lock();
p = rcu_dereference(cpu_rq(cpu_id)->curr);
if (!p || p->mm != mm) {
@@ -373,6 +391,11 @@ static int membarrier_private_expedited(int flags, int cpu_id)
goto out;
}
rcu_read_unlock();
+ /*
+ * smp_call_function_single() will call ipi_func() if cpu_id
+ * is the calling CPU.
+ */
+ smp_call_function_single(cpu_id, ipi_func, NULL, 1);
} else {
int cpu;

@@ -385,15 +408,6 @@ static int membarrier_private_expedited(int flags, int cpu_id)
__cpumask_set_cpu(cpu, tmpmask);
}
rcu_read_unlock();
- }
-
- if (cpu_id >= 0) {
- /*
- * smp_call_function_single() will call ipi_func() if cpu_id
- * is the calling CPU.
- */
- smp_call_function_single(cpu_id, ipi_func, NULL, 1);
- } else {
/*
* For regular membarrier, we can save a few cycles by
* skipping the current cpu -- we're about to do smp_mb()
--
2.54.0.rc1.513.gad8abe7a5a-goog