Re: [PATCH v2 03/12] rcu: Support runtime NOCB initialization and dynamic offloading

From: Frederic Weisbecker

Date: Wed Apr 15 2026 - 06:39:56 EST


On Mon, Apr 13, 2026 at 03:43:09PM +0800, Qiliang Yuan wrote:
> Context:
> The RCU no-callbacks (NOCB) infrastructure traditionally requires
> boot-time parameters (e.g., rcu_nocbs) to allocate masks and spawn
> management kthreads (rcuog/rcuo). This prevents systems from activating
> offloading on-demand without a reboot.
>
> Problem:
> Dynamic Housekeeping Management requires CPUs to transition to
> NOCB mode at runtime when they are newly isolated. Without boot-time
> setup, the NOCB masks are unallocated, and critical kthreads are missing,
> preventing effective tick suppression and isolation.
>
> Solution:
> Refactor RCU initialization to support dynamic on-demand setup.
> - Introduce rcu_init_nocb_dynamic() to allocate masks and organize
> kthreads if the system wasn't initially configured for NOCB.
> - Introduce rcu_housekeeping_reconfigure() to iterate over CPUs and
> perform safe offload/deoffload transitions via hotplug sequences
> (cpu_down -> offload -> cpu_up) when a housekeeping cpuset triggers
> a notifier event.
> - Remove __init from rcu_organize_nocb_kthreads to allow runtime
> reconfiguration of the callback management hierarchy.
>
> This enables zero-configuration isolation: any CPU can be fully
> isolated at runtime regardless of boot parameters.
>
> Signed-off-by: Qiliang Yuan <realwujing@xxxxxxxxx>
> ---
> kernel/rcu/rcu.h | 4 +++
> kernel/rcu/tree.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++
> kernel/rcu/tree.h | 2 +-
> kernel/rcu/tree_nocb.h | 31 +++++++++++++--------
> 4 files changed, 100 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
> index 9b10b57b79ada..282874443c96b 100644
> --- a/kernel/rcu/rcu.h
> +++ b/kernel/rcu/rcu.h
> @@ -663,8 +663,12 @@ unsigned long srcu_batches_completed(struct srcu_struct *sp);
> #endif // #else // #ifdef CONFIG_TINY_SRCU
>
> #ifdef CONFIG_RCU_NOCB_CPU
> +void rcu_init_nocb_dynamic(void);
> +void rcu_spawn_cpu_nocb_kthread(int cpu);
> void rcu_bind_current_to_nocb(void);
> #else
> +static inline void rcu_init_nocb_dynamic(void) { }
> +static inline void rcu_spawn_cpu_nocb_kthread(int cpu) { }
> static inline void rcu_bind_current_to_nocb(void) { }
> #endif
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 55df6d37145e8..84c8388cf89a1 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -4928,4 +4928,79 @@ void __init rcu_init(void)
> #include "tree_stall.h"
> #include "tree_exp.h"
> #include "tree_nocb.h"
> +
> +#ifdef CONFIG_SMP
> +static int rcu_housekeeping_reconfigure(struct notifier_block *nb,
> + unsigned long action, void *data)
> +{
> + struct housekeeping_update *upd = data;
> + struct task_struct *t;
> + int cpu;
> +
> + if (action != HK_UPDATE_MASK || upd->type != HK_TYPE_RCU)
> + return NOTIFY_OK;
> +
> + rcu_init_nocb_dynamic();
> +
> + for_each_possible_cpu(cpu) {
> + struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> + bool isolated = !cpumask_test_cpu(cpu, upd->new_mask);
> + bool offloaded = rcu_rdp_is_offloaded(rdp);
> +
> + if (isolated && !offloaded) {
> + /* Transition to NOCB */
> + pr_info("rcu: CPU %d transitioning to NOCB mode\n", cpu);
> + if (cpu_online(cpu)) {
> + remove_cpu(cpu);

We plan to assume that the CPU is offline while updating HK_TYPE_KERNEL_NOISE
through cpusets. So you shouldn't need to care about offlining here.
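
Concretely, with that offline guarantee the offload branch above would
reduce to something like this (untested sketch):

	if (isolated && !offloaded) {
		/* CPU is guaranteed offline at this point by the cpuset code. */
		rcu_spawn_cpu_nocb_kthread(cpu);
		rcu_nocb_cpu_offload(cpu);
	}

and similarly for the deoffload side, with the whole
remove_cpu()/add_cpu() dance gone.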


> + rcu_spawn_cpu_nocb_kthread(cpu);
> + rcu_nocb_cpu_offload(cpu);
> + add_cpu(cpu);
> + } else {
> + rcu_spawn_cpu_nocb_kthread(cpu);
> + rcu_nocb_cpu_offload(cpu);
> + }
> + } else if (!isolated && offloaded) {
> + /* Transition to CB */
> + pr_info("rcu: CPU %d transitioning to CB mode\n", cpu);
> + if (cpu_online(cpu)) {
> + remove_cpu(cpu);
> + rcu_nocb_cpu_deoffload(cpu);
> + add_cpu(cpu);
> + } else {
> + rcu_nocb_cpu_deoffload(cpu);
> + }
> + }
> + }
> +
> + t = READ_ONCE(rcu_state.gp_kthread);
> + if (t)
> + housekeeping_affine(t, HK_TYPE_RCU);
> +
> +#ifdef CONFIG_TASKS_RCU
> + t = get_rcu_tasks_gp_kthread();
> + if (t)
> + housekeeping_affine(t, HK_TYPE_RCU);
> +#endif
> +
> +#ifdef CONFIG_TASKS_RUDE_RCU
> + t = get_rcu_tasks_rude_gp_kthread();
> + if (t)
> + housekeeping_affine(t, HK_TYPE_RCU);
> +#endif

No need to handle kthread affinities here. This is already taken care of
by isolated cpuset partitions.
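
Putting both comments together, the notifier could probably shrink to
something like this (untested sketch, assuming the CPUs in question are
already offline when the notifier fires):

	static int rcu_housekeeping_reconfigure(struct notifier_block *nb,
						unsigned long action, void *data)
	{
		struct housekeeping_update *upd = data;
		int cpu;

		if (action != HK_UPDATE_MASK || upd->type != HK_TYPE_RCU)
			return NOTIFY_OK;

		rcu_init_nocb_dynamic();

		for_each_possible_cpu(cpu) {
			struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
			bool isolated = !cpumask_test_cpu(cpu, upd->new_mask);
			bool offloaded = rcu_rdp_is_offloaded(rdp);

			/* CPU assumed offline here, per the cpuset sequence. */
			if (isolated && !offloaded) {
				rcu_spawn_cpu_nocb_kthread(cpu);
				rcu_nocb_cpu_offload(cpu);
			} else if (!isolated && offloaded) {
				rcu_nocb_cpu_deoffload(cpu);
			}
		}

		return NOTIFY_OK;
	}

i.e. no hotplug transitions and no affinity handling at all.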

Thanks.

--
Frederic Weisbecker
SUSE Labs