Re: [PATCH v2 07/23] sched/cache: Introduce per runqueue task LLC preference counter

From: Peter Zijlstra
Date: Wed Dec 10 2025 - 07:43:54 EST


On Wed, Dec 03, 2025 at 03:07:26PM -0800, Tim Chen wrote:

> +static int resize_llc_pref(void)
> +{
> +	unsigned int *__percpu *tmp_llc_pref;
> +	int i, ret = 0;
> +
> +	if (new_max_llcs <= max_llcs)
> +		return 0;
> +
> +	/*
> +	 * Allocate temp percpu pointer for old llc_pref,
> +	 * which will be released after switching to the
> +	 * new buffer.
> +	 */
> +	tmp_llc_pref = alloc_percpu_noprof(unsigned int *);
> +	if (!tmp_llc_pref)
> +		return -ENOMEM;
> +
> +	for_each_present_cpu(i)
> +		*per_cpu_ptr(tmp_llc_pref, i) = NULL;
> +
> +	/*
> +	 * Resize the per rq nr_pref_llc buffer and
> +	 * switch to this new buffer.
> +	 */
> +	for_each_present_cpu(i) {
> +		struct rq_flags rf;
> +		unsigned int *new;
> +		struct rq *rq;
> +
> +		rq = cpu_rq(i);
> +		new = alloc_new_pref_llcs(rq->nr_pref_llc, per_cpu_ptr(tmp_llc_pref, i));
> +		if (!new) {
> +			ret = -ENOMEM;
> +
> +			goto release_old;
> +		}
> +
> +		/*
> +		 * Locking rq ensures that rq->nr_pref_llc values
> +		 * don't change with new task enqueue/dequeue
> +		 * when we repopulate the newly enlarged array.
> +		 */
> +		rq_lock_irqsave(rq, &rf);
> +		populate_new_pref_llcs(rq->nr_pref_llc, new);
> +		rq->nr_pref_llc = new;
> +		rq_unlock_irqrestore(rq, &rf);
> +	}
> +
> +release_old:
> +	/*
> +	 * Load balance is done under rcu_lock.
> +	 * Wait for load balance before and during resizing to
> +	 * be done. They may refer to old nr_pref_llc[]
> +	 * that hasn't been resized.
> +	 */
> +	synchronize_rcu();
> +	for_each_present_cpu(i)
> +		kfree(*per_cpu_ptr(tmp_llc_pref, i));
> +
> +	free_percpu(tmp_llc_pref);
> +
> +	/* succeed and update */
> +	if (!ret)
> +		max_llcs = new_max_llcs;
> +
> +	return ret;
> +}

> @@ -2674,6 +2787,8 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
>  	if (has_cluster)
>  		static_branch_inc_cpuslocked(&sched_cluster_active);
>
> +	resize_llc_pref();
> +
>  	if (rq && sched_debug_verbose)
>  		pr_info("root domain span: %*pbl\n", cpumask_pr_args(cpu_map));

I suspect people will hate on you for that synchronize_rcu() in there.

Specifically, we do build_sched_domains() for every CPU brought online,
which means booting 512 CPUs now includes 512 sync_rcu()s.

Worse, IIRC sync_rcu() is O(n) (or worse -- could be n*ln(n)) in number
of CPUs, so the total thing will be O(n^2) (or worse) for bringing CPUs
online.