Re: [PATCH v3 08/21] sched/cache: Calculate the percpu sd task LLC preference

From: Chen, Yu C

Date: Fri Feb 20 2026 - 12:26:18 EST


On 2/20/2026 10:02 PM, Peter Zijlstra wrote:
> On Fri, Feb 20, 2026 at 12:02:22PM +0100, Peter Zijlstra wrote:
>> On Tue, Feb 10, 2026 at 02:18:48PM -0800, Tim Chen wrote:
>>>  static void account_llc_enqueue(struct rq *rq, struct task_struct *p)
>>>  {
>>> +	struct sched_domain *sd;
>>>  	int pref_llc;
>>>
>>>  	pref_llc = p->preferred_llc;
>>> -	if (pref_llc < 0)
>>> +	if (!valid_llc_id(pref_llc))
>>>  		return;
>>>
>>>  	rq->nr_llc_running++;
>>>  	rq->nr_pref_llc_running += (pref_llc == task_llc(p));
>>> +
>>> +	scoped_guard (rcu) {
>>> +		sd = rcu_dereference(rq->sd);
>>> +		if (valid_llc_buf(sd, pref_llc))
>>> +			sd->pf[pref_llc]++;
>>> +	}
>>>  }
>>>  static void account_llc_dequeue(struct rq *rq, struct task_struct *p)
>>>  {
>>> +	struct sched_domain *sd;
>>>  	int pref_llc;
>>>
>>>  	pref_llc = p->preferred_llc;
>>> -	if (pref_llc < 0)
>>> +	if (!valid_llc_id(pref_llc))
>>>  		return;
>>>
>>>  	rq->nr_llc_running--;
>>>  	rq->nr_pref_llc_running -= (pref_llc == task_llc(p));
>>> +
>>> +	scoped_guard (rcu) {
>>> +		sd = rcu_dereference(rq->sd);
>>> +		if (valid_llc_buf(sd, pref_llc)) {
>>> +			/*
>>> +			 * There is a race condition between dequeue
>>> +			 * and CPU hotplug. After a task has been enqueued
>>> +			 * on CPUx, a CPU hotplug event occurs, and all online
>>> +			 * CPUs (including CPUx) rebuild their sched_domains
>>> +			 * and reset their statistics to zero (including sd->pf).
>>> +			 * This can cause a temporary undercount, so we have to
>>> +			 * check for such underflow in sd->pf.
>>> +			 *
>>> +			 * The undercount is temporary and accurate accounting
>>> +			 * will resume once the rq has a chance to be idle.
>>> +			 */
>>> +			if (sd->pf[pref_llc])
>>> +				sd->pf[pref_llc]--;
>>> +		}
>>> +	}
>>>  }

>> FWIW, enqueue/dequeue must be with rq->lock held, and thus preemption
>> disabled and IRQs off. That RCU section is completely pointless.
>
> That is, use rcu_dereference_all() and observe the warning go away.

OK, we will remove the scoped_guard(rcu) (i.e., rcu_read_lock()) and use rcu_dereference_all() directly.
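[Editorial aside: following that suggestion, the revised helper might look roughly like the sketch below. This is an uncompiled kernel-style fragment, not the actual v4 patch; valid_llc_id(), valid_llc_buf() and sd->pf come from the quoted patch above:

```c
static void account_llc_enqueue(struct rq *rq, struct task_struct *p)
{
	struct sched_domain *sd;
	int pref_llc = p->preferred_llc;

	if (!valid_llc_id(pref_llc))
		return;

	rq->nr_llc_running++;
	rq->nr_pref_llc_running += (pref_llc == task_llc(p));

	/*
	 * Callers hold rq->lock with IRQs disabled, which already
	 * implies an RCU read-side critical section, so no explicit
	 * rcu_read_lock()/scoped_guard(rcu) is needed here.
	 * rcu_dereference_all() accepts any reader context and keeps
	 * the lockdep splat quiet.
	 */
	sd = rcu_dereference_all(rq->sd);
	if (valid_llc_buf(sd, pref_llc))
		sd->pf[pref_llc]++;
}
```

account_llc_dequeue() would change the same way, keeping its underflow check on sd->pf.]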

thanks,
Chenyu