Re: [PATCH 02/19] sched/fair: Record per-LLC utilization to guide cache-aware scheduling decisions
From: K Prateek Nayak
Date: Mon Oct 27 2025 - 22:50:49 EST
Hello Chenyu,
On 10/27/2025 7:37 PM, Chen, Yu C wrote:
> Hi Prateek,
>
> On 10/27/2025 1:01 PM, K Prateek Nayak wrote:
>> Hello Tim,
>>
>> On 10/11/2025 11:54 PM, Tim Chen wrote:
>>> +#ifdef CONFIG_SCHED_CACHE
>>> +/*
>>> + * Record the statistics for this scheduler group for later
>>> + * use. These values guide load balancing on aggregating tasks
>>> + * to a LLC.
>>> + */
>>> +static void record_sg_llc_stats(struct lb_env *env,
>>> + struct sg_lb_stats *sgs,
>>> + struct sched_group *group)
>>> +{
>>> + /*
>>> + * Find the child domain on env->dst_cpu. This domain
>>> + * is either the domain that spans this group(if the
>>> + * group is a local group), or the sibling domain of
>>> + * this group.
>>> + */
>>> + struct sched_domain *sd = env->sd->child;
>>
>> Was this intentionally done to limit the update to sg_llc_stats to the
>> load balancing period of "sd_llc->parent"?
>>
>> Can't this be done with update_idle_cpu_scan()? I believe it is more
>> frequent, "sds->total_capacity" from caller gives you the equivalent of
>> "group_capacity", and "group_util" is already calculated as "sum_util".
>>
>> Checking "sd_llc->parent" there should be sufficient to check if there
>> are multiple LLC domains or not. Thoughts?
>>
>
> The original idea was to calculate the statistics for the CPUs within
> one LLC, and set the tag for that sched group as well as its sg_lb_stats
> (but not at the sched domain scope). With this flag set in that sched group,
> we can perform some comparisons in update_sd_pick_busiest() to determine if
> that sched group has any tasks that need to be moved to other LLC sched groups.
> If we do this in update_idle_cpu_scan(), might it be a bit late for
> update_sd_pick_busiest()?
Once I got to Patch 10, the placement of record_sg_llc_stats() became
clearer with respect to the subsequent call to llc_balance(). Thank you
for clarifying, and sorry for the noise.
--
Thanks and Regards,
Prateek