Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC

From: Dietmar Eggemann
Date: Wed Apr 28 2021 - 12:47:45 EST


On 28/04/2021 15:04, Vincent Guittot wrote:
> On Wed, 28 Apr 2021 at 11:51, Song Bao Hua (Barry Song)
> <song.bao.hua@xxxxxxxxxxxxx> wrote:
>>
>>> -----Original Message-----
>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@xxxxxxx]

[...]

>>> On 20/04/2021 02:18, Barry Song wrote:

[...]

>> I am really confused. The whole code has only checked whether wake_flags
>> has WF_TTWU; it has never checked whether the sched domain has the
>> SD_BALANCE_WAKE flag.
>
> look at :
> #define WF_TTWU 0x08 /* Wakeup; maps to SD_BALANCE_WAKE */
>
> so when wake_wide() returns false, we use the wake_affine mechanism;
> otherwise we fall back to the default mode which looks for:
> if (tmp->flags & sd_flag)
>
> This means looking for SD_BALANCE_WAKE which is never set
>
> so sd will stay NULL and you will end up calling select_idle_sibling anyway
>
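
To illustrate Vincent's point, this is roughly the relevant domain walk in
select_task_rq_fair() (trimmed excerpt from around v5.12, so it may not
match your tree exactly; the second comment in the loop is mine). sd_flag
is taken from wake_flags, so for WF_TTWU it asks for SD_BALANCE_WAKE,
which no sched domain sets:

        /* SD_flags and WF_flags share the first nibble */
        int sd_flag = wake_flags & 0xF;
        ...
        for_each_domain(cpu, tmp) {
                /*
                 * If both 'cpu' and 'prev_cpu' are part of this domain,
                 * cpu is a valid SD_WAKE_AFFINE candidate.
                 */
                if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
                    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
                        if (cpu != prev_cpu)
                                new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);

                        sd = NULL; /* Prefer wake_affine over balance flags */
                        break;
                }

                /*
                 * SD_BALANCE_WAKE is never set in a domain's flags, so for
                 * wakeups (WF_TTWU) sd stays NULL here.
                 */
                if (tmp->flags & sd_flag)
                        sd = tmp;
                else if (!want_affine)
                        break;
        }
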
>>
>> static int
>> select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
>> {
>>         ...
>>
>>         if (wake_flags & WF_TTWU) {
>>                 record_wakee(p);
>>
>>                 if (sched_energy_enabled()) {
>>                         new_cpu = find_energy_efficient_cpu(p, prev_cpu);
>>                         if (new_cpu >= 0)
>>                                 return new_cpu;
>>                         new_cpu = prev_cpu;
>>                 }
>>
>>                 want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
>>         }
>> }
>>
>> And try_to_wake_up() always sets WF_TTWU:
>>
>> static int
>> try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>> {
>>         cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
>>         ...
>> }
>>
>> So the change in wake_wide() will actually affect the value of want_affine.
>> And I did also see the code enter the slow path during my benchmark.

Yes, this is happening, but IMHO not for wakeups. Check the wake_flags of
the tasks which go through the `slow path` on your machine. They should
have WF_EXEC or WF_FORK, not WF_TTWU (& WF_SYNC).
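
The slow-path callers pass those flags explicitly, and only their balance
flags are actually set in the sched domains (trimmed excerpt from around
v5.12, so details may differ slightly in your tree):

/* kernel/sched/sched.h */
#define WF_EXEC     0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
#define WF_FORK     0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
#define WF_TTWU     0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */

/* kernel/sched/core.c */
void wake_up_new_task(struct task_struct *p)
{
        ...
        __set_task_cpu(p, select_task_rq(p, task_cpu(p), WF_FORK));
        ...
}

void sched_exec(void)
{
        struct task_struct *p = current;
        ...
        dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p), WF_EXEC);
        ...
}

Since SD_BALANCE_FORK and SD_BALANCE_EXEC are set in the default domain
flags while SD_BALANCE_WAKE is not, only the fork/exec paths can end up
with a non-NULL sd and take find_idlest_cpu().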

>> One issue I mentioned during the Linaro open discussion is that,
>> since I have moved to using the cluster size to decide the value
>> of wake_wide(), relatively fewer tasks will make wake_wide()
>> decide to go to the slow path; thus, tasks begin to spread to
>> other NUMA nodes, while llc_size might actually be able to contain
>> those tasks. So a possible model might be:
>>
>> static int wake_wide(struct task_struct *p)
>> {
>>         tasksize < cluster                    : scan cluster
>>         tasksize > llc                        : slow path
>>         tasksize > cluster && tasksize < llc  : scan llc
>> }
>>
>> thoughts?

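If I read the idea right, it would be something like the sketch below
(pure pseudo-C, keeping the existing wakee_flips heuristic;
cluster_weight()/llc_weight() are made-up placeholder helpers for the CPU
counts of the wakeup CPU's cluster/LLC, not real kernel functions):

enum wake_scan { SCAN_CLUSTER, SCAN_LLC, SCAN_SLOW_PATH };

static enum wake_scan wake_wide_3level(struct task_struct *p)
{
        unsigned int master = current->wakee_flips;
        unsigned int slave = p->wakee_flips;
        unsigned int cluster = cluster_weight(smp_processor_id()); /* made up */
        unsigned int llc = llc_weight(smp_processor_id());         /* made up */

        if (master < slave)
                swap(master, slave);

        /* "tasksize < cluster": stay within the cluster */
        if (slave < cluster || master < slave * cluster)
                return SCAN_CLUSTER;

        /* "cluster < tasksize < llc": still fast path, scan the whole LLC */
        if (slave < llc || master < slave * llc)
                return SCAN_LLC;

        /* "tasksize > llc": go to the slow path */
        return SCAN_SLOW_PATH;
}
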
But like Vincent explained, the return value of wake_wide() doesn't matter
here: for wakeups you always end up in sis().
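
I.e. at the end of select_task_rq_fair() the dispatch looks roughly like
this (again a trimmed excerpt from around v5.12; the slow-path comment is
mine):

        if (unlikely(sd)) {
                /*
                 * Slow path - only reachable via WF_FORK/WF_EXEC, since
                 * SD_BALANCE_WAKE is not set in any domain's flags.
                 */
                new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
        } else if (wake_flags & WF_TTWU) { /* XXX always ? */
                /* Fast path */
                new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
        }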

[...]