Re: [External] Re: PSI idle-shutoff
From: Chengming Zhou
Date: Mon Oct 10 2022 - 02:57:47 EST
On 2022/10/10 14:43, Pavan Kondeti wrote:
> On Mon, Oct 10, 2022 at 11:48:49AM +0530, Pavan Kondeti wrote:
>> On Sun, Oct 09, 2022 at 09:17:34PM +0800, Chengming Zhou wrote:
>>> On 2022/10/9 20:41, Chengming Zhou wrote:
>>>> Hello,
>>>>
>>>> I just saw these emails, sorry for late.
>>>>
>>>> On 2022/10/6 00:32, Suren Baghdasaryan wrote:
>>>>> On Sun, Oct 2, 2022 at 11:11 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>>>>>>
>>>>>> On Fri, Sep 16, 2022 at 10:45 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>>>>>>>
>>>>>>> On Wed, Sep 14, 2022 at 11:20 PM Pavan Kondeti
>>>>>>> <quic_pkondeti@xxxxxxxxxxx> wrote:
>>>>>>>>
>>>>>>>> On Tue, Sep 13, 2022 at 07:38:17PM +0530, Pavan Kondeti wrote:
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> Because psi_avgs_work()->collect_percpu_times()->get_recent_times()
>>>>>>>>> runs from a kworker thread, the PSI_NONIDLE condition is always observed,
>>>>>>>>> since there is a RUNNING task. So we always end up re-arming the work.
>>>>>>>>>
>>>>>>>>> If the work is re-armed from psi_avgs_work() itself, the backing-off
>>>>>>>>> logic in psi_task_change() (which will be moved to psi_task_switch() soon)
>>>>>>>>> can't help: the work is already scheduled, so we don't do anything there.
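
For reference, the re-arm decision in psi_avgs_work() looks roughly like
this (a simplified sketch of the relevant kernel/sched/psi.c logic; exact
details vary across kernel versions). Any CPU reporting a non-idle state,
including the CPU running the kworker itself, keeps the averaging clock
armed:

static void psi_avgs_work(struct work_struct *work)
{
        struct delayed_work *dwork = to_delayed_work(work);
        struct psi_group *group = container_of(dwork, struct psi_group, avgs_work);
        u32 changed_states;
        bool nonidle;
        u64 now;

        mutex_lock(&group->avgs_lock);
        now = sched_clock();

        /* Fold the per-cpu times; PSI_NONIDLE is set if any CPU saw activity */
        collect_percpu_times(group, PSI_AVGS, &changed_states);
        nonidle = changed_states & (1 << PSI_NONIDLE);

        if (now >= group->avg_next_update)
                group->avg_next_update = update_averages(group, now);

        /* Re-arm only if there was task activity during the last period */
        if (nonidle)
                schedule_delayed_work(dwork, nsecs_to_jiffies(
                                group->avg_next_update - now) + 1);

        mutex_unlock(&group->avgs_lock);
}
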
>>>>>>>
>>>>>>> Hi Pavan,
>>>>>>> Thanks for reporting the issue. IIRC [1] was meant to fix exactly this
>>>>>>> issue. At the time it was written I tested it and it seemed to work.
>>>>>>> Maybe I missed something or some other change introduced afterwards
>>>>>>> affected the shutoff logic. I'll take a closer look next week when I'm
>>>>>>> back at my computer and will consult with Johannes.
>>>>>>
>>>>>> Sorry for the delay. I had some time to look into this and test psi
>>>>>> shutoff on my device, and I think you are right. The patch I mentioned
>>>>>> prevents a new psi_avgs_work from being scheduled when the only non-idle
>>>>>> task is psi_avgs_work itself; however, the regular 2-second averaging work
>>>>>> will still go on. I think we could record the fact that the only
>>>>>> active task is psi_avgs_work in record_times() using a new
>>>>>> psi_group_cpu.state_mask flag, and then prevent psi_avgs_work() from
>>>>>> rescheduling itself if that flag is set for all non-idle CPUs. I'll
>>>>>> test this approach and will post a patch for review if it works.
>>>>>
>>>>> Hi Pavan,
>>>>> Testing PSI shutoff on Android proved more difficult than I expected.
>>>>> There are lots of tasks to silence, and I keep encountering new ones.
>>>>> The approach I was thinking about is something like this:
>>>>>
>>>>> ---
>>>>>  include/linux/psi_types.h |  3 +++
>>>>>  kernel/sched/psi.c        | 12 +++++++++---
>>>>>  2 files changed, 12 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
>>>>> index c7fe7c089718..8d936f22cb5b 100644
>>>>> --- a/include/linux/psi_types.h
>>>>> +++ b/include/linux/psi_types.h
>>>>> @@ -68,6 +68,9 @@ enum psi_states {
>>>>>          NR_PSI_STATES = 7,
>>>>>  };
>>>>>
>>>>> +/* state_mask flag to keep re-arming averaging work */
>>>>> +#define PSI_STATE_WAKE_CLOCK   (1 << NR_PSI_STATES)
>>>>> +
>>>>>  enum psi_aggregators {
>>>>>          PSI_AVGS = 0,
>>>>>          PSI_POLL,
>>>>> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
>>>>> index ecb4b4ff4ce0..dd62ad28bacd 100644
>>>>> --- a/kernel/sched/psi.c
>>>>> +++ b/kernel/sched/psi.c
>>>>> @@ -278,6 +278,7 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>>>                  if (delta)
>>>>>                          *pchanged_states |= (1 << s);
>>>>>          }
>>>>> +        *pchanged_states |= (state_mask & PSI_STATE_WAKE_CLOCK);
>>>>
>>>> If the avgs_work kworker is running on this CPU, won't it still see
>>>> PSI_STATE_WAKE_CLOCK set in state_mask? So the work would still be re-armed?
>>>>
>>>> Maybe I missed something... but I have a different idea; the patch below
>>>> is only for discussion, and I can implement it properly later.
>>>
>>> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
>>> index ee2ecc081422..f322e8fd8d41 100644
>>> --- a/kernel/sched/psi.c
>>> +++ b/kernel/sched/psi.c
>>> @@ -241,11 +241,13 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>                               enum psi_aggregators aggregator, u32 *times,
>>>                               u32 *pchanged_states)
>>>  {
>>> +        int current_cpu = raw_smp_processor_id();
>>>          struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
>>>          u64 now, state_start;
>>>          enum psi_states s;
>>>          unsigned int seq;
>>>          u32 state_mask;
>>> +        bool only_avgs_work = false;
>>>
>>>          *pchanged_states = 0;
>>>
>>> @@ -256,6 +258,14 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>                  memcpy(times, groupc->times, sizeof(groupc->times));
>>>                  state_mask = groupc->state_mask;
>>>                  state_start = groupc->state_start;
>>> +                /*
>>> +                 * This CPU has only the avgs_work kworker running: snapshot
>>> +                 * the newest times, but don't re-arm the work for this
>>> +                 * groupc. Normally this kworker will sleep soon and won't
>>> +                 * wake_clock in psi_group_change().
>>> +                 */
>>> +                if (current_cpu == cpu && groupc->tasks[NR_RUNNING] == 1)
>>> +                        only_avgs_work = true;
>>>          } while (read_seqcount_retry(&groupc->seq, seq));
>>>
>>>          /* Calculate state time deltas against the previous snapshot */
>>> @@ -280,6 +290,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>                  if (delta)
>>>                          *pchanged_states |= (1 << s);
>>>          }
>>> +
>>> +        /* Clear PSI_NONIDLE so avgs_work won't be re-armed for this groupc */
>>> +        if (only_avgs_work)
>>> +                *pchanged_states &= ~(1 << PSI_NONIDLE);
>>>  }
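
To see how the cleared bit takes effect: collect_percpu_times() ORs the
per-CPU changed-state bits together, and psi_avgs_work() only re-arms
itself when the aggregated PSI_NONIDLE bit is set. A rough sketch of that
aggregation loop (simplified from kernel/sched/psi.c, averaging math
elided):

static void collect_percpu_times(struct psi_group *group,
                                 enum psi_aggregators aggregator,
                                 u32 *pchanged_states)
{
        u32 changed_states = 0;
        int cpu;

        for_each_possible_cpu(cpu) {
                u32 times[NR_PSI_STATES];
                u32 cpu_changed_states;

                /*
                 * With the patch above, a CPU whose only runnable task is
                 * the avgs_work kworker no longer reports PSI_NONIDLE here.
                 */
                get_recent_times(group, cpu, aggregator, times,
                                 &cpu_changed_states);
                changed_states |= cpu_changed_states;

                /* ... times[] is folded into the group-wide averages ... */
        }

        if (pchanged_states)
                *pchanged_states = changed_states;
}
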
>>>
>> Thanks Chengming for the patch. I will test it and report my
>> observations. It makes sense to consider this CPU as idle if the PSI kworker
>> is the only task running. The kworker could run other works, but that decision
>> is now deferred to the schedule-out path. Ideally, if this is the only (or last)
>> work running, we should see the PSI work not re-arming itself.
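
The "schedule out path" decision above refers to the existing back-off
check (in psi_task_change() at the time of this thread, pending a move to
psi_task_switch()), which roughly does the following before deciding
whether to wake the averaging clock. A simplified sketch, not the exact
code:

        bool wake_clock = true;

        /*
         * Don't wake the aggregation clock if the task going to sleep is
         * the aggregation worker itself, or the two would ping-pong forever.
         */
        if (unlikely((clear & TSK_RUNNING) &&
                     (task->flags & PF_WQ_WORKER) &&
                     wq_worker_last_func(task) == psi_avgs_work))
                wake_clock = false;

        /* wake_clock is then passed down to psi_group_change() */
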
>>
>
> Is the condition groupc->tasks[NR_RUNNING] == 1 alone sufficient to clear NONIDLE,
> or should we also make sure that !NR_IOWAIT && !NR_MEMSTALL holds on this CPU?
Yes, I think you're correct; we should check !NR_IOWAIT && !NR_MEMSTALL too.
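
For illustration, the check in the patch above could then become something
like the following (untested sketch; the exact form may differ in the
patch that gets posted):

                /*
                 * Only the avgs_work kworker is active on this CPU: no other
                 * running tasks, and nothing in iowait or memstall that
                 * should keep the averaging clock armed.
                 */
                if (current_cpu == cpu && groupc->tasks[NR_RUNNING] == 1 &&
                    !groupc->tasks[NR_IOWAIT] && !groupc->tasks[NR_MEMSTALL])
                        only_avgs_work = true;
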
Thanks!