Re: [PATCH] sched/psi: Fix avgs_work re-arm in psi_avgs_work()
From: Chengming Zhou
Date: Thu Oct 13 2022 - 22:04:17 EST
On 2022/10/14 00:10, Suren Baghdasaryan wrote:
> On Thu, Oct 13, 2022 at 8:52 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>>
>> On Thu, Oct 13, 2022 at 07:06:55PM +0800, Chengming Zhou wrote:
>>> Do I still need to copy groupc->tasks[] out for the current_cpu, as you
>>> suggested before?
>>
>> It'd be my preference as well. This way the resched logic can be
>> consolidated into a single block of comment + code at the end of the
>> function.
>
> Sounds good to me. If we are copying the times in the retry loop, then
> let's move the `reschedule =` decision out of that loop completely. At
> the end of get_recent_times we can do:
>
> if (cpu == current_cpu)
>         reschedule = tasks[NR_RUNNING] +
>                      tasks[NR_IOWAIT] +
>                      tasks[NR_MEMSTALL] > 1;
> else
>         reschedule = *pchanged_states & (1 << PSI_NONIDLE);
>
Ok, I will send an updated patch later.
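Roughly like this (untested sketch of my reading of the suggestions above;
the tasks[] snapshot array and the placement of the decision are assumptions,
not the final patch):

static void get_recent_times(struct psi_group *group, int cpu,
                             enum psi_aggregators aggregator, u32 *times,
                             u32 *pchanged_states)
{
        struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
        int current_cpu = raw_smp_processor_id();
        unsigned int tasks[NR_PSI_TASK_COUNTS];
        bool reschedule;
        u64 now, state_start;
        enum psi_states s;
        unsigned int seq;
        u32 state_mask;

        do {
                seq = read_seqcount_begin(&groupc->seq);
                now = cpu_clock(cpu);
                memcpy(times, groupc->times, sizeof(groupc->times));
                state_mask = groupc->state_mask;
                state_start = groupc->state_start;
                /*
                 * Snapshot tasks[] inside the retry loop so it is
                 * coherent with times/state_mask; a read after the
                 * loop could observe a torn update.
                 */
                if (cpu == current_cpu)
                        memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
        } while (read_seqcount_retry(&groupc->seq, seq));

        /* ... delta accounting as before ... */

        /*
         * One consolidated decision at the end: on the local CPU,
         * re-arm if anyone besides the worker itself is runnable or
         * in iowait/memstall; for remote CPUs, fall back to the
         * NONIDLE change bit.
         */
        if (cpu == current_cpu)
                reschedule = tasks[NR_RUNNING] +
                             tasks[NR_IOWAIT] +
                             tasks[NR_MEMSTALL] > 1;
        else
                reschedule = *pchanged_states & (1 << PSI_NONIDLE);
        /* how reschedule is handed back to psi_avgs_work() is TBD */
}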
Thanks!
>
>>
>>> @@ -242,6 +242,8 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>                               u32 *pchanged_states)
>>>  {
>>>          struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
>>> +        int current_cpu = raw_smp_processor_id();
>>> +        bool reschedule;
>>>          u64 now, state_start;
>>>          enum psi_states s;
>>>          unsigned int seq;
>>> @@ -256,6 +258,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
>>>                  memcpy(times, groupc->times, sizeof(groupc->times));
>>>                  state_mask = groupc->state_mask;
>>>                  state_start = groupc->state_start;
>>> +                if (cpu == current_cpu)
>>> +                        reschedule = groupc->tasks[NR_RUNNING] +
>>> +                                     groupc->tasks[NR_IOWAIT] +
>>> +                                     groupc->tasks[NR_MEMSTALL] > 1;
>>>          } while (read_seqcount_retry(&groupc->seq, seq));
>>
>> This also matches when get_recent_times() is called from psi_show()
>> and the poll worker. They don't currently use the flag, but it's
>> somewhat fragile and confusing. Add a test for
>> current_work() == &group->avgs_work?
>
> Good point. The (tasks[NR_RUNNING] + tasks[NR_IOWAIT] +
> tasks[NR_MEMSTALL] > 1) condition should also contain this check.
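
For illustration, the guarded decision being discussed could look roughly
like this (untested sketch; since avgs_work is a struct delayed_work, the
comparison would be against its embedded work_struct):

        /*
         * Only the avgs_work worker consumes the re-arm hint; skip
         * the tasks[]-based check for psi_show() and the poll
         * worker, which also call get_recent_times().
         */
        if (cpu == current_cpu &&
            current_work() == &group->avgs_work.work)
                reschedule = tasks[NR_RUNNING] +
                             tasks[NR_IOWAIT] +
                             tasks[NR_MEMSTALL] > 1;
        else
                reschedule = *pchanged_states & (1 << PSI_NONIDLE);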