Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
From: Chen Yu
Date: Mon Dec 05 2022 - 09:39:16 EST
On 2022-12-05 at 10:59:06 +0100, Vincent Guittot wrote:
> On Mon, 5 Dec 2022 at 09:39, Chen Yu <yu.c.chen@xxxxxxxxx> wrote:
> >
> > Hi Joel,
> > On 2022-12-03 at 10:35:46 -0500, Joel Fernandes wrote:
> > >
> > >
> > > > On Dec 3, 2022, at 2:50 AM, Chen Yu <yu.c.chen@xxxxxxxxx> wrote:
> > > >
> > > > Hi Honglei,
> > > >> On 2022-12-02 at 15:44:18 +0800, Honglei Wang wrote:
> > > >>
> > > >>
> > > >>> On 2022/12/1 16:44, Chen Yu wrote:
> > > >>> Introduce a short-duration task check, as there is a requirement
> > > >>> to leverage this attribute for better task placement.
> > > >>>
> > > >>> There are several choices of metrics that could be used to
> > > >>> indicate if a task is a short-duration task.
> > > >>>
> > > >>> At first thought, (p->se.sum_exec_runtime / p->nvcsw)
> > > >>> could be used to measure the task duration. However, such
> > > >>> a formula weighs the long-past history too heavily.
> > > >>> Ideally, old activity should decay and not affect
> > > >>> the current status too much.
> > > >>>
> > > >>> Although something based on PELT could be used, se.util_avg might
> > > >>> not be appropriate to describe the task duration:
> > > >>> 1. Task p1 and task p2 are doing frequent ping-pong scheduling on
> > > >>> one CPU, both p1 and p2 have a short duration, but the util_avg
> > > >>> can be up to 50%.
> > > >>> 2. Suppose a task lasting less than 4ms is regarded as a short task.
> > > >>> If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
> > > >>> short-duration task. However, PELT would decay p3's accumulated
> > > >>> running time from 6ms to 3ms, because 32ms is the half-life in PELT.
> > > >>> As a result, p3 would be incorrectly treated as a short task.
> > > >>>
> > > >>> It was found that there was once a similar feature to track the
> > > >>> duration of a task, in commit ad4b78bbcbab ("sched: Add
> > > >>> new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
> > > >>> was reverted because it was only an experiment. So pick the patch
> > > >>> up again, recording the average duration when a task voluntarily
> > > >>> switches out, and introduce SIS_SHORT to control this strategy.
> > > >>>
> > > >>> The threshold of short duration reuses sysctl_sched_min_granularity,
> > > >>> so it can be tuned by the user. Ideally there should be a dedicated
> > > >>> parameter for the threshold, but that might introduce complexity.
> > > >>>
> > > >>> Suggested-by: Tim Chen <tim.c.chen@xxxxxxxxx>
> > > >>> Suggested-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> > > >>> Signed-off-by: Chen Yu <yu.c.chen@xxxxxxxxx>
> > > >>> ---
> > > >>> include/linux/sched.h | 4 ++++
> > > >>> kernel/sched/core.c | 2 ++
> > > >>> kernel/sched/fair.c | 17 +++++++++++++++++
> > > >>> kernel/sched/features.h | 1 +
> > > >>> 4 files changed, 24 insertions(+)
> > > >>>
> > > >>> diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > >>> index ffb6eb55cd13..64b7acb77a11 100644
> > > >>> --- a/include/linux/sched.h
> > > >>> +++ b/include/linux/sched.h
> > > >>> @@ -558,6 +558,10 @@ struct sched_entity {
> > > >>> u64 nr_migrations;
> > > >>> + u64 prev_sum_exec_runtime_vol;
> > > >>> + /* average duration of a task */
> > > >>> + u64 dur_avg;
> > > >>> +
> > > >>> #ifdef CONFIG_FAIR_GROUP_SCHED
> > > >>> int depth;
> > > >>> struct sched_entity *parent;
> > > >>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > >>> index daff72f00385..c5202f1be3f7 100644
> > > >>> --- a/kernel/sched/core.c
> > > >>> +++ b/kernel/sched/core.c
> > > >>> @@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
> > > >>> p->se.prev_sum_exec_runtime = 0;
> > > >>> p->se.nr_migrations = 0;
> > > >>> p->se.vruntime = 0;
> > > >>> + p->se.dur_avg = 0;
> > > >>> + p->se.prev_sum_exec_runtime_vol = 0;
> > > >>> INIT_LIST_HEAD(&p->se.group_node);
> > > >>> #ifdef CONFIG_FAIR_GROUP_SCHED
> > > >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > >>> index e4a0b8bd941c..a4b314b664f8 100644
> > > >>> --- a/kernel/sched/fair.c
> > > >>> +++ b/kernel/sched/fair.c
> > > >>> @@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
> > > >>> return 1;
> > > >>> }
> > > >>> +/*
> > > >>> + * If a task switches in and then voluntarily relinquishes the
> > > >>> + * CPU quickly, it is regarded as a short duration task.
> > > >>> + */
> > > >>> +static inline int is_short_task(struct task_struct *p)
> > > >>> +{
> > > >>> + return sched_feat(SIS_SHORT) &&
> > > >>> + (p->se.dur_avg <= sysctl_sched_min_granularity);
> > > >>> +}
> > > >>> +
> > > >>
> > > >> Hi Yu,
> > > >>
> > > >> I still have a bit of concern about the sysctl_sched_min_granularity stuff.
> > > >> This knob can be set to different values, which will impact the behavior of
> > > >> this patch and make things not totally under control.
> > >
> > > There are already ways to misconfigure sched sysctl to make bad/weird things happen.
> > >
> > > >> Not sure if we can add a new knob for this. The test results show good
> > > >> improvement for short tasks, and with such a knob, admins would be able to
> > > >> tune the system based on their own view of what a 'short task' is.
> > > >>
> > > > It would be ideal to have a dedicated parameter to tweak this. For example,
> > > > something under /sys/kernel/debug/sched/, initialized to sysctl_sched_min_granularity
> > > > by default.
> > >
> > > It would be nice to not have to introduce a new knob for this. IMO, min_granularity is reasonable.
> > >
> > OK, got it, thanks for the suggestion.
>
> Sorry for the late answer.
> We don't want to add more dedicated knobs, so using
> sysctl_sched_min_granularity as you are doing in this patch looks OK.
>
I see, thanks Vincent.
thanks,
Chenyu