Re: [PATCH v2] sched/fair: Make SCHED_IDLE entity be preempted in strict hierarchy
From: Peter Zijlstra
Date: Mon Jul 08 2024 - 08:11:32 EST
On Wed, Jun 26, 2024 at 10:35:05AM +0800, Tianchen Ding wrote:
> Consider the following cgroup hierarchy:
>
>                   root
>                     |
>         -------------------------
>         |                       |
>   normal_cgroup            idle_cgroup
>         |                       |
> SCHED_IDLE task_A      SCHED_NORMAL task_B
>
> According to the cgroup hierarchy, A should preempt B. But the current
> check_preempt_wakeup_fair() treats the cgroup se and the task
> separately, so B will preempt A unexpectedly.
> Unify the wakeup logic to use {c,p}se_is_idle only. This makes a
> task's SCHED_IDLE a relative policy that is effective only within its
> own cgroup, similar to the behavior of NICE.
>
> Also fix se_is_idle() definition when !CONFIG_FAIR_GROUP_SCHED.
>
> Fixes: 304000390f88 ("sched: Cgroup SCHED_IDLE support")
> Signed-off-by: Tianchen Ding <dtcccc@xxxxxxxxxxxxxxxxx>
> Reviewed-by: Josh Don <joshdon@xxxxxxxxxx>
> Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> ---
> v2:
> Use entity_is_task() to check whether pse is a task.
> Improve comments and commit log.
>
> v1: https://lore.kernel.org/all/20240624073900.10343-1-dtcccc@xxxxxxxxxxxxxxxxx/
> ---
> kernel/sched/fair.c | 24 ++++++++++++------------
> 1 file changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 41b58387023d..f0b038de99ce 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -511,7 +511,7 @@ static int cfs_rq_is_idle(struct cfs_rq *cfs_rq)
>
>  static int se_is_idle(struct sched_entity *se)
>  {
> -	return 0;
> +	return task_has_idle_policy(task_of(se));
>  }
>
>  #endif /* CONFIG_FAIR_GROUP_SCHED */
> @@ -8382,16 +8382,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
>  	if (test_tsk_need_resched(curr))
>  		return;
>
> -	/* Idle tasks are by definition preempted by non-idle tasks. */
> -	if (unlikely(task_has_idle_policy(curr)) &&
> -	    likely(!task_has_idle_policy(p)))
> -		goto preempt;
> -
> -	/*
> -	 * Batch and idle tasks do not preempt non-idle tasks (their preemption
> -	 * is driven by the tick):
> -	 */
> -	if (unlikely(p->policy != SCHED_NORMAL) || !sched_feat(WAKEUP_PREEMPTION))
> +	if (!sched_feat(WAKEUP_PREEMPTION))
>  		return;
>
>  	find_matching_se(&se, &pse);
> @@ -8401,7 +8392,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
>  	pse_is_idle = se_is_idle(pse);
>
>  	/*
> -	 * Preempt an idle group in favor of a non-idle group (and don't preempt
> +	 * Preempt an idle entity in favor of a non-idle entity (and don't preempt
>  	 * in the inverse case).
>  	 */
>  	if (cse_is_idle && !pse_is_idle)
> @@ -8409,6 +8400,15 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
>  	if (cse_is_idle != pse_is_idle)
>  		return;
>
> +	/*
> +	 * Batch tasks do not preempt non-idle tasks (their preemption
> +	 * is driven by the tick).
> +	 * We've done the check about "only one of the entities is idle",
> +	 * so cse must be non-idle if p is a batch task.
> +	 */
> +	if (unlikely(entity_is_task(pse) && p->policy == SCHED_BATCH))
> +		return;
I'm not convinced this condition is right. The current behaviour of
SCHED_BATCH doesn't care about pse, only p.

That is, if p is SCHED_BATCH it will not preempt -- except a
SCHED_IDLE task.
So I'm tempted to delete this first part of your condition and have it
be:

	if (p->policy == SCHED_BATCH)
		return;
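
In context, the tail of check_preempt_wakeup_fair() would then read
roughly like so (a sketch against the v2 hunk above, untested):

	find_matching_se(&se, &pse);

	cse_is_idle = se_is_idle(se);
	pse_is_idle = se_is_idle(pse);

	/*
	 * Preempt an idle entity in favor of a non-idle entity (and don't
	 * preempt in the inverse case).
	 */
	if (cse_is_idle && !pse_is_idle)
		goto preempt;
	if (cse_is_idle != pse_is_idle)
		return;

	/*
	 * Batch tasks never wakeup-preempt (their preemption is driven
	 * by the tick), no matter what kind of entity they ended up
	 * being compared against.
	 */
	if (p->policy == SCHED_BATCH)
		return;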
That is, suppose you have:
                  root
                    |
        -------------------------
        |                       |
  normal_cgroup            normal_cgroup
        |                       |
SCHED_BATCH task_A      SCHED_BATCH task_B
Then the preemption crud will end up comparing the groups to one another
and still allow A to preempt B -- except we explicitly do not want this.
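
Concretely, with both tasks in sibling cgroups, the relevant path does
something like this (a sketch, my commentary in the comments):

	/*
	 * find_matching_se() walks se and pse up the hierarchy until
	 * they sit on the same cfs_rq; here that is the root, so both
	 * end up being the *group* entities, not the tasks.
	 */
	find_matching_se(&se, &pse);

	/* Both groups are non-idle, so neither idle check fires. */

	/*
	 * pse is now a group entity, entity_is_task(pse) is false, and
	 * the batch check above is skipped even though p is SCHED_BATCH:
	 */
	if (unlikely(entity_is_task(pse) && p->policy == SCHED_BATCH))
		return;

	/* ... so we fall through and may still preempt. */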
The 'problem' is of course that the whole BATCH thing isn't cgroup
aware, but I'm not sure we want to go fix that -- esp. not in this
patch.
Hmm?