[PATCH v2 00/10] sched: task load tracking optimization and cleanup
From: Chengming Zhou
Date: Wed Jul 13 2022 - 00:05:08 EST
Hi all,
This patch series optimizes and cleans up task load tracking for the
cases where a task migrates between CPUs/cgroups or goes through
switched_from/to_fair().
There are three cases (apart from fork and exit) in which a fair task
needs detach/attach_entity_load_avg():
1. the task migrates between CPUs (on_rq migrate or wakeup migrate)
2. the task migrates between cgroups (detach then attach)
3. the task goes through switched_from/to_fair() (detach, later attach)
Patch 1 optimizes the on_rq migrate case by combining the detach into
the dequeue, so we no longer need to call detach_entity_cfs_rq() in
migrate_task_rq_fair().
Patches 3-4 clean up the cgroup migration case by removing
cpu_cgrp_subsys->fork(), since sched_cgroup_fork() already does the
same thing.
Patches 1-4 were reviewed earlier but conflict with the current tip
tree, so they are included here as part of this series. Sorry for the
inconvenience.
Patches 6-7 use update_load_avg() to do the attach/detach after
checking the sched_avg last_update_time; they are preparation for the
patches that follow.
Patches 8-9 fix load tracking for newly forked !fair tasks and for
tasks going through switched_from_fair().
After these changes, a task's sched_avg last_update_time is reset to 0
when it migrates between CPUs/cgroups or goes through
switched_from_fair(), preserving the updated sched_avg for the next
attach.
Thanks.
Changes in v2:
- split the task se depth maintenance into a separate patch 3, as
suggested by Peter.
- reorder patches 6-7 before patches 8-9, since we need
update_load_avg() to do a conditional attach/detach to avoid corner
cases like the twice-attach problem.
Chengming Zhou (10):
sched/fair: combine detach into dequeue when migrating task
sched/fair: update comments in enqueue/dequeue_entity()
sched/fair: maintain task se depth in set_task_rq()
sched/fair: remove redundant cpu_cgrp_subsys->fork()
sched/fair: reset sched_avg last_update_time before set_task_rq()
sched/fair: delete superfluous SKIP_AGE_LOAD
sched/fair: use update_load_avg() to attach/detach entity load_avg
sched/fair: fix load tracking for new forked !fair task
sched/fair: stop load tracking when task switched_from_fair()
sched/fair: delete superfluous set_task_rq_fair()
kernel/sched/core.c | 27 ++------
kernel/sched/fair.c | 144 ++++++++++------------------------------
kernel/sched/features.h | 1 -
kernel/sched/sched.h | 14 +---
4 files changed, 41 insertions(+), 145 deletions(-)
--
2.36.1