[PATCH v2 RESEND] sched/rt: Fix missing rt_rq runtime check in rt-period timer

From: Hailong Liu
Date: Sat May 29 2021 - 10:13:00 EST


From: Hailong Liu <liu.hailong6@xxxxxxxxxx>

With isolcpus, the rt-period timer may run on a CPU whose rq->rd->span does
not include the isolated CPUs, so throttled rt_rqs on those CPUs never have
their runtime replenished and stay throttled indefinitely.

Steps to reproduce on my 8-CPU machine:
1. Enable CONFIG_RT_GROUP_SCHED=y and boot the kernel with the command line
"isolcpus=4-7"

2. Create a child group and initialize it:
mount -t cgroup -o cpu cpu /sys/fs/cgroup
mkdir /sys/fs/cgroup/child0
echo 950000 > /sys/fs/cgroup/child0/cpu.rt_runtime_us
3. Run two rt-loop tasks; assume their PIDs are $pid1 and $pid2
4. Affine one rt task to the isolated CPU set:
taskset -p 0xf0 $pid2
5. Add the tasks created above into the child cpu group:
echo $pid1 > /sys/fs/cgroup/child0/tasks
echo $pid2 > /sys/fs/cgroup/child0/tasks
6. Check what happened:
"top": one of the tasks shows no CPU usage although its state is "R"
"kill": the task on the throttled rt_rq cannot be killed
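
For step 3, the rt-loop tasks can be created with something like the sketch
below. The priority value 50 and the use of chrt are illustrative assumptions,
not part of the original report; chrt needs root (and may also be limited by
the group's rt_runtime_us budget):

```shell
# Hypothetical helper for step 3: start two CPU-bound busy loops, then
# promote them to SCHED_FIFO. Priority 50 is an arbitrary choice.
bash -c 'while :; do :; done' &
pid1=$!
bash -c 'while :; do :; done' &
pid2=$!
chrt -f -p 50 "$pid1" || echo "chrt failed (needs root)"
chrt -f -p 50 "$pid2" || echo "chrt failed (needs root)"
echo "pid1=$pid1 pid2=$pid2"
# Clean up when done experimenting:
kill "$pid1" "$pid2"
```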

Fix this by checking all online CPUs in do_sched_rt_period_timer().

Signed-off-by: Hailong Liu <liu.hailong6@xxxxxxxxxx>
---
kernel/sched/rt.c | 32 +-------------------------------
1 file changed, 1 insertion(+), 31 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index c286e5ba3c94..0bda43e756d7 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -579,18 +579,6 @@ static int rt_se_boosted(struct sched_rt_entity *rt_se)
return p->prio != p->normal_prio;
}

-#ifdef CONFIG_SMP
-static inline const struct cpumask *sched_rt_period_mask(void)
-{
- return this_rq()->rd->span;
-}
-#else
-static inline const struct cpumask *sched_rt_period_mask(void)
-{
- return cpu_online_mask;
-}
-#endif
-
static inline
struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu)
{
@@ -648,11 +636,6 @@ static inline int rt_rq_throttled(struct rt_rq *rt_rq)
return rt_rq->rt_throttled;
}

-static inline const struct cpumask *sched_rt_period_mask(void)
-{
- return cpu_online_mask;
-}
-
static inline
struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu)
{
@@ -856,20 +839,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
int i, idle = 1, throttled = 0;
const struct cpumask *span;

- span = sched_rt_period_mask();
-#ifdef CONFIG_RT_GROUP_SCHED
- /*
- * FIXME: isolated CPUs should really leave the root task group,
- * whether they are isolcpus or were isolated via cpusets, lest
- * the timer run on a CPU which does not service all runqueues,
- * potentially leaving other CPUs indefinitely throttled. If
- * isolation is really required, the user will turn the throttle
- * off to kill the perturbations it causes anyway. Meanwhile,
- * this maintains functionality for boot and/or troubleshooting.
- */
- if (rt_b == &root_task_group.rt_bandwidth)
- span = cpu_online_mask;
-#endif
+ span = cpu_online_mask;
for_each_cpu(i, span) {
int enqueue = 0;
struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
--
2.17.1