[PATCH sched_ext/for-6.12] sched_ext: Allow p->scx.disallow only while loading

From: Tejun Heo
Date: Wed Jul 31 2024 - 15:19:21 EST


p->scx.disallow provides a way for the BPF scheduler to reject certain tasks
from attaching. It's currently allowed for both the load and fork paths;
however, the latter doesn't actually work as p->sched_class is already set
by the time scx_ops_init_task() is called during fork.

This is a convenience feature which is mostly useful from the load path
anyway. Allow it only from the load path.
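
For illustration, a BPF scheduler would typically set the flag from
ops.init_task() roughly along these lines. This is only a sketch against
the usual SCX BPF conventions; example_init_task() and
task_should_be_rejected() are made-up names:

	s32 BPF_STRUCT_OPS(example_init_task, struct task_struct *p,
			   struct scx_init_task_args *args)
	{
		/*
		 * Only meaningful on the load path; setting ->disallow
		 * from the fork path (args->fork) is now rejected.
		 */
		if (!args->fork && task_should_be_rejected(p))
			p->scx.disallow = true;

		return 0;
	}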

Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
Reported-by: "Zhangqiao (2012 lab)" <zhangqiao22@xxxxxxxxxx>
Link: http://lkml.kernel.org/r/20240711110720.1285-1-zhangqiao22@xxxxxxxxxx
Fixes: 7bb6f0810ecf ("sched_ext: Allow BPF schedulers to disallow specific tasks from joining SCHED_EXT")
---
include/linux/sched/ext.h | 11 ++++++-----
kernel/sched/ext.c | 14 ++++++++------
2 files changed, 14 insertions(+), 11 deletions(-)

--- a/include/linux/sched/ext.h
+++ b/include/linux/sched/ext.h
@@ -181,11 +181,12 @@ struct sched_ext_entity {
* If set, reject future sched_setscheduler(2) calls updating the policy
* to %SCHED_EXT with -%EACCES.
*
- * If set from ops.init_task() and the task's policy is already
- * %SCHED_EXT, which can happen while the BPF scheduler is being loaded
- * or by inhering the parent's policy during fork, the task's policy is
- * rejected and forcefully reverted to %SCHED_NORMAL. The number of
- * such events are reported through /sys/kernel/debug/sched_ext::nr_rejected.
+ * Can be set from ops.init_task() while the BPF scheduler is being
+ * loaded (!scx_init_task_args->fork). If set and the task's policy is
+ * already %SCHED_EXT, the task's policy is rejected and forcefully
+ * reverted to %SCHED_NORMAL. The number of such events is reported
+ * through /sys/kernel/debug/sched_ext::nr_rejected. Setting this flag
+ * during fork is not allowed.
*/
bool disallow; /* reject switching into SCX */

--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3399,18 +3399,17 @@ static int scx_ops_init_task(struct task

scx_set_task_state(p, SCX_TASK_INIT);

- if (p->scx.disallow) {
+ if (!fork && p->scx.disallow) {
struct rq *rq;
struct rq_flags rf;

rq = task_rq_lock(p, &rf);

/*
- * We're either in fork or load path and @p->policy will be
- * applied right after. Reverting @p->policy here and rejecting
- * %SCHED_EXT transitions from scx_check_setscheduler()
- * guarantees that if ops.init_task() sets @p->disallow, @p can
- * never be in SCX.
+ * We're in the load path and @p->policy will be applied right
+ * after. Reverting @p->policy here and rejecting %SCHED_EXT
+ * transitions from scx_check_setscheduler() guarantees that if
+ * ops.init_task() sets @p->disallow, @p can never be in SCX.
*/
if (p->policy == SCHED_EXT) {
p->policy = SCHED_NORMAL;
@@ -3418,6 +3417,9 @@ static int scx_ops_init_task(struct task
}

task_rq_unlock(rq, p, &rf);
+ } else if (p->scx.disallow) {
+ scx_ops_error("ops.init_task() set task->scx.disallow for %s[%d] during fork",
+ p->comm, p->pid);
}

p->scx.flags |= SCX_TASK_RESET_RUNNABLE_AT;