+static inline bool task_has_throttle_work(struct task_struct *p)
+{
+ return p->sched_throttle_work.next != &p->sched_throttle_work;
+}
+
+static inline void task_throttle_setup_work(struct task_struct *p)
+{
+ /*
+ * Kthreads and exiting tasks don't return to userspace, so adding the
+ * work is pointless
+ */
+ if ((p->flags & (PF_EXITING | PF_KTHREAD)))
+ return;
+
+ if (task_has_throttle_work(p))
+ return;
+
+ task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
+}
+
static int tg_throttle_down(struct task_group *tg, void *data)
{
struct rq *rq = data;
struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+ struct task_struct *p;
+ struct rb_node *node;
+
+ cfs_rq->throttle_count++;
+ if (cfs_rq->throttle_count > 1)
+ return 0;
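
For context, the self-pointing sentinel that task_has_throttle_work() tests
has to be established when the task is set up. A minimal sketch of what that
could look like follows; the callback name throttle_cfs_rq_work and the init
site are assumptions on my part, not taken from the quoted hunks:

static void task_throttle_init_work(struct task_struct *p)
{
	/*
	 * An "empty" work points at itself; this is the state that
	 * task_has_throttle_work() above reads as "not queued".
	 */
	p->sched_throttle_work.next = &p->sched_throttle_work;
	init_task_work(&p->sched_throttle_work, throttle_cfs_rq_work);
}

Once queued via task_work_add(..., TWA_RESUME), the work runs on the task's
way back to user space, which is why kthreads and exiting tasks are skipped
in task_throttle_setup_work().
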
General question: do we still need the throttled_lb_pair() check in
can_migrate_task() with the per-task throttle? Moving a throttled task
to another CPU lets it run sooner and exit to user space as quickly as
possible, and once the task dequeues, it removes itself from the list
of fair tasks, making it unreachable for the load balancer. Thoughts?
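
For reference, the check under discussion sits in can_migrate_task() in
kernel/sched/fair.c and currently reads roughly as follows (quoted from
memory, so treat the exact surrounding context as approximate):

	/*
	 * Don't migrate a task if the cfs_rq it belongs to is
	 * throttled on either the source or the destination CPU.
	 */
	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
		return 0;
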
That's a good point.
The current approach dequeues the task and removes it from the rq's
cfs_tasks list, causing it to lose the load balance opportunity. This
is pretty sad.
I'll need to think about this. I hope we can somehow keep the throttled
tasks in the cfs_tasks list; I'll see how to make this happen.
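
Just to frame the problem, the naive shape would be something like the
sketch below; account_task_dequeue() and task_is_throttled() are
hypothetical names, and whether the rest of the dequeue accounting
tolerates a dequeued-but-still-listed task is exactly what I need to
think through:

static bool task_is_throttled(struct task_struct *p);	/* hypothetical */

/*
 * Hypothetical sketch, not part of this patch: at the point where the
 * normal dequeue path unlinks a task from rq->cfs_tasks, keep throttled
 * tasks linked so the load balancer can still find and migrate them.
 */
static void account_task_dequeue(struct sched_entity *se)
{
	/* Throttled tasks deliberately stay on rq->cfs_tasks. */
	if (task_is_throttled(task_of(se)))
		return;

	list_del_init(&se->group_node);
}
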
Thanks,
Aaron