sched: allow resubmits to queue_balance_callback()

From: Barret Rhoden
Date: Thu Mar 18 2021 - 15:58:44 EST


Prior to this commit, if you submitted the same callback_head twice, it
would be enqueued twice, but only if it was the last callback on the
list. When it was first submitted, rq->balance_callback was NULL, so
head->next was set to NULL. That defeated the "already queued" check in
queue_balance_callback().

This commit changes the callback list such that whenever an item is on
the list, its head->next is not NULL. The last element (first inserted)
will point to itself. This allows us to detect and ignore any attempt
to reenqueue a callback_head.

Signed-off-by: Barret Rhoden <brho@xxxxxxxxxx>
---

I might be missing something here, but this was my interpretation of
what the "if (unlikely(head->next))" check in queue_balance_callback()
was for.

kernel/sched/core.c | 3 ++-
kernel/sched/sched.h | 6 +++++-
2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d95dc3f4644..6322975032ea 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3669,7 +3669,8 @@ static void __balance_callback(struct rq *rq)
 	rq->balance_callback = NULL;
 	while (head) {
 		func = (void (*)(struct rq *))head->func;
-		next = head->next;
+		/* The last element pointed to itself */
+		next = head->next == head ? NULL : head->next;
 		head->next = NULL;
 		head = next;

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 28709f6b0975..42629e153f83 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1389,11 +1389,15 @@ queue_balance_callback(struct rq *rq,
 {
 	lockdep_assert_held(&rq->lock);
 
+	/*
+	 * The last element on the list points to itself, so we can always
+	 * detect if head is already enqueued.
+	 */
 	if (unlikely(head->next))
 		return;
 
 	head->func = (void (*)(struct callback_head *))func;
-	head->next = rq->balance_callback;
+	head->next = rq->balance_callback ?: head;
 	rq->balance_callback = head;
 }

--
2.31.0.rc2.261.g7f71774620-goog