Re: [RFC v2 1/2] sched/fair: filter out overloaded cpus in SIS

From: Abel Wu
Date: Tue Apr 12 2022 - 13:55:56 EST


Hi Josh, thanks very much for your review.

On 4/12/22 9:23 AM, Josh Don wrote:
> Hi Abel,
>
> Thanks for the patch, a few comments:

>>  /*
>> + * It would be very unlikely to find an unoccupied cpu when the system is
>> + * heavily overloaded. Even if we could, the cost might outweigh the benefit.
>> + */
>> +static inline bool sched_domain_overloaded(struct sched_domain *sd, int nr_overloaded)
>> +{
>> +	return nr_overloaded > sd->span_weight - (sd->span_weight >> 4);
>> +}
>> +
>> +/*
>>   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
>>   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
>>   * average idle time for this rq (as found in rq->avg_idle).
>> @@ -6291,7 +6300,7 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
>>  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool has_idle_core, int target)
>>  {
>>  	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
>> -	int i, cpu, idle_cpu = -1, nr = INT_MAX;
>> +	int i, cpu, idle_cpu = -1, nr = INT_MAX, nro;
>>  	struct rq *this_rq = this_rq();
>>  	int this = smp_processor_id();
>>  	struct sched_domain *this_sd;
>> @@ -6301,7 +6310,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>>  	if (!this_sd)
>>  		return -1;
>>
>> +	nro = atomic_read(&sd->shared->nr_overloaded);
>> +	if (sched_domain_overloaded(sd, nro))
>> +		return -1;

> This early bail out doesn't seem to be related to the main idea of
> your patch. Apart from deciding the exact heuristic value for what is

I agree that this early check doesn't have a strong connection with the
idea of "filter out the overloaded cpus", but it is aligned with the
goal of "search less when becoming more overloaded".

As for the heuristic value, which is roughly 95%, I think it would help
if I show more test results. I have also tested sd->imbalance_pct and
100% (nro == sd->span_weight), and ~95% seems to be the better choice.
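For concreteness, the early bailout fires once more than 15/16 of the
domain is overloaded:

	nr_overloaded > sd->span_weight - (sd->span_weight >> 4)

e.g. on a 64-cpu LLC that means nro > 60, which is the ~95% (precisely
93.75%) threshold mentioned above.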

> considered too unlikely to find an idle cpu, this doesn't work well
> with tasks constrained by affinity; a task may have a small affinity
> mask containing idle cpus it may wake onto, even if most of the node
> is overloaded.

Yes, indeed. And I haven't come up with a solution other than removing
this check entirely. Ideas?
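One untested thought, purely as a sketch: take the early bailout only
when the task's allowed cpus are themselves all overloaded, something
like

	/*
	 * Hypothetical: a task with a narrow affinity mask may still
	 * have idle cpus to wake onto even when the LLC as a whole is
	 * overloaded, so bail out only if every allowed cpu is also
	 * marked overloaded.
	 */
	if (sched_domain_overloaded(sd, nro) &&
	    cpumask_subset(p->cpus_ptr, sdo_mask(sd->shared)))
		return -1;

though the extra cpumask_subset() on every wakeup might cost more than
the bailout saves.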


>>  	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
>> +	if (nro)
>> +		cpumask_andnot(cpus, cpus, sdo_mask(sd->shared));

> To prevent us from exhausting our search attempts too quickly, this
> only needs to go under the sched_feat(SIS_PROP) && !has_idle_core case
> below. But by doing this unconditionally here, I guess your secondary
> goal is to reduce total search cost in both cases. Just wondering, did

Yes, it's unnecessary to try the overloaded cpus. But this only makes
sense if the overloaded cpumask is relatively accurate, as you pointed
out below.

> you observe significant time spent here that you are trying to
> optimize? By reducing our search space by the overload mask, it is
> important that the mask is relatively up to date, or else we could
> miss an opportunity to find an idle cpu.

I think that's why Mel asked for the SIS statistics. The results in the
cover letter show an improvement in search efficiency, which is what the
overhead of the cpumask computation is traded for. Would it be better to
skip the update when nro is small?
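Something like the following, where the 1/8 cutoff is only a made-up
placeholder that would need tuning:

	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
	/*
	 * Only pay for the extra cpumask op when a meaningful
	 * share of the LLC is overloaded.
	 */
	if (nro > sd->span_weight >> 3)
		cpumask_andnot(cpus, cpus, sdo_mask(sd->shared));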


>>  	if (sched_feat(SIS_PROP) && !has_idle_core) {
>>  		u64 avg_cost, avg_idle, span_avg;
>> @@ -7018,6 +7033,51 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
>>
>>  	return newidle_balance(rq, rf) != 0;
>>  }
>> +
>> +static inline bool cfs_rq_overloaded(struct rq *rq)
>> +{
>> +	return rq->cfs.h_nr_running - rq->cfs.idle_h_nr_running > 1;
>> +}

> Why > 1 instead of > 0? If a cpu is running 1 non-idle task and any
> number of idle tasks, I'd think it is still "occupied" in the way
> you've defined. We'd want to steer wakeups to cpus running 0 non-idle
> tasks.

The idea behind "> 1" is to let the unoccupied cpus pull non-idle tasks
from the overloaded ones (in the next patch). Although "> 0" would be
more efficient for wakeups, it blinds us when pulling tasks.
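To spell out the resulting semantics:

	rq state                         cfs_rq_overloaded()
	1 non-idle task (+ idle tasks)   false: occupied, but nothing to pull
	2+ non-idle tasks                true:  has a spare task worth pulling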


>> +static void update_overload_status(struct rq *rq)
>> +{
>> +	struct sched_domain_shared *sds;
>> +	bool overloaded = cfs_rq_overloaded(rq);
>> +	int cpu = cpu_of(rq);
>> +
>> +	lockdep_assert_rq_held(rq);
>> +
>> +	if (rq->overloaded == overloaded)
>> +		return;
>> +
>> +	rcu_read_lock();
>> +	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
>> +	if (unlikely(!sds))
>> +		goto unlock;
>> +
>> +	if (overloaded) {
>> +		cpumask_set_cpu(cpu, sdo_mask(sds));
>> +		atomic_inc(&sds->nr_overloaded);
>> +	} else {
>> +		cpumask_clear_cpu(cpu, sdo_mask(sds));
>> +		atomic_dec(&sds->nr_overloaded);
>> +	}

> Why are these cpu mask writes not atomic?

They are atomic. The non-atomic version is __cpumask_{set,clear}_cpu.
Did I miss something?
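For reference:

	cpumask_set_cpu(cpu, mask);	/* set_bit(): atomic RMW */
	__cpumask_set_cpu(cpu, mask);	/* __set_bit(): non-atomic */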


>> +
>> +	rq->overloaded = overloaded;
>> +unlock:
>> +	rcu_read_unlock();
>> +}
>> +
>> +#else
>> +
>> +static inline void update_overload_status(struct rq *rq) { }
>> +
>>  #endif /* CONFIG_SMP */
>>
>>  static unsigned long wakeup_gran(struct sched_entity *se)
>> @@ -7365,6 +7425,8 @@ done: __maybe_unused;
>>  	if (new_tasks > 0)
>>  		goto again;
>>
>> +	update_overload_status(rq);
>> +
>>  	/*
>>  	 * rq is about to be idle, check if we need to update the
>>  	 * lost_idle_time of clock_pelt
>> @@ -11183,6 +11245,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>>  	if (static_branch_unlikely(&sched_numa_balancing))
>>  		task_tick_numa(rq, curr);
>>
>> +	update_overload_status(rq);
>>  	update_misfit_status(curr, rq);
>>  	update_overutilized_status(task_rq(curr));

> I'd caution about using task_tick and pick_next_task_fair as the
> places we set and clear overload.
>
> Some issues with task_tick:
> - ticks may be disabled in NO_HZ_FULL (an issue if we define overload
> as > 0 non-idle tasks)
> - most ticks will have the same state, so somewhat redundant checking.
> Could use an edge based trigger instead, such as enqueue/dequeue
> (somewhat similar to rq->rd->overload).

1. Under NO_HZ_FULL, given an overloaded rq, say with non-idle tasks A
   and B enqueued: if A is dequeued before the next tick, the tick will
   be off and the rq will stay marked "overloaded" while it actually
   isn't. But this isn't necessarily a bad thing, since this cpu will
   be skipped in the wakeup path, which helps improve search efficiency.

2. Yes, that's why I use rq->overloaded to save the last update state.
   So when the overloaded state doesn't change, all we do is a simple
   check on a local variable.
   The enqueue/dequeue path is not well bounded, and it can be very
   frequent for short-running workloads, which would add considerable
   overhead to updating the LLC-shared atomic/cpumask.


> With pick_next_task_fair():
> - there's a window between a thread dequeuing and the scheduler then
> running through to the end of pick_next_task_fair(), during which we
> falsely observe the cpu as overloaded
> - this breaks with core scheduling, since we may use pick_task_fair
> rather than pick_next_task_fair

1. I'm afraid I don't understand what the problem is; could you explain
   this a bit more? Thanks.

2. Nice catch, I will fix it in the next update. (Maybe by updating the
   overloaded status in do_idle()?)

Thanks & BR,
Abel