Re: [PATCH v4 02/16] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

From: Patrick Bellasi
Date: Wed Sep 12 2018 - 11:56:28 EST


On 12-Sep 15:49, Peter Zijlstra wrote:
> On Tue, Aug 28, 2018 at 02:53:10PM +0100, Patrick Bellasi wrote:
> > +/**
> > + * Utilization's clamp group
> > + *
> > + * A utilization clamp group maps a "clamp value" (value), i.e.
> > + * util_{min,max}, to a "clamp group index" (group_id).
> > + */
> > +struct uclamp_se {
> > +	unsigned int value;
> > +	unsigned int group_id;
> > +};
>
> > +/**
> > + * uclamp_map: reference counts a utilization "clamp value"
> > + * @value: the utilization "clamp value" required
> > + * @se_count: the number of scheduling entities requiring the "clamp value"
> > + * @se_lock: serialize reference count updates by protecting se_count
>
> Why do you have a spinlock to serialize a single value? Don't we have
> atomics for that?

There are some code paths where it's used to protect clamp groups
mapping and initialization, e.g.

  uclamp_group_get()
      spin_lock()
      // initialize clamp group (if required) and then...
      se_count += 1
      spin_unlock()

Almost all of these paths are triggered from user-space and protected
by the global uclamp_mutex; the exceptions are the fork/exit paths.

To serialize these paths I'm using the spinlock above; does that make
sense? Could we use the global uclamp_mutex on fork/exit too?

One additional observation: if in the future we want to add a
kernel-space API (e.g. a driver asking for a new clamp value), we may
need a serialized non-sleeping uclamp_group_get() API.
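For the archives, here is a compile-and-run userspace sketch of the
"initialize-or-refcount under se_lock" pattern above. Everything here
(maps_init, map_lock, NUM_GROUPS, the atomic_flag spinlock) is an
illustrative stand-in for the kernel structures, not the actual
implementation:

```c
#include <assert.h>
#include <stdatomic.h>

#define NUM_GROUPS   6      /* models CONFIG_UCLAMP_GROUPS_COUNT + 1 */
#define UNUSED_VALUE (-1)   /* marks a free clamp group slot */

/* Userspace model of struct uclamp_map; atomic_flag stands in for
 * the kernel's raw_spinlock_t. */
struct uclamp_map {
	int value;
	int se_count;
	atomic_flag se_lock;
};

static struct uclamp_map maps[NUM_GROUPS];

static void map_lock(struct uclamp_map *uc_map)
{
	while (atomic_flag_test_and_set(&uc_map->se_lock))
		;	/* spin */
}

static void map_unlock(struct uclamp_map *uc_map)
{
	atomic_flag_clear(&uc_map->se_lock);
}

static void maps_init(void)
{
	for (int id = 0; id < NUM_GROUPS; id++) {
		maps[id].value = UNUSED_VALUE;
		maps[id].se_count = 0;
		atomic_flag_clear(&maps[id].se_lock);
	}
}

/* Map @value to a group_id, initializing a free slot if no group
 * tracks that value yet. The slot initialization and the se_count
 * increment must form one critical section: with a bare atomic
 * counter, a concurrent reader could observe se_count > 0 before
 * ->value is published. */
static int group_get(int value)
{
	for (int id = 0; id < NUM_GROUPS; id++) {
		struct uclamp_map *uc_map = &maps[id];

		map_lock(uc_map);
		if (uc_map->value == UNUSED_VALUE)
			uc_map->value = value;	/* initialize clamp group... */
		if (uc_map->value == value) {
			uc_map->se_count++;	/* ...and then refcount it */
			map_unlock(uc_map);
			return id;
		}
		map_unlock(uc_map);
	}
	return -1;	/* no free clamp group */
}
```

Two requests for the same value land in the same group and just bump
its refcount; a new value takes the next free slot.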

> > + */
> > +struct uclamp_map {
> > +	int value;
> > +	int se_count;
> > +	raw_spinlock_t se_lock;
> > +};
> > +
> > +/**
> > + * uclamp_maps: maps each SEs "clamp value" into a CPUs "clamp group"
> > + *
> > + * Since only a limited number of different "clamp values" are supported, we
> > + * need to map each different clamp value into a "clamp group" (group_id) to
> > + * be used by the per-CPU accounting in the fast-path, when tasks are
> > + * enqueued and dequeued.
> > + * We also support different kind of utilization clamping, min and max
> > + * utilization for example, each representing what we call a "clamp index"
> > + * (clamp_id).
> > + *
> > + * A matrix is thus required to map "clamp values" to "clamp groups"
> > + * (group_id), for each "clamp index" (clamp_id), where:
> > + * - rows are indexed by clamp_id and they collect the clamp groups for a
> > + *   given clamp index
> > + * - columns are indexed by group_id and they collect the clamp values which
> > + *   maps to that clamp group
> > + *
> > + * Thus, the column index of a given (clamp_id, value) pair represents the
> > + * clamp group (group_id) used by the fast-path's per-CPU accounting.
> > + *
> > + * NOTE: first clamp group (group_id=0) is reserved for tracking of non
> > + * clamped tasks. Thus we allocate one more slot than the value of
> > + * CONFIG_UCLAMP_GROUPS_COUNT.
> > + *
> > + * Here is the map layout and, right below, how entries are accessed by the
> > + * following code.
> > + *
> > + *                          uclamp_maps is a matrix of
> > + *          +------- UCLAMP_CNT by CONFIG_UCLAMP_GROUPS_COUNT+1 entries
> > + *          |                                |
> > + *          |                /---------------+---------------\
> > + *          |               +------------+       +------------+
> > + *          |  / UCLAMP_MIN | value      |       | value      |
> > + *          |  |            | se_count   |...... | se_count   |
> > + *          |  |            +------------+       +------------+
> > + *          +--+            +------------+       +------------+
> > + *             |            | value      |       | value      |
> > + *             \ UCLAMP_MAX | se_count   |...... | se_count   |
> > + *                          +-----^------+       +----^-------+
> > + *                                |                   |
> > + *                      uc_map =  +                   |
> > + *       &uclamp_maps[clamp_id][0]                    +
> > + *                                        clamp_value =
> > + *                                          uc_map[group_id].value
> > + */
> > +static struct uclamp_map uclamp_maps[UCLAMP_CNT]
> > +				    [CONFIG_UCLAMP_GROUPS_COUNT + 1]
> > +				    ____cacheline_aligned_in_smp;
> > +
>
> I'm still completely confused by all this.
>
> sizeof(uclamp_map) = 12
>
> that array is 2*6=12 of those, so the whole thing is 144 bytes. which is
> more than 2 (64 byte) cachelines.

This data structure is *not* used in the hot path, which is why I did
not worry about fitting it exactly into a few cache lines.

It's used to map a user-space "clamp value" into a kernel-space "clamp
group" when user-space:
- changes a task specific clamp value
- changes a cgroup clamp value
- a task forks/exits

I assume we can consider all of those "slow" code paths; is that correct?

At enqueue/dequeue time we use instead struct uclamp_cpu, introduced
by the next patch:

[PATCH v4 03/16] sched/core: uclamp: add CPU's clamp groups accounting
https://lore.kernel.org/lkml/20180828135324.21976-4-patrick.bellasi@xxxxxxx/

That's where we refcount RUNNABLE tasks and figure out the current
clamp value for a CPU.

That data structure, with CONFIG_UCLAMP_GROUPS_COUNT=5, is:

struct uclamp_cpu {
	struct uclamp_group    group[2][6];    /*     0    96 */
	/* --- cacheline 1 boundary (64 bytes) was 32 bytes ago --- */
	int                    value[2];       /*    96     8 */
	int                    flags;          /*   104     4 */

	/* size: 108, cachelines: 2, members: 3 */
	/* last cacheline: 44 bytes */
};

and we fit into 2 cache lines with this data layout:

util_min[0..5] | util_max[0..5] | other data

> What's the purpose of that cacheline align statement?

In uclamp_maps we still need to scan the array when a clamp value is
changed from user-space, i.e. in the cases reported above. That
alignment just ensures we minimize the number of cache lines touched
by the scan. Does that make sense?

Or is that alignment perhaps implicitly generated by the compiler?

> Note that without that apparently superfluous lock, it would be 8*12 =
> 96 bytes, which is 1.5 lines and would indeed suggest you default to
> GROUP_COUNT=7 by default to fill 2 lines.

Yes, I'll check more carefully whether we can count on just the uclamp_mutex.

> Why are the min and max things torn up like that? I'm fairly sure I
> asked some of that last time; but the above comments only try to explain
> what, not why.

We use that organization to speed up scanning for clamp values of the
same clamp_id. That matters more in the hot path than in the slow path
above: we need to scan struct uclamp_cpu whenever a new aggregated
clamp value has to be computed. This is done in:

[PATCH v4 03/16] sched/core: uclamp: add CPU's clamp groups accounting
https://lore.kernel.org/lkml/20180828135324.21976-4-patrick.bellasi@xxxxxxx/

Specifically:

  dequeue_task()
      uclamp_cpu_put()
          uclamp_cpu_put_id(clamp_id)
              uclamp_cpu_update(clamp_id)
                  // Here we have an array scan by clamp_id

With the data layout reported above, when we update the min clamp
value (boost) we have all the required data in a single cache line.
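As a compilable sketch of that scan (struct layout and names loosely
model patch 03/16; the fields and the aggregation are simplified, not
the actual kernel code):

```c
#include <assert.h>

#define UCLAMP_MIN    0
#define UCLAMP_MAX    1
#define UCLAMP_CNT    2
#define UCLAMP_GROUPS 6		/* CONFIG_UCLAMP_GROUPS_COUNT + 1 */

/* Simplified model of the per-CPU accounting structures. */
struct uclamp_group {
	int value;	/* clamp value refcounted by this group */
	int tasks;	/* RUNNABLE tasks currently in this group */
};

struct uclamp_cpu {
	/* Row-major: all the groups of one clamp_id are contiguous,
	 * so the scan in uclamp_cpu_update() stays within a single
	 * cache line. */
	struct uclamp_group group[UCLAMP_CNT][UCLAMP_GROUPS];
	int value[UCLAMP_CNT];
};

/* Recompute the CPU's aggregated clamp for @clamp_id as the max
 * clamp value among the groups which still have RUNNABLE tasks. */
static void uclamp_cpu_update(struct uclamp_cpu *uc_cpu, int clamp_id)
{
	int max_value = 0;

	for (int group_id = 0; group_id < UCLAMP_GROUPS; group_id++) {
		struct uclamp_group *uc_grp =
			&uc_cpu->group[clamp_id][group_id];

		if (uc_grp->tasks > 0 && uc_grp->value > max_value)
			max_value = uc_grp->value;
	}
	uc_cpu->value[clamp_id] = max_value;
}
```

E.g. with boosted tasks in two UCLAMP_MIN groups, the aggregate is the
max of the two; when the last task of the higher group dequeues, the
scan falls back to the remaining one.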

If that makes sense, I can certainly improve the comment above to
better justify the layout.

Cheers,
Patrick

--
#include <best/regards.h>

Patrick Bellasi