Re: [PATCH v11b 18/20] timers: Implement the hierarchical pull model

From: Frederic Weisbecker
Date: Thu Feb 22 2024 - 05:52:38 EST


On Thu, Feb 22, 2024 at 11:37:10AM +0100, Anna-Maria Behnsen wrote:
> +static bool tmigr_inactive_up(struct tmigr_group *group,
> +			      struct tmigr_group *child,
> +			      void *ptr)
> +{
> +	union tmigr_state curstate, newstate, childstate;
> +	struct tmigr_walk *data = ptr;
> +	bool walk_done;
> +	u8 childmask;
> +
> +	childmask = data->childmask;
> +	curstate.state = atomic_read_acquire(&group->migr_state);
> +	childstate.state = 0;
> +
> +	for (;;) {
> +		if (child)
> +			childstate.state = atomic_read(&child->migr_state);
> +
> +		newstate = curstate;
> +		walk_done = true;
> +
> +		/* Reset active bit when the child is no longer active */
> +		if (!childstate.active)
> +			newstate.active &= ~childmask;
> +
> +		if (newstate.migrator == childmask) {
> +			/*
> +			 * Find a new migrator for the group, because the child
> +			 * group is idle!
> +			 */
> +			if (!childstate.active) {
> +				unsigned long new_migr_bit, active = newstate.active;
> +
> +				new_migr_bit = find_first_bit(&active, BIT_CNT);
> +
> +				if (new_migr_bit != BIT_CNT) {
> +					newstate.migrator = BIT(new_migr_bit);
> +				} else {
> +					newstate.migrator = TMIGR_NONE;
> +
> +					/* Changes need to be propagated */
> +					walk_done = false;
> +				}
> +			}
> +		}
> +
> +		newstate.seq++;
> +
> +		WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active));
> +
> +		if (atomic_try_cmpxchg(&group->migr_state, &curstate.state,
> +				       newstate.state))
> +			break;
> +		/*
> +		 * Add memory barrier to make sure child and group states order
> +		 * of read is preserved. This barrier is only required when
> +		 * atomic_try_cmpxchg() failed.
> +		 */
> +		smp_mb__after_atomic();

Ideally, memory barrier comments should state:

1) What it orders (you did)
2) What it pairs with (the acquire)

The acquire barrier should be commented accordingly as well. Of course this
can all be done in a further patch (I might lose track beyond 11c)

Reviewed-by: Frederic Weisbecker <frederic@xxxxxxxxxx>