Re: wake_up_process implied memory barrier clarification

From: Oleg Nesterov
Date: Thu Aug 27 2015 - 14:29:30 EST


On 08/27, Michal Hocko wrote:
>
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -2031,6 +2031,9 @@ something up. The barrier occurs before the task state is cleared, and so sits
> <general barrier> STORE current->state
> LOAD event_indicated
>
> +Please note that wake_up_process is an exception here because it implies
> +the write memory barrier unconditionally.
> +

I simply can't understand (can't even parse) this part of memory-barriers.txt.

> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1967,8 +1967,7 @@ static void try_to_wake_up_local(struct task_struct *p)
> *
> * Return: 1 if the process was woken up, 0 if it was already running.
> *
> - * It may be assumed that this function implies a write memory barrier before
> - * changing the task state if and only if any tasks are woken up.
> + * It may be assumed that this function implies a write memory barrier.
> */

I won't argue; technically this is correct, of course. And I agree that
the old comment is misleading.



But the new comment reads as if it were fine to avoid an explicit wmb()
as long as you do wake_up_process(). Say,

extern int A, B;
extern struct task_struct *something_unrelated;

void w(void)
{
	A = 1;
	wake_up_process(something_unrelated);
	/* we know that it implies wmb() */
	B = 1;
}

void r(void)
{
	int a, b;

	b = B;
	rmb();
	a = A;

	/* never fires _if_ the implied wmb() may be relied upon */
	BUG_ON(b && !a);
}

Perhaps this part of the comment should simply be removed; the unconditional
wmb() in ttwu() is just an implementation detail. And note that initially the
documented behaviour of smp_mb__before_spinlock() was only the STORE-LOAD
serialization. Then people noticed that it actually does wmb() and started
to rely on that fact.

To me, this comment should just explain that this function implies a barrier,
but only in the sense that you do not need another one between setting
CONDITION = T and calling wake_up_process().
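
IOW, the only thing callers really need is the usual sleep/wakeup pairing.
Roughly (just a sketch; CONDITION and p stand for whatever flag/task the
caller uses):

/* waker */
CONDITION = 1;
/* no explicit barrier needed here, wake_up_process() implies it */
wake_up_process(p);

/* sleeper */
for (;;) {
	set_current_state(TASK_UNINTERRUPTIBLE);
	if (CONDITION)
		break;
	schedule();
}
__set_current_state(TASK_RUNNING);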

Oleg.
