Re: [PATCH -tip] introduce sys_membarrier(): process-wide memorybarrier (v9)

From: Mathieu Desnoyers
Date: Thu Mar 04 2010 - 12:57:10 EST

* Linus Torvalds (torvalds@xxxxxxxxxxxxxxxxxxxx) wrote:
> > - SA_RUNNING: a way to signal only running threads - as a way for user-space
> > based concurrency control mechanisms to deschedule running threads (or, like
> > in your case, to implement barrier / garbage collection schemes).
> Hmm. This sounds less fundamentally broken, but at the same time also
> _way_ more invasive in the signal handling layer. It's already one of our
> more "exciting" layers out there.

Hrm, thinking about it a bit further, the only way I can see to provide a
usable SA_RUNNING flag would be to add hooks to the scheduler. These hooks would
somehow have to call user-space code (!) when scheduling a thread in or out. Yes,
this sounds utterly broken (since these hooks would have to be preemptable).

The idea is this: if we look, for instance, at the kernel preemptable RCU
implementations, they consist of two parts: one iterates over all CPUs to
consider every active CPU, and the other is a modification of the scheduler to
note all preempted tasks that were in a preemptable RCU read-side critical
section.
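
Roughly speaking, the structure looks like this (a much simplified sketch, not
the actual rcutree code; note_rcu_preempt(), blocked_readers and
wait_for_quiescent_state() are made-up names used only for illustration):

  /* Part 1: scheduler hook, called when a task is preempted.  If the task
   * is inside an RCU read-side critical section, record it so the grace
   * period machinery also waits for it. */
  static void note_rcu_preempt(struct task_struct *t)
  {
          if (t->rcu_read_lock_nesting > 0)
                  list_add(&t->rcu_node_entry, &blocked_readers);
  }

  /* Part 2: grace period machinery, iterating on all CPUs (plus the tasks
   * recorded above) to wait for the currently active readers. */
  static void rcu_wait_for_readers(void)
  {
          int cpu;

          for_each_online_cpu(cpu)
                  wait_for_quiescent_state(cpu);  /* made-up helper */
          /* ... and also wait for every task on blocked_readers. */
  }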

Just for the memory barrier we consider for sys_membarrier(), I had to ensure
that the scheduler issues memory barriers to order accesses to user-space memory
with respect to mm_cpumask modifications. In effect, what we are doing is
ensuring that the operation required on the running threads is also performed by
the scheduler when scheduling a task in or out.
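
Concretely, the pairing I have in mind looks roughly like this (a simplified
sketch only, not the v9 patch itself; membarrier_ipi() is an illustrative name,
and the mm_cpumask snapshotting, the expedited/non-expedited variants and the
single-threaded fast path are all omitted):

  /* Scheduler side (context switch), simplified.  The barriers order the
   * user-space memory accesses of the previous/next task with respect to
   * the mm_cpumask updates, so sys_membarrier() cannot miss a CPU that is
   * concurrently switching.  (The real code reuses barriers already
   * implied by the context switch where possible.) */
          smp_mb();
          cpumask_clear_cpu(cpu, mm_cpumask(oldmm));
          cpumask_set_cpu(cpu, mm_cpumask(mm));
          smp_mb();

  /* sys_membarrier() side, simplified: IPI every CPU currently running a
   * thread of this process; each IPI handler just executes smp_mb(). */
  static void membarrier_ipi(void *unused)
  {
          smp_mb();
  }

  SYSCALL_DEFINE0(membarrier)
  {
          smp_mb();
          preempt_disable();
          smp_call_function_many(mm_cpumask(current->mm),
                                 membarrier_ipi, NULL, 1);
          preempt_enable();
          smp_mb();
          return 0;
  }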

As soon as we have signal handlers that perform more than a simple memory
barrier (e.g. something that has side-effects outside of the processor), I doubt
it would ever make sense to run the handler only on running threads unless we
have hooks in the scheduler too.



Mathieu Desnoyers
Operating System Efficiency Consultant
EfficiOS Inc.