Re: [RFC PATCH v2] membarrier: expedited private command

From: Mathieu Desnoyers
Date: Fri Jul 28 2017 - 13:04:46 EST


----- On Jul 28, 2017, at 12:46 PM, Peter Zijlstra peterz@xxxxxxxxxxxxx wrote:

> On Fri, Jul 28, 2017 at 03:38:15PM +0000, Mathieu Desnoyers wrote:
>> > Which only leaves PPC stranded.. but the 'good' news is that mpe says
>> > they'll probably need a barrier in switch_mm() in any case.
>>
>> As I pointed out in my other email, I plan to do this:
>>
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2636,6 +2636,11 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>> vtime_task_switch(prev);
>> perf_event_task_sched_in(prev, current);
>
> Here would place it _inside_ the rq->lock, which seems to make more
> sense given the purpose of the barrier, but either way works given its
> definition.

Given its naming "...after_unlock_lock", I thought it would be clearer to put
it after the unlock. Anyway, this barrier does not seem to be there to provide
release semantics per se (the unlock already has release semantics), but rather
to ensure a full memory barrier wrt memory accesses that are synchronized by
means other than this lock.
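
For context, here is roughly how the expedited private command pairs with
that scheduler-side barrier (a simplified sketch of the scheme, not the
exact code from the patch; the per-CPU IPI loop and helper names are just
illustrative):

	static void ipi_mb(void *info)
	{
		smp_mb();	/* IPIs should be serializing anyway. */
	}

	static void membarrier_private_expedited(void)
	{
		int cpu;

		/* System call entry is not guaranteed to be a full barrier. */
		smp_mb();
		for_each_online_cpu(cpu) {
			struct task_struct *p;

			rcu_read_lock();
			p = READ_ONCE(cpu_rq(cpu)->curr);
			/*
			 * Only IPI CPUs currently running a thread of our mm.
			 * The full barrier after the rq->curr update in the
			 * scheduler is what makes this remote read safe: a
			 * thread we miss here has already gone through that
			 * barrier before touching user-space memory.
			 */
			if (p && p->mm == current->mm)
				smp_call_function_single(cpu, ipi_mb, NULL, 1);
			rcu_read_unlock();
		}
		/* System call exit is not guaranteed to be a full barrier. */
		smp_mb();
	}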

>
>> finish_lock_switch(rq, prev);
>
> You could put the whole thing inside IS_ENABLED(CONFIG_SYSMEMBARRIER) or
> something.

I'm tempted to wait until we hear from the powerpc maintainers, so we know
whether they want this extra barrier in finish_task_switch() in any case,
before making it conditional on CONFIG_MEMBARRIER.
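
If we do end up making it conditional, I would expect something along these
lines (an untested sketch, assuming CONFIG_MEMBARRIER is the right symbol to
key off):

	if (IS_ENABLED(CONFIG_MEMBARRIER)) {
		/*
		 * The membarrier system call requires a full memory barrier
		 * after storing to rq->curr, before going back to user-space.
		 */
		smp_mb__after_unlock_lock();
	}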

Having a guaranteed barrier after context switch on all architectures may
have other uses.

Thanks,

Mathieu

>
>> + /*
>> + * The membarrier system call requires a full memory barrier
>> + * after storing to rq->curr, before going back to user-space.
>> + */
>> + smp_mb__after_unlock_lock();
>> finish_arch_post_lock_switch();
>>
>> fire_sched_in_preempt_notifiers(current);

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com