Re: [RFC PATCH 4/4] Fix: sched/membarrier: p->mm->membarrier_state racy load (v2)

From: Mathieu Desnoyers
Date: Fri Sep 13 2019 - 11:21:05 EST


----- On Sep 9, 2019, at 7:00 AM, Oleg Nesterov oleg@xxxxxxxxxx wrote:

> On 09/08, Mathieu Desnoyers wrote:
>>
>> +static void sync_runqueues_membarrier_state(struct mm_struct *mm)
>> +{
>> +	int membarrier_state = atomic_read(&mm->membarrier_state);
>> +	bool fallback = false;
>> +	cpumask_var_t tmpmask;
>> +	int cpu;
>> +
>> +	if (atomic_read(&mm->mm_users) == 1 || num_online_cpus() == 1) {
>> +		WRITE_ONCE(this_rq()->membarrier_state, membarrier_state);
>
> This doesn't look safe; this caller can migrate to another CPU after
> it calculates the per-cpu ptr.
>
> I think you need to disable preemption or simply use this_cpu_write().

Good point! I'll use this_cpu_write() there and within
membarrier_exec_mmap(), which seems to be affected by the same problem.
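
For context, a minimal sketch of the patterns being discussed, assuming
the rq->membarrier_state field from the quoted hunk and the generic
per-cpu accessors; it is illustrative only, not the final patch:

	/*
	 * Racy: this_rq() resolves the per-cpu runqueue pointer, but the
	 * task may migrate to another CPU between the pointer calculation
	 * and the store, so the update can land on the wrong runqueue.
	 */
	WRITE_ONCE(this_rq()->membarrier_state, membarrier_state);

	/*
	 * Preemption-safe alternative 1: this_cpu_write() performs the
	 * per-cpu address calculation and the store as one operation that
	 * cannot be split by migration.
	 */
	this_cpu_write(runqueues.membarrier_state, membarrier_state);

	/*
	 * Preemption-safe alternative 2: keep WRITE_ONCE() but disable
	 * preemption around the pointer calculation and the store.
	 */
	preempt_disable();
	WRITE_ONCE(this_rq()->membarrier_state, membarrier_state);
	preempt_enable();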

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com