Re: [PATCH 11/18] mm: fix TLB flush race between migration, and change_protection_range

From: Rik van Riel
Date: Tue Dec 10 2013 - 09:26:18 EST


On 12/09/2013 02:09 AM, Mel Gorman wrote:

After reading the locking thread that Paul McKenney started,
I wonder if I got the barriers wrong in these functions...

> +#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
> +/*
> + * Memory barriers to keep this state in sync are graciously provided by
> + * the page table locks, outside of which no page table modifications happen.
> + * The barriers below prevent the compiler from re-ordering the instructions
> + * around the memory barriers that are already present in the code.
> + */
> +static inline bool tlb_flush_pending(struct mm_struct *mm)
> +{
> +	barrier();

Should this be smp_mb__after_unlock_lock()?

> +	return mm->tlb_flush_pending;
> +}
> +static inline void set_tlb_flush_pending(struct mm_struct *mm)
> +{
> +	mm->tlb_flush_pending = true;
> +	barrier();
> +}
> +/* Clearing is done after a TLB flush, which also provides a barrier. */
> +static inline void clear_tlb_flush_pending(struct mm_struct *mm)
> +{
> +	barrier();
> +	mm->tlb_flush_pending = false;
> +}

And should these be smp_mb__before_spinlock()?
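
To make the question concrete, something like the sketch below is what
I have in mind. It is completely untested; the function bodies are
unchanged from the patch, only the barrier() calls are swapped for the
primitives named above, and it may well be stronger than what is
actually needed:

static inline bool tlb_flush_pending(struct mm_struct *mm)
{
	/*
	 * The reader is expected to hold the page table lock, per the
	 * comment in the patch, so pair the read with that lock
	 * acquisition.
	 */
	smp_mb__after_unlock_lock();
	return mm->tlb_flush_pending;
}

static inline void set_tlb_flush_pending(struct mm_struct *mm)
{
	mm->tlb_flush_pending = true;
	/*
	 * change_protection_range() takes the page table locks after
	 * this; order the store above against those acquisitions.
	 */
	smp_mb__before_spinlock();
}

static inline void clear_tlb_flush_pending(struct mm_struct *mm)
{
	/* Same substitution on the clear side, which runs after the flush. */
	smp_mb__before_spinlock();
	mm->tlb_flush_pending = false;
}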

Paul? Peter?

--
All rights reversed