[PATCH/RFC 2/7] mm: replace page table access via ACCESS_ONCE with barriers

From: Christian Borntraeger
Date: Mon Nov 24 2014 - 08:04:51 EST


ACCESS_ONCE does not work reliably on non-scalar types. For
example, gcc 4.6 and 4.7 might drop the volatile qualifier for such
accesses during the SRA (scalar replacement of aggregates) pass
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Page table
entries are affected because pte_t, pmd_t and pud_t are structure
types on many configurations, e.g. 32-bit x86 with PAE.

Let's change the code to read the page table elements with a plain
load followed by a compiler barrier(), which keeps the compiler from
re-fetching the entry afterwards.
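To illustrate the pattern, here is a minimal stand-alone sketch (not
kernel code): sample_pte_t and load_pte() are made-up names, and the
pte is modeled as a two-word structure as on 32-bit PAE.

	/*
	 * Stand-alone sketch of the load-then-barrier() pattern used by
	 * this patch; sample_pte_t and load_pte() are illustrative only.
	 */
	#include <stdio.h>

	/* compiler barrier as defined in the kernel's <linux/compiler.h> */
	#define barrier() __asm__ __volatile__("" : : : "memory")

	typedef struct {
		unsigned long pte_low;
		unsigned long pte_high;
	} sample_pte_t;

	static sample_pte_t load_pte(sample_pte_t *ptep)
	{
		/* plain load instead of ACCESS_ONCE() on a non-scalar type */
		sample_pte_t entry = *ptep;

		/* keep the compiler from re-fetching *ptep after this point */
		barrier();
		return entry;
	}

	int main(void)
	{
		sample_pte_t pte = { 0x1000, 0 };
		sample_pte_t entry = load_pte(&pte);

		printf("pte_low=%#lx pte_high=%#lx\n",
		       entry.pte_low, entry.pte_high);
		return 0;
	}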

Signed-off-by: Christian Borntraeger <borntraeger@xxxxxxxxxx>
---
mm/gup.c | 4 +++-
mm/memory.c | 3 ++-
mm/rmap.c | 3 ++-
3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index cd62c8c..e44af3c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -917,7 +917,9 @@ static int gup_pud_range(pgd_t *pgdp, unsigned long addr, unsigned long end,

 	pudp = pud_offset(pgdp, addr);
 	do {
-		pud_t pud = ACCESS_ONCE(*pudp);
+		pud_t pud = *pudp;
+
+		barrier();
 
 		next = pud_addr_end(addr, end);
 		if (pud_none(pud))
diff --git a/mm/memory.c b/mm/memory.c
index 3e50383..d982e35 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3202,7 +3202,8 @@ static int handle_pte_fault(struct mm_struct *mm,
 	pte_t entry;
 	spinlock_t *ptl;
 
-	entry = ACCESS_ONCE(*pte);
+	entry = *pte;
+	barrier();
 	if (!pte_present(entry)) {
 		if (pte_none(entry)) {
 			if (vma->vm_ops) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 19886fb..1e54274 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -581,7 +581,8 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
 	 * without holding anon_vma lock for write. So when looking for a
 	 * genuine pmde (in which to find pte), test present and !THP together.
 	 */
-	pmde = ACCESS_ONCE(*pmd);
+	pmde = *pmd;
+	barrier();
 	if (!pmd_present(pmde) || pmd_trans_huge(pmde))
 		pmd = NULL;
 out:
--
1.9.3
