[PATCH 5.1 115/115] powerpc/64s: Fix THP PMD collapse serialisation
From: Greg Kroah-Hartman
Date: Mon Jun 17 2019 - 17:40:04 EST
From: Nicholas Piggin <npiggin@xxxxxxxxx>
commit 33258a1db165cf43a9e6382587ad06e9b7f8187c upstream.
Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian
conversion in pte helpers") changed the actual bitwise tests in
pte_access_permitted by using pte_write() and pte_present() helpers
rather than raw bitwise testing _PAGE_WRITE and _PAGE_PRESENT bits.
With that change, pte_present() now returns true for PTEs which are
!_PAGE_PRESENT and _PAGE_INVALID, which is the combination used by
pmdp_invalidate() to synchronize access from lock-free lookups.
pte_access_permitted() is used by pmd_access_permitted(), so allowing
GUP lock-free access to proceed with such PTEs breaks this
synchronisation.
This bug has been observed on a host using the hash page table MMU,
with random crashes and corruption in guests, usually together with
bad PMD messages in the host.
Fix this by adding an explicit check in pmd_access_permitted(), and
documenting the condition explicitly.
The pte_write() change should be okay, and would prevent GUP from
falling back to the slow path when encountering savedwrite PTEs, which
matches what x86 (which does not implement savedwrite) does.
Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers")
Cc: stable@xxxxxxxxxxxxxxx # v4.20+
Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Signed-off-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 30 +++++++++++++++++++++++++++
arch/powerpc/mm/pgtable-book3s64.c | 3 ++
2 files changed, 33 insertions(+)
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -875,6 +875,23 @@ static inline int pmd_present(pmd_t pmd)
+static inline int pmd_is_serializing(pmd_t pmd)
+{
+	/*
+	 * If the pmd is undergoing a split, the _PAGE_PRESENT bit is clear
+	 * and _PAGE_INVALID is set (see pmd_present, pmdp_invalidate).
+	 *
+	 * This condition may also occur when flushing a pmd while flushing
+	 * it (see ptep_modify_prot_start), so callers must ensure this
+	 * case is fine as well.
+	 */
+	if ((pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
+						cpu_to_be64(_PAGE_INVALID))
+		return true;
+
+	return false;
+}
+
 static inline int pmd_bad(pmd_t pmd)
@@ -1090,6 +1107,19 @@ static inline int pmd_protnone(pmd_t pmd)
 #define pmd_access_permitted pmd_access_permitted
 static inline bool pmd_access_permitted(pmd_t pmd, bool write)
 {
+	/*
+	 * pmdp_invalidate sets this combination (which is not caught by
+	 * !pte_present() check in pte_access_permitted), to prevent
+	 * lock-free lookups, as part of the serialize_against_pte_lookup()
+	 * synchronisation.
+	 *
+	 * This also catches the case where the PTE's hardware PRESENT bit is
+	 * cleared while TLB is flushed, which is suboptimal but should not
+	 * be frequent.
+	 */
+	if (pmd_is_serializing(pmd))
+		return false;
+
 	return pte_access_permitted(pmd_pte(pmd), write);
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -116,6 +116,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 	 * This ensures that generic code that rely on IRQ disabling
 	 * to prevent a parallel THP split work as expected.
+	 *
+	 * Marking the entry with _PAGE_INVALID && ~_PAGE_PRESENT requires
+	 * a special case check in pmd_access_permitted.
 	 */
 	serialize_against_pte_lookup(vma->vm_mm);