Re: [syzbot] [mm?] BUG: Bad page map (7)

From: Matthew Wilcox
Date: Tue Sep 12 2023 - 01:02:11 EST


On Mon, Sep 11, 2023 at 01:22:51PM -0700, Dave Hansen wrote:
> On 9/11/23 12:12, Matthew Wilcox wrote:
> > On Mon, Sep 11, 2023 at 09:55:37AM -0700, Dave Hansen wrote:
> >> On 9/11/23 09:44, Matthew Wilcox wrote:
> >>> After fixing your two typos, this assembles to 176 bytes more code than
> >>> my version. Not sure that's great.
> >> Maybe I'm a fool, but 176 bytes of text bloat isn't scaring me off too
> >> much. I'd much rather have that than another window into x86 goofiness
> >> to maintain.
> >>
> >> Does that 176 bytes translate into meaningful performance, or is it just
> >> a bunch of register bit twiddling that the CPU will sail through?
> > I'm ... not sure how to tell. It's 1120 bytes vs 944 bytes and crawling
> > through that much x86 assembly isn't my idea of a great time. I can
> > send you objdump -dr for all three options if you like? Maybe there's
> > a quick way to compare them that I've never known about.
>
> Working patches would be great if you've got 'em handy, plus your
> .config and generally what compiler you're on.

gcc (Debian 13.2.0-2) 13.2.0

I don't think there's anything particularly strange about my .config.

If you compile this patch as-is, you'll get your preferred code.
Remove the #define DH and you get mine.

I would say that 176 bytes is almost three cachelines of I$, which isn't free,
even if all the insns in it can be executed while the CPU is waiting
for cache misses. This ought to be a pretty tight loop anyway; we're
just filling in adjacent PTEs. There may not be many spare cycles
for "free" uops to execute.

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index d6ad98ca1288..c9781b8b14af 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -955,6 +955,14 @@ static inline int pte_same(pte_t a, pte_t b)
 	return a.pte == b.pte;
 }

+static inline pte_t pte_next(pte_t pte)
+{
+	if (__pte_needs_invert(pte_val(pte)))
+		return __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
+	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+}
+#define pte_next pte_next
+
 static inline int pte_present(pte_t a)
 {
 	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1fba072b3dac..25333cf3c865 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -205,6 +205,10 @@ static inline int pmd_young(pmd_t pmd)
 #define arch_flush_lazy_mmu_mode()	do {} while (0)
 #endif

+#ifndef pte_next
+#define pte_next(pte)	__pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT))
+#endif
+
 #ifndef set_ptes
 /**
  * set_ptes - Map consecutive pages to a contiguous range of addresses.
@@ -223,6 +227,11 @@ static inline int pmd_young(pmd_t pmd)
 static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		pte_t *ptep, pte_t pte, unsigned int nr)
 {
+#define DH
+#ifdef DH
+	pgprot_t prot = pte_pgprot(pte);
+	unsigned long pfn = pte_pfn(pte);
+#endif
 	page_table_check_ptes_set(mm, ptep, pte, nr);

 	arch_enter_lazy_mmu_mode();
@@ -231,7 +240,12 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		if (--nr == 0)
 			break;
 		ptep++;
-		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+#ifdef DH
+		pfn++;
+		pte = pfn_pte(pfn, prot);
+#else
+		pte = pte_next(pte);
+#endif
 	}
 	arch_leave_lazy_mmu_mode();
 }
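
For anyone joining from the syzbot report, the reason pte_next() can't just
unconditionally add is the L1TF PTE inversion: a not-present entry on x86
stores its PFN bits inverted, so moving the mapping forward by one page makes
the raw value go *down*.  Below is a stand-alone toy model of that effect --
made-up names (toy_mk_pte() and friends) and a made-up bit layout, not the
real x86 encoding:

/*
 * Toy model of x86's L1TF PTE inversion -- NOT the real bit layout, just
 * enough to show why stepping an inverted (not-present) entry to the next
 * page means subtracting from the raw value rather than adding to it.
 */
#include <stdio.h>
#include <stdint.h>

#define TOY_PFN_SHIFT	12		/* pretend the PFN starts at bit 12 */
#define TOY_PFN_MASK	(~0xfffULL)	/* everything above the flag bits */
#define TOY_PRESENT	0x1ULL

static uint64_t toy_mk_pte(uint64_t pfn, uint64_t flags)
{
	uint64_t pfn_bits = pfn << TOY_PFN_SHIFT;

	if (!(flags & TOY_PRESENT))	/* not present: store the PFN inverted */
		pfn_bits = ~pfn_bits & TOY_PFN_MASK;
	return pfn_bits | flags;
}

static uint64_t toy_pte_pfn(uint64_t pte)
{
	uint64_t pfn_bits = pte & TOY_PFN_MASK;

	if (!(pte & TOY_PRESENT))	/* undo the inversion on the way out */
		pfn_bits = ~pfn_bits & TOY_PFN_MASK;
	return pfn_bits >> TOY_PFN_SHIFT;
}

int main(void)
{
	uint64_t pte = toy_mk_pte(0x1234, 0);	/* a PROT_NONE-style entry */

	/* Blindly adding one PFN step walks the mapping *backwards*: */
	printf("add: pfn %#llx\n", (unsigned long long)
	       toy_pte_pfn(pte + (1ULL << TOY_PFN_SHIFT)));	/* 0x1233 */

	/* Subtracting, as pte_next() does for inverted entries, is right: */
	printf("sub: pfn %#llx\n", (unsigned long long)
	       toy_pte_pfn(pte - (1ULL << TOY_PFN_SHIFT)));	/* 0x1235 */

	return 0;
}

Your version hides the inversion by going back through pfn_pte() on every
iteration, which is presumably where most of the extra 176 bytes come from;
pte_next() just bakes the direction of the step into one helper.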