Re: [PATCH] mm: Exclude HugeTLB pages from THP page_mapped logic
From: Steve Capper
Date: Fri Apr 01 2016 - 09:24:20 EST
Hi Andrew,
On Thu, Mar 31, 2016 at 04:06:50PM -0700, Andrew Morton wrote:
> On Tue, 29 Mar 2016 17:39:41 +0100 Steve Capper <steve.capper@xxxxxxx> wrote:
>
> > HugeTLB pages cannot be split, thus use the compound_mapcount to
> > track rmaps.
> >
> > Currently the page_mapped function will check the compound_mapcount, but
>
> s/the page_mapped function/page_mapped()/. It's so much simpler!
Thanks, agreed :-).
>
> > will also go through the constituent pages of a THP compound page and
> > query the individual _mapcounts too.
> >
> > Unfortunately, the page_mapped function does not distinguish between
> > HugeTLB and THP compound pages and assumes that a compound page always
> > has HPAGE_PMD_NR constituent pages to query.
> >
> > For most cases when dealing with HugeTLB this is just inefficient, but
> > for scenarios where the HugeTLB page size is less than the pmd block
> > size (e.g. when using the contiguous bit on ARM) this can lead to crashes.
> >
> > This patch adjusts the page_mapped function such that we skip the
> > unnecessary THP reference checks for HugeTLB pages.
> >
> > Fixes: e1534ae95004 ("mm: differentiate page_mapped() from page_mapcount() for compound pages")
> > Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > Signed-off-by: Steve Capper <steve.capper@xxxxxxx>
> > ---
> >
> > Hi,
> >
> > This patch is my approach to fixing a problem that was unearthed with
> > HugeTLB pages on arm64. We ran with PAGE_SIZE=64KB and placed down 32
> > contiguous ptes to create 2MB HugeTLB pages. (We can provide hints to
> > the MMU that page table entries are contiguous, so larger TLB entries
> > can be used to represent them.)
>
> So which kernel version(s) need this patch? I think both 4.4 and 4.5
> will crash in this manner? Should we backport the fix into 4.4.x and
> 4.5.x?
We deactivated the contiguous hint support just before 4.5 (as we ran
into the problem too late), so no kernels are currently crashing due to
this. If this goes in, we can then re-enable the contiguous hint on ARM.
>
> >
> > ...
> >
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1031,6 +1031,8 @@ static inline bool page_mapped(struct page *page)
> > page = compound_head(page);
> > if (atomic_read(compound_mapcount_ptr(page)) >= 0)
> > return true;
> > + if (PageHuge(page))
> > + return false;
> > for (i = 0; i < hpage_nr_pages(page); i++) {
> > if (atomic_read(&page[i]._mapcount) >= 0)
> > return true;
>
> page_mapped() is moronically huge. Uninlining it saves 206 bytes per
> callsite. It has 40+ callsites.
>
>
>
>
> btw, is anyone else seeing this `make M=' breakage?
>
> akpm3:/usr/src/25> make M=mm
> Makefile:679: Cannot use CONFIG_KCOV: -fsanitize-coverage=trace-pc is not supported by compiler
>
> WARNING: Symbol version dump ./Module.symvers
> is missing; modules will have no dependencies and modversions.
>
> make[1]: *** No rule to make target `mm/filemap.o', needed by `mm/built-in.o'. Stop.
> make: *** [_module_mm] Error 2
>
> It's a post-4.5 thing.
Sorry, I have not yet tried out KCOV.
>
>
>
> From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Subject: mm: uninline page_mapped()
>
> It's huge. Uninlining it saves 206 bytes per callsite. Shaves 4924 bytes
> from the x86_64 allmodconfig vmlinux.
>
> Cc: Steve Capper <steve.capper@xxxxxxx>
> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
The below looks reasonable to me; I don't have any benchmarks handy to
test for a performance regression, though.
>
> include/linux/mm.h | 21 +--------------------
> mm/util.c | 22 ++++++++++++++++++++++
> 2 files changed, 23 insertions(+), 20 deletions(-)
>
> diff -puN include/linux/mm.h~mm-uninline-page_mapped include/linux/mm.h
> --- a/include/linux/mm.h~mm-uninline-page_mapped
> +++ a/include/linux/mm.h
> @@ -1019,26 +1019,7 @@ static inline pgoff_t page_file_index(st
> return page->index;
> }
>
> -/*
> - * Return true if this page is mapped into pagetables.
> - * For compound page it returns true if any subpage of compound page is mapped.
> - */
> -static inline bool page_mapped(struct page *page)
> -{
> - int i;
> - if (likely(!PageCompound(page)))
> - return atomic_read(&page->_mapcount) >= 0;
> - page = compound_head(page);
> - if (atomic_read(compound_mapcount_ptr(page)) >= 0)
> - return true;
> - if (PageHuge(page))
> - return false;
> - for (i = 0; i < hpage_nr_pages(page); i++) {
> - if (atomic_read(&page[i]._mapcount) >= 0)
> - return true;
> - }
> - return false;
> -}
> +bool page_mapped(struct page *page);
>
> /*
> * Return true only if the page has been allocated with
> diff -puN mm/util.c~mm-uninline-page_mapped mm/util.c
> --- a/mm/util.c~mm-uninline-page_mapped
> +++ a/mm/util.c
> @@ -346,6 +346,28 @@ void *page_rmapping(struct page *page)
> return __page_rmapping(page);
> }
>
> +/*
> + * Return true if this page is mapped into pagetables.
> + * For compound page it returns true if any subpage of compound page is mapped.
> + */
> +bool page_mapped(struct page *page)
> +{
> + int i;
> + if (likely(!PageCompound(page)))
> + return atomic_read(&page->_mapcount) >= 0;
> + page = compound_head(page);
> + if (atomic_read(compound_mapcount_ptr(page)) >= 0)
> + return true;
> + if (PageHuge(page))
> + return false;
> + for (i = 0; i < hpage_nr_pages(page); i++) {
> + if (atomic_read(&page[i]._mapcount) >= 0)
> + return true;
> + }
> + return false;
> +}
> +EXPORT_SYMBOL(page_mapped);
> +
> struct anon_vma *page_anon_vma(struct page *page)
> {
> unsigned long mapping;
> _
>
Cheers,
--
Steve