Re: [PATCH] mm: fix kernel NULL pointer dereference in page_vma_mapped_walk

From: Matthew Wilcox
Date: Thu Apr 07 2022 - 09:54:57 EST


On Thu, Apr 07, 2022 at 02:40:08PM +0800, zhenwei pi wrote:
> size_to_hstate(4K) returns a NULL pointer, which leads to a kernel BUG in
> page_vma_mapped_walk().

Yes, I think this is the right fix. It's not immediately obvious from
the bug and the patch, but what's going on is:

page_mapped_in_vma() sets nr_pages to 1. This is correct because we
usually only want to know about the precise page, and not about the
folio containing it. But hugetlbfs is special (... in so many ways ...)
and actually wants to work on the entire folio. We could set nr_pages
specially for hugetlb pages, but it's better to ignore it in
page_vma_mapped_walk() for the hugetlb case.
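
For anyone trying to follow along at home, the caller looks something like
this (abridged and from memory, so don't trust the details beyond the
nr_pages initialisation, which is the part that matters here):

	int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
	{
		struct page_vma_mapped_walk pvmw = {
			.pfn = page_to_pfn(page),
			.nr_pages = 1,	/* just this page, not the whole folio */
			.vma = vma,
			.flags = PVMW_SYNC,
		};

		pvmw.address = vma_address(page, vma);
		if (pvmw.address == -EFAULT)
			return 0;
		if (!page_vma_mapped_walk(&pvmw))
			return 0;
		page_vma_mapped_walk_done(&pvmw);
		return 1;
	}

So for a page in a hugetlb VMA, nr_pages * PAGE_SIZE came out as 4K rather
than the huge page size, which is how we ended up asking size_to_hstate()
about a size it has never heard of.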

I'll fix up the changelog and add it to my pile of fixes that I'm
sending tomorrow.
https://git.infradead.org/users/willy/pagecache.git/shortlog/refs/heads/for-next

> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 1187f9c1ec5b..a39ec23581c9 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -163,7 +163,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  		return not_found(pvmw);
>  
>  	if (unlikely(is_vm_hugetlb_page(vma))) {
> -		unsigned long size = pvmw->nr_pages * PAGE_SIZE;
> +		unsigned long size = huge_page_size(hstate_vma(vma));
>  		/* The only possible mapping was handled on last iteration */
>  		if (pvmw->pte)
>  			return not_found(pvmw);
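
(For anyone who didn't chase it through the source: the old calculation
blows up a few lines further down in the same hugetlb branch; quoting from
memory, so the exact lines may differ slightly:)

	/* when pud is not present, pte will be NULL */
	pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
	if (!pvmw->pte)
		return false;

	pvmw->ptl = huge_pte_lockptr(size_to_hstate(size), mm, pvmw->pte);

With nr_pages == 1, size is 4K, size_to_hstate(4K) returns NULL, and
huge_pte_lockptr() dereferences it.
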
> --
> 2.25.1
>