Re: [syzbot] [mm?] BUG: Bad page map (7)

From: Yin Fengwei
Date: Mon Sep 11 2023 - 03:24:41 EST


Hi Matthew,

On 9/10/23 11:02, Matthew Wilcox wrote:
> On Sat, Sep 09, 2023 at 10:12:48AM -0700, syzbot wrote:
>> commit 617c28ecab22d98a3809370eb6cb50fa24b7bfe1
>> Author: Yin Fengwei <fengwei.yin@xxxxxxxxx>
>> Date: Wed Aug 2 15:14:05 2023 +0000
>>
>> filemap: batch PTE mappings
>
> Hmm ... I don't know if this is the bug, but ...
I do think we should merge your patch here. LKP has already noticed some performance
regressions, and I suppose this patch can fix some of them.


I root-caused this "bad page map" issue in my local env. It is related to protnone
PTEs on x86_64. If a pte is not protnone, advancing it by adding
1UL << PFN_PTE_SHIFT is correct. But if the pte is protnone, we should subtract
1UL << PFN_PTE_SHIFT instead. I only realized this when I saw that pfn_pte() does
pfn ^= protnone_mask(): the PFN bits are stored inverted in a protnone pte, so
adding to the raw pte value moves the decoded PFN in the wrong direction.
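
To make the direction issue concrete, here is a small userspace toy model (not
kernel code; the inversion is simplified to flipping every bit of a 40-bit PFN
field, and the names are my own) showing that adding 1 to the raw encoded value
moves the decoded PFN backwards when the PFN bits are stored inverted, as
pfn_pte() does for protnone on x86_64:

#include <stdio.h>
#include <stdint.h>

/* Toy model only: x86_64 stores the PFN of a protnone PTE with its bits
 * inverted (pfn ^= protnone_mask(...) in pfn_pte()).  Here the inversion
 * is simplified to flipping every bit inside a 40-bit PFN field. */
#define PFN_MASK 0xffffffffffULL

static uint64_t encode_protnone_pfn(uint64_t pfn) { return ~pfn & PFN_MASK; }
static uint64_t decode_protnone_pfn(uint64_t raw) { return ~raw & PFN_MASK; }

int main(void)
{
	uint64_t pfn = 0x1234;
	uint64_t raw = encode_protnone_pfn(pfn);

	/* Naive advance on the raw value (what adding 1UL << PFN_PTE_SHIFT
	 * does to the pte): the decoded PFN goes backwards. */
	printf("raw + 1 decodes to %#llx, expected %#llx\n",
	       (unsigned long long)decode_protnone_pfn(raw + 1),
	       (unsigned long long)(pfn + 1));

	/* Subtracting from the raw value (equivalently, re-encoding pfn + 1)
	 * gives the intended next PFN. */
	printf("raw - 1 decodes to %#llx\n",
	       (unsigned long long)decode_protnone_pfn(raw - 1));
	return 0;
}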


The reproducer mmaps with PROT_NONE and then triggers SIGXFSZ, which creates a core
file. Dumping the core causes GUP with FOLL_FORCE on the PROT_NONE mapping and
creates protnone PTEs.
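
For reference, a rough userspace sketch of that sequence (my own approximation,
not syzbot's reproducer; whether the core dump actually walks this mapping also
depends on coredump_filter and the core size limit):

#include <fcntl.h>
#include <signal.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
	/* File-backed PROT_NONE mapping: any PTEs installed for it later
	 * will be protnone. */
	int fd = open("/tmp/protnone-test", O_RDWR | O_CREAT, 0600);
	ftruncate(fd, 1 << 20);
	mmap(NULL, 1 << 20, PROT_NONE, MAP_SHARED, fd, 0);

	/* Shrink RLIMIT_FSIZE and keep SIGXFSZ at its default action
	 * (terminate and dump core). */
	struct rlimit rl = { .rlim_cur = 4096, .rlim_max = 4096 };
	setrlimit(RLIMIT_FSIZE, &rl);
	signal(SIGXFSZ, SIG_DFL);

	/* Writing past the limit raises SIGXFSZ; writing out the core file
	 * is where GUP with FOLL_FORCE can touch the PROT_NONE VMA. */
	lseek(fd, 2 * 4096, SEEK_SET);
	write(fd, "x", 1);
	return 0;
}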

I submitted a request to syzbot to test the fix that works in my local env. Thanks.


Regards
Yin, Fengwei

>
> #syz test
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 582f5317ff71..580d0b2b1a7c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3506,7 +3506,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
>  		if (count) {
>  			set_pte_range(vmf, folio, page, count, addr);
>  			folio_ref_add(folio, count);
> -			if (in_range(vmf->address, addr, count))
> +			if (in_range(vmf->address, addr, count * PAGE_SIZE))
>  				ret = VM_FAULT_NOPAGE;
>  		}
>
> @@ -3520,7 +3520,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
>  	if (count) {
>  		set_pte_range(vmf, folio, page, count, addr);
>  		folio_ref_add(folio, count);
> -		if (in_range(vmf->address, addr, count))
> +		if (in_range(vmf->address, addr, count * PAGE_SIZE))
>  			ret = VM_FAULT_NOPAGE;
>  	}
>
>