Re: [PATCH v2 3/3] arm: extend pfn_valid to take into account freed memory map alignment
From: Mike Rapoport
Date: Tue Jun 29 2021 - 08:52:02 EST
On Tue, Jun 29, 2021 at 02:52:39PM +0300, Tony Lindgren wrote:
> * Mike Rapoport <rppt@xxxxxxxxxxxxx> [210629 10:51]:
> > It seems the new version of pfn_valid() decides that the last pages are
> > not valid because of the overflow in memblock_overlaps_region(). As a
> > result, __sync_icache_dcache() skips flushing these pages.
> >
> > The patch below should fix this. I've left the prints for now, hopefully
> > they will not appear anymore.
>
> Yes this allows the system to boot for me :)
>
> I'm still seeing these three prints though:
>
> ...
> smp: Brought up 1 node, 2 CPUs
> SMP: Total of 2 processors activated (3994.41 BogoMIPS).
> CPU: All CPU(s) started in SVC mode.
> pfn_valid(__pageblock_pfn_to_page+0x14/0xa8): pfn: afe00: is_map: 0 overlaps: 1
> pfn_valid(__pageblock_pfn_to_page+0x28/0xa8): pfn: affff: is_map: 0 overlaps: 1
> pfn_valid(__pageblock_pfn_to_page+0x38/0xa8): pfn: afe00: is_map: 0 overlaps: 1
These pfns do have a memory map even though they were stolen in
arm_memblock_steal():
memblock_free: [0xaff00000-0xafffffff] arm_memblock_steal+0x50/0x70
memblock_remove: [0xaff00000-0xafffffff] arm_memblock_steal+0x5c/0x70
...
memblock_free: [0xafe00000-0xafefffff] arm_memblock_steal+0x50/0x70
memblock_remove: [0xafe00000-0xafefffff] arm_memblock_steal+0x5c/0x70
But the struct pages there are never initialized.
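(Assuming 4M pageblocks here, both pfns land in the same pageblock:
ALIGN_DOWN(0xafe00000, 4M) == ALIGN_DOWN(0xaffff000, 4M) == 0xafc00000, and
the range [0xafc00000, 0xafffffff] still overlaps the memory left below
0xafe00000. So memblock_overlaps_region() returns 1 while
memblock_is_map_memory() returns 0, which is exactly what the three prints
above show.)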
I'll resend v3 of the entire set with an additional patch to take care of
that as well.
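
Just to illustrate the overflow: below is a stand-alone user-space sketch of
the half-open overlap check with a 32-bit phys_addr_t. The memory layout, the
4M pageblock and the helper name are made up; it only shows why the
"pageblock_size - 1" change in the hunk further down matters when memory
reaches the top of the 32-bit physical address space:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* 32-bit physical addresses, i.e. no LPAE */
typedef uint32_t phys_addr_t;

/* the usual half-open interval overlap test */
static bool addrs_overlap(phys_addr_t base1, phys_addr_t size1,
			  phys_addr_t base2, phys_addr_t size2)
{
	return base1 < (base2 + size2) && base2 < (base1 + size1);
}

int main(void)
{
	/* made-up layout: memory at [0x80000000, 0xfff00000), 4M pageblocks */
	phys_addr_t mem_base = 0x80000000;
	phys_addr_t mem_size = 0x7ff00000;
	phys_addr_t pageblock_size = 4UL << 20;
	/* an address near the end of memory, in the last (partial) pageblock */
	phys_addr_t addr = 0xffe00000;
	phys_addr_t base = addr & ~(pageblock_size - 1);	/* ALIGN_DOWN */

	/* base + pageblock_size wraps past 32 bits -> "no overlap" reported */
	printf("size:     %d\n", addrs_overlap(base, pageblock_size,
					       mem_base, mem_size));
	/* base + pageblock_size - 1 == 0xffffffff -> overlap is seen */
	printf("size - 1: %d\n", addrs_overlap(base, pageblock_size - 1,
					       mem_base, mem_size));
	return 0;
}
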
> devtmpfs: initialized
> ...
>
> Regards,
>
> Tony
>
>
> > diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > index 6162a070a410..7ba22d23eca4 100644
> > --- a/arch/arm/mm/init.c
> > +++ b/arch/arm/mm/init.c
> > @@ -126,10 +126,16 @@ int pfn_valid(unsigned long pfn)
> > {
> > phys_addr_t addr = __pfn_to_phys(pfn);
> > unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;
> > + bool overlaps = memblock_overlaps_region(&memblock.memory,
> > + ALIGN_DOWN(addr, pageblock_size),
> > + pageblock_size - 1);
> >
> > if (__phys_to_pfn(addr) != pfn)
> > return 0;
> >
> > + if (memblock_is_map_memory(addr) != overlaps)
> > + pr_info("%s(%pS): pfn: %lx: is_map: %d overlaps: %d\n", __func__, (void *)_RET_IP_, pfn, memblock_is_map_memory(addr), overlaps);
> > +
> > /*
> > * If address less than pageblock_size bytes away from a present
> > * memory chunk there still will be a memory map entry for it
> > @@ -137,7 +143,7 @@ int pfn_valid(unsigned long pfn)
> > */
> > if (memblock_overlaps_region(&memblock.memory,
> > ALIGN_DOWN(addr, pageblock_size),
> > - pageblock_size))
> > + pageblock_size - 1))
> > return 1;
> >
> > return 0;
> >
> > --
> > Sincerely yours,
> > Mike.
--
Sincerely yours,
Mike.