Re: [PATCH v2 3/3] arm: extend pfn_valid to take into account freed memory map alignment
From: Tony Lindgren
Date: Tue Jun 29 2021 - 07:52:52 EST
* Mike Rapoport <rppt@xxxxxxxxxxxxx> [210629 10:51]:
> It seems the new version of pfn_valid() decides that the last pages are
> not valid because of an overflow in memblock_overlaps_region(). As a
> result, __sync_icache_dcache() skips flushing these pages.
>
> The patch below should fix this. I've left the prints for now, hopefully
> they will not appear anymore.
Yes, this allows the system to boot for me :)
I'm still seeing these three prints though:
...
smp: Brought up 1 node, 2 CPUs
SMP: Total of 2 processors activated (3994.41 BogoMIPS).
CPU: All CPU(s) started in SVC mode.
pfn_valid(__pageblock_pfn_to_page+0x14/0xa8): pfn: afe00: is_map: 0 overlaps: 1
pfn_valid(__pageblock_pfn_to_page+0x28/0xa8): pfn: affff: is_map: 0 overlaps: 1
pfn_valid(__pageblock_pfn_to_page+0x38/0xa8): pfn: afe00: is_map: 0 overlaps: 1
devtmpfs: initialized
...
Regards,
Tony
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index 6162a070a410..7ba22d23eca4 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -126,10 +126,16 @@ int pfn_valid(unsigned long pfn)
> {
> phys_addr_t addr = __pfn_to_phys(pfn);
> unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;
> + bool overlaps = memblock_overlaps_region(&memblock.memory,
> + ALIGN_DOWN(addr, pageblock_size),
> + pageblock_size - 1);
>
> if (__phys_to_pfn(addr) != pfn)
> return 0;
>
> + if (memblock_is_map_memory(addr) != overlaps)
> + pr_info("%s(%pS): pfn: %lx: is_map: %d overlaps: %d\n", __func__, (void *)_RET_IP_, pfn, memblock_is_map_memory(addr), overlaps);
> +
> /*
> * If address less than pageblock_size bytes away from a present
> * memory chunk there still will be a memory map entry for it
> @@ -137,7 +143,7 @@ int pfn_valid(unsigned long pfn)
> */
> if (memblock_overlaps_region(&memblock.memory,
> ALIGN_DOWN(addr, pageblock_size),
> - pageblock_size))
> + pageblock_size - 1))
> return 1;
>
> return 0;
>
> --
> Sincerely yours,
> Mike.