Re: [PATCH 3/3] arm: extend pfn_valid to take into account freed memory map alignment

From: Kefeng Wang
Date: Tue May 18 2021 - 21:50:52 EST




On 2021/5/18 23:52, Mike Rapoport wrote:
On Tue, May 18, 2021 at 08:49:43PM +0800, Kefeng Wang wrote:


On 2021/5/18 17:06, Mike Rapoport wrote:
From: Mike Rapoport <rppt@xxxxxxxxxxxxx>

When unused memory map is freed the preserved part of the memory map is
extended to match pageblock boundaries because lots of core mm
functionality relies on homogeneity of the memory map within pageblock
boundaries.

Since pfn_valid() is used to check whether there is a valid memory map
entry for a PFN, make it return true also for PFNs that have memory map
entries even if there is no actual memory populated there.

Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
---
arch/arm/mm/init.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 9d4744a632c6..bb678c0ba143 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -125,11 +125,24 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
int pfn_valid(unsigned long pfn)
{
phys_addr_t addr = __pfn_to_phys(pfn);
+ unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;
if (__phys_to_pfn(addr) != pfn)
return 0;
- return memblock_is_map_memory(addr);
+ if (memblock_is_map_memory(addr))
+ return 1;
+
+ /*
+ * If address less than pageblock_size bytes away from a present
+ * memory chunk there still will be a memory map entry for it
+ * because we round freed memory map to the pageblock boundaries
+ */
+ if (memblock_is_map_memory(ALIGN(addr + 1, pageblock_size)) ||
+ memblock_is_map_memory(ALIGN_DOWN(addr, pageblock_size)))
+ return 1;

Hi Mike, with patch3, the system won't boot.

Hmm, apparently I've miscalculated the ranges...

Can you please check with the below patch on top of this series:

Yes, it works,

On node 0 totalpages: 311551
Normal zone: 1230 pages used for memmap
Normal zone: 0 pages reserved
Normal zone: 157440 pages, LIFO batch:31
Normal zone: 17152 pages in unavailable ranges
HighMem zone: 154111 pages, LIFO batch:31
HighMem zone: 513 pages in unavailable ranges

and the oom testcase could pass.

Tested-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>


There is memblock_is_region_reserved() (it checks whether a region intersects reserved memory), and it takes the size into account as well; should we add a similar function for memory?


diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index bb678c0ba143..2fafbbc8e73b 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -138,8 +138,9 @@ int pfn_valid(unsigned long pfn)
* memory chunk there still will be a memory map entry for it
* because we round freed memory map to the pageblock boundaries
*/
- if (memblock_is_map_memory(ALIGN(addr + 1, pageblock_size)) ||
- memblock_is_map_memory(ALIGN_DOWN(addr, pageblock_size)))
+ if (memblock_overlaps_region(&memblock.memory,
+ ALIGN_DOWN(addr, pageblock_size),
+ pageblock_size))
return 1;
return 0;