Re: arm32: panic in move_freepages (Was [PATCH v2 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid())
From: Kefeng Wang
Date: Thu May 06 2021 - 08:48:02 EST
On 2021/5/3 16:44, Mike Rapoport wrote:
On Mon, May 03, 2021 at 10:07:01AM +0200, David Hildenbrand wrote:
On 03.05.21 08:26, Mike Rapoport wrote:
On Fri, Apr 30, 2021 at 07:24:37PM +0800, Kefeng Wang wrote:
On 2021/4/30 17:51, Mike Rapoport wrote:
On Thu, Apr 29, 2021 at 06:22:55PM +0800, Kefeng Wang wrote:
On 2021/4/29 14:57, Mike Rapoport wrote:
Do you use SPARSEMEM? If yes, what is your section size?
What is the value of CONFIG_FORCE_MAX_ZONEORDER in your configuration?
Yes,
CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_STATIC=y
CONFIG_FORCE_MAX_ZONEORDER = 11
CONFIG_PAGE_OFFSET=0xC0000000
CONFIG_HAVE_ARCH_PFN_VALID=y
CONFIG_HIGHMEM=y
#define SECTION_SIZE_BITS 26
#define MAX_PHYSADDR_BITS 32
#define MAX_PHYSMEM_BITS 32
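For reference, a quick back-of-the-envelope from the values above (assuming 4 KiB pages; the derived numbers are not part of the original report):

	section size        = 1 << SECTION_SIZE_BITS            = 1 << 26 bytes = 64 MiB (16384 pages)
	largest buddy block = 1 << (CONFIG_FORCE_MAX_ZONEORDER - 1) = 1 << 10 pages  = 4 MiB

Both are much larger than the megabyte-granular bank boundaries reported below, so a single section, or even a single max-order block, can cover pfns (such as de600-de6ff) that are never handed to the buddy allocator.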
With the patch, the address is aligned, but the panic still occurred.
Is this the same panic at move_freepages() for range [de600, de7ff]?
Do you enable CONFIG_ARM_LPAE?
No, CONFIG_ARM_LPAE is not set, and yes, it is the same panic in move_freepages() for
start_pfn/end_pfn [de600, de7ff], [de600000, de7ff000]: pfn = de600, page = ef3cc000,
page-flags = ffffffff, pfn2phy = de600000
__free_memory_core, range: 0xb0200000 - 0xc0000000, pfn: b0200 - b0200
__free_memory_core, range: 0xcc000000 - 0xdca00000, pfn: cc000 - b0200
__free_memory_core, range: 0xde700000 - 0xdea00000, pfn: de700 - b0200
Hmm, [de600, de7ff] is not added to the free lists, which is correct. But
then it's unclear how the page for de600 gets to move_freepages()...
Can't say I have any bright ideas to try here...
Are we missing some checks (e.g., PageReserved()) that pfn_valid_within()
would have "caught" before?
Unless I'm missing something, the crash happens in __rmqueue_fallback():
do_steal:
	page = get_page_from_free_area(area, fallback_mt);

	steal_suitable_fallback(zone, page, alloc_flags, start_migratetype,
				can_steal);
		-> move_freepages()
			-> BUG()
So a page taken from the free area should be sane, as the freed range was never added
to the free lists.
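For readers who want to see where that BUG comes from: move_freepages() sanity-checks each PageBuddy page it walks. The following is a simplified sketch from memory of a v5.10-era mm/page_alloc.c, not a verbatim quote of the tree under discussion, so details may differ:

	for (page = start_page; page <= end_page;) {
		if (!PageBuddy(page)) {
			/* non-buddy pages are skipped (movable ones are only counted) */
			page++;
			continue;
		}

		/* Make sure we are not inadvertently changing nodes */
		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
		VM_BUG_ON_PAGE(page_zone(page) != zone, page);

		order = buddy_order(page);
		move_to_free_list(page, zone, order, migratetype);
		page += 1 << order;
		pages_moved += 1 << order;
	}

With page-flags = ffffffff, as reported above for pfn de600, the node and zone derived from those flags are garbage, which would fit a blow-up in this loop.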
Sorry for the late response; I was on vacation.
The pfns in the range [de600, de7ff] won't be added to the free lists via
__free_memory_core(), but they could be added to the free lists via
free_highmem_page().
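For context, free_highmem_page() hands each highmem page straight to the buddy allocator. Roughly, paraphrased from memory of a v5.10-era kernel (not a verbatim quote):

	void free_highmem_page(struct page *page)
	{
		__free_reserved_page(page);	/* ClearPageReserved, init_page_count, __free_page */
		totalram_pages_inc();
		atomic_long_inc(&page_zone(page)->managed_pages);
		totalhigh_pages_inc();
	}

So every pfn that free_highpages() walks ends up, via free_unref_page() and the pcp lists, in add_to_free_list(), which matches the warning backtrace below.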
I added some debug output [1] in add_to_free_list(), and we can see the following call trace:
free_highpages, range_pfn [b0200, c0000], range_addr [b0200000, c0000000]
free_highpages, range_pfn [cc000, dca00], range_addr [cc000000, dca00000]
free_highpages, range_pfn [de700, dea00], range_addr [de700000, dea00000]
add_to_free_list, ===> pfn = de700
------------[ cut here ]------------
WARNING: CPU: 0 PID: 0 at mm/page_alloc.c:900 add_to_free_list+0x8c/0xec
pfn = de700
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.10.0+ #48
Hardware name: Hisilicon A9
[<c010a600>] (show_stack) from [<c04b21c4>] (dump_stack+0x9c/0xc0)
[<c04b21c4>] (dump_stack) from [<c011c708>] (__warn+0xc0/0xec)
[<c011c708>] (__warn) from [<c011c7a8>] (warn_slowpath_fmt+0x74/0xa4)
[<c011c7a8>] (warn_slowpath_fmt) from [<c023721c>] (add_to_free_list+0x8c/0xec)
[<c023721c>] (add_to_free_list) from [<c0237e00>] (free_pcppages_bulk+0x200/0x278)
[<c0237e00>] (free_pcppages_bulk) from [<c0238d14>] (free_unref_page+0x58/0x68)
[<c0238d14>] (free_unref_page) from [<c023bb54>] (free_highmem_page+0xc/0x50)
[<c023bb54>] (free_highmem_page) from [<c070620c>] (mem_init+0x21c/0x254)
[<c070620c>] (mem_init) from [<c0700b38>] (start_kernel+0x258/0x5c0)
[<c0700b38>] (start_kernel) from [<00000000>] (0x0)
So, any ideas?
[1] debug patch:
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 1ba9f9f9dbd8..ee3619c04f93 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -286,7 +286,7 @@ static void __init free_highpages(void)
 		/* Truncate partial highmem entries */
 		if (start < max_low)
 			start = max_low;
-
+		pr_info("%s, range_pfn [%lx, %lx], range_addr [%x, %x]\n", __func__, start, end, range_start, range_end);
 		for (; start < end; start++)
 			free_highmem_page(pfn_to_page(start));
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 592479f43c74..920f041f0c6f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -892,7 +892,14 @@ compaction_capture(struct capture_control *capc, struct page *page,
 static inline void add_to_free_list(struct page *page, struct zone *zone,
 				    unsigned int order, int migratetype)
 {
+	unsigned long pfn;
 	struct free_area *area = &zone->free_area[order];
+
+	pfn = page_to_pfn(page);
+	if (pfn >= 0xde600 && pfn < 0xde7ff) {
+		pr_info("%s, ===> pfn = %lx", __func__, pfn);
+		WARN_ONCE(pfn == 0xde700, "pfn = %lx", pfn);
+	}
And honestly, with the memory layout reported elsewhere in this thread, I'd say that the
bootloader/fdt begs for fixes...