memory offline infinite loop after soft offline
From: Qian Cai
Date: Fri Oct 11 2019 - 17:32:51 EST
# /opt/ltp/runtest/bin/move_pages12
move_pages12.c:263: INFO: Free RAM 258988928 kB
move_pages12.c:281: INFO: Increasing 2048kB hugepages pool on node 0 to 4
move_pages12.c:291: INFO: Increasing 2048kB hugepages pool on node 8 to 4
move_pages12.c:207: INFO: Allocating and freeing 4 hugepages on node 0
move_pages12.c:207: INFO: Allocating and freeing 4 hugepages on node 8
move_pages12.c:197: PASS: Bug not reproduced
move_pages12.c:197: PASS: Bug not reproduced
for mem in $(ls -d /sys/devices/system/memory/memory*); do
        echo offline > $mem/state
        echo online > $mem/state
done
The LTP move_pages12 test first calls madvise(MADV_SOFT_OFFLINE) on a range.
Then, one of the "echo offline" writes triggers an infinite loop in
__offline_pages() here,
		/* check again */
		ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
					    NULL, check_pages_isolated_cb);
	} while (ret);
because check_pages_isolated_cb() always returns -EBUSY from
test_pages_isolated(),
	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
						skip_hwpoisoned_pages);
	...
	return pfn < end_pfn ? -EBUSY : 0;
The root cause is in __test_page_isolated_in_pageblock(), where "pfn" never
reaches "end_pfn" because the page at that pfn is not PageBuddy.
	while (pfn < end_pfn) {
		...
		else
			break;
	}

	return pfn;
Adding a dump_page() for that pfn shows,
[  101.665160][ T8885] pfn = 77501, end_pfn = 78000
[  101.665245][ T8885] page:c00c000001dd4040 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0
[  101.665329][ T8885] flags: 0x3fffc000000000()
[  101.665391][ T8885] raw: 003fffc000000000 0000000000000000 ffffffff01dd0500 0000000000000000
[  101.665498][ T8885] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[  101.665588][ T8885] page dumped because: soft_offline
[  101.665639][ T8885] page_owner tracks the page as freed
[  101.665697][ T8885] page last allocated via order 5, migratetype Movable, gfp_mask 0x346cca(GFP_HIGHUSER_MOVABLE|__GFP_NOWARN|__GFP_RETRY_MAYFAIL|__GFP_COMP|__GFP_THISNODE)
[  101.665924][ T8885]  prep_new_page+0x3c0/0x440
[  101.665962][ T8885]  get_page_from_freelist+0x2568/0x2bb0
[  101.666059][ T8885]  __alloc_pages_nodemask+0x1b4/0x670
[  101.666115][ T8885]  alloc_fresh_huge_page+0x244/0x6e0
[  101.666183][ T8885]  alloc_migrate_huge_page+0x30/0x70
[  101.666254][ T8885]  alloc_new_node_page+0xc4/0x380
[  101.666325][ T8885]  migrate_pages+0x3b4/0x19e0
[  101.666375][ T8885]  do_move_pages_to_node.isra.29.part.30+0x44/0xa0
[  101.666464][ T8885]  kernel_move_pages+0x498/0xfc0
[  101.666520][ T8885]  sys_move_pages+0x28/0x40
[  101.666643][ T8885]  system_call+0x5c/0x68
[  101.666665][ T8885] page last free stack trace:
[  101.666704][ T8885]  __free_pages_ok+0xa4c/0xd40
[  101.666773][ T8885]  update_and_free_page+0x2dc/0x5b0
[  101.666821][ T8885]  free_huge_page+0x2dc/0x740
[  101.666875][ T8885]  __put_compound_page+0x64/0xc0
[  101.666926][ T8885]  putback_active_hugepage+0x228/0x390
[  101.666990][ T8885]  migrate_pages+0xa78/0x19e0
[  101.667048][ T8885]  soft_offline_page+0x314/0x1050
[  101.667117][ T8885]  sys_madvise+0x1068/0x1080
[  101.667185][ T8885]  system_call+0x5c/0x68