Re: [PATCH 1/2] mm, memory_hotplug: is_mem_section_removable do not pass the end of a zone
From: Oscar Salvador
Date: Tue Jan 29 2019 - 04:06:10 EST
On Mon, Jan 28, 2019 at 03:45:05PM +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@xxxxxxxx>
>
> Mikhail has reported the following VM_BUG_ON triggered when reading
> sysfs removable state of a memory block:
> page:000003d082008000 is uninitialized and poisoned
> page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
> Call Trace:
> ([<0000000000385b26>] test_pages_in_a_zone+0xde/0x160)
> [<00000000008f15c4>] show_valid_zones+0x5c/0x190
> [<00000000008cf9c4>] dev_attr_show+0x34/0x70
> [<0000000000463ad0>] sysfs_kf_seq_show+0xc8/0x148
> [<00000000003e4194>] seq_read+0x204/0x480
> [<00000000003b53ea>] __vfs_read+0x32/0x178
> [<00000000003b55b2>] vfs_read+0x82/0x138
> [<00000000003b5be2>] ksys_read+0x5a/0xb0
> [<0000000000b86ba0>] system_call+0xdc/0x2d8
> Last Breaking-Event-Address:
> [<0000000000385b26>] test_pages_in_a_zone+0xde/0x160
> Kernel panic - not syncing: Fatal exception: panic_on_oops
>
> The reason is that the memory block spans the zone boundary and we are
> stumbling over an uninitialized struct page. Fix this by enforcing zone
> range in is_mem_section_removable so that we never run away from a
> zone.
Does that mean that the remaining pages (the ones past the zone boundary) are
not tied to any other zone? Why? Are these pages "holes", or how did that come
to be?
>
> Reported-by: Mikhail Zaslonko <zaslonko@xxxxxxxxxxxxx>
> Debugged-by: Mikhail Zaslonko <zaslonko@xxxxxxxxxxxxx>
> Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
> ---
> mm/memory_hotplug.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index b9a667d36c55..07872789d778 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1233,7 +1233,8 @@ static bool is_pageblock_removable_nolock(struct page *page)
> bool is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)
> {
> struct page *page = pfn_to_page(start_pfn);
> - struct page *end_page = page + nr_pages;
> + unsigned long end_pfn = min(start_pfn + nr_pages, zone_end_pfn(page_zone(page)));
> + struct page *end_page = pfn_to_page(end_pfn);
>
> /* Check the starting page of each pageblock within the range */
> for (; page < end_page; page = next_active_pageblock(page)) {
> --
> 2.20.1
>
--
Oscar Salvador
SUSE L3