Re: [PATCH v2] mm/sparse: Fix kernel crash with pfn_section_valid check

From: Wei Yang
Date: Thu Mar 26 2020 - 18:12:44 EST


On Thu, Mar 26, 2020 at 07:02:35PM +0530, Aneesh Kumar K.V wrote:
>Fixes the crash below:
>
>BUG: Kernel NULL pointer dereference on read at 0x00000000
>Faulting instruction address: 0xc000000000c3447c
>Oops: Kernel access of bad area, sig: 11 [#1]
>LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
>CPU: 11 PID: 7519 Comm: lt-ndctl Not tainted 5.6.0-rc7-autotest #1
>...
>NIP [c000000000c3447c] vmemmap_populated+0x98/0xc0
>LR [c000000000088354] vmemmap_free+0x144/0x320
>Call Trace:
> section_deactivate+0x220/0x240
> __remove_pages+0x118/0x170
> arch_remove_memory+0x3c/0x150
> memunmap_pages+0x1cc/0x2f0
> devm_action_release+0x30/0x50
> release_nodes+0x2f8/0x3e0
> device_release_driver_internal+0x168/0x270
> unbind_store+0x130/0x170
> drv_attr_store+0x44/0x60
> sysfs_kf_write+0x68/0x80
> kernfs_fop_write+0x100/0x290
> __vfs_write+0x3c/0x70
> vfs_write+0xcc/0x240
> ksys_write+0x7c/0x140
> system_call+0x5c/0x68
>
>The crash is due to a NULL pointer dereference at
>
>        test_bit(idx, ms->usage->subsection_map);
>
>in pfn_section_valid(), because ms->usage is NULL.
>
>With commit d41e2f3bd546 ("mm/hotplug: fix hot remove failure in SPARSEMEM|!VMEMMAP case"),
>section_mem_map is set to NULL after depopulate_section_memmap(). This
>was done so that pfn_to_page() works correctly with a kernel config that disables
>SPARSEMEM_VMEMMAP. With that config, pfn_to_page() does
>
>        __section_mem_map_addr(__sec) + __pfn;
>where
>
>static inline struct page *__section_mem_map_addr(struct mem_section *section)
>{
>        unsigned long map = section->section_mem_map;
>        map &= SECTION_MAP_MASK;
>        return (struct page *)map;
>}
>
>Now with SPARSEMEM_VMEMMAP enabled, mem_section->usage->subsection_map is used to
>check pfn validity (pfn_valid()). Since section_deactivate() releases
>mem_section->usage when a section is fully deactivated, a pfn_valid() check after
>such a deactivation causes a kernel crash.
>
>static inline int pfn_valid(unsigned long pfn)
>{
>...
>        return early_section(ms) || pfn_section_valid(ms, pfn);
>}
>
>where
>
>static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
>{
>        int idx = subsection_map_index(pfn);
>
>        return test_bit(idx, ms->usage->subsection_map);
>}
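>
>For context, the "..." in pfn_valid() above also elides an earlier bail-out on
>!valid_section(ms); valid_section() (include/linux/mmzone.h, quoted here from
>v5.6 for reference) only returns true while SECTION_HAS_MEM_MAP is set in
>section_mem_map:
>
>static inline int valid_section(struct mem_section *section)
>{
>        return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
>}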
>
>Avoid this by clearing SECTION_HAS_MEM_MAP when mem_section->usage is freed.
>For architectures like ppc64, where large pages (16MB) are used for the vmemmap
>mapping, a single vmemmap mapping can cover multiple sections. Hence, before a
>vmemmap mapping page can be freed, the kernel needs to make sure there are no
>valid sections within that mapping. Clearing the section valid bit before
>depopulate_section_memmap() enables this.
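>
>In section_deactivate() (mm/sparse.c) this boils down to something like the
>following (a sketch of the approach, not the literal diff):
>
>        /*
>         * Mark the section invalid so that valid_section() returns false
>         * and pfn_valid() stops dereferencing the freed ms->usage before
>         * the vmemmap backing this section is torn down.
>         */
>        ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
>        ...
>        depopulate_section_memmap(pfn, nr_pages, altmap);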
>
>Fixes: d41e2f3bd546 ("mm/hotplug: fix hot remove failure in SPARSEMEM|!VMEMMAP case")
>Reported-by: Sachin Sant <sachinp@xxxxxxxxxxxxxxxxxx>
>Tested-by: Sachin Sant <sachinp@xxxxxxxxxxxxxxxxxx>
>Cc: Baoquan He <bhe@xxxxxxxxxx>
>Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
>Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
>Cc: Pankaj Gupta <pankaj.gupta.linux@xxxxxxxxx>
>Cc: David Hildenbrand <david@xxxxxxxxxx>
>Cc: Michal Hocko <mhocko@xxxxxxxx>
>Cc: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
>Cc: Oscar Salvador <osalvador@xxxxxxx>
>Cc: Mike Rapoport <rppt@xxxxxxxxxxxxx>
>Cc: <stable@xxxxxxxxxxxxxxx>
>Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>

Reviewed-by: Wei Yang <richard.weiyang@xxxxxxxxx>

--
Wei Yang
Help you, Help me