Re: [PATCH V2 2/2] arm64/mm: Enable memory hot remove
From: Anshuman Khandual
Date: Tue Apr 16 2019 - 05:52:38 EST
On 04/15/2019 07:25 PM, David Hildenbrand wrote:
>> +
>> +#ifdef CONFIG_MEMORY_HOTREMOVE
>> +int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
>> +{
>> + unsigned long start_pfn = start >> PAGE_SHIFT;
>> + unsigned long nr_pages = size >> PAGE_SHIFT;
>> + struct zone *zone = page_zone(pfn_to_page(start_pfn));
>> + int ret;
>> +
>> + ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
>> + if (!ret)
> Please note that I posted patches that remove all error handling
> from arch_remove_memory and __remove_pages(). They are already in next/master
>
> So this gets a lot simpler and more predictable.
>
>
> Author: David Hildenbrand <david@xxxxxxxxxx>
> Date: Wed Apr 10 11:02:27 2019 +1000
>
> mm/memory_hotplug: make __remove_pages() and arch_remove_memory() never fail
>
> All callers of arch_remove_memory() ignore errors. And we should really
> try to remove any errors from the memory removal path. No more errors are
> reported from __remove_pages(). BUG() in s390x code in case
> arch_remove_memory() is triggered. We may implement that properly later.
> WARN in case powerpc code failed to remove the section mapping, which is
> better than ignoring the error completely right now.
Sure, will follow suit next time around.
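
For illustration only, once __remove_pages() can no longer fail (per the
patch quoted above, already in next/master), the arm64 hook would reduce
to something like the following sketch. This is not the committed code,
just what the hunk from this series looks like with the error handling
dropped; the rest of the arm64 teardown (page table removal etc.) from
the series is elided here:

```c
#ifdef CONFIG_MEMORY_HOTREMOVE
void arch_remove_memory(int nid, u64 start, u64 size,
			struct vmem_altmap *altmap)
{
	unsigned long start_pfn = start >> PAGE_SHIFT;
	unsigned long nr_pages = size >> PAGE_SHIFT;
	struct zone *zone = page_zone(pfn_to_page(start_pfn));

	/*
	 * __remove_pages() cannot fail any more, so there is no
	 * return value to check or propagate to the caller.
	 */
	__remove_pages(zone, start_pfn, nr_pages, altmap);

	/* ... arch-specific mapping teardown from the series ... */
}
#endif
```

With no error path, the function becomes void and the caller no longer
has to reason about a partially removed memory block.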