[PATCH V5 0/3] arm64/mm: Enable memory hot remove
From: Anshuman Khandual
Date: Wed May 29 2019 - 05:19:49 EST
This series enables memory hot remove on arm64 after fixing a memblock
removal ordering problem in the generic __remove_memory() path and a
possible arm64-specific kernel page table race condition. The series is
based on the v5.2-rc2 tag.
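For reference, a minimal sketch of the intended ordering in
__remove_memory() (illustrative only, not the literal diff; the
surrounding code and signatures approximate v5.2-rc2):

void __ref __remove_memory(int nid, u64 start, u64 size)
{
	...
	/*
	 * Tear down the linear map and memmap while the range is still
	 * known to memblock, as arm64's pfn_valid() is memblock based.
	 */
	arch_remove_memory(nid, start, size, NULL);

	/* Only then drop the range from memblock */
	memblock_free(start, size);
	memblock_remove(start, size);
	...
}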
Testing:
Memory hot remove has been tested on arm64 with the 4K, 16K and 64K page
size configurations, across all possible CONFIG_ARM64_VA_BITS and
CONFIG_PGTABLE_LEVELS combinations. It has only been build tested on
non-arm64 platforms.
Changes in V5:
- Reached agreement [1] on using memory_hotplug_lock for arm64 ptdump
(see the sketch after this list)
- Commit 7ba36eccb3f8 ("arm64/mm: Inhibit huge-vmap with ptdump") is already merged
- Dropped the above patch from this series
- Fixed an indentation problem in arch_[add|remove]_memory() as per David
- Collected all new Acked-by tags
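For context on the first item, the agreed approach is to take the memory
hotplug lock around the ptdump walk. A minimal sketch of the shape of
that change (paraphrased, not the verbatim patch):

static int ptdump_show(struct seq_file *m, void *v)
{
	struct ptdump_info *info = m->private;

	get_online_mems();	/* take mem_hotplug_lock for read */
	ptdump_walk_pgd(m, info);
	put_online_mems();
	return 0;
}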
Changes in V4: (https://lkml.org/lkml/2019/5/20/19)
- Implemented most of the suggestions from Mark Rutland
- Interchanged [PATCH 2/4] and [PATCH 3/4] and updated the commit message
- Moved the CONFIG_PGTABLE_LEVELS check inside free_[pud|pmd]_table()
- Used READ_ONCE() in the remaining instances that access page table entries
- s/p???_present()/p???_none()/ for checking valid kernel page table entries
- WARN_ON() when an entry is both !p???_none() and !p???_present()
(see the fragment after this list)
- Updated memory hot-remove commit message with additional details as suggested
- Rebased the series on 5.2-rc1 with hotplug changes from David and Michal Hocko
- Collected all new Acked-by tags
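A hypothetical fragment illustrating the READ_ONCE(), p???_none() and
WARN_ON() points above (the function name mirrors the series, but the
body here is illustrative, not quoted from the patch):

static void remove_pmd_table(pmd_t *pmdp, unsigned long addr, ...)
{
	pmd_t pmd = READ_ONCE(*pmdp);	/* fetch the entry exactly once */

	if (pmd_none(pmd))		/* skip empty entries */
		return;

	/* An entry must never be !none and !present at the same time */
	WARN_ON(!pmd_present(pmd));
	...
}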
Changes in V3: (https://lkml.org/lkml/2019/5/14/197)
- Implemented most of the suggestions from Mark Rutland for remove_pagetable()
- Fixed applicable PGTABLE_LEVEL wrappers around pgtable page freeing functions
- Replaced 'direct' with 'sparse_vmap' in remove_pagetable() with inverted polarity
- Renamed pointer variables to end in 'p' and dropped tmp from iterations
- Performed intermediate TLB invalidation while clearing pgtable entries
(see the sketch after this list)
- Dropped flush_tlb_kernel_range() from remove_pagetable()
- Added flush_tlb_kernel_range() in remove_pte_table() instead
- Renamed page freeing functions for pgtable page and mapped pages
- Used page range size instead of order while freeing mapped or pgtable pages
- Removed all PageReserved() handling while freeing mapped or pgtable pages
- Replaced XXX_index() with XXX_offset() while walking the kernel page table
- Used READ_ONCE() while fetching individual pgtable entries
- Took init_mm.page_table_lock for the whole operation, not per entry change
- Dropped the previously added [pmd|pud]_index() helpers, no longer required
- Added a new patch to protect kernel page table race condition for ptdump
- Added a new patch from Mark Rutland to prevent huge-vmap with ptdump
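Putting the locking and TLB invalidation points together, a sketch of
remove_pte_table() as described above (free_hotplug_page_range() stands
in for the renamed page freeing helper; its name, the signature and the
exact body here are assumptions for illustration):

/* The caller holds init_mm.page_table_lock for the whole operation */
static void remove_pte_table(pte_t *ptep, unsigned long addr,
			     unsigned long end, bool sparse_vmap)
{
	pte_t pte;

	for (; addr < end; addr += PAGE_SIZE, ptep++) {
		pte = READ_ONCE(*ptep);
		if (pte_none(pte))
			continue;

		WARN_ON(!pte_present(pte));
		pte_clear(&init_mm, addr, ptep);

		/* Intermediate TLB invalidation for each cleared entry */
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

		/* Free the mapped page only for vmemmap mappings */
		if (sparse_vmap)
			free_hotplug_page_range(pte_page(pte), PAGE_SIZE);
	}
}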
Changes in V2: (https://lkml.org/lkml/2019/4/14/5)
- Added all received review and ack tags
- Split the series from ZONE_DEVICE enablement for better review
- Moved memblock re-order patch to the front as per Robin Murphy
- Updated commit message on memblock re-order patch per Michal Hocko
- Dropped [pmd|pud]_large() definitions
- Used existing [pmd|pud]_sect() instead of earlier [pmd|pud]_large()
- Removed __meminit and __ref tags as per Oscar Salvador
- Dropped unnecessary 'ret' init in arch_add_memory() per Robin Murphy
- Skipped calling into pgtable_page_dtor() for linear mapping page table
pages and updated all relevant functions
Changes in V1: (https://lkml.org/lkml/2019/4/3/28)
[1] https://lkml.org/lkml/2019/5/28/584
Anshuman Khandual (3):
mm/hotplug: Reorder arch_remove_memory() call in __remove_memory()
arm64/mm: Hold memory hotplug lock while walking for kernel page table dump
arm64/mm: Enable memory hot remove
arch/arm64/Kconfig | 3 +
arch/arm64/mm/mmu.c | 211 ++++++++++++++++++++++++++++++++++++++++-
arch/arm64/mm/ptdump_debugfs.c | 3 +
mm/memory_hotplug.c | 2 +-
4 files changed, 216 insertions(+), 3 deletions(-)
--
2.7.4