Re: [PATCH 1/4] mm: Kconfig: move swap and slab config options to the MM section
From: Vlastimil Babka
Date: Tue Aug 24 2021 - 07:47:12 EST
On 8/19/21 21:55, Johannes Weiner wrote:
> These are currently under General Setup. MM seems like a better fit.
Right. I've also been wondering about that occasionally.
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
> ---
> init/Kconfig | 120 ---------------------------------------------------
> mm/Kconfig | 120 +++++++++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 120 insertions(+), 120 deletions(-)
>
> diff --git a/init/Kconfig b/init/Kconfig
> index a61c92066c2e..a2358cd5498a 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -331,23 +331,6 @@ config DEFAULT_HOSTNAME
> but you may wish to use a different default here to make a minimal
> system more usable with less configuration.
>
> -#
> -# For some reason microblaze and nios2 hard code SWAP=n. Hopefully we can
> -# add proper SWAP support to them, in which case this can be removed.
> -#
> -config ARCH_NO_SWAP
> - bool
> -
> -config SWAP
> - bool "Support for paging of anonymous memory (swap)"
> - depends on MMU && BLOCK && !ARCH_NO_SWAP
> - default y
> - help
> - This option allows you to choose whether you want to have support
> - for so-called swap devices or swap files in your kernel that are
> - used to provide more virtual memory than the actual RAM present
> - in your computer. If unsure say Y.
> -
> config SYSVIPC
> bool "System V IPC"
> help
> @@ -1862,109 +1845,6 @@ config COMPAT_BRK
>
> On non-ancient distros (post-2000 ones) N is usually a safe choice.
>
> -choice
> - prompt "Choose SLAB allocator"
> - default SLUB
> - help
> - This option allows you to select a slab allocator.
> -
> -config SLAB
> - bool "SLAB"
> - select HAVE_HARDENED_USERCOPY_ALLOCATOR
> - help
> - The regular slab allocator that is established and known to work
> - well in all environments. It organizes cache hot objects in
> - per cpu and per node queues.
> -
> -config SLUB
> - bool "SLUB (Unqueued Allocator)"
> - select HAVE_HARDENED_USERCOPY_ALLOCATOR
> - help
> - SLUB is a slab allocator that minimizes cache line usage
> - instead of managing queues of cached objects (SLAB approach).
> - Per cpu caching is realized using slabs of objects instead
> - of queues of objects. SLUB can use memory efficiently
> - and has enhanced diagnostics. SLUB is the default choice for
> - a slab allocator.
> -
> -config SLOB
> - depends on EXPERT
> - bool "SLOB (Simple Allocator)"
> - help
> - SLOB replaces the stock allocator with a drastically simpler
> - allocator. SLOB is generally more space efficient but
> - does not perform as well on large systems.
> -
> -endchoice
> -
> -config SLAB_MERGE_DEFAULT
> - bool "Allow slab caches to be merged"
> - default y
> - help
> - For reduced kernel memory fragmentation, slab caches can be
> - merged when they share the same size and other characteristics.
> - This carries a risk of kernel heap overflows being able to
> - overwrite objects from merged caches (and more easily control
> - cache layout), which makes such heap attacks easier to exploit
> - by attackers. By keeping caches unmerged, these kinds of exploits
> - can usually only damage objects in the same cache. To disable
> - merging at runtime, "slab_nomerge" can be passed on the kernel
> - command line.
> -
> -config SLAB_FREELIST_RANDOM
> - bool "Randomize slab freelist"
> - depends on SLAB || SLUB
> - help
> - Randomizes the freelist order used on creating new pages. This
> - security feature reduces the predictability of the kernel slab
> - allocator against heap overflows.
> -
> -config SLAB_FREELIST_HARDENED
> - bool "Harden slab freelist metadata"
> - depends on SLAB || SLUB
> - help
> - Many kernel heap attacks try to target slab cache metadata and
> - other infrastructure. This option makes minor performance
> - sacrifices to harden the kernel slab allocator against common
> - freelist exploit methods. Some slab implementations have more
> - sanity-checking than others. This option is most effective with
> - CONFIG_SLUB.
> -
> -config SHUFFLE_PAGE_ALLOCATOR
> - bool "Page allocator randomization"
> - default SLAB_FREELIST_RANDOM && ACPI_NUMA
> - help
> - Randomization of the page allocator improves the average
> - utilization of a direct-mapped memory-side-cache. See section
> - 5.2.27 Heterogeneous Memory Attribute Table (HMAT) in the ACPI
> - 6.2a specification for an example of how a platform advertises
> - the presence of a memory-side-cache. There are also incidental
> - security benefits as it reduces the predictability of page
> - allocations to complement SLAB_FREELIST_RANDOM, but the
> - default granularity of shuffling on the "MAX_ORDER - 1", i.e.,
> - 10th order of pages is selected based on cache utilization
> - benefits on x86.
> -
> - While the randomization improves cache utilization it may
> - negatively impact workloads on platforms without a cache. For
> - this reason, by default, the randomization is enabled only
> - after runtime detection of a direct-mapped memory-side-cache.
> - Otherwise, the randomization may be force enabled with the
> - 'page_alloc.shuffle' kernel command line parameter.
> -
> - Say Y if unsure.
> -
> -config SLUB_CPU_PARTIAL
> - default y
> - depends on SLUB && SMP
> - bool "SLUB per cpu partial cache"
> - help
> - Per cpu partial caches accelerate object allocation and freeing
> - that is local to a processor at the price of more indeterminism
> - in the latency of the free. On overflow these caches will be cleared
> - which requires the taking of locks that may cause latency spikes.
> - Typically one would choose no for a realtime system.
> -
> config MMAP_ALLOW_UNINITIALIZED
> bool "Allow mmapped anonymous memory to be uninitialized"
> depends on EXPERT && !MMU
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 02d44e3420f5..894858536e7f 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -2,6 +2,126 @@
>
> menu "Memory Management options"
>
> +#
> +# For some reason microblaze and nios2 hard code SWAP=n. Hopefully we can
> +# add proper SWAP support to them, in which case this can be removed.
> +#
> +config ARCH_NO_SWAP
> + bool
> +
> +config SWAP
> + bool "Support for paging of anonymous memory (swap)"
> + depends on MMU && BLOCK && !ARCH_NO_SWAP
> + default y
> + help
> + This option allows you to choose whether you want to have support
> + for so-called swap devices or swap files in your kernel that are
> + used to provide more virtual memory than the actual RAM present
> + in your computer. If unsure say Y.
> +
> +choice
> + prompt "Choose SLAB allocator"
> + default SLUB
> + help
> + This option allows you to select a slab allocator.
> +
> +config SLAB
> + bool "SLAB"
> + select HAVE_HARDENED_USERCOPY_ALLOCATOR
> + help
> + The regular slab allocator that is established and known to work
> + well in all environments. It organizes cache hot objects in
> + per cpu and per node queues.
> +
> +config SLUB
> + bool "SLUB (Unqueued Allocator)"
> + select HAVE_HARDENED_USERCOPY_ALLOCATOR
> + help
> + SLUB is a slab allocator that minimizes cache line usage
> + instead of managing queues of cached objects (SLAB approach).
> + Per cpu caching is realized using slabs of objects instead
> + of queues of objects. SLUB can use memory efficiently
> + and has enhanced diagnostics. SLUB is the default choice for
> + a slab allocator.
> +
> +config SLOB
> + depends on EXPERT
> + bool "SLOB (Simple Allocator)"
> + help
> + SLOB replaces the stock allocator with a drastically simpler
> + allocator. SLOB is generally more space efficient but
> + does not perform as well on large systems.
> +
> +endchoice
> +
> +config SLAB_MERGE_DEFAULT
> + bool "Allow slab caches to be merged"
> + default y
> + help
> + For reduced kernel memory fragmentation, slab caches can be
> + merged when they share the same size and other characteristics.
> + This carries a risk of kernel heap overflows being able to
> + overwrite objects from merged caches (and more easily control
> + cache layout), which makes such heap attacks easier to exploit
> + by attackers. By keeping caches unmerged, these kinds of exploits
> + can usually only damage objects in the same cache. To disable
> + merging at runtime, "slab_nomerge" can be passed on the kernel
> + command line.
> +
> +config SLAB_FREELIST_RANDOM
> + bool "Randomize slab freelist"
> + depends on SLAB || SLUB
> + help
> + Randomizes the freelist order used on creating new pages. This
> + security feature reduces the predictability of the kernel slab
> + allocator against heap overflows.
> +
> +config SLAB_FREELIST_HARDENED
> + bool "Harden slab freelist metadata"
> + depends on SLAB || SLUB
> + help
> + Many kernel heap attacks try to target slab cache metadata and
> + other infrastructure. This option makes minor performance
> + sacrifices to harden the kernel slab allocator against common
> + freelist exploit methods. Some slab implementations have more
> + sanity-checking than others. This option is most effective with
> + CONFIG_SLUB.
> +
> +config SHUFFLE_PAGE_ALLOCATOR
> + bool "Page allocator randomization"
> + default SLAB_FREELIST_RANDOM && ACPI_NUMA
> + help
> + Randomization of the page allocator improves the average
> + utilization of a direct-mapped memory-side-cache. See section
> + 5.2.27 Heterogeneous Memory Attribute Table (HMAT) in the ACPI
> + 6.2a specification for an example of how a platform advertises
> + the presence of a memory-side-cache. There are also incidental
> + security benefits as it reduces the predictability of page
> + allocations to complement SLAB_FREELIST_RANDOM, but the
> + default granularity of shuffling on the "MAX_ORDER - 1", i.e.,
> + 10th order of pages is selected based on cache utilization
> + benefits on x86.
> +
> + While the randomization improves cache utilization it may
> + negatively impact workloads on platforms without a cache. For
> + this reason, by default, the randomization is enabled only
> + after runtime detection of a direct-mapped memory-side-cache.
> + Otherwise, the randomization may be force enabled with the
> + 'page_alloc.shuffle' kernel command line parameter.
> +
> + Say Y if unsure.
> +
> +config SLUB_CPU_PARTIAL
> + default y
> + depends on SLUB && SMP
> + bool "SLUB per cpu partial cache"
> + help
> + Per cpu partial caches accelerate object allocation and freeing
> + that is local to a processor at the price of more indeterminism
> + in the latency of the free. On overflow these caches will be cleared
> + which requires the taking of locks that may cause latency spikes.
> + Typically one would choose no for a realtime system.
> +
> config SELECT_MEMORY_MODEL
> def_bool y
> depends on ARCH_SELECT_MEMORY_MODEL
>
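For illustration only (not part of the patch): the move changes only where these
symbols show up in menuconfig (under "Memory Management options" instead of
"General setup"); the CONFIG_* names themselves are untouched. So a .config
fragment like the sketch below, assuming one possible configuration with SLUB
and the freelist hardening options enabled by hand, is read the same way before
and after the patch:

# Swap and slab allocator options, now found under "Memory Management options"
CONFIG_SWAP=y
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_SLAB_MERGE_DEFAULT=y
CONFIG_SLAB_FREELIST_RANDOM=y
CONFIG_SLAB_FREELIST_HARDENED=y
# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
CONFIG_SLUB_CPU_PARTIAL=y

The runtime knobs mentioned in the help texts are likewise unaffected:
"slab_nomerge" and "page_alloc.shuffle" can still be passed on the kernel
command line as before.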