[PATCH 00/46] Dynamic allocation of reserved_mem array.
From: Oreoluwa Babatunde
Date: Fri Jan 26 2024 - 19:04:28 EST
The reserved_mem array is used to store data for the different
reserved memory regions defined in the DT of a device. The array
stores information such as the region name, DT node, start address,
and size of each reserved memory region.
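For reference, each entry of the array is a struct reserved_mem, which
at the time of this series looks roughly like the following (see
include/linux/of_reserved_mem.h):

  struct reserved_mem {
          const char              *name;
          unsigned long           fdt_node;
          reserved_mem_init_fn    init;
          phys_addr_t             base;
          phys_addr_t             size;
          void                    *priv;
  };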
The array is currently statically allocated with a size of
MAX_RESERVED_REGIONS (64). This means that any system that defines more
than MAX_RESERVED_REGIONS (64) reserved memory regions does not have
enough space to store the information for all of them.
This series therefore introduces a dynamically allocated reserved_mem
array, sized with memblock_alloc() according to the number of reserved
memory regions specified in the DT, while retaining the static array
only as temporary early-boot storage.
Some architectures, such as arm64, require the page tables to be set up
before memblock-allocated memory is writable. On these architectures,
the dynamic allocation of the reserved_mem array therefore needs to
happen after the page tables have been set up, which in most cases
means after paging_init().
Reserved memory regions can be divided into two groups (a rough sketch
of how the two are told apart follows below):
i) Statically-placed reserved memory regions,
i.e. regions defined in the DT using the "reg" property.
ii) Dynamically-placed reserved memory regions,
i.e. regions specified in the DT using the "size" and
"alloc-ranges" properties.
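As a rough illustration only (not the exact code in this series), the
two groups can be told apart during the early flat-DT scan by which
properties a /reserved-memory child node carries. of_get_flat_dt_prop()
is the existing helper; handle_static_region() and
handle_dynamic_region() are hypothetical placeholders for the two code
paths:

  /* Sketch only: classify a /reserved-memory child node during the
   * early flattened-DT scan. The two handle_*() calls are hypothetical
   * placeholders for the statically/dynamically placed code paths. */
  static int __init classify_rmem_node(unsigned long node, const char *uname)
  {
          int len;

          if (of_get_flat_dt_prop(node, "reg", &len))
                  return handle_static_region(node, uname);   /* group i)  */

          if (of_get_flat_dt_prop(node, "size", &len))
                  return handle_dynamic_region(node, uname);  /* group ii) */

          return 0;       /* node carries no usable placement information */
  }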
For the statically-placed regions, memblock_reserve() and
memblock_mark_nomap() can be called right away, and saving their
information into the reserved_mem array can be deferred until memory
has been allocated for that array with memblock, i.e. after the page
tables have been set up.
For the dynamically-placed reserved memory regions, however, it is not
possible to wait: their start addresses are only known once they are
allocated at run time, so that information has to be stored somewhere
as soon as the allocation is made.
Waiting until after the page tables have been set up to allocate memory
for the dynamically-placed regions is also not an option, because the
allocations would then come from memory that has already been added to
the page tables, which is unsuitable for memory that is supposed to be
reserved and/or marked as nomap.
Therefore, this series splits up the processing of the reserved memory
regions into two stages, of which the first stage is carried out by
early_init_fdt_scan_reserved_mem() and the second is carried out by
fdt_init_reserved_mem().
early_init_fdt_scan_reserved_mem(), which is called before the page
tables are set up, is used to (a rough sketch follows the list):
1. Call memblock_reserve() and memblock_mark_nomap() on all the
statically-placed reserved memory regions as needed.
2. Allocate memory from memblock for the dynamically-placed reserved
memory regions and store them in the static array for reserved_mem.
memblock_reserve() and memblock_mark_nomap() are also called as
needed on all the memory allocated for the dynamically-placed
regions.
3. Count the total number of reserved memory regions found in the DT.
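A minimal sketch of this first stage, with hypothetical names
(rmem_count, parse_reg(), alloc_and_stash_dynamic_region()) standing in
for the real helpers:

  /* Sketch of the early stage: it runs before the page tables are set
   * up, so nothing is written to freshly memblock-allocated memory;
   * dynamically-placed regions are staged in the existing static array.
   * Helper and variable names are placeholders. */
  static int __init rmem_early_scan_node(unsigned long node, const char *uname,
                                         bool nomap)
  {
          int len;
          const __be32 *reg = of_get_flat_dt_prop(node, "reg", &len);

          if (reg) {
                  /* Statically placed: reserve/mark now, record later. */
                  phys_addr_t base, size;

                  parse_reg(reg, len, &base, &size);
                  memblock_reserve(base, size);
                  if (nomap)
                          memblock_mark_nomap(base, size);
          } else if (of_get_flat_dt_prop(node, "size", &len)) {
                  /* Dynamically placed: allocate now and stash the result
                   * in the static array until the final array exists. */
                  alloc_and_stash_dynamic_region(node, uname, nomap);
          }

          rmem_count++;   /* step 3: count every region found in the DT */
          return 0;
  }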
fdt_init_reserved_mem(), which should be called after the page tables
have been set up, is used to carry out the following (sketched after
the list):
1. Allocate memory for the reserved_mem array based on the number of
reserved memory regions counted as mentioned above.
2. Copy all the information for the dynamically-placed reserved memory
regions from the static array into the newly allocated memory for the
reserved_mem array.
3. Add the information for the statically-placed reserved memory
regions into the reserved_mem array.
4. Run the region-specific init functions for each of the reserved
memory regions saved in the reserved_mem array.
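A minimal sketch of this second stage, assuming a region counter
(rmem_count), a staging array (staging[], staged_count) holding the
dynamically-placed regions, and placeholder helpers for steps 3 and 4:

  /* Sketch of the second stage, run once the page tables are set up so
   * that memblock-allocated memory is writable. Variable and helper
   * names are placeholders, not the exact ones used in the series. */
  void __init rmem_init_array(void)
  {
          int i;

          /* 1. Size the final array from the number of regions counted. */
          reserved_mem = memblock_alloc(rmem_count * sizeof(*reserved_mem),
                                        SMP_CACHE_BYTES);
          if (!reserved_mem)
                  return;

          /* 2. Carry over the dynamically-placed regions staged earlier. */
          for (i = 0; i < staged_count; i++)
                  reserved_mem[reserved_mem_count++] = staging[i];

          /* 3. Walk the DT again and append the statically-placed regions. */
          rmem_add_static_regions();

          /* 4. Run each region's compatible-specific init callback. */
          rmem_run_init_callbacks();
  }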
Once the above steps have been completed and the init process has
finished running, the original statically allocated reserved_mem array
of size MAX_RESERVED_REGIONS (64) is automatically freed back to the
buddy allocator because it is no longer needed. This is done by marking
the array as an "__initdata" object in Patch 0018.
Note:
- Per architecture, this series is effectively only 10 patches. The
code for each architecture is split up into separate patches to
allow each architecture to be tested independently of changes from
other architectures. Should this series be accepted, this should
also allow each architecture's change to be picked up independently.
Patch 0001: Splits up the processing of the reserved memory regions
between early_init_fdt_scan_reserved_mem() and fdt_init_reserved_mem().
Patch 0002: Introduces a copy of early_init_fdt_scan_reserved_mem()
that is decoupled from fdt_init_reserved_mem(), so that the two
functions can be called independently of each other.
Patch 0003 - Patch 0016: Duplicated change for each architecture to
call early_init_fdt_scan_reserved_mem() and fdt_init_reserved_mem()
at their appropriate locations. Here, fdt_init_reserved_mem() is called
either before or after the page tables have been set up, depending on
the architecture's requirements.
Patch 0017: Deletes the early_init_fdt_scan_reserved_mem() function
since all architectures are now using the copy introduced in
Patch 0002.
Patch 0018: Dynamically allocate memory for the reserved_mem array
based on the total number of reserved memory regions specified in the
DT.
Patch 0019 - Patch 0029: Duplicated change for each architecture to
move the fdt_init_reserved_mem() function call to below the
unflatten_device_tree() function call. This is so that the unflattened
devicetree APIs can be used to process the reserved memory regions.
Patch 0030: Make code changes to start using the unflattened devicetree
APIs to access the reserved memory regions defined in the DT.
Patch 0031: Rename fdt_* functions to dt_* to reflect that the
flattened devicetree (fdt) APIs have been replaced with the unflattened
devicetree APIs.
Patch 0032 - Patch 0045: Duplicated change for each architecture to
switch from the use of fdt_init_reserved_mem() to
dt_init_reserved_mem(), which is the same function except that the
latter uses the unflattened devicetree APIs.
Patch 0046: Delete the fdt_init_reserved_mem() function as all
architectures have switched to using dt_init_reserved_mem() which was
introduced in Patch 0031.
- The limitation of this approach is that there is still a limit of
64 for dynamically-placed reserved memory regions. But from my current
analysis, these types of reserved memory regions are generally fewer
in number than the statically-placed reserved memory regions.
- I have looked through all the architectures and placed the call to
memblock_alloc() for the reserved_mem array at points where I
believe memblock-allocated memory is available to be written to.
I currently only have access to an arm64 device, which is where I am
testing the functionality of this series. Hence, I will need help from
architecture maintainers to test this series on other architectures to
ensure that the code functions properly there.
Previous patch revisions:
1. [RFC V1 Patchset]:
https://lore.kernel.org/all/20231019184825.9712-1-quic_obabatun@xxxxxxxxxxx/
2. [RFC V2 Patchset]:
https://lore.kernel.org/all/20231204041339.9902-1-quic_obabatun@xxxxxxxxxxx/
- Extend changes to all other relevant architectures.
- Add code to use the unflattened devicetree APIs to process the
reserved memory regions.
Oreoluwa Babatunde (46):
of: reserved_mem: Change the order that reserved_mem regions are
stored
of: reserved_mem: Introduce new early reserved memory scan function
ARC: reserved_mem: Implement the new processing order for reserved
memory
ARM: reserved_mem: Implement the new processing order for reserved
memory
arm64: reserved_mem: Implement the new processing order for reserved
memory
csky: reserved_mem: Implement the new processing order for reserved
memory
Loongarch: reserved_mem: Implement the new processing order for
reserved memory
microblaze: reserved_mem: Implement the new processing order for
reserved memory
mips: reserved_mem: Implement the new processing order for reserved
memory
nios2: reserved_mem: Implement the new processing order for reserved
memory
openrisc: reserved_mem: Implement the new processing order for
reserved memory
powerpc: reserved_mem: Implement the new processing order for reserved
memory
riscv: reserved_mem: Implement the new processing order for reserved
memory
sh: reserved_mem: Implement the new processing order for reserved
memory
um: reserved_mem: Implement the new processing order for reserved
memory
xtensa: reserved_mem: Implement the new processing order for reserved
memory
of: reserved_mem: Delete the early_init_fdt_scan_reserved_mem()
function
of: reserved_mem: Add code to dynamically allocate reserved_mem array
ARC: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
ARM: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
arm64: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
csky: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
microblaze: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
mips: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
nios2: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
powerpc: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
riscv: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
um: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
xtensa: reserved_mem: Move fdt_init_reserved_mem() below
unflatten_device_tree()
of: reserved_mem: Add code to use unflattened DT for reserved_mem
nodes
of: reserved_mem: Rename fdt_* functions to reflect use of unflattened
devicetree APIs
ARC: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
ARM: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
arm64: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
csky: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
loongarch: reserved_mem: Switch fdt_init_reserved_mem to
dt_init_reserved_mem
microblaze: reserved_mem: Switch fdt_init_reserved_mem to
dt_init_reserved_mem
mips: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
nios2: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
openrisc: reserved_mem: Switch fdt_init_reserved_mem to
dt_init_reserved_mem
powerpc: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
riscv: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
sh: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
um: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
xtensa: reserved_mem: Switch fdt_init_reserved_mem() to
dt_init_reserved_mem()
of: reserved_mem: Delete the fdt_init_reserved_mem() function
arch/arc/kernel/setup.c | 2 +
arch/arc/mm/init.c | 2 +-
arch/arm/kernel/setup.c | 4 +
arch/arm/mm/init.c | 2 +-
arch/arm64/kernel/setup.c | 3 +
arch/arm64/mm/init.c | 2 +-
arch/csky/kernel/setup.c | 4 +-
arch/loongarch/kernel/setup.c | 4 +-
arch/microblaze/kernel/setup.c | 3 +
arch/microblaze/mm/init.c | 2 +-
arch/mips/kernel/setup.c | 4 +-
arch/nios2/kernel/setup.c | 5 +-
arch/openrisc/kernel/setup.c | 4 +-
arch/powerpc/kernel/prom.c | 2 +-
arch/powerpc/kernel/setup-common.c | 3 +
arch/riscv/kernel/setup.c | 3 +
arch/riscv/mm/init.c | 2 +-
arch/sh/boards/of-generic.c | 4 +-
arch/um/kernel/dtb.c | 4 +-
arch/xtensa/kernel/setup.c | 2 +
arch/xtensa/mm/init.c | 2 +-
drivers/of/fdt.c | 42 +++++--
drivers/of/of_private.h | 5 +-
drivers/of/of_reserved_mem.c | 178 +++++++++++++++++++++--------
include/linux/of_fdt.h | 4 +-
include/linux/of_reserved_mem.h | 11 +-
kernel/dma/coherent.c | 4 +-
kernel/dma/contiguous.c | 8 +-
kernel/dma/swiotlb.c | 10 +-
29 files changed, 234 insertions(+), 91 deletions(-)
--
2.17.1