Re: [RFC PATCH 0/3] Prototype for direct map awareness in page allocator

From: Hyeonggon Yoo
Date: Tue Apr 26 2022 - 05:39:00 EST


On Thu, Jan 27, 2022 at 10:56:05AM +0200, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
>
> Hi,
>
> This is a second attempt to make page allocator aware of the direct map
> layout and allow grouping of the pages that must be mapped at PTE level in
> the direct map.
>

Hello Mike, this may be a silly question...

Looking at the implementation of set_memory_*(), it only splits
PMD/PUD-sized entries. But why not _merge_ them back when all entries
have the same permissions after changing the permission of an entry?

I think grouping __GFP_UNMAPPED allocations would help reduce
direct map fragmentation, but IMHO merging the split entries seems
better done in those helpers than in the page allocator.

For example:
1) set_memory_ro() splits 1 RW PMD entry into 511 RW PTE
entries and 1 RO PTE entry.

2) before freeing the pages, we call set_memory_rw() and we have
512 RW PTE entries again. Then we can merge them into 1 RW PMD entry.

3) after 2), we can do the same thing for PMD-sized entries
and merge them into 1 PUD entry if all 512 PMD entries have the
same permissions.

[...]

> Mike Rapoport (3):
> mm/page_alloc: introduce __GFP_UNMAPPED and MIGRATE_UNMAPPED
> mm/secretmem: use __GFP_UNMAPPED to allocate pages
> EXPERIMENTAL: x86/module: use __GFP_UNMAPPED in module_alloc
>
> arch/Kconfig | 7 ++
> arch/x86/Kconfig | 1 +
> arch/x86/kernel/module.c | 2 +-
> include/linux/gfp.h | 13 +++-
> include/linux/mmzone.h | 11 +++
> include/trace/events/mmflags.h | 3 +-
> mm/internal.h | 2 +-
> mm/page_alloc.c | 129 ++++++++++++++++++++++++++++++++-
> mm/secretmem.c | 8 +-
> 9 files changed, 162 insertions(+), 14 deletions(-)
>
>
> base-commit: e783362eb54cd99b2cac8b3a9aeac942e6f6ac07
> --
> 2.34.1
>

--
Thanks,
Hyeonggon