Re: [RFC PATCH 0/5] madvise MADV_DOEXEC

From: David Hildenbrand
Date: Mon Aug 16 2021 - 08:43:20 EST


On 16.08.21 14:20, David Hildenbrand wrote:
> On 16.08.21 14:07, Matthew Wilcox wrote:
>> On Mon, Aug 16, 2021 at 10:02:22AM +0200, David Hildenbrand wrote:
>>>> Mappings within this address range behave as if they were shared
>>>> between threads, so a write to a MAP_PRIVATE mapping will create a
>>>> page which is shared between all the sharers. The first process that
>>>> declares an address range mshare'd can continue to map objects in the
>>>> shared area. All other processes that want mshare'd access to this
>>>> memory area can do so by calling mshare(). After this call, the
>>>> address range given by mshare becomes a shared range in its address
>>>> space. Anonymous mappings will be shared and not COWed.
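
As a rough illustration of the model described in the quoted text, a hypothetical userspace sketch in C; the mshare() prototype, the fixed base address and the helper names are assumptions made up for this example, not an existing kernel API:

/*
 * Hypothetical sketch only.  The mshare() prototype, SHARED_BASE and the
 * helper names are assumptions for illustration; no such syscall exists
 * in mainline.
 */
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Assumed prototype: declare/attach to the shared range [addr, addr + len). */
extern int mshare(void *addr, size_t len);

#define SHARED_BASE	((void *)0x600000000000UL)	/* made-up fixed base */
#define SHARED_SIZE	(1UL << 30)			/* 1 GiB */

/* First process: map the area, then declare it mshare'd. */
static int donor_setup(void)
{
	void *p = mmap(SHARED_BASE, SHARED_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
	if (p == MAP_FAILED)
		return -1;
	/* Per the description, it can keep mapping objects in the area afterwards. */
	return mshare(p, SHARED_SIZE);
}

/*
 * Every other process: after this call the same address range becomes a
 * shared range in its address space, and anonymous mappings in it are
 * shared rather than COWed.
 */
static int consumer_attach(void)
{
	return mshare(SHARED_BASE, SHARED_SIZE);
}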

>>> Did I understand correctly that you want to share actual page tables between
>>> processes and consequently different MMs? That sounds like a very bad idea.
>>
>> That is the entire point. Consider a machine with 10,000 instances
>> of an application running (process model, not thread model). If each
>> application wants to map 1TB of RAM using 2MB pages, that's 4MB of page
>> tables per process or 40GB of RAM for the whole machine.
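
For reference, the arithmetic behind those numbers on x86-64, counting only the PMD level that holds the 2MB leaf entries (each page-table page is 4KB and holds 512 entries):

  512 entries x 2MB        = 1GB mapped per 4KB PMD page
  1TB / 1GB                = 1024 PMD pages = 4MB of page tables per process
  10,000 processes x 4MB   ~ 40GB of page tables for the whole machine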

> What speaks against 1 GB pages then?
>
>> There's a reason hugetlbfs was enhanced to allow this page table sharing.
>> I'm not a fan of the implementation as it gets some locks upside down,
>> so this is an attempt to generalise the concept beyond hugetlbfs.
>
> Who do we account the page tables to? What are MADV_DONTNEED semantics?
> Who cleans up the page tables? What happens during munmap? How does the
> rmap even work? How do we actually synchronize page table walkers?
>
> See how hugetlbfs just doesn't raise these problems because we are
> sharing pages and not page tables?

I found what you were referring to: CONFIG_ARCH_WANT_HUGE_PMD_SHARE

I was not aware that we have such a monstrosity in the kernel.
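
For anyone following along, a minimal sketch of the case that hugetlb PMD sharing covers on x86-64: two or more processes mapping the same hugetlbfs file MAP_SHARED over a PUD-sized (1 GiB), suitably aligned region, so a single PMD page-table page can be reused across their MMs instead of being duplicated per process. The sketch assumes a 2MB default hugetlb page size and /dev/hugepages as the hugetlbfs mount; the file path and sizes are made up for illustration:

/*
 * Illustration only: the setup under which CONFIG_ARCH_WANT_HUGE_PMD_SHARE
 * style PMD sharing can apply -- a MAP_SHARED hugetlbfs mapping large and
 * aligned enough to cover a whole PUD (1 GiB) region.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN		(2UL << 30)	/* 2 GiB, a multiple of PUD_SIZE */

int main(void)
{
	int fd = open("/dev/hugepages/shared-region", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(fd, MAP_LEN)) {
		perror("ftruncate");
		return 1;
	}
	/*
	 * Each process running this maps the same file; for the aligned part
	 * of the range, the 4KB PMD pages holding the 2MB leaf entries can be
	 * shared between the processes' page tables rather than duplicated
	 * per MM.
	 */
	void *p = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("mapped %lu bytes of hugetlbfs at %p\n", MAP_LEN, p);
	pause();	/* keep the mapping alive so another instance can share */
	return 0;
}

Running several instances of this against the same file is roughly the scenario the 40GB-of-page-tables argument above describes, only restricted to hugetlbfs.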

--
Thanks,

David / dhildenb