On 01/24/2018 02:32 PM, Christophe Leroy wrote:
An application running with libhugetlbfs fails to allocate
additional pages to the heap because the huge-page mapping is done
unconditionally as a top-down mapping:
mmap(0x10080000, 1572864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|0x40000, -1, 0) = 0x73e80000
[...]
mmap(0x74000000, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|0x40000, -1, 0x180000) = 0x73d80000
munmap(0x73d80000, 1048576)             = 0
[...]
mmap(0x74000000, 1572864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|0x40000, -1, 0x180000) = 0x73d00000
munmap(0x73d00000, 1572864)             = 0
[...]
mmap(0x74000000, 1572864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|0x40000, -1, 0x180000) = 0x73d00000
munmap(0x73d00000, 1572864)             = 0
[...]
As one can see from the strace log above, mmap() places further
pages below the initial one because no space is available above it.
This patch fixes the issue by requesting a bottom-up mapping, as the
non-generic hugetlb_get_unmapped_area() does.
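For reference, here is a minimal userspace sketch (not part of the
patch) that reproduces the pattern seen in the strace: it requests
huge-page mappings at increasing hint addresses and prints where the
kernel actually places them. It assumes huge pages have been
preallocated (e.g. via /proc/sys/vm/nr_hugepages) and that the raw
0x40000 flag in the log is MAP_HUGETLB:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	unsigned long hint = 0x10080000;
	size_t len = 2 * 1024 * 1024;	/* one huge page on many configs */

	for (int i = 0; i < 3; i++) {
		void *p = mmap((void *)hint, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
			       -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* With forced top-down placement the result ignores the
		 * hint and lands below earlier mappings instead of above. */
		printf("hint %#lx -> got %p\n", hint, p);
		hint = (unsigned long)p + len;
	}
	return 0;
}

With top-down placement each call returns an address below the
previous mapping; with bottom-up placement the hint can be honoured
and the mappings grow upward.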
Fixes: d0f13e3c20b6f ("[POWERPC] Introduce address space "slices" ")
Signed-off-by: Christophe Leroy <christophe.leroy@xxxxxx>
---
 v3: Was a standalone patch before, but conflicts with this series.
 arch/powerpc/mm/hugetlbpage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 79e1378ee303..368ea6b248ad 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -558,7 +558,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 		return radix__hugetlb_get_unmapped_area(file, addr, len,
 						       pgoff, flags);
 #endif
-	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
+	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 0);
 }
 #endif
Why make this change for PPC64 as well? Can you do this under an
#ifdef for the 8xx? You could ideally move hugetlb_get_unmapped_area()
to slice.h and then make this much simpler for the 8xx.
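For illustration, a minimal sketch of that suggestion (assuming
CONFIG_PPC_8xx is the intended Kconfig guard; untested):

#ifdef CONFIG_PPC_8xx
	/* 8xx: map bottom-up so the heap can keep growing upward */
	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 0);
#else
	/* keep the existing top-down behaviour elsewhere, e.g. PPC64 */
	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
#endif

That would leave the PPC64 behaviour unchanged while giving the 8xx
the bottom-up mapping the heap case needs.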