mm: pages are not freed from lru_add_pvecs after process termination
From: Odzioba, Lukasz
Date: Wed Apr 27 2016 - 13:02:06 EST
Hi,
I've run into a problem which I'd like to discuss here (tested on kernels 3.10 and 4.5).
While running some workloads we noticed that in the case of an "improper" application
exit (e.g. termination by SIGTERM), quite a bit of memory (a few GBs) is not reclaimed
after the process terminates.
Executing echo 1 > /proc/sys/vm/compact_memory makes the memory available again,
presumably because the compaction path drains the per-CPU LRU pagevecs.
Since this memory is not reclaimed, the OOM killer may end up killing a process
trying to allocate memory which technically should be available.
This behavior is present only when THP is enabled ([always] mode).
Disabling THP makes the issue invisible to the naked eye.
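The active THP mode can be checked through sysfs (the bracketed value is the
current setting); on the affected systems it looks like this:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never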
An important detail is that the problem is visible mostly due to the large number
of CPUs in the system (>200), and the amount of missing memory varies with the
number of CPUs.
This memory does not seem to be accounted anywhere, but thanks to Dave Hansen's
suggestion I was able to find it on the per-CPU lru_add_pvec lists.
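The mechanism, as far as I understand it: newly faulted-in pages are batched on a
small per-CPU pagevec and are only moved to the zone LRU lists once the pagevec
fills up or the CPU is explicitly drained, so a few pages per CPU can sit there
indefinitely after the task exits. With hundreds of CPUs, and each stuck entry
possibly pinning a 2MB THP, that adds up to gigabytes. Below is a simplified
userspace model of that batching, not kernel code (only PAGEVEC_SIZE matches the
kernel; the per-CPU page counts are made up):

/* Simplified userspace model of the kernel's per-CPU lru_add_pvec
 * batching (the real code lives in mm/swap.c). */
#include <stdio.h>

#define NCPUS        288
#define PAGEVEC_SIZE 14          /* matches the kernel's pagevec capacity */

struct pagevec { int nr; };      /* the model only counts entries */

static struct pagevec lru_add_pvec[NCPUS]; /* per-CPU data in the kernel */
static unsigned long lru_pages;            /* pages visible on the LRU */

/* Model of lru_cache_add(): batch into the local pagevec and flush to
 * the global LRU only when the pagevec is full. */
static void lru_cache_add(int cpu)
{
	struct pagevec *pvec = &lru_add_pvec[cpu];

	if (++pvec->nr == PAGEVEC_SIZE) {
		lru_pages += pvec->nr; /* __pagevec_lru_add() in the kernel */
		pvec->nr = 0;
	}
}

int main(void)
{
	unsigned long stuck = 0;

	/* Each CPU faults in some pages; whatever does not fill a whole
	 * pagevec stays behind when the task exits. */
	for (int cpu = 0; cpu < NCPUS; cpu++) {
		for (int i = 0; i < 1000; i++)
			lru_cache_add(cpu);
		stuck += lru_add_pvec[cpu].nr;
	}

	/* With THP, each stuck entry may pin a 2MB huge page. */
	printf("stuck on per-CPU pagevecs: %lu pages, up to %lu MB as THPs\n",
	       stuck, stuck * 2);
	return 0;
}

With these (made-up) numbers the leftovers come out to roughly 3.5GB, which is
in the same ballpark as what the repro below shows.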
Knowing that, I was able to reproduce the problem with much simpler code:
//compile with: gcc repro.c -o repro -fopenmp
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <omp.h>

int main(void)
{
#pragma omp parallel
	{
		size_t size = 55*1000*1000; // tweaked for 288 CPUs, "leaks" ~3.5GB
		void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
		               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p != MAP_FAILED) // mmap() returns MAP_FAILED on error, not NULL
			memset(p, 0, size); // fault the pages in
		//munmap(p, size); // uncomment to make the problem go away
	}
	return 0;
}
Example execution:
$ numactl -H | grep "node 1" | grep MB
node 1 size: 16122 MB
node 1 free: 16026 MB
$ ./repro
$ numactl -H | grep "node 1" | grep MB
node 1 size: 16122 MB
node 1 free: 13527 MB
After a couple of minutes on an idle system some of this memory gets reclaimed,
but never all of it, unless I run tasks on every CPU:
node 1 size: 16122 MB
node 1 free: 14823 MB
Pieces of the puzzle:
A) after process termination the memory is neither freed nor accounted as free
B) the memory cannot be allocated by other processes (unless allocations happen
on every CPU)
I am not sure whether this is expected behavior or a side effect of something else
not going as it should. As a temporary hack I added lru_add_drain_all() to
try_to_free_pages(), which sort of brute-forces case B away, but A is still present.
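The hack looks roughly like the sketch below (against a 4.5-era mm/vmscan.c; the
exact placement is approximate). It is heavy-handed, since lru_add_drain_all()
queues work on every CPU for every direct reclaim attempt:

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
+	/*
+	 * Temporary hack: flush every CPU's lru_add_pvec so pages stuck
+	 * there become visible to reclaim. Expensive on large machines.
+	 */
+	lru_add_drain_all();
+
 	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);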
I am not familiar with this code, but I feel like the lru_add draining work should
be split into smaller pieces and done by kswapd to fix A, and try_to_free_pages()
should drain only as many pages as it actually needs to fix B; a rough sketch of
that idea follows.
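Purely hypothetical sketch of what I have in mind (none of these helpers exist
upstream; for_each_online_cpu() and num_online_cpus() are real, and
drain_cpu_pagevecs() stands in for whatever would flush one CPU's lru_add_pvec):

/* A) kswapd drains one CPU per balancing pass in the background,
 *    instead of a direct reclaimer paying for all CPUs at once. */
static int next_cpu_to_drain;			/* made-up round-robin state */

static void kswapd_drain_some(void)
{
	drain_cpu_pagevecs(next_cpu_to_drain);	/* hypothetical helper */
	next_cpu_to_drain = (next_cpu_to_drain + 1) % num_online_cpus();
}

/* B) direct reclaim drains CPUs only until enough pages are flushed. */
static void lru_add_drain_some(unsigned long nr_needed)
{
	unsigned long nr_drained = 0;
	int cpu;

	for_each_online_cpu(cpu) {
		nr_drained += drain_cpu_pagevecs(cpu);	/* hypothetical helper */
		if (nr_drained >= nr_needed)
			break;
	}
}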
Any comments/ideas/patches for a proper fix are welcome.
Thanks,
Lukas