This series tries to improve CMA.
CMA was introduced to provide physically contiguous pages at runtime
without statically reserving a memory area. The current implementation,
however, works much like the reservation approach, because allocation
from the CMA reserved region only occurs as a fallback for
MIGRATE_MOVABLE allocations, that is, only when there are no other
movable free pages left. In that situation, kswapd is invoked easily,
since unmovable and reclaimable allocations consider
(free pages - free CMA pages) as the free memory on the system, and
that value may be below the high watermark. Once kswapd starts to
reclaim memory, the fallback allocation rarely occurs.
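
For reference, the relevant check looks roughly like the sketch below.
It paraphrases the v3.15-era __zone_watermark_ok() in mm/page_alloc.c;
the function is simplified here (the ALLOC_HIGH/ALLOC_HARDER adjustments
and lowmem_reserve indexing are trimmed), so treat it as an illustration
rather than the exact code:

/* Simplified sketch of the free-page accounting in
 * __zone_watermark_ok(), mm/page_alloc.c (v3.15 era). Several
 * adjustments (ALLOC_HIGH, ALLOC_HARDER, classzone indexing) are
 * omitted for brevity.
 */
static bool watermark_ok_sketch(struct zone *z, int order,
				unsigned long mark, int alloc_flags,
				long free_pages)
{
	long min = mark;

#ifdef CONFIG_CMA
	/* If the allocation can't use CMA areas, don't count free CMA
	 * pages. This is why unmovable/reclaimable allocations see
	 * (free pages - free CMA pages) and can wake kswapd while
	 * plenty of CMA pages are still free.
	 */
	if (!(alloc_flags & ALLOC_CMA))
		free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	if (free_pages - (1 << order) + 1 <= min)
		return false;
	return true;
}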
In my experiment, I found that on a system with 1024 MB of memory and
512 MB reserved for CMA, kswapd is mostly invoked around the 512 MB
free memory boundary. The invoked kswapd keeps reclaiming until
(free pages - free CMA pages) is higher than the high watermark, so
the free memory reported in meminfo consistently hovers around the
512 MB boundary.
To fix this problem, we should allocate pages from the CMA reserved
memory more aggressively and intelligently. Patch 2 implements the
solution. Patch 1 is a simple optimization that removes a useless
retry, and patch 3 removes a now-useless alloc flag, so these two are
less important.
See patch 2 for a more detailed description.
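
To illustrate the direction of the fix (a hypothetical sketch only,
not the actual patch 2 code; the zone fields nr_try_movable, nr_try_cma
and cma_pages, and the recharge policy below, are made-up names for
illustration), the allocator could interleave requests between the
normal and CMA freelists in proportion to their sizes instead of
touching CMA only as a last resort:

/* Hypothetical sketch: interleave movable allocations between the
 * normal freelists and the CMA freelists in proportion to their
 * sizes, instead of falling back to CMA only when everything else
 * is exhausted. nr_try_movable, nr_try_cma and cma_pages are
 * illustrative fields, not the actual patch 2 implementation.
 */
static struct page *rmqueue_interleave(struct zone *zone, unsigned int order)
{
	struct page *page;

	if (zone->nr_try_movable > 0) {
		zone->nr_try_movable -= 1 << order;
		return __rmqueue_smallest(zone, order, MIGRATE_MOVABLE);
	}

	if (zone->nr_try_cma > 0) {
		/* Movable budget used up: take from the CMA region
		 * while no CMA user needs it.
		 */
		zone->nr_try_cma -= 1 << order;
		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
		if (page)
			return page;
	}

	/* Both budgets exhausted (or CMA empty): recharge them in
	 * proportion to the non-CMA and CMA zone sizes and restart
	 * with the normal path.
	 */
	zone->nr_try_movable = zone->managed_pages - zone->cma_pages;
	zone->nr_try_cma = zone->cma_pages;
	return __rmqueue_smallest(zone, order, MIGRATE_MOVABLE);
}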
This patchset is based on v3.15-rc4.
Thanks.
Joonsoo Kim (3):
CMA: remove redundant retrying code in __alloc_contig_migrate_range
CMA: aggressively allocate the pages on cma reserved memory when not
used
CMA: always treat free cma pages as non-free on watermark checking
include/linux/mmzone.h | 6 +++
mm/compaction.c | 4 --
mm/internal.h | 3 +-
mm/page_alloc.c | 117 +++++++++++++++++++++++++++++++++++++++---------
4 files changed, 102 insertions(+), 28 deletions(-)