Re: [PATCH 0/4] (CMA_AGGRESSIVE) Make CMA memory be more aggressive about allocation

From: Joonsoo Kim
Date: Mon Nov 03 2014 - 03:04:18 EST


On Mon, Nov 03, 2014 at 03:28:38PM +0800, Hui Zhu wrote:
> On Fri, Oct 24, 2014 at 1:25 PM, Joonsoo Kim <iamjoonsoo.kim@xxxxxxx> wrote:
> > On Thu, Oct 16, 2014 at 11:35:47AM +0800, Hui Zhu wrote:
> >> In the fallbacks table of page_alloc.c, MIGRATE_CMA is the fallback of
> >> MIGRATE_MOVABLE: MIGRATE_MOVABLE allocations fall back to MIGRATE_CMA
> >> when the movable free lists do not have a page of the order the kernel
> >> wants.
> >>
> >> On a system running many user-space programs, for instance an Android
> >> board, most memory is MIGRATE_MOVABLE and already allocated. Before
> >> __rmqueue_fallback() gets memory from MIGRATE_CMA, the OOM killer will
> >> kill a task to release memory when the kernel wants MIGRATE_UNMOVABLE
> >> memory, because the fallbacks of MIGRATE_UNMOVABLE are only
> >> MIGRATE_RECLAIMABLE and MIGRATE_MOVABLE.
> >> This situation is odd: MIGRATE_CMA has a lot of free memory, yet the
> >> kernel kills tasks to release memory.
> >>
> >> This patch series adds a new feature, CMA_AGGRESSIVE, to make CMA
> >> memory more aggressive about allocation.
> >> When CMA_AGGRESSIVE is enabled and conditions allow, __rmqueue() tries
> >> to satisfy MIGRATE_MOVABLE requests from MIGRATE_CMA first. If
> >> MIGRATE_CMA does not have enough pages for the allocation, it falls
> >> back to allocating from MIGRATE_MOVABLE as usual.
> >> The MIGRATE_MOVABLE memory is then kept for MIGRATE_UNMOVABLE and
> >> MIGRATE_RECLAIMABLE allocations, which do not have MIGRATE_CMA as a
> >> fallback.
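
(The ordering described above amounts to roughly the following sketch. It
is only an illustration of the idea, not the actual patch;
cma_aggressive_alloc_ok() and the exact hook point in __rmqueue() are
assumptions.)

/*
 * Sketch of the allocation ordering described in the cover letter:
 * for MIGRATE_MOVABLE requests, try the MIGRATE_CMA free lists first
 * and fall back to the normal movable path only when CMA cannot
 * satisfy the request.  cma_aggressive_alloc_ok() stands in for the
 * "conditions allow" check (e.g. enough free CMA pages remaining).
 */
static struct page *rmqueue_cma_aggressive(struct zone *zone,
					   unsigned int order,
					   int migratetype)
{
	struct page *page = NULL;

#ifdef CONFIG_CMA
	if (migratetype == MIGRATE_MOVABLE && cma_aggressive_alloc_ok(zone))
		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
#endif
	/* Not enough free CMA pages: go back to the normal MOVABLE path. */
	if (!page)
		page = __rmqueue_smallest(zone, order, migratetype);

	return page;
}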
> >
> > Hello,
> >
> > I did some work similar to this.
> > Please see the following links:
> >
> > https://lkml.org/lkml/2014/5/28/64
> > https://lkml.org/lkml/2014/5/28/57
>
> > I tested approach #1 and found a problem. Although the free memory
> > reported in meminfo can hover around the low watermark, there is a large
> > fluctuation in free memory, because too many pages are reclaimed when
> > kswapd is invoked.
> > The reason for this behaviour is that successively allocated CMA pages
> > sit on the LRU list in that order, and kswapd reclaims them in the same
> > order. That memory doesn't help kswapd's watermark checking, so too many
> > pages are reclaimed, I guess.
>
> This issue can be handled with some changes around the shrink code. I am
> trying to integrate a patch for that.
> But I am not sure we hit the same issue. Would you mind giving me more
> information about this part?

I have forgotten the issue because of the long time gap. I need some time
to bring it back to mind; I will answer soon after some thinking.
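
For context in the meantime: the watermark check of that period ignores
free CMA pages when the allocation cannot use CMA areas, which is, I
think, part of why pages reclaimed back into CMA pageblocks do not move
kswapd's watermark check. A simplified sketch follows (not verbatim
kernel code; the flag and counter names follow mainline of that era as I
recall them):

/*
 * Simplified sketch of the __zone_watermark_ok() logic of that era:
 * free CMA pages are not counted when the allocation cannot use CMA
 * areas, so freeing pages back into CMA pageblocks does not help
 * kswapd satisfy a watermark check done without ALLOC_CMA.
 */
static bool watermark_ok_sketch(struct zone *z, unsigned int order,
				unsigned long mark, int alloc_flags)
{
	long free_pages = zone_page_state(z, NR_FREE_PAGES);
	long free_cma = 0;

#ifdef CONFIG_CMA
	/* If the allocation can't use CMA areas, ignore free CMA pages. */
	if (!(alloc_flags & ALLOC_CMA))
		free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	/* lowmem_reserve and the per-order checks are omitted for brevity. */
	return free_pages - free_cma - ((1 << order) - 1) > (long)mark;
}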

>
> >
> > Also, aggressive allocation should be postponed until the freepage
> > counting bug is fixed, because aggressive allocation enlarges the
> > possibility of that problem occurring. I tried to fix that bug, too;
> > see the following link.
> >
> > https://lkml.org/lkml/2014/10/23/90
>
> I am following these patches. They are great! Thanks for your work.

Thanks. :)
