Re: [PATCH] mm: cma: free cma page to buddy instead of being cpu hotpage

From: Mel Gorman
Date: Tue Oct 29 2013 - 08:27:28 EST


On Tue, Oct 29, 2013 at 07:49:30PM +0800, Zhang Mingjun wrote:
> On Tue, Oct 29, 2013 at 5:33 PM, Mel Gorman <mgorman@xxxxxxx> wrote:
>
> > On Mon, Oct 28, 2013 at 07:42:49PM +0800, zhang.mingjun@xxxxxxxxxx wrote:
> > > From: Mingjun Zhang <troy.zhangmingjun@xxxxxxxxxx>
> > >
> > > free_contig_range() frees CMA pages one by one, so MIGRATE_CMA pages end
> > > up being treated as MIGRATE_MOVABLE pages on the pcp lists, which causes
> > > unnecessary migration work when these pages are reused by CMA.
> > >
> > > Signed-off-by: Mingjun Zhang <troy.zhangmingjun@xxxxxxxxxx>
> > > ---
> > > mm/page_alloc.c | 3 ++-
> > > 1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 0ee638f..84b9d84 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -1362,7 +1362,8 @@ void free_hot_cold_page(struct page *page, int cold)
> > > * excessively into the page allocator
> > > */
> > > if (migratetype >= MIGRATE_PCPTYPES) {
> > > - if (unlikely(is_migrate_isolate(migratetype))) {
> > > + if (unlikely(is_migrate_isolate(migratetype))
> > > + || is_migrate_cma(migratetype))
> > > free_one_page(zone, page, 0, migratetype);
> > > goto out;
> >
> > This slightly impacts the page allocator free path for a marginal gain on
> > CMA, whose allocations are relatively rare. There is no obvious
> > benefit to this patch as I expect CMA allocations to flush the PCP lists
> >
> how about keeping the migrate type of the CMA pageblocks as MIGRATE_ISOLATE
> after alloc_contig_range(), and calling undo_isolate_page_range() at the end
> of free_contig_range()?

It would move the cost to the CMA paths so I would complain less. Bear
in mind as well that forcing everything to go through free_one_page()
means that every free takes the zone lock. I doubt you have any machine
large enough, but it is possible for simultaneous CMA allocations to now
contend on the zone lock where they previously would have been fine.
Hence, I'm interested in knowing the underlying cause of the problem you
are experiencing.
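
To make the zone lock point concrete, here is a rough sketch of the two
free paths being discussed. This is a simplification of
free_hot_cold_page(), not the actual mm/page_alloc.c code, but the
structure matches the hunk quoted above:

	/*
	 * Simplified sketch of free_hot_cold_page().  The pcp path is
	 * per-cpu and takes no zone->lock; free_one_page() does take it,
	 * so routing every MIGRATE_CMA free through it serialises
	 * concurrent frees against the same zone.
	 */
	void free_hot_cold_page(struct page *page, int cold)
	{
		struct zone *zone = page_zone(page);
		int migratetype = get_pageblock_migratetype(page);

		if (migratetype >= MIGRATE_PCPTYPES) {
			/* the patch adds "|| is_migrate_cma(migratetype)" here */
			if (unlikely(is_migrate_isolate(migratetype))) {
				/* slow path: takes zone->lock for every page */
				free_one_page(zone, page, 0, migratetype);
				return;
			}
			/* today a MIGRATE_CMA page falls through as movable */
			migratetype = MIGRATE_MOVABLE;
		}

		/*
		 * Fast path: the page is placed on this CPU's
		 * pcp->lists[migratetype] without taking zone->lock; only
		 * when pcp->count exceeds pcp->high is a batch handed back
		 * to the buddy allocator under the lock.
		 */
	}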

> of course, it will waste the memory that is outside the allocated range but
> still inside the isolated pageblocks.
>

I would hope/expect that the loss would only last for the duration of
the allocation attempt and would only be a small amount of memory.
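
For what it is worth, the alternative you describe would look roughly
like the sketch below. It is hypothetical and untested, not a patch:
alloc_contig_range() would have to stop calling undo_isolate_page_range()
itself, and the pageblock alignment handling is glossed over:

	/*
	 * Hypothetical: the pageblocks stay MIGRATE_ISOLATE for the
	 * lifetime of the CMA buffer, so every __free_page() below goes
	 * straight back to the buddy allocator via free_one_page()
	 * (taking zone->lock per page) instead of onto a pcp list.
	 */
	void free_contig_range(unsigned long pfn, unsigned nr_pages)
	{
		unsigned long start_pfn = pfn, end_pfn = pfn + nr_pages;

		for (; pfn < end_pfn; pfn++)
			__free_page(pfn_to_page(pfn));

		/* only now hand the pageblocks back as MIGRATE_CMA */
		undo_isolate_page_range(pfn_max_align_down(start_pfn),
					pfn_max_align_up(end_pfn),
					MIGRATE_CMA);
	}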

> > when a range of pages has been isolated and migrated. Is there any
> > measurable benefit to this patch?
> >
> after applying this patch, the video player on my platform works more
> fluent,

fluent almost always refers to one's command of a spoken language. I do
not see how a video player can be fluent in anything. What is measurably
better?

For example, are allocations faster? If so, why? What cost from another
path is removed as a result of this patch? If the cost is in the PCP
flush, can it be checked whether the flush was unnecessary, i.e. called
unconditionally even though all the pages had already been freed? We had
problems in the past where drain_all_pages() or similar were called
unnecessarily, causing long sync stalls related to IPIs. I'm wondering if
we are seeing a similar problem here.
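
For reference, the flush I am talking about is the unconditional drain in
alloc_contig_range(). Paraphrased and heavily trimmed, not the exact
code:

	int alloc_contig_range(unsigned long start, unsigned long end,
			       unsigned migratetype)
	{
		struct compact_control cc = { /* ... */ };
		int ret;

		/* mark the pageblocks MIGRATE_ISOLATE so new allocations stay away */
		ret = start_isolate_page_range(pfn_max_align_down(start),
					       pfn_max_align_up(end),
					       migratetype, false);
		if (ret)
			return ret;

		/* migrate whatever currently occupies the range */
		ret = __alloc_contig_migrate_range(&cc, start, end);
		if (ret)
			goto done;

		/*
		 * These run unconditionally.  drain_all_pages() IPIs every
		 * CPU that has pcp pages, whether or not any of them hold
		 * pages from this range -- the sync cost in question.
		 */
		lru_add_drain_all();
		drain_all_pages();

		/* ... test_pages_isolated(), isolate_freepages_range(),
		 * freeing of the unused head/tail of the range, ret = 0 ... */
	done:
		undo_isolate_page_range(pfn_max_align_down(start),
					pfn_max_align_up(end), migratetype);
		return ret;
	}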

Maybe the problem is the complete opposite. Are allocations failing
because there are PCP pages in the way? In that case, the real fix might
be to insert a PCP flush when the allocation is failing due to per-cpu
pages.
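
To be clear about what I mean by inserting a flush: something along the
lines of the snippet below, purely illustrative and untested, at the
point in alloc_contig_range() where the isolation check currently fails
(outer_start, end, ret and the done label are the existing locals there):

	/*
	 * Hypothetical: if the range still is not fully isolated, drain
	 * the per-cpu lists once more and re-check before giving up,
	 * instead of bypassing the pcp lists on every CMA free.
	 */
	if (test_pages_isolated(outer_start, end, false)) {
		lru_add_drain_all();
		drain_all_pages();
		if (test_pages_isolated(outer_start, end, false)) {
			ret = -EBUSY;
			goto done;
		}
	}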

> and the video decoder driver on my test platform uses cma alloc/free
> frequently.
>

CMA allocations are almost never used outside of these contexts. While I
appreciate that embedded use is important, I'm reluctant to see an impact
on fast paths unless there is a good reason that holds for every other
use case. I am also a bit unhappy to see CMA allocations making the
zone->lock hotter than necessary, even if no embedded use case is likely
to experience the problem in the short term.

--
Mel Gorman
SUSE Labs