Re: [PATCH v2 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used
From: Joonsoo Kim
Date: Thu May 29 2014 - 20:42:12 EST
On Thu, May 29, 2014 at 05:09:34PM +0900, Gioh Kim wrote:
>
> >>>+
> >>> /*
> >>> * Do the hard work of removing an element from the buddy allocator.
> >>> * Call me with the zone->lock already held.
> >>>@@ -1143,10 +1223,15 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
> >>> static struct page *__rmqueue(struct zone *zone, unsigned int order,
> >>> int migratetype)
> >>> {
> >>>- struct page *page;
> >>>+ struct page *page = NULL;
> >>>+
> >>>+ if (IS_ENABLED(CONFIG_CMA) &&
> >>
> >>You might know that CONFIG_CMA is enabled and there is no CMA memory, because CONFIG_CMA_SIZE_MBYTES can be zero.
> >>Is IS_ENABLED(CONFIG_CMA) alright in that case?
> >
> >next line checks whether zone->managed_cma_pages is positive or not.
> >If there is no CMA memory, zone->managed_cma_pages will be zero and
> >we will skip to call __rmqueue_cma().
>
> Is IS_ENABLED(CONFIG_CMA) necessary?
> What about if (migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages) ?
Yes, the field managed_cma_pages exists only if CONFIG_CMA is enabled, so
removing IS_ENABLED(CONFIG_CMA) would break the build.
Thanks.
--