On Mon, May 19, 2014 at 10:47:12AM +0900, Gioh Kim wrote:
> Thank you for your advice. I didn't notice it.
> I'm adding the following according to your advice:
> - a range restriction for CMA_SIZE_MBYTES and CMA_SIZE_PERCENTAGE
>   I think this can prevent wrong values for these kernel options.
> - changing size_cmdline to the default value SZ_16M
>   I am not sure this helps if the cma=0 cmdline option is given together
>   with the base and limit options.
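(Purely as an illustration of the second point above, not part of any posted
patch: the names size_cmdline and early_cma() are assumed from
drivers/base/dma-contiguous.c. One way the SZ_16M default could survive an
explicit "cma=0" might look roughly like this.)

/* Sketch only: default size_cmdline to SZ_16M and treat an explicit
 * "cma=0" as "use the default", so base/limit cannot be combined with a
 * zero size. */
static phys_addr_t size_cmdline = SZ_16M;

static int __init early_cma(char *p)
{
	size_cmdline = memparse(p, &p);
	if (!size_cmdline)		/* "cma=0" falls back to SZ_16M */
		size_cmdline = SZ_16M;
	return 0;
}
early_param("cma", early_cma);

Whether overriding an explicit "cma=0" is desirable is exactly the open
question raised above.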
Hello,
I think this problem originates in atomic_pool_init().
If the configured coherent_pool size is larger than the default CMA size,
the allocation can still fail even with this patch applied.

How about the patch below?
It falls back to the normal remap allocation if the CMA allocation fails.
Thanks.
-----------------8<---------------------
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 6b00be1..2909ab9 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -379,7 +379,7 @@ static int __init atomic_pool_init(void)
 	unsigned long *bitmap;
 	struct page *page;
 	struct page **pages;
-	void *ptr;
+	void *ptr = NULL;
 	int bitmap_size = BITS_TO_LONGS(nr_pages) * sizeof(long);
 
 	bitmap = kzalloc(bitmap_size, GFP_KERNEL);
@@ -393,7 +393,7 @@ static int __init atomic_pool_init(void)
 	if (IS_ENABLED(CONFIG_DMA_CMA))
 		ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page,
 					      atomic_pool_init);
-	else
+	if (!ptr)
 		ptr = __alloc_remap_buffer(NULL, pool->size, gfp, prot, &page,
 					   atomic_pool_init);
 	if (ptr) {
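
For clarity, with the hunks above applied the allocation path in
atomic_pool_init() effectively reads as follows (excerpt only, comments
added here for explanation):

	/* Try the CMA region first when CONFIG_DMA_CMA is enabled;
	 * ptr was initialized to NULL above. */
	if (IS_ENABLED(CONFIG_DMA_CMA))
		ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page,
					      atomic_pool_init);

	/* Fall back to a normal remapped buffer when CMA is disabled or the
	 * CMA allocation failed, e.g. because coherent_pool is larger than
	 * the reserved CMA area. */
	if (!ptr)
		ptr = __alloc_remap_buffer(NULL, pool->size, gfp, prot, &page,
					   atomic_pool_init);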