During early init, CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since
pageblock_order is still zero at that point; it only gets initialized
later during paging_init(), e.g.
paging_init() -> free_area_init() -> set_pageblock_order()
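
For reference, CMA_MIN_ALIGNMENT_BYTES is derived from pageblock_order
roughly as below (paraphrased from include/linux/cma.h and
include/linux/pageblock-flags.h; exact definitions may vary across
kernel versions):

  #define CMA_MIN_ALIGNMENT_PAGES pageblock_nr_pages
  #define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)
  /* with pageblock_nr_pages being (1UL << pageblock_order) */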
One such use case is:
early_setup() -> early_init_devtree() -> fadump_reserve_mem()
This causes the CMA memory alignment check in cma_init_reserved_mem()
to be effectively bypassed. Later, cma_activate_area() can then hit a
VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area
was not pageblock_order aligned.
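
The check being bypassed is essentially the one below (paraphrased
from cma_init_reserved_mem() in mm/cma.c); once CMA_MIN_ALIGNMENT_BYTES
evaluates to PAGE_SIZE, it accepts regions that are merely page
aligned:

  /* ensure minimal alignment required by mm core */
  if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
      return -EINVAL;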
Instead of fixing this locally for the fadump case on PowerPC, I
believe it should be fixed for CMA_MIN_ALIGNMENT_BYTES itself.
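
One possible direction, shown here only as an illustrative sketch (not
necessarily what the final fix looks like), would be to have
cma_init_reserved_mem() reject reservations made before pageblock_order
is initialized:

  /* sketch only: bail out if called before set_pageblock_order() */
  if (!pageblock_order) {
      pr_err("pageblock_order not yet initialized. Called during early boot?\n");
      return -EINVAL;
  }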