[PATCH 3.17 108/146] mm/cma: fix cma bitmap aligned mask computing
From: Greg Kroah-Hartman
Date: Mon Oct 27 2014 - 23:41:08 EST
3.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Weijie Yang <weijie.yang@xxxxxxxxxxx>
commit 68faed630fc151a7a1c4853df00fb3dcacf782b4 upstream.
The current cma bitmap aligned mask computation is incorrect. It could
cause an unexpected alignment when using cma_alloc() if the requested
align order is larger than cma->order_per_bit.
Take kvm for example (PAGE_SHIFT = 12): kvm_cma->order_per_bit is set to
6. When kvm_alloc_rma() tries to allocate kvm_rma_pages, it uses 15 as
the expected align order. With the current implementation, however, we
get 0 as the cma bitmap aligned mask instead of 511.
This patch fixes the cma bitmap aligned mask calculation.
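(Not part of the patch, just an illustration.) A minimal userspace sketch of
the arithmetic, plugging in the kvm numbers above (align_order = 15,
order_per_bit = 6); the helper names here are made up for the example:

#include <stdio.h>

/* buggy mask: shifts right, so 15 >> 6 == 0 and the mask collapses to 0 */
static unsigned long mask_before(int align_order, int order_per_bit)
{
	return (1UL << (align_order >> order_per_bit)) - 1;
}

/* fixed mask: subtracts, so 1UL << (15 - 6) - 1 gives a mask of 511 */
static unsigned long mask_after(int align_order, int order_per_bit)
{
	if (align_order <= order_per_bit)
		return 0;
	return (1UL << (align_order - order_per_bit)) - 1;
}

int main(void)
{
	printf("before: %lu, after: %lu\n",
	       mask_before(15, 6), mask_after(15, 6));
	/* prints "before: 0, after: 511" */
	return 0;
}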
[akpm@xxxxxxxxxxxxxxxxxxxx: coding-style fixes]
Signed-off-by: Weijie Yang <weijie.yang@xxxxxxxxxxx>
Acked-by: Michal Nazarewicz <mina86@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
mm/cma.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -57,7 +57,9 @@ unsigned long cma_get_size(struct cma *c
 
 static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
 {
-	return (1UL << (align_order >> cma->order_per_bit)) - 1;
+	if (align_order <= cma->order_per_bit)
+		return 0;
+	return (1UL << (align_order - cma->order_per_bit)) - 1;
 }
 
 static unsigned long cma_bitmap_maxno(struct cma *cma)
--