Re: [PATCH 1/2] mm/cma: drop incorrect alignment check in cma_init_reserved_mem

From: Frank van der Linden
Date: Thu Apr 04 2024 - 16:46:01 EST


On Thu, Apr 4, 2024 at 1:15 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Thu, 4 Apr 2024 16:25:14 +0000 Frank van der Linden <fvdl@xxxxxxxxxx> wrote:
>
> > cma_init_reserved_mem uses IS_ALIGNED to check if the size
> > represented by one bit in the cma allocation bitmask is
> > aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).
> >
> > However, this is too strict, as this will fail if
> > order_per_bit > pageblock_order, which is a valid configuration.
> >
> > We could check IS_ALIGNED both ways, but since both numbers are
> > powers of two, no check is needed at all.
>
> What are the userspace visible effects of this bug?

None that I know of. This bug was exposed when I made the hugetlb
code pass the correct order_per_bit argument (see the accompanying
hugetlb cma fix). That tripped this check when I backported the fix
to an older kernel, which passed an order of 30 (1G hugetlb page) as
order_per_bit. This won't happen for 6.9-rc, since there the
(intended) order_per_bit was reduced to HUGETLB_PAGE_ORDER because of
hugetlb page demotion.
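
For context, the check being dropped looks roughly like this
(paraphrased from cma_init_reserved_mem() in mm/cma.c; the exact line
may differ between kernel versions):

        /*
         * One bit in the allocation bitmap covers
         * (PAGE_SIZE << order_per_bit) bytes. This one-way alignment
         * test fails whenever order_per_bit > pageblock_order, even
         * though that is a valid configuration:
         */
        if (!IS_ALIGNED(CMA_MIN_ALIGNMENT_PAGES, 1 << order_per_bit))
                return -EINVAL;

Since CMA_MIN_ALIGNMENT_PAGES and (1 << order_per_bit) are both
powers of two, one always evenly divides the other, so checking
alignment in either direction buys nothing.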

So: no user-visible effects. However, if the other fix is going to
be backported, this one is a prerequisite.
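
To make the power-of-two argument from the commit message concrete,
here is a tiny standalone demo (a hypothetical userspace sketch, not
kernel code; the orders are made up for illustration):

        #include <stdio.h>

        /* IS_ALIGNED() as defined in the kernel, minus the typeof cast */
        #define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

        int main(void)
        {
                unsigned long pageblock_pages = 1UL << 9;  /* pageblock_order = 9 */
                unsigned long bit_pages = 1UL << 18;       /* order_per_bit = 18 */

                /* Fails this way around when order_per_bit > pageblock_order... */
                printf("%d\n", IS_ALIGNED(pageblock_pages, bit_pages)); /* prints 0 */

                /* ...but holds the other way, since both are powers of two. */
                printf("%d\n", IS_ALIGNED(bit_pages, pageblock_pages)); /* prints 1 */
                return 0;
        }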

- Frank