Re: [PATCH v2 00/21] Refine memblock API
From: Mike Rapoport
Date: Thu Oct 03 2019 - 01:35:14 EST
(trimmed the CC)
On Wed, Oct 02, 2019 at 06:14:11AM -0500, Adam Ford wrote:
> On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport <rppt@xxxxxxxxxxxxx> wrote:
> >
>
> Before the patch:
>
> # cat /sys/kernel/debug/memblock/memory
> 0: 0x10000000..0x8fffffff
> # cat /sys/kernel/debug/memblock/reserved
> 0: 0x10004000..0x10007fff
> 34: 0x2fffff88..0x3fffffff
>
>
> After the patch:
> # cat /sys/kernel/debug/memblock/memory
> 0: 0x10000000..0x8fffffff
> # cat /sys/kernel/debug/memblock/reserved
> 0: 0x10004000..0x10007fff
> 36: 0x80000000..0x8fffffff
I'm still not convinced that the memblock refactoring didn't uncover an
issue in the etnaviv driver.

Why does moving the CMA area from 0x80000000 to 0x30000000 make it fail?

BTW, the code that complained about "command buffer outside valid memory
window" was removed by commit 17e4660ae3d7 ("drm/etnaviv: implement
per-process address spaces on MMUv2").

Could it be that the recent changes to etnaviv's MMU management resolve
the issue?
> > From 06529f861772b7dea2912fc2245debe4690139b8 Mon Sep 17 00:00:00 2001
> > From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > Date: Wed, 2 Oct 2019 10:14:17 +0300
> > Subject: [PATCH] mm: memblock: do not enforce current limit for memblock_phys*
> > family
> >
> > Until commit 92d12f9544b7 ("memblock: refactor internal allocation
> > functions") the maximal address for memblock allocations was capped at
> > memblock.current_limit only for the allocation functions returning a
> > virtual address. The changes introduced by that commit moved the limit
> > enforcement into the allocation core, and as a result the allocation
> > functions returning a physical address also started limiting allocations
> > to memblock.current_limit.
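
To illustrate the distinction, here is a rough sketch (not the actual
kernel sources; the variable names are only placeholders): the virtual
allocators return a pointer that the caller writes to right away, so they
have to stay below memblock.current_limit, while the memblock_phys*
family only returns a physical address and may legitimately be asked for
memory above that limit, e.g. for CMA:

	#include <linux/cache.h>
	#include <linux/memblock.h>
	#include <linux/sizes.h>

	/* Returns a zeroed, mapped virtual address, so the underlying
	 * memory must come from below memblock.current_limit (lowmem). */
	void *table = memblock_alloc(SZ_64K, SMP_CACHE_BYTES);

	/* Returns only a physical address; the caller decides if and when
	 * to map it, so the current limit should not apply here. */
	phys_addr_t base = memblock_phys_alloc(SZ_16M, SZ_1M);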
> >
> > This caused breakage of the etnaviv GPU driver:
> >
> > [ 3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
> > [ 3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
> > [ 3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
> > [ 3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
> > [ 3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
> > [ 3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
> > [ 3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
> > [ 3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
> > [ 3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
> >
> > Restore the behaviour of the memblock_phys* family so that these functions
> > do not enforce memblock.current_limit.
> >
>
> This fixed the issue. Thank you.
>
> Tested-by: Adam Ford <aford173@xxxxxxxxx> #imx6q-logicpd
>
> > Fixes: 92d12f9544b7 ("memblock: refactor internal allocation functions")
> > Reported-by: Adam Ford <aford173@xxxxxxxxx>
> > Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > ---
> > mm/memblock.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index 7d4f61a..c4b16ca 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> > align = SMP_CACHE_BYTES;
> > }
> >
> > - if (end > memblock.current_limit)
> > - end = memblock.current_limit;
> > -
> > again:
> > found = memblock_find_in_range_node(size, align, start, end, nid,
> > flags);
> > @@ -1469,6 +1466,9 @@ static void * __init memblock_alloc_internal(
> > if (WARN_ON_ONCE(slab_is_available()))
> > return kzalloc_node(size, GFP_NOWAIT, nid);
> >
> > + if (max_addr > memblock.current_limit)
> > + max_addr = memblock.current_limit;
> > +
> > alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr, nid);
> >
> > /* retry allocation without lower limit */
> > --
> > 2.7.4
> >
> >
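Note that with the change above the clamping to memblock.current_limit is
applied only in memblock_alloc_internal(), i.e. on the path that returns a
virtual address, while memblock_alloc_range_nid() and therefore the
memblock_phys* functions can once again allocate above the current limit.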
> > > > adam
> > > >
> > > > On Sat, Sep 28, 2019 at 2:33 AM Mike Rapoport <rppt@xxxxxxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, Sep 26, 2019 at 02:35:53PM -0500, Adam Ford wrote:
> > > > > > On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport <rppt@xxxxxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > > > > > > > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam <festevam@xxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford <aford173@xxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > > > > I tried cma=256M and noticed the CMA dump at the beginning didn't
> > > > > > > > > > > change. Do we need to set up a reserved-memory node like
> > > > > > > > > > > imx6ul-ccimx6ulsom.dtsi did?
> > > > > > > > >
> > > > > > > > > I don't think so.
> > > > > > > > >
> > > > > > > > > Were you able to identify what was the exact commit that caused such regression?
> > > > > > > >
> > > > > > > > I was able to narrow it down to commit 92d12f9544b7 ("memblock:
> > > > > > > > refactor internal allocation functions"), which caused the
> > > > > > > > regression with etnaviv.
> > > > > > >
> > > > > > >
> > > > > > > Can you please test with this change:
> > > > > > >
> > > > > >
> > > > > > That appears to have fixed my issue. I am not sure what the impact
> > > > > > is, but is this a safe option?
> > > > >
> > > > > It's not really a fix, I just wanted to see how exactly 92d12f9544b7 ("memblock:
> > > > > refactor internal allocation functions") broke your setup.
> > > > >
> > > > > Can you share the dts you are using and the full kernel log?
> > > > >
> > > > > > adam
> > > > > >
> > > > > > > diff --git a/mm/memblock.c b/mm/memblock.c
> > > > > > > index 7d4f61a..1f5a0eb 100644
> > > > > > > --- a/mm/memblock.c
> > > > > > > +++ b/mm/memblock.c
> > > > > > > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> > > > > > > align = SMP_CACHE_BYTES;
> > > > > > > }
> > > > > > >
> > > > > > > - if (end > memblock.current_limit)
> > > > > > > - end = memblock.current_limit;
> > > > > > > -
> > > > > > > again:
> > > > > > > found = memblock_find_in_range_node(size, align, start, end, nid,
> > > > > > > flags);
> > > > > > >
> > > > > > > > I also noticed that if I create a reserved-memory node, as was done in
> > > > > > > > imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it I
> > > > > > > > was getting errors regardless of whether I passed 'cma=256M' or not.
> > > > > > > > I don't have a problem using the reserved memory, but I am not sure
> > > > > > > > what the amount should be. I know that for 1080p video decoding I have
> > > > > > > > historically used cma=128M, but with the 3D also needing some memory,
> > > > > > > > is that enough or should I use 256M?
> > > > > > > >
> > > > > > > > adam
> > > > > > >
> > > > > > > --
> > > > > > > Sincerely yours,
> > > > > > > Mike.
> > > > > > >
> > > > >
> > > > > --
> > > > > Sincerely yours,
> > > > > Mike.
> > > > >
> >
> > --
> > Sincerely yours,
> > Mike.
> >
--
Sincerely yours,
Mike.