Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear mapping range check

From: Will Deacon
Date: Wed Apr 03 2019 - 13:24:36 EST


On Thu, Apr 04, 2019 at 12:44:25AM +0800, pierre kuo wrote:
> > On Mon, Apr 01, 2019 at 10:59:53PM +0800, pierre kuo wrote:
> > > > > With CONFIG_RANDOMIZE_BASE we can get a further change to memstart_addr
> > > > > after the place where you moved the initrd_{start,end} setting, which
> > > > > would result in a different value for __phys_to_virt(phys_initrd_start).
> > > >
> > > > I see what you mean, since __phys_to_virt will use PHYS_OFFSET
> > > > (memstart_addr) for the calculation.
> > > > How about moving the CONFIG_RANDOMIZE_BASE part of the code ahead of
> > > > the CONFIG_BLK_DEV_INITRD check?
> > > >
> > > > That means moving (d) below ahead of (c).
> > > > previous:
> > > > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> > > > if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
> > > > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> > > > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
> > > >
> > > > now:
> > > > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> > > > if (memory_limit != PHYS_ADDR_MAX) {} ----------------(b)
> > > > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} --------------(d)
> > > > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> > > >
> > >
> > > After tracing the kernel code,
> > > would it even be possible to move the CONFIG_RANDOMIZE_BASE part of the
> > > code ahead of the memory_limit check?
> > >
> > > That means the flow may look like below:
> > > now2:
> > > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
> > > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
> > > if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
> > > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
> > >
> > > In (b), the results of __pa_symbol(_text), memory_limit,
> > > memblock_mem_limit_remove_map and memblock_add
> > > do not depend on memstart_addr.
> > > So the now2 flow can group the modifications of memstart_addr, putting
> > > (a) and (d) together, roughly as sketched below.
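> > >
> > > A heavily simplified, untested sketch of the now2 ordering of
> > > arm64_memblock_init() (block bodies replaced by comments, so only the
> > > structure is meant to be accurate):
> > >
> > > 	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {	/* (a) */
> > > 		/* may push memstart_addr up so DRAM fits the linear region */
> > > 	}
> > >
> > > 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {				/* (d) */
> > > 		/* KASLR of the linear map; may shift memstart_addr again */
> > > 	}
> > >
> > > 	if (memory_limit != PHYS_ADDR_MAX) {					/* (b) */
> > > 		/* mem= handling; does not read memstart_addr */
> > > 	}
> > >
> > > 	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {		/* (c) */
> > > 		/* memstart_addr is final here, so __phys_to_virt() is stable */
> > > 	}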
> >
> > I'm afraid that you've lost me with this.
> Your kind suggestions are welcome ^^
>
> > Why isn't it just as simple as
> > the diff below?
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index c29b17b520cd..ec3487c94b10 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -377,7 +377,7 @@ void __init arm64_memblock_init(void)
> >  			 base + size > memblock_start_of_DRAM() +
> >  				       linear_region_size,
> >  			"initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
> > -			initrd_start = 0;
> > +			phys_initrd_size = 0;
> >  		} else {
> >  			memblock_remove(base, size); /* clear MEMBLOCK_ flags */
> >  			memblock_add(base, size);
>
> This patch will also fix the issue, but it still needs two "if"
> comparisons to get initrd_start/initrd_end.
> By grouping the modifications of memstart_addr and calculating
> initrd_start/initrd_end only when the linear mapping check passes,
> maybe (just maybe) the code can be more concise.
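>
> Roughly what I have in mind is something like the untested sketch below.
> It assumes the reordering above has already run, so memstart_addr is
> final by this point; the base/size computation and the memblock calls are
> just copied from how I remember the current code, so the details may be
> off:
>
> 	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
> 		u64 base = phys_initrd_start & PAGE_MASK;
> 		u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
>
> 		if (WARN(base < memblock_start_of_DRAM() ||
> 			 base + size > memblock_start_of_DRAM() +
> 				       linear_region_size,
> 			"initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
> 			phys_initrd_size = 0;
> 		} else {
> 			memblock_remove(base, size); /* clear MEMBLOCK_ flags */
> 			memblock_add(base, size);
> 			memblock_reserve(base, size);
>
> 			/* memstart_addr will not change any more, so the
> 			 * virtual addresses can be computed right here */
> 			initrd_start = __phys_to_virt(phys_initrd_start);
> 			initrd_end = initrd_start + phys_initrd_size;
> 		}
> 	}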

Maybe, but I don't think we've seen a patch which accomplishes that. I think
I'll go ahead and commit the basic one-liner, then we can always improve it
afterwards if somebody sends a patch. It's not like this is a fastpath.
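
For reference, the reason the one-liner is enough: later in
arm64_memblock_init() the virtual addresses are only set up under a
phys_initrd_size check, which from memory (so the exact shape may differ)
looks roughly like this:

	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
		/* the generic initrd code expects virtual addresses */
		initrd_start = __phys_to_virt(phys_initrd_start);
		initrd_end = initrd_start + phys_initrd_size;
	}

Clearing phys_initrd_size when the range check fails means that block is
skipped and initrd_start/initrd_end are simply left untouched.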

Will