Re: [PATCH] arm64/mm: Validate hotplug range before creating linear mapping

From: Ard Biesheuvel
Date: Wed Oct 14 2020 - 02:38:05 EST


On Wed, 14 Oct 2020 at 07:07, Anshuman Khandual
<anshuman.khandual@xxxxxxx> wrote:
>
>
>
> On 10/12/2020 12:59 PM, Ard Biesheuvel wrote:
> > On Tue, 6 Oct 2020 at 08:36, Anshuman Khandual
> > <anshuman.khandual@xxxxxxx> wrote:
> >>
> >>
> >>
> >> On 09/30/2020 01:32 PM, Anshuman Khandual wrote:
> >>> But if __is_lm_address() checks against the effective linear range instead,
> >>> i.e. [_PAGE_OFFSET(vabits_actual)..(PAGE_END - 1)], it can be used for the
> >>> hotplug physical range check thereafter. Perhaps something like this, though
> >>> not tested properly.
> >>>
> >>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> >>> index afa722504bfd..6da046b479d4 100644
> >>> --- a/arch/arm64/include/asm/memory.h
> >>> +++ b/arch/arm64/include/asm/memory.h
> >>> @@ -238,7 +238,10 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> >>> * space. Testing the top bit for the start of the region is a
> >>> * sufficient check and avoids having to worry about the tag.
> >>> */
> >>> -#define __is_lm_address(addr) (!(((u64)addr) & BIT(vabits_actual - 1)))
> >>> +static inline bool __is_lm_address(unsigned long addr)
> >>> +{
> >>> + return ((addr >= _PAGE_OFFSET(vabits_actual)) && (addr <= (PAGE_END - 1)));
> >>> +}
> >>>
> >>> #define __lm_to_phys(addr) (((addr) + physvirt_offset))
> >>> #define __kimg_to_phys(addr) ((addr) - kimage_voffset)
> >>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >>> index d59ffabb9c84..5750370a7e8c 100644
> >>> --- a/arch/arm64/mm/mmu.c
> >>> +++ b/arch/arm64/mm/mmu.c
> >>> @@ -1451,8 +1451,7 @@ static bool inside_linear_region(u64 start, u64 size)
> >>> * address range mapped by the linear map, the start address should
> >>> * be calculated using vabits_actual.
> >>> */
> >>> - return ((start >= __pa(_PAGE_OFFSET(vabits_actual)))
> >>> - && ((start + size) <= __pa(PAGE_END - 1)));
> >>> + return __is_lm_address(__va(start)) && __is_lm_address(__va(start + size));
> >>> }
> >>>
> >>> int arch_add_memory(int nid, u64 start, u64 size,
> >>
> >> Will/Ard,
> >>
> >> Any thoughts on this? __is_lm_address() now checks for a range instead
> >> of a bit, so it will remain compatible later on even if the linear mapping
> >> range changes from the current lower-half scheme.
> >>
> >
> > As I'm sure you have noticed, I sent out some patches that get rid of
> > physvirt_offset and simplify __is_lm_address() to only take
> > compile-time constants into account (unless KASAN is enabled). This
> > means that in the 52-bit VA case, __is_lm_address() does not
> > distinguish between virtual addresses that can be mapped by the
> > hardware and ones that cannot.
>
> Yeah, though I was a bit late in getting to the series. So with that change
> there might be areas in the linear mapping which cannot be addressed by
> the hardware, and hence those should also be checked during memory hotplug,
> apart from the proposed linear mapping coverage test?
>

Yes.
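
To illustrate (a rough sketch only, untested, and the helper name is
made up): with 52-bit VA configured but only 48 bits implemented,
addresses in [_PAGE_OFFSET(52) .. _PAGE_OFFSET(48)) pass the new
compile-time __is_lm_address() check but cannot be translated by the
hardware, so the hotplug path would have to test both conditions:

static bool hotplug_range_is_mappable(u64 start, u64 size)
{
        u64 va_start = (u64)__va(start);
        u64 va_end = (u64)__va(start + size - 1);

        /* covered by the linear map as configured at build time */
        if (!__is_lm_address(va_start) || !__is_lm_address(va_end))
                return false;

        /* and actually translatable with the implemented VA size */
        return va_start >= _PAGE_OFFSET(vabits_actual);
}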

> >
> > In the memory hotplug case, we need to decide whether the added memory
> > will appear in the addressable area, which is a different question. So
> > it makes sense to duplicate (or factor out) some of the logic that
> > exists in arm64_memblock_init() to decide whether newly added memory
> > will appear in the addressable window or not.
>
> It seems unlikely that any hotplug agent (e.g. firmware) will ever push
> through a memory range which is not accessible by the hardware, but it
> is not impossible either. In summary, arch_add_memory() should check that
>
> 1. the range can be covered inside the linear mapping
> 2. the range is accessible by the hardware
>
> Before the VA space organization series, was (2) not necessary, as it
> was contained inside (1)?
>

Not really. We have a problem with KASLR randomization of the linear
region, which may choose memstart_addr in such a way that we lose
access to regions that are beyond the boot time value of
memblock_end_of_DRAM().

I think we should probably rework that code to take
ID_AA64MMFR0_EL1.PARange into account instead.
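
Something along these lines, perhaps (a rough sketch, untested), in
arm64_memblock_init(), reusing the existing cpufeature accessors and
the existing KASLR seed memstart_offset_seed:

        int parange = cpuid_feature_extract_unsigned_field(
                        read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1),
                        ID_AA64MMFR0_PARANGE_SHIFT);
        s64 range = linear_region_size -
                    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));

        /*
         * Randomize the linear region only if the PA space that the
         * hardware can actually address leaves enough slack, instead
         * of going by the boot time value of memblock_end_of_DRAM().
         */
        if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
                range /= ARM64_MEMSTART_ALIGN;
                memstart_addr -= ARM64_MEMSTART_ALIGN *
                                 ((range * memstart_offset_seed) >> 16);
        }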

> >
> > So I think your original approach makes more sense here, although I
> > think you want '(start + size - 1) <= __pa(PAGE_END - 1)' in the
> > comparison above (and please drop the redundant parens).
> >
>
> Sure, will accommodate these changes.
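
FWIW, with the off-by-one fixed and the redundant parens dropped, I
would expect the helper to end up looking something like this
(untested):

static bool inside_linear_region(u64 start, u64 size)
{
        return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
               (start + size - 1) <= __pa(PAGE_END - 1);
}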