Re: [PATCH] arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings

From: Dev Jain
Date: Sun Mar 30 2025 - 03:12:35 EST

On 28/03/25 8:09 pm, Ryan Roberts wrote:
On 28/03/2025 02:21, Dev Jain wrote:
arm64 uses apply_to_page_range to change permissions for kernel VA mappings,
which does not support changing permissions for leaf mappings. This function

I think you mean "block" mappings here? A leaf mapping refers to a page table
entry that maps a piece of memory at any level in the pgtable (i.e. a present
entry that does not map a table).

A block mapping is an Arm ARM term used to mean a leaf mapping at a level other
than the last level (e.g. pmd, pud). A page mapping is an Arm ARM term used to
mean a leaf mapping at the last level (e.g. pte).
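
To make the failure mode concrete: apply_to_page_range() only knows how
to descend to the pte level, so when the walk meets a block mapping it
has nowhere to go. From memory (abridged, not verbatim), the pmd-level
step in mm/memory.c looks roughly like this, with an equivalent check at
the pud level:

	do {
		next = pmd_addr_end(addr, end);
		if (pmd_none(*pmd) && !create)
			continue;
		/*
		 * A block mapping at the PMD level: the walker cannot
		 * descend into it, so it warns and bails out, leaving
		 * any page mappings visited earlier already modified.
		 */
		if (WARN_ON_ONCE(pmd_leaf(*pmd)))
			return -EINVAL;
		err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
					 create, mask);
		if (err)
			break;
	} while (pmd++, addr = next, addr != end);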

will change permissions until it encounters a leaf mapping, and will bail

block mapping

out. To avoid this partial change, explicitly disallow changing permissions
for VM_ALLOW_HUGE_VMAP mappings.

It will also emit a warning. Since there are no reports of this triggering, it
implies that there are currently no cases of code doing a vmalloc_huge()
followed by a partial permission change, at least on arm64 (I'm told BPF does
do this on x86 though). But this is a footgun waiting to go off, so let's
detect it early and avoid the possibility of leaving permissions in an
intermediate state. (It might be worth wordsmithing this into the commit log).
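
E.g. a caller pattern like the below (hypothetical and untested, just to
illustrate; vmalloc_huge() and set_memory_ro() are the real APIs, the
surrounding function is made up) could previously end up with only part of
the range updated, and will now fail cleanly up front:

	/* Hypothetical sketch of the footgun pattern. */
	static int example_footgun(void)
	{
		unsigned long nr_pages = (2 * PMD_SIZE) >> PAGE_SHIFT;
		void *p = vmalloc_huge(2 * PMD_SIZE, GFP_KERNEL);

		if (!p)
			return -ENOMEM;

		/*
		 * If the area was backed by PMD block mappings,
		 * apply_to_page_range() used to modify whatever page
		 * mappings it reached and then warn and bail at the first
		 * block mapping; with this patch change_memory_common()
		 * rejects the request with -EINVAL before touching any
		 * entries.
		 */
		return set_memory_ro((unsigned long)p, nr_pages);
	}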

Thanks, I will add this.



Signed-off-by: Dev Jain <dev.jain@xxxxxxx>

With the commit log fixed up:

Reviewed-by: Ryan Roberts <ryan.roberts@xxxxxxx>

Thanks!
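
One aside for readers: the new flags check in the second hunk below simply
requires VM_ALLOC to be set and VM_ALLOW_HUGE_VMAP to be clear. Spelled out
(illustrative only, not how the patch writes it), it is equivalent to:

	/* Same as ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC) */
	if (!(area->flags & VM_ALLOC) ||
	    (area->flags & VM_ALLOW_HUGE_VMAP))
		return -EINVAL;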


---
arch/arm64/mm/pageattr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 39fd1f7ff02a..8337c88eec69 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -96,7 +96,7 @@ static int change_memory_common(unsigned long addr, int numpages,
* we are operating on does not result in such splitting.
*
* Let's restrict ourselves to mappings created by vmalloc (or vmap).
- * Those are guaranteed to consist entirely of page mappings, and
+ * Disallow VM_ALLOW_HUGE_VMAP vmalloc mappings so that
* splitting is never needed.
*
* So check whether the [addr, addr + size) interval is entirely
@@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
area = find_vm_area((void *)addr);
if (!area ||
end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
- !(area->flags & VM_ALLOC))
+ ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
return -EINVAL;

if (!numpages)