Re: [PATCH] mm: prevent droppable mappings from being locked

From: Anthony Yznaga

Date: Mon Mar 09 2026 - 11:57:04 EST



On 3/9/26 7:15 AM, David Hildenbrand (Arm) wrote:
On 3/6/26 21:45, Anthony Yznaga wrote:
Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
to the check in mlock_fixup(). However, they will be locked indirectly
if they are created after mlockall(MCL_FUTURE).

Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
Signed-off-by: Anthony Yznaga <anthony.yznaga@xxxxxxxxxx>
---
 include/linux/mm.h | 3 +++
 mm/mlock.c         | 4 ++--
 mm/vma.c           | 2 +-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5be3d8a8f806..bb830574d112 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -574,6 +574,9 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
+
+/* This mask prevents VMAs from being mlock'd */
+#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
Instead of adding that, could we clean up further by doing something like the following?

The usage of "vma->vm_mm" must be double-checked, and we'll have to take care of making the tools/testing/vma test happy.

Not even compile-tested, so it will require some more work.

Thanks, David. This is a better approach, and I'll implement it. One thing to note is that the check for secretmem has to stay in mlock_fixup() because it prevents the always-locked secretmem memory from being unlocked. I can add an extra comment there for that.

Anthony



diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
index 593f5d4e108b..755281fab23d 100644
--- a/include/linux/hugetlb_inline.h
+++ b/include/linux/hugetlb_inline.h
@@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
 #endif
 
-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
+static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
 {
 	return is_vm_hugetlb_flags(vma->vm_flags);
 }
diff --git a/mm/internal.h b/mm/internal.h
index 6e1162e13289..b70ebbdafe00 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 	}
 	return fpin;
 }
+
+static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
+		return false;
+	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
+		return false;
+	return vma != get_gate_vma(vma->vm_mm);
+}
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
 static inline void mlock_new_folio(struct folio *folio) { }
diff --git a/mm/mlock.c b/mm/mlock.c
index 1a92d16f3684..e16b2ea234f7 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	int ret = 0;
 	vm_flags_t oldflags = vma->vm_flags;
 
-	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
-	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
+	if (newflags == oldflags || !vma_supports_mlock(vma))
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
diff --git a/mm/vma.c b/mm/vma.c
index e95fd5a5fe5c..b7055c264b5d 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
 	vm_stat_account(mm, vma->vm_flags, map->pglen);
 
 	if (vm_flags & VM_LOCKED) {
-		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
-		    is_vm_hugetlb_page(vma) ||
-		    vma == get_gate_vma(mm))
+		if (!vma_supports_mlock(vma))
 			vm_flags_clear(vma, VM_LOCKED_MASK);
 		else
 			mm->locked_vm += map->pglen;