On 07/26/2018 08:10 PM, Yang Shi wrote:
> Introduces three new helper functions:
>   * munmap_addr_sanity()
>   * munmap_lookup_vma()
>   * munmap_mlock_vma()
> 
> They will be used by do_munmap() and, in a later patch, by the new
> do_munmap() variant that zaps large mappings early.
> 
> There is no functional change, just code refactor.
> 
> Reviewed-by: Laurent Dufour <ldufour@xxxxxxxxxxxxxxxxxx>
> Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
> ---
>  mm/mmap.c | 120 ++++++++++++++++++++++++++++++++++++++++++--------------------
>  1 file changed, 82 insertions(+), 38 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index d1eb87e..2504094 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2686,34 +2686,44 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
>  	return __split_vma(mm, vma, addr, new_below);
>  }
> -/* Munmap is split into 2 main parts -- this part which finds
> - * what needs doing, and the areas themselves, which do the
> - * work.  This now handles partial unmappings.
> - * Jeremy Fitzhardinge <jeremy@xxxxxxxx>
> - */
> -int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> -	      struct list_head *uf)
> +static inline bool munmap_addr_sanity(unsigned long start, size_t len)
like "munmap_addr_ok()". I don't know how I would replace the "munmap_"
prefix myself though.
>  {
> -	unsigned long end;
> -	struct vm_area_struct *vma, *prev, *last;
> -
>  	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
> -		return -EINVAL;
> +		return false;
> 
> -	len = PAGE_ALIGN(len);
> -	if (len == 0)
> -		return -EINVAL;
> +	if (PAGE_ALIGN(len) == 0)
> +		return false;
> +
> +	return true;
> +}
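
For illustration, a minimal sketch under the suggested name, keeping
your checks unchanged (untested):

static inline bool munmap_addr_ok(unsigned long start, size_t len)
{
	if (offset_in_page(start) || start > TASK_SIZE ||
	    len > TASK_SIZE - start)
		return false;

	if (PAGE_ALIGN(len) == 0)
		return false;

	return true;
}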
> +
> +/*
> + * munmap_lookup_vma: find the first overlap vma and split overlap vmas.
> + * @mm: mm_struct
> + * @vma: the first overlapping vma
> + * @prev: vma's prev
> + * @start: start address
> + * @end: end address
> + *
> + * returns 1 if successful, 0 or errno otherwise
> + */
> +static int munmap_lookup_vma(struct mm_struct *mm, struct vm_area_struct **vma,
> +			     struct vm_area_struct **prev, unsigned long start,
> +			     unsigned long end)

Agree with Michal that you could simply return vma, NULL, or error.
Caller can easily find out prev from that, it's not like we have to
count each cpu cycle here. It will be a bit less tricky code as well,
which is a plus.
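
E.g. something along these lines (a sketch only, untested; the body is
reconstructed from the existing do_munmap() lookup/split logic, with
errors reported via ERR_PTR()):

static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
			unsigned long start, unsigned long end)
{
	struct vm_area_struct *vma, *prev, *last;
	int error;

	/* Find the first potentially overlapping VMA */
	vma = find_vma(mm, start);
	if (!vma || vma->vm_start >= end)
		return NULL;		/* no overlap, nothing to unmap */

	prev = vma->vm_prev;

	/* Does the range start inside this VMA? Split it. */
	if (start > vma->vm_start) {
		/* (sysctl_max_map_count check omitted for brevity) */
		error = __split_vma(mm, vma, start, 0);
		if (error)
			return ERR_PTR(error);
		prev = vma;
	}

	/* Does the range end inside some VMA? Split that one too. */
	last = find_vma(mm, end);
	if (last && end > last->vm_start) {
		error = __split_vma(mm, last, end, 1);
		if (error)
			return ERR_PTR(error);
	}

	return prev ? prev->vm_next : mm->mmap;
}

The caller then becomes:

	vma = munmap_lookup_vma(mm, start, end);
	if (!vma)
		return 0;
	if (IS_ERR(vma))
		return PTR_ERR(vma);
	prev = vma->vm_prev;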
...
> +static inline void munmap_mlock_vma(struct vm_area_struct *vma,
> +				     unsigned long end)

This function does munlock, not mlock. You could call it e.g.
munlock_vmas().

> +{
> +	struct vm_area_struct *tmp = vma;
> +
> +	while (tmp && tmp->vm_start < end) {
> +		if (tmp->vm_flags & VM_LOCKED) {
> +			vma->vm_mm->locked_vm -= vma_pages(tmp);

You keep 'vma' just for the vm_mm? Better extract the mm pointer first,
then you don't need the 'tmp'.

> +			munlock_vma_pages_all(tmp);
> +		}
> +		tmp = tmp->vm_next;
> +	}
> +}
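
I.e. something like this (untested), which also picks up the
munlock_vmas() name suggested above:

static inline void munlock_vmas(struct vm_area_struct *vma,
				unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;

	while (vma && vma->vm_start < end) {
		if (vma->vm_flags & VM_LOCKED) {
			mm->locked_vm -= vma_pages(vma);
			munlock_vma_pages_all(vma);
		}
		vma = vma->vm_next;
	}
}

All the vmas in the range belong to the same mm, so hoisting the mm
pointer out of the loop is safe.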