+static int vm_munmap_zap_rlock(unsigned long start, size_t len)
+{
+	int ret;
+	struct mm_struct *mm = current->mm;
+	LIST_HEAD(uf);
+
+	ret = do_munmap_zap_rlock(mm, start, len, &uf);
+	userfaultfd_unmap_complete(mm, &uf);
+	return ret;
+}
+
 int vm_munmap(unsigned long start, size_t len)
 {
 	int ret;

A stupid question: since the overhead of vm_munmap_zap_rlock() compared to
vm_munmap() is not significant, why not put that in vm_munmap() instead of
introducing a new vm_munmap_zap_rlock()?

Since vm_munmap() is called in other paths too, i.e. the drm driver, kvm,
etc., I'm not quite sure whether those paths are safe enough for this
optimization. And it looks like they are not the main sources of the
latency, so here I introduced vm_munmap_zap_rlock() for munmap() only.
If someone reports, or we see, that they are sources of latency too, and
the optimization is proved safe for them, we can definitely extend this to
all vm_munmap() calls.

Thanks,
Yang

For my information, what could be unsafe for these paths?
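
(Side note for readers following the thread: the core of the optimization
being debated is to detach the VMAs from the mm while holding mmap_sem for
write, then downgrade to read before the long page-zapping walk. Below is a
minimal sketch of that shape only; detach_vmas() and zap_detached_vmas()
are hypothetical placeholders, not functions from the patch, and error
handling is simplified.)

static int do_munmap_zap_rlock(struct mm_struct *mm, unsigned long start,
			       size_t len, struct list_head *uf)
{
	int ret;

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;

	/*
	 * Unlink the VMAs covering [start, start + len) while holding the
	 * write lock, so no other thread can fault on or remap the range
	 * afterwards. (Hypothetical helper.)
	 */
	ret = detach_vmas(mm, start, len, uf);

	/*
	 * Downgrade to the read lock for the expensive part: page faults
	 * elsewhere in the address space can now proceed concurrently,
	 * which is the point of the optimization.
	 */
	downgrade_write(&mm->mmap_sem);

	if (!ret)
		zap_detached_vmas(mm, start, len); /* hypothetical helper */

	up_read(&mm->mmap_sem);
	return ret;
}

(This also hints at the safety question above: any vm_munmap() caller that
assumes the whole unmap happens under one uninterrupted write lock could be
surprised by the downgrade.)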
@@ -2855,10 +2939,9 @@ int vm_munmap(unsigned long start, size_t len)
 SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
 {
 	profile_munmap(addr);
-	return vm_munmap(addr, len);
+	return vm_munmap_zap_rlock(addr, len);
 }
-
 /*
  * Emulation of deprecated remap_file_pages() syscall.
  */
@@ -3146,7 +3229,7 @@ void exit_mmap(struct mm_struct *mm)
 	tlb_gather_mmu(&tlb, mm, 0, -1);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
-	unmap_vmas(&tlb, vma, 0, -1);
+	unmap_vmas(&tlb, vma, 0, -1, false);
 	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb, 0, -1);
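
(For reference: the extra "false" argument matches a new boolean parameter
added to unmap_vmas() elsewhere in the series. Its name is not visible in
this hunk, so the prototype below is an assumption for illustration only.)

/*
 * Assumed shape of the changed prototype; the parameter name here is a
 * guess. exit_mmap() owns the mm exclusively at this point, so it passes
 * false and keeps the historical unmap behaviour.
 */
void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *vma,
		unsigned long start_addr, unsigned long end_addr,
		bool skip_vm_flags);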