Why have you removed this guard? Previously we set *pprev to NULL and
returned mm->mmap. This seems like a semantic change without any
explanation; could you clarify?
Scratch that, I misread the code. find_vma() will return mm->mmap if
the given address is below all vmas. Sorry about the noise.
The only concern left is the caching. Are you sure this will not break
workloads that benefit from mmap_cache, now that find_vma_prev() callers
interact with it as well? Anyway, this could be fixed trivially.
Here is the caller list.
find_vma_prev  115  arch/ia64/mm/fault.c        vma = find_vma_prev(mm, address, &prev_vma);
find_vma_prev  183  arch/parisc/mm/fault.c      vma = find_vma_prev(mm, address, &prev_vma);
find_vma_prev  229  arch/tile/mm/hugetlbpage.c  vma = find_vma_prev(mm, addr, &prev_vma);
find_vma_prev  336  arch/x86/mm/hugetlbpage.c   if (!(vma = find_vma_prev(mm, addr, &prev_vma)))
find_vma_prev  388  mm/madvise.c                vma = find_vma_prev(current->mm, start, &prev);
find_vma_prev  642  mm/mempolicy.c              vma = find_vma_prev(mm, start, &prev);
find_vma_prev  388  mm/mlock.c                  vma = find_vma_prev(current->mm, start, &prev);
find_vma_prev  265  mm/mprotect.c               vma = find_vma_prev(current->mm, start, &prev);
In short, find_vma_prev() is only used from the page fault path, madvise,
mbind, mlock and mprotect. Of these, the page fault path is the only
performance-sensitive callsite, because the others are not called
frequently in regular workloads.
So, I wouldn't say this patch has zero negative impact, but I think the
risk is small enough and the benefit large enough.