[PATCH 3/4] pagewalk: add locking-rule comments
From: KOSAKI Motohiro
Date: Wed May 25 2011 - 03:10:50 EST
Originally, walk_hugetlb_range() did not require the caller to take any
lock. But commit d33b9f45bd (mm: hugetlb: fix hugepage memory leak in
walk_page_range) changed that rule, because it added a find_vma() call
to walk_hugetlb_range().
Any commit that changes a locking rule should document it, too.
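For example, with this rule a caller that sets a hugetlb_entry callback
is expected to look roughly like the sketch below (not a compilable
hunk; my_hugetlb_entry and the surrounding variables are hypothetical):

```c
static int my_hugetlb_entry(pte_t *pte, unsigned long hmask,
			    unsigned long addr, unsigned long end,
			    struct mm_walk *walk)
{
	/* ... inspect the hugetlb entry ... */
	return 0;
}

	struct mm_walk walk = {
		.hugetlb_entry	= my_hugetlb_entry,
		.mm		= mm,
	};

	/* mmap_sem must be held: walk_hugetlb_range() calls find_vma() */
	down_read(&mm->mmap_sem);
	ret = walk_page_range(start, end, &walk);
	up_read(&mm->mmap_sem);
```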
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
---
include/linux/mm.h | 1 +
mm/pagewalk.c | 3 +++
2 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dd87a78..7337b66 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -921,6 +921,7 @@ unsigned long unmap_vmas(struct mmu_gather **tlb,
* @pte_entry: if set, called for each non-empty PTE (4th-level) entry
* @pte_hole: if set, called for each hole at all levels
* @hugetlb_entry: if set, called for each hugetlb entry
+ * *Caution*: The caller must hold mmap_sem if @hugetlb_entry is used.
*
* (see walk_page_range for more details)
*/
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index ee4ff87..f792940 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -181,6 +181,9 @@ static int walk_hugetlb_range(struct vm_area_struct *vma,
*
* If any callback returns a non-zero value, the walk is aborted and
* the return value is propagated back to the caller. Otherwise 0 is returned.
+ *
+ * walk->mm->mmap_sem must be held for at least read if walk->hugetlb_entry
+ * is non-NULL.
*/
int walk_page_range(unsigned long addr, unsigned long end,
struct mm_walk *walk)
--
1.7.3.1