hugetlb_wp() calls vmf_anon_prepare() after having allocated a folio,
which means that a failure there sends us down an error path that calls
restore_reserve_on_error(). However, vmf_anon_prepare() releases the vma
lock before returning with an error, while restore_reserve_on_error()
expects the vma lock to be held by the caller.
Fix it by calling vmf_anon_prepare() before allocating the folio.
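
Roughly, the failing error path before this change is:

  hugetlb_wp()
    alloc_hugetlb_folio()           <- new folio allocated
    vmf_anon_prepare()              <- fails and releases the vma lock
    goto out_release_all
      restore_reserve_on_error()    <- expects the vma lock to be held

With vmf_anon_prepare() moved before the allocation, a failure there
bails out before any folio has been allocated, so there is no
reservation to unwind.
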
Signed-off-by: Oscar Salvador <osalvador@xxxxxxx>
Fixes: 9acad7ba3e25 ("hugetlb: use vmf_anon_prepare() instead of anon_vma_prepare()")
---
I did not hit this bug myself; I spotted it while looking at
hugetlb_wp() for some other reason, and I did not want to get creative
trying to trigger it just to get a backtrace.

My assumption is that we could trigger this if 1) this is a shared
mapping, so no anon_vma has been prepared, and 2) we come in via GUP
code with FOLL_WRITE, which would cause FAULT_FLAG_UNSHARE to be
passed, so we end up in hugetlb_wp().
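
Spelled out, the call chain I have in mind (a guess based on the
assumption above, not an observed backtrace) would be something like:

  GUP with FOLL_WRITE on a shared hugetlb mapping
    hugetlb_fault()                 <- FAULT_FLAG_UNSHARE set
      hugetlb_wp()
        alloc_hugetlb_folio()
        vmf_anon_prepare()          <- no anon_vma yet, drops the vma
                                       lock and fails
          restore_reserve_on_error()  <- called without the vma lock
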
mm/hugetlb.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6be78e7d4f6e..eb0d8a45505e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6005,6 +6005,15 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
* be acquired again before returning to the caller, as expected.
*/
spin_unlock(vmf->ptl);
+
+ /*
+ * When the original hugepage is a shared one, it does not have
+ * anon_vma prepared.
+ */
+ ret = vmf_anon_prepare(vmf);
+ if (unlikely(ret))
+ goto out_release_old;
+
new_folio = alloc_hugetlb_folio(vma, vmf->address, outside_reserve);

if (IS_ERR(new_folio)) {
@@ -6058,14 +6067,6 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
goto out_release_old;
}

- /*
- * When the original hugepage is shared one, it does not have
- * anon_vma prepared.
- */
- ret = vmf_anon_prepare(vmf);
- if (unlikely(ret))
- goto out_release_all;
-
if (copy_user_large_folio(new_folio, old_folio, vmf->real_address, vma)) {
ret = VM_FAULT_HWPOISON_LARGE | VM_FAULT_SET_HINDEX(hstate_index(h));
goto out_release_all;