From: Yishai Hadas <yishaih@xxxxxxxxxxxx>
As the VMAs for a given range might not be available during the ODP
registration phase, IB_ACCESS_HUGETLB/page_shift must be checked as
part of the page fault flow.
If the application didn't mmap the backing memory with huge pages, or
released part of that hugepage area, an error will be set in the page
fault flow once this is detected.
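
To illustrate (a hedged sketch, not code from this patch; the helper
name and its exact placement in the fault path are assumptions), this
is the kind of check the page fault flow needs once the VMA walk is
gone from registration:

/*
 * Illustrative sketch only -- not part of this patch; the helper name
 * is hypothetical. When page_shift is larger than PAGE_SHIFT, the page
 * expected to start a huge-page-sized unit must sit on a huge page
 * boundary; otherwise the range is no longer hugepage-backed and the
 * fault must fail.
 */
static int ib_umem_odp_check_hugepage(struct ib_umem_odp *umem_odp,
				      struct page *page)
{
	unsigned int page_shift = umem_odp->page_shift;

	/* Nothing to verify for regular PAGE_SHIFT mappings. */
	if (page_shift == PAGE_SHIFT)
		return 0;

	/* Physical address must be aligned to the huge page size. */
	if (page_to_phys(page) & (BIT_ULL(page_shift) - 1))
		return -EINVAL;

	return 0;
}

With such a check in place, registration can simply assume HPAGE_SHIFT
whenever IB_ACCESS_HUGETLB is requested, as the hunk below does, and
defer the real verification to fault time.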
Fixes: 0008b84ea9af ("IB/umem: Add support to huge ODP")
Signed-off-by: Yishai Hadas <yishaih@xxxxxxxxxxxx>
Reviewed-by: Artemy Kovalyov <artemyko@xxxxxxxxxxxx>
Reviewed-by: Aviad Yehezkel <aviadye@xxxxxxxxxxxx>
Signed-off-by: Leon Romanovsky <leonro@xxxxxxxxxxxx>
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -241,22 +241,10 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr,
 	umem_odp->umem.owning_mm = mm = current->mm;
 	umem_odp->notifier.ops = ops;
 
-	umem_odp->page_shift = PAGE_SHIFT;
-	if (access & IB_ACCESS_HUGETLB) {
-		struct vm_area_struct *vma;
-		struct hstate *h;
-
-		down_read(&mm->mmap_sem);
-		vma = find_vma(mm, ib_umem_start(umem_odp));
-		if (!vma || !is_vm_hugetlb_page(vma)) {
-			up_read(&mm->mmap_sem);
-			ret = -EINVAL;
-			goto err_free;
-		}
-		h = hstate_vma(vma);
-		umem_odp->page_shift = huge_page_shift(h);
-		up_read(&mm->mmap_sem);
-	}
+	if (access & IB_ACCESS_HUGETLB)
+		umem_odp->page_shift = HPAGE_SHIFT;
+	else
+		umem_odp->page_shift = PAGE_SHIFT;
 
 	umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
 	ret = ib_init_umem_odp(umem_odp, ops);
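
For context, a hedged userspace sketch (not from this patch; the helper
name and the omitted error handling are assumptions) of what "mmap the
backing memory with huge pages" means for an ODP MR registered with
the hugetlb access flag:

#include <stddef.h>
#include <sys/mman.h>
#include <infiniband/verbs.h>

static struct ibv_mr *reg_huge_odp_mr(struct ibv_pd *pd, size_t len)
{
	/*
	 * Back the range with huge pages; without MAP_HUGETLB the page
	 * fault flow will now fail the MR with an error instead of the
	 * registration call.
	 */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED)
		return NULL;

	return ibv_reg_mr(pd, buf, len,
			  IBV_ACCESS_LOCAL_WRITE |
			  IBV_ACCESS_ON_DEMAND |
			  IBV_ACCESS_HUGETLB);
}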