[PATCH 3.2 044/185] IB/qib: Convert qib_user_sdma_pin_pages() to use get_user_pages_fast()
From: Ben Hutchings
Date: Sat Dec 28 2013 - 21:23:24 EST
3.2.54-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Jan Kara <jack@xxxxxxx>
commit 603e7729920e42b3c2f4dbfab9eef4878cb6e8fa upstream.
qib_user_sdma_queue_pkts() gets called with mmap_sem held for
writing. Except for get_user_pages() deep down in
qib_user_sdma_pin_pages(), we don't seem to need mmap_sem at all.
Even more interestingly, qib_user_sdma_queue_pkts() (and also
qib_user_sdma_coalesce(), called somewhat later) calls
copy_from_user(), which can hit a page fault, and we then deadlock
trying to take mmap_sem while handling that fault.
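A minimal sketch of the deadlock shape, with a hypothetical helper
standing in for the real call chain (only the locking pattern matters):

    #include <linux/sched.h>    /* current */
    #include <linux/uaccess.h>  /* copy_from_user() */

    static int sketch_queue_pkts(void __user *ubuf, char *kbuf, size_t len)
    {
            int ret = 0;

            down_write(&current->mm->mmap_sem);
            /*
             * If ubuf's page is not resident, copy_from_user() faults;
             * the fault handler then does down_read(&mm->mmap_sem), but
             * this task already holds the rwsem for writing, so the
             * down_read() blocks forever: self-deadlock in one task.
             */
            if (copy_from_user(kbuf, ubuf, len))
                    ret = -EFAULT;
            up_write(&current->mm->mmap_sem);
            return ret;
    }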
So just make qib_user_sdma_pin_pages() use get_user_pages_fast() and
leave the mmap_sem locking to the mm code.
This deadlock has actually been observed in the wild when the node
is under memory pressure.
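For reference, a sketch of the replacement pattern using the 3.2-era
get_user_pages_fast() signature; the helper name is hypothetical,
while the write flag and the -ENOMEM unwind mirror the patched function:

    #include <linux/mm.h>  /* get_user_pages_fast(), put_page() */

    static int sketch_pin_pages(unsigned long addr, struct page **pages,
                                int npages)
    {
            /*
             * 3.2 signature: (start, nr_pages, write, pages).  write == 0
             * matches the original get_user_pages() call.  The fast path
             * avoids mmap_sem entirely and the slow-path fallback takes
             * it internally, so the caller must NOT hold it.
             */
            int ret = get_user_pages_fast(addr, npages, 0, pages);

            if (ret != npages) {
                    int i;

                    /* drop any pages pinned on partial success */
                    for (i = 0; i < ret; i++)
                            put_page(pages[i]);
                    return -ENOMEM;
            }
            return 0;
    }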
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@xxxxxxxxx>
Signed-off-by: Jan Kara <jack@xxxxxxx>
Signed-off-by: Roland Dreier <roland@xxxxxxxxxxxxxxx>
[bwh: Backported to 3.2:
- Adjust context
- Adjust indentation and nr_pages argument in qib_user_sdma_pin_pages()]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
drivers/infiniband/hw/qib/qib_user_sdma.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -284,8 +284,7 @@ static int qib_user_sdma_pin_pages(const
 	int j;
 	int ret;
 
-	ret = get_user_pages(current, current->mm, addr,
-			     npages, 0, 1, pages, NULL);
+	ret = get_user_pages_fast(addr, npages, 0, pages);
 
 	if (ret != npages) {
 		int i;
@@ -830,10 +829,7 @@ int qib_user_sdma_writev(struct qib_ctxt
 	while (dim) {
 		const int mxp = 8;
 
-		down_write(&current->mm->mmap_sem);
 		ret = qib_user_sdma_queue_pkts(dd, pq, &list, iov, dim, mxp);
-		up_write(&current->mm->mmap_sem);
-
 		if (ret <= 0)
 			goto done_unlock;
 		else {
--