[PATCH 3.11 194/198] aio: fix aio request leak when events are reaped by userspace

From: Luis Henriques
Date: Thu Jul 03 2014 - 05:26:37 EST


3.11.10.13 -stable review patch. If anyone has any objections, please let me know.

------------------

From: Benjamin LaHaise <bcrl@xxxxxxxxx>

commit f8567a3845ac05bb28f3c1b478ef752762bd39ef upstream.

The aio cleanups and optimizations by kmo that were merged into the 3.10
tree introduced a regression in userspace event reaping. Specifically, the
reference counts are not decremented when events are reaped in userspace,
eventually leaving the application unable to submit further aio requests.
This patch applies to 3.12+. A separate backport is required for 3.10/3.11.
This issue was uncovered as part of CVE-2014-0206.

[jmoyer@xxxxxxxxxx: backported to 3.10]
Signed-off-by: Benjamin LaHaise <bcrl@xxxxxxxxx>
Cc: Kent Overstreet <kmo@xxxxxxxxxxxxx>
Cc: Mateusz Guzik <mguzik@xxxxxxxxxx>
Cc: Petr Matousek <pmatouse@xxxxxxxxxx>
Cc: Jeff Moyer <jmoyer@xxxxxxxxxx>
Signed-off-by: Luis Henriques <luis.henriques@xxxxxxxxxxxxx>
---
fs/aio.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
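
To make the failure mode concrete, below is a rough userspace sketch of the
kind of in-userspace event reaping that hits this path. The struct aio_ring
layout and the use of the io_setup() context value as the address of the
mmap'd completion ring are assumptions of the sketch (they mirror what libaio
does); it is illustrative only, not the reproducer used for the CVE.

/*
 * Illustrative sketch (assumed details marked below): submit one read at a
 * time and reap each completion directly from the mmap'd ring instead of
 * calling io_getevents(2).  On kernels with the leak, ctx->reqs_active is
 * never decremented for events consumed this way, so io_submit(2) starts
 * failing with EAGAIN after roughly a ring's worth of requests.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/aio_abi.h>

/* Assumed to match the kernel's struct aio_ring for this kernel series. */
struct aio_ring {
	unsigned id, nr, head, tail;
	unsigned magic, compat_features, incompat_features, header_length;
	struct io_event io_events[];
};

static long sys_io_setup(unsigned nr, aio_context_t *ctxp)
{
	return syscall(SYS_io_setup, nr, ctxp);
}

static long sys_io_submit(aio_context_t ctx, long nr, struct iocb **iocbpp)
{
	return syscall(SYS_io_submit, ctx, nr, iocbpp);
}

int main(void)
{
	aio_context_t ctx = 0;
	char buf[512] = "";
	int fd = open("/tmp/aio-leak-test", O_RDWR | O_CREAT, 0600);

	if (fd < 0 || sys_io_setup(64, &ctx) < 0)
		return 1;
	pwrite(fd, buf, sizeof(buf), 0);

	/* Assumption: the context value is the user address of the ring. */
	volatile struct aio_ring *ring = (volatile struct aio_ring *)ctx;

	for (int i = 0; i < 1000; i++) {
		struct iocb cb;
		struct iocb *cbs[1] = { &cb };

		memset(&cb, 0, sizeof(cb));
		cb.aio_fildes = fd;
		cb.aio_lio_opcode = IOCB_CMD_PREAD;
		cb.aio_buf = (unsigned long)buf;
		cb.aio_nbytes = sizeof(buf);

		if (sys_io_submit(ctx, 1, cbs) < 0) {
			/* Affected kernels hit EAGAIN here well before i == 1000. */
			printf("io_submit failed at %d: %s\n", i, strerror(errno));
			return 1;
		}

		/* Reap the completion in userspace; io_getevents(2) is never called. */
		while (ring->head == ring->tail)
			;
		ring->head = (ring->head + 1) % ring->nr;
	}
	printf("no leak observed\n");
	return 0;
}

With this patch the reqs_active decrement happens in aio_complete(), i.e. when
the event is produced, so completions consumed entirely in userspace no longer
leave the counter elevated and io_submit() keeps working.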

diff --git a/fs/aio.c b/fs/aio.c
index 975a5d5810a9..48f02745b876 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -312,7 +312,6 @@ static void free_ioctx(struct kioctx *ctx)
 
 		avail = (head <= ctx->tail ? ctx->tail : ctx->nr_events) - head;
 
-		atomic_sub(avail, &ctx->reqs_active);
 		head += avail;
 		head %= ctx->nr_events;
 	}
@@ -680,6 +679,7 @@ void aio_complete(struct kiocb *iocb, long res, long res2)
 put_rq:
 	/* everything turned out well, dispose of the aiocb. */
 	aio_put_req(iocb);
+	atomic_dec(&ctx->reqs_active);
 
 	/*
 	 * We have to order our ring_info tail store above and test
@@ -757,8 +757,6 @@ static long aio_read_events_ring(struct kioctx *ctx,
 	flush_dcache_page(ctx->ring_pages[0]);
 
 	pr_debug("%li h%u t%u\n", ret, head, ctx->tail);
-
-	atomic_sub(ret, &ctx->reqs_active);
 out:
 	mutex_unlock(&ctx->ring_lock);
 
--
1.9.1
