Re: [patch] aio: remove aio-max-nr and instead use the memlock rlimit to limit the number of pages pinned for the aio completion ring

From: Avi Kivity
Date: Mon Mar 09 2009 - 15:46:20 EST


Jeff Moyer wrote:
>> Is it not possible to get rid of the pinning entirely? Pinning
>> interferes with page migration which is important for NUMA, among
>> other issues.

> aio_complete is called from interrupt handlers, so can't block faulting
> in a page. Zach mentions there is a possibility of handing completions
> off to a kernel thread, with all of the performance worries and extra
> bookkeeping that go along with such a scheme (to help frame my concerns,
> I often get lambasted over .5% performance regressions).
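
To make the handoff idea concrete, here is a minimal user-space sketch of that scheme; it assumes nothing about the actual fs/aio.c code, and the names (complete_from_irq, completion_worker, user_ring) are invented for illustration. The completion path only queues an event and wakes a worker; the worker, which is allowed to sleep, is the one that touches the user-visible ring.

/* User-space analogue of "hand completions off to a kernel thread".
 * All names are illustrative, not the real fs/aio.c code. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct event {                       /* stands in for struct io_event */
    uint64_t data;                   /* submitter's cookie */
    int64_t  res;                    /* byte count or negative errno */
};

struct completion {
    struct event ev;
    struct completion *next;
};

static struct completion *pending;   /* filled by the "interrupt" path */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  kick = PTHREAD_COND_INITIALIZER;

static struct event user_ring[64];   /* stands in for the (no longer pinned) user ring */
static unsigned ring_tail;

/* aio_complete() analogue: runs in a context that must not sleep,
 * so it only queues the event and wakes the worker. */
static void complete_from_irq(uint64_t data, int64_t res)
{
    struct completion *c = malloc(sizeof(*c));  /* GFP_ATOMIC in the kernel */
    c->ev.data = data;
    c->ev.res = res;
    pthread_mutex_lock(&lock);
    c->next = pending;
    pending = c;
    pthread_cond_signal(&kick);
    pthread_mutex_unlock(&lock);
}

/* The "kernel thread": it may block, so it is allowed to touch (and
 * fault in) the user-visible ring that the interrupt path could not. */
static void *completion_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!pending)
            pthread_cond_wait(&kick, &lock);
        struct completion *c = pending;
        pending = c->next;
        pthread_mutex_unlock(&lock);

        user_ring[ring_tail++ % 64] = c->ev;    /* the potentially-faulting store */
        printf("delivered data=%llu res=%lld\n",
               (unsigned long long)c->ev.data, (long long)c->ev.res);
        free(c);
    }
    return NULL;
}

int main(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, completion_worker, NULL);
    complete_from_irq(1, 4096);
    complete_from_irq(2, 512);
    sleep(1);                        /* let the worker drain the queue, then exit */
    return 0;
}

The cost is the extra wakeup and context switch per completion (or per batch of completions), which is presumably where the performance worry above comes from.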

Or you could queue the completions somewhere, and only copy them to user memory when io_getevents() is called. I think the plan was once to allow events to be consumed opportunistically even without io_getevents(), though.
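
Setting that aside, a minimal user-space sketch of the deferred-copy idea, under the same caveat (illustrative names, not the real fs/aio.c structures): completions are parked in ordinary kernel memory at interrupt time and only copied out, in process context where faulting is fine, when io_getevents() runs.

/* User-space analogue of "queue now, copy at io_getevents() time".
 * Names and the fixed-size queue are illustrative only. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct event {                             /* stands in for struct io_event */
    uint64_t data;
    int64_t  res;
};

#define QUEUE_SIZE 128

static struct event queue[QUEUE_SIZE];     /* kernel-side buffer, ordinary kernel memory */
static unsigned head, tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Completion time (interrupt context in the real kernel):
 * record the event in kernel memory only, never touch the user ring. */
static int complete_deferred(uint64_t data, int64_t res)
{
    int ret = -1;
    pthread_mutex_lock(&lock);
    if (tail - head < QUEUE_SIZE) {
        queue[tail % QUEUE_SIZE] = (struct event){ .data = data, .res = res };
        tail++;
        ret = 0;
    }
    pthread_mutex_unlock(&lock);
    return ret;                            /* -1: queue full, caller must handle overflow */
}

/* io_getevents() time: now in process context, so copying into the
 * caller's buffer (copy_to_user() in the kernel) may fault safely. */
static int getevents_deferred(struct event *ubuf, int max)
{
    int n = 0;
    pthread_mutex_lock(&lock);
    while (n < max && head != tail) {
        ubuf[n++] = queue[head % QUEUE_SIZE];
        head++;
    }
    pthread_mutex_unlock(&lock);
    return n;
}

int main(void)
{
    struct event out[4];
    complete_deferred(1, 4096);
    complete_deferred(2, 512);
    int n = getevents_deferred(out, 4);
    for (int i = 0; i < n; i++)
        printf("event %llu: res=%lld\n",
               (unsigned long long)out[i].data, (long long)out[i].res);
    return 0;
}

Since the kernel-side queue is plain kernel memory, nothing needs to be pinned; the only write to user memory is the copy at io_getevents() time.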


> I'm happy to look into such a scheme, should anyone show me data that
> points to this NUMA issue as an actual performance problem today. In
> the absence of such data, I simply can't justify the work at the moment.

Right now page migration is a dead duck. Outside HPC, there is no support for triggering it or for getting the scheduler to prefer a process's memory node. Only a minority of hosts are NUMA.

I think that will/should change in the near future. Nehalem-based servers mean that NUMA will be commonplace. The larger core counts will mean that hosts will run several unrelated applications (often through virtualization); such partitioning can easily benefit from page migration.

> Thanks for taking a look!

Sorry, I didn't actually take a look at the patches. I only reacted to the description - I am allergic to pinned memory.

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
