[PATCH 0/4] Use per-cpu allocator for !irq requests and prepare for a bulk allocator v4
From: Mel Gorman
Date: Tue Jan 17 2017 - 04:34:07 EST
For Vlastimil: this version passed a few tests with full debugging enabled
without triggering the additional !in_interrupt() checks. The biggest change
is patch 3, which avoids draining the per-cpu lists from IPI context.
Changelog since v3
o Debugging check in allocation path
o Make it harder to use the free path incorrectly
o Use preempt-safe stats counter
o Do not use IPIs to drain the per-cpu allocator
Changelog since v2
o Add acks and benchmark data
o Rebase to 4.10-rc3
Changelog since v1
o Remove a scheduler point from the allocation path
o Finalise the bulk allocator and test it
This series is motivated by a conversation led by Jesper Dangaard Brouer at
the last LSF/MM proposing a generic page pool for DMA-coherent pages. Part
of his motivation was the overhead of allocating multiple order-0 pages,
which has led some drivers to use high-order allocations and split them.
In some cases this is very slow.
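For context, the workaround pattern in question looks roughly like the
sketch below. This is illustration only, not code from this series;
fill_page_array() is a hypothetical helper and error handling is minimal.

#include <linux/gfp.h>	/* alloc_pages(), __free_page() */
#include <linux/mm.h>	/* split_page(), get_order() */

/*
 * Hypothetical sketch of the driver workaround described above: make
 * one high-order allocation, then split it into independent order-0
 * pages with split_page(). Surplus pages are freed individually.
 */
static int fill_page_array(struct page **pages, unsigned int nr)
{
	unsigned int order = get_order(nr * PAGE_SIZE);
	struct page *page = alloc_pages(GFP_KERNEL, order);
	unsigned int i;

	if (!page)
		return -ENOMEM;

	split_page(page, order);	/* now 1 << order order-0 pages */
	for (i = 0; i < nr; i++)
		pages[i] = page + i;
	for (i = nr; i < (1U << order); i++)
		__free_page(page + i);	/* release the unused remainder */
	return 0;
}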
The first two patches in this series restructure the page allocator such
that it is relatively easy to introduce an order-0 bulk page allocator.
A patch exists to do that and has been handed over to Jesper until an
in-kernel user is created. The third patch prevents the per-cpu allocator
from being drained from IPI context, as that could corrupt the list after
patch four is merged. The final patch makes the per-cpu allocator
exclusive to !irq requests, which cuts allocation/free overhead by
roughly 30%.
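To make the final patch concrete, here is a minimal sketch of the idea.
It is not the actual diff; __rmqueue_pcplist() and __rmqueue_buddy() are
hypothetical helper names used only for illustration.

#include <linux/mm.h>		/* struct zone, struct page */
#include <linux/interrupt.h>	/* in_interrupt() */

/*
 * Sketch only: order-0 requests from process context take the per-cpu
 * list fast path with just preemption disabled, while IRQ-context
 * callers bypass it and allocate from the buddy lists under the
 * irq-safe zone lock.
 */
static struct page *rmqueue_sketch(struct zone *zone, gfp_t gfp_flags)
{
	struct page *page;
	unsigned long flags;

	if (!in_interrupt()) {
		/* !irq fast path: no local_irq_save() needed */
		preempt_disable();
		page = __rmqueue_pcplist(zone, gfp_flags);	/* hypothetical */
		preempt_enable();
		return page;
	}

	/* irq context: fall back to the buddy allocator */
	spin_lock_irqsave(&zone->lock, flags);
	page = __rmqueue_buddy(zone, gfp_flags);		/* hypothetical */
	spin_unlock_irqrestore(&zone->lock, flags);
	return page;
}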
Performance results from both Jesper and me are included in the patches.
mm/page_alloc.c | 284 ++++++++++++++++++++++++++++++++++++--------------------
1 file changed, 181 insertions(+), 103 deletions(-)
--
2.11.0