Re: [PATCH 1/2] page-allocator: Split per-cpu list into one-list-per-migrate-type
From: Minchan Kim
Date: Fri Aug 28 2009 - 09:47:02 EST
On Fri, Aug 28, 2009 at 9:56 PM, Mel Gorman <mel@xxxxxxxxx> wrote:
> On Fri, Aug 28, 2009 at 09:00:25PM +0900, Minchan Kim wrote:
>> On Fri, Aug 28, 2009 at 8:52 PM, Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
>> > Hi, Mel.
>> >
>> > On Fri, 28 Aug 2009 09:44:26 +0100
>> > Mel Gorman <mel@xxxxxxxxx> wrote:
>> >
>> >> Currently the per-cpu page allocator searches the PCP list for pages of the
>> >> correct migrate-type to reduce the possibility of pages being inappropriately
>> >> placed from a fragmentation perspective. This search is potentially expensive
>> >> in a fast path and undesirable. Splitting the per-cpu list into multiple
>> >> lists increases the size of the per-cpu structure, and this was potentially
>> >> a major problem at the time the search was introduced. That problem has
>> >> been mitigated as now only the necessary number of structures is allocated
>> >> for the running system.
>> >>
>> >> This patch replaces a list search in the per-cpu allocator with one list per
>> >> migrate type. The potential snag with this approach is when bulk freeing
>> >> pages: we free pages round-robin by migrate type, which has little bearing
>> >> on the cache hotness of the page and potentially checks empty lists
>> >> repeatedly in the event the majority of PCP pages are of one type.
>> >>
>> >> Signed-off-by: Mel Gorman <mel@xxxxxxxxx>
>> >> Acked-by: Nick Piggin <npiggin@xxxxxxx>
Reviewed-by: Minchan Kim <minchan.kim@xxxxxxxxx>
>> >>  */
>> >> -static void free_pages_bulk(struct zone *zone, int count,
>> >> -                                       struct list_head *list, int order)
>> >> +static void free_pcppages_bulk(struct zone *zone, int count,
>> >> +                                       struct per_cpu_pages *pcp)
>> >>  {
>> >> +       int migratetype = 0;
>> >> +
>> >
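For readers following along, here is a rough userspace sketch of the behaviour
being discussed. This is not the kernel code or the actual patch; NR_PCP_TYPES,
struct fake_pcp and fake_free_pcppages_bulk() are all invented for illustration.
It models one free list per migrate type drained round-robin, and counts how
often an empty list is re-checked when most of the PCP pages are of one type.

/*
 * Illustrative userspace sketch only -- not the actual patch. All names
 * here (NR_PCP_TYPES, struct fake_pcp, fake_free_pcppages_bulk) are
 * invented for this example.
 */
#include <stdio.h>

#define NR_PCP_TYPES 3			/* stand-in for MIGRATE_PCPTYPES */

struct fake_pcp {
	int count;			/* total pages across all lists */
	int list_len[NR_PCP_TYPES];	/* length of each per-type "list" */
};

/* Drain 'count' pages by cycling through the per-migratetype lists. */
static void fake_free_pcppages_bulk(struct fake_pcp *pcp, int count)
{
	int migratetype = 0;
	int empty_checks = 0;

	while (count > 0 && pcp->count > 0) {
		if (pcp->list_len[migratetype] == 0) {
			/* the repeated empty-list checks being discussed */
			empty_checks++;
		} else {
			pcp->list_len[migratetype]--;
			pcp->count--;
			count--;
		}
		migratetype = (migratetype + 1) % NR_PCP_TYPES;
	}
	printf("re-checked empty lists %d times\n", empty_checks);
}

int main(void)
{
	/* Worst case for round-robin: every page sits on one list. */
	struct fake_pcp pcp = { .count = 8, .list_len = { 8, 0, 0 } };

	fake_free_pcppages_bulk(&pcp, 8);
	return 0;
}

In the real free_pcppages_bulk() the lists are struct list_head and the pages
go back to the buddy allocator; patch 2/2 is the follow-up aimed at reducing
this spinning.
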
>> > How about caching the last successful migratetype
>> > with 'per_cpu_pages->last_alloc_type'?
>>                              ^^^^^
>>                              free
>> > I think it could prevent a little spinning on empty lists.
>>
>> Anyway, ignore me.
>> I didn't see your next patch.
>>
>
> Nah, it's a reasonable suggestion. Patch 2 was one effort to reduce
> spinning, but the comment was left in patch 1 in case someone thought of
> something better. I tried what you suggested before, but it didn't work
> out. For any sort of workload that varies the allocation type (which is
> very frequent), it didn't reduce spinning significantly.
Thanks for the good information.
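
To make the trade-off concrete, here is the same toy sketch with the caching
idea bolted on. Again this is purely illustrative: 'last_type', struct
fake_pcp_cached and the rest are hypothetical names, not anything the patch
adds. The cached start only saves the initial misses at the top of a bulk
free; between frees the round-robin still walks the empty lists, and with a
workload that keeps switching allocation types the hint goes stale anyway.

/*
 * Illustrative userspace sketch of the caching idea only -- not kernel
 * code. 'last_type' is a hypothetical field, not something the patch adds.
 */
#include <stdio.h>

#define NR_PCP_TYPES 3

struct fake_pcp_cached {
	int count;
	int list_len[NR_PCP_TYPES];
	int last_type;			/* cached type that last had pages */
};

static void fake_bulk_free_cached(struct fake_pcp_cached *pcp, int count)
{
	int migratetype = pcp->last_type;	/* start at the cached type */
	int empty_checks = 0;

	while (count > 0 && pcp->count > 0) {
		if (pcp->list_len[migratetype] == 0) {
			empty_checks++;
		} else {
			pcp->list_len[migratetype]--;
			pcp->count--;
			count--;
			pcp->last_type = migratetype;	/* refresh the hint */
		}
		migratetype = (migratetype + 1) % NR_PCP_TYPES;
	}
	printf("empty re-checks with cached start: %d\n", empty_checks);
}

int main(void)
{
	/*
	 * With everything on one list, the cached start only avoids the
	 * initial misses; the round-robin between frees still walks the
	 * empty lists, which matches the observation above that the cache
	 * did not reduce spinning significantly.
	 */
	struct fake_pcp_cached pcp = {
		.count = 8, .list_len = { 0, 0, 8 }, .last_type = 2
	};

	fake_bulk_free_cached(&pcp, 8);
	return 0;
}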
> --
> Mel Gorman
> Part-time PhD Student                         Linux Technology Center
> University of Limerick                        IBM Dublin Software Lab
>
--
Kind regards,
Minchan Kim