Re: [patch 2/2] mm, compaction: persistently skip hugetlbfs pageblocks
From: Vlastimil Babka
Date: Fri Sep 01 2017 - 08:32:34 EST
On 08/23/2017 10:41 AM, Vlastimil Babka wrote:
> On 08/16/2017 01:39 AM, David Rientjes wrote:
>> It is pointless to migrate hugetlb memory as part of memory compaction if
>> the hugetlb size is equal to the pageblock order. No defragmentation
>> occurs in this condition.
>>
>> It is also pointless for the freeing scanner to scan a pageblock where
>> a hugetlb page is pinned. Unconditionally skip these pageblocks, and do
>> so persistently so that they are not rescanned until it is observed that
>> these hugepages are no longer pinned.
>>
>> It would also be possible to do this by involving the hugetlb subsystem
>> in marking pageblocks to no longer be skipped when their hugetlb pages are
>> freed. This is a simple solution that doesn't involve any additional
>> subsystems in pageblock skip manipulation.
>>
>> Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
>> ---
>> mm/compaction.c | 48 +++++++++++++++++++++++++++++++++++++-----------
>> 1 file changed, 37 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -217,6 +217,20 @@ static void reset_cached_positions(struct zone *zone)
>> pageblock_start_pfn(zone_end_pfn(zone) - 1);
>> }
>>
>> +/*
>> + * Hugetlbfs pages should consistently be skipped until updated by the hugetlb
>> + * subsystem. It is always pointless to compact pages of pageblock_order and
>> + * the free scanner can reconsider when no longer huge.
>> + */
>> +static bool pageblock_skip_persistent(struct page *page, unsigned int order)
>> +{
>> + if (!PageHuge(page))
>> + return false;
>> + if (order != pageblock_order)
>> + return false;
>> + return true;
>
> Why just HugeTLBfs? There's also no point in migrating/finding free
> pages in THPs. Actually, any compound page of pageblock order?
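>
> Something like the following (just an untested sketch, dropping the
> order argument and deriving it from the head page instead) might cover
> those cases too:
>
> static bool pageblock_skip_persistent(struct page *page)
> {
> 	if (!PageCompound(page))
> 		return false;
>
> 	page = compound_head(page);
>
> 	if (compound_order(page) >= pageblock_order)
> 		return true;
>
> 	return false;
> }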
>
>> +}
>> +
>> /*
>> * This function is called to clear all cached information on pageblocks that
>> * should be skipped for page isolation when the migrate and free page scanner
>> @@ -241,6 +255,8 @@ static void __reset_isolation_suitable(struct zone *zone)
>> continue;
>> if (zone != page_zone(page))
>> continue;
>> + if (pageblock_skip_persistent(page, compound_order(page)))
>> + continue;
>
> I like the idea of how persistency is achieved by rechecking in the reset.
>
>>
>> clear_pageblock_skip(page);
>> }
>> @@ -448,13 +464,15 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>> * and the only danger is skipping too much.
>> */
>> if (PageCompound(page)) {
>> - unsigned int comp_order = compound_order(page);
>> -
>> - if (likely(comp_order < MAX_ORDER)) {
>> - blockpfn += (1UL << comp_order) - 1;
>> - cursor += (1UL << comp_order) - 1;
>> + const unsigned int order = compound_order(page);
>> +
>> + if (pageblock_skip_persistent(page, order)) {
>> + set_pageblock_skip(page);
>> + blockpfn = end_pfn;
>> + } else if (likely(order < MAX_ORDER)) {
>> + blockpfn += (1UL << order) - 1;
>> + cursor += (1UL << order) - 1;
>> }
>
> Is this new code (and below) really necessary? The existing code should
> already lead to the skip bit being set via update_pageblock_skip()?
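
To expand on that: when isolate_freepages_block() leaves a pageblock with
nothing isolated, update_pageblock_skip() should already mark it. Roughly
(paraphrased from the current mm/compaction.c, from memory, so details
may differ):

static void update_pageblock_skip(struct compact_control *cc,
			struct page *page, unsigned long nr_isolated,
			bool migrate_scanner)
{
	if (cc->ignore_skip_hint)
		return;
	if (!page)
		return;
	/* Only mark a pageblock if nothing could be isolated from it */
	if (nr_isolated)
		return;

	set_pageblock_skip(page);
	/* ...followed by updates of the cached scanner positions */
}
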
Ok, here's a patch implementing my suggestions.
----8<----