Re: [PATCH V2] mm/vmstat: Add events for THP migration without split

From: Zi Yan
Date: Fri Jun 05 2020 - 10:24:20 EST


On 4 Jun 2020, at 23:35, Anshuman Khandual wrote:

> On 06/04/2020 10:19 PM, Zi Yan wrote:
>> On 4 Jun 2020, at 12:36, Matthew Wilcox wrote:
>>
>>> On Thu, Jun 04, 2020 at 09:51:10AM -0400, Zi Yan wrote:
>>>> On 4 Jun 2020, at 7:34, Matthew Wilcox wrote:
>>>>> On Thu, Jun 04, 2020 at 09:30:45AM +0530, Anshuman Khandual wrote:
>>>>>> +Quantifying Migration
>>>>>> +=====================
>>>>>> +Following events can be used to quantify page migration.
>>>>>> +
>>>>>> +- PGMIGRATE_SUCCESS
>>>>>> +- PGMIGRATE_FAIL
>>>>>> +- THP_MIGRATION_SUCCESS
>>>>>> +- THP_MIGRATION_FAILURE
>>>>>> +
>>>>>> +THP_MIGRATION_FAILURE in particular represents an event when a THP could not be
>>>>>> +migrated as a single entity following an allocation failure and ended up getting
>>>>>> +split into constituent normal pages before being retried. This event, along with
>>>>>> +PGMIGRATE_SUCCESS and PGMIGRATE_FAIL will help in quantifying and analyzing THP
>>>>>> +migration events including both success and failure cases.
>>>>>
>>>>> First, I'd suggest running this paragraph through 'fmt'. That way you
>>>>> don't have to care about line lengths.
>>>>>
>>>>> Second, this paragraph doesn't really explain what I need to know to
>>>>> understand the meaning of these numbers. When Linux attempts to migrate
>>>>> a THP, one of three things can happen:
>>>>>
>>>>> - It is migrated as a single THP
>>>>> - It is migrated, but had to be split
>>>>> - Migration fails
>>>>>
>>>>> How do I turn these four numbers into an understanding of how often each
>>>>> of those three situations happen? And why do we need four numbers to
>>>>> report three situations?
>>>>>
>>>>> Or is there something else that can happen? If so, I'd like that explained
>>>>> here too ;-)
>>>>
>>>> PGMIGRATE_SUCCESS and PGMIGRATE_FAIL record a combination of different events,
>>>> so it is not easy to interpret them. Let me try to explain them.
>>>
>>> Thanks! Very helpful explanation.
>>>
>>>> 1. migrating only base pages: PGMIGRATE_SUCCESS and PGMIGRATE_FAIL simply count
>>>> the base pages that migrated and that failed to migrate, respectively.
>>>> THP_MIGRATION_SUCCESS and THP_MIGRATION_FAILURE should both be 0 in this case.
>>>> Simple.
>>>>
>>>> 2. migrating only THPs:
>>>> - PGMIGRATE_SUCCESS means THPs that are migrated and base pages
>>>> (from the split of THPs) that are migrated,
>>>>
>>>> - PGMIGRATE_FAIL means THPs that fail to migrate and base pages that fail to migrate.
>>>>
>>>> - THP_MIGRATION_SUCCESS means THPs that are migrated.
>>>>
>>>> - THP_MIGRATION_FAILURE means THPs that are split.
>>>>
>>>> So PGMIGRATE_SUCCESS - THP_MIGRATION_SUCCESS gives the number of migrated base pages,
>>>> which come from the split of THPs.
>>>
>>> Are you sure about that? If I split a THP and each of those subpages
>>> migrates, won't I then see PGMIGRATE_SUCCESS increase by 512?
>>
>> That is what I meant; my wording was unclear. I should have said subpages.
>>
>>>
>>>> When it comes to analyzing failed migrations, PGMIGRATE_FAIL - THP_MIGRATION_FAILURE
>>>> gives the number of pages that failed to migrate, but we cannot tell how many
>>>> are base pages and how many are THPs.
>>>>
>>>> 3. migrating base pages and THP:
>>>>
>>>> The math is very similar to the second case, except that
>>>> a) from PGMIGRATE_SUCCESS - THP_MIGRATION_SUCCESS, we cannot tell how many pages
>>>> began as base pages and how many began as THPs that were split into base pages;
>>>> b) in PGMIGRATE_FAIL - THP_MIGRATION_FAILURE, an additional case, base pages that
>>>> began as base pages and failed to migrate, is mixed in, so we cannot tell the
>>>> three cases apart.
>>>
>>> So why don't we just expose PGMIGRATE_SPLIT? That would be defined as
>>> the number of times we succeeded in migrating a THP but had to split it
>>> to succeed.
>>
>> It might need extra code to get that number. Currently, the subpages from split
>> THPs are appended to the end of the original page list, so we might need a separate
>> page list for these subpages to count PGMIGRATE_SPLIT. Also, what if some of the
>> subpages fail to migrate? Do we increment PGMIGRATE_SPLIT or not?
>
> Thanks Zi, for such a detailed explanation. Ideally, we should separate THP
> migration from base page migration in terms of statistics. PGMIGRATE_SUCCESS
> and PGMIGRATE_FAIL should continue to track statistics when migration starts
> with base pages. But for THP, we should track the following events.

You mean PGMIGRATE_SUCCESS and PGMIGRATE_FAIL would no longer track the number of
migrated subpages from split THPs? Would that cause userspace issues, since their
semantics would change?

>
> 1. THP_MIGRATION_SUCCESS - THP migration is successful, without split
> 2. THP_MIGRATION_FAILURE - THP could neither be migrated, nor be split

They make sense to me.

> 3. THP_MIGRATION_SPLIT_SUCCESS - THP got split and all sub pages migrated
> 4. THP_MIGRATION_SPLIT_FAILURE - THP got split but all sub pages could not be migrated
>
> THP_MIGRATION_SPLIT_FAILURE could either increment once per THP or once per
> subpage that did not get migrated after the split. As you mentioned, this will
> need some extra code in the core migration path. Nonetheless, if these new
> events look good, I will be happy to make the required changes.

Maybe THP_MIGRATION_SPLIT would be simpler? My concern is whether we need such
detailed information. Maybe trace points would be good enough for 3 and 4.
But if you think they are useful to you, feel free to implement them.

BTW, in terms of stats tracking, what do you think of my patch below? I am trying to
aggregate all stats counting in one place. Feel free to use it if you think it works
for you.


diff --git a/mm/migrate.c b/mm/migrate.c
index 7bfd0962149e..0f3c60470489 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1429,9 +1429,14 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		enum migrate_mode mode, int reason)
 {
 	int retry = 1;
+	int thp_retry = 1;
 	int nr_failed = 0;
+	int nr_thp_failed = 0;
+	int nr_thp_split = 0;
 	int nr_succeeded = 0;
+	int nr_thp_succeeded = 0;
 	int pass = 0;
+	bool is_thp = false;
 	struct page *page;
 	struct page *page2;
 	int swapwrite = current->flags & PF_SWAPWRITE;
@@ -1440,11 +1445,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	if (!swapwrite)
 		current->flags |= PF_SWAPWRITE;
 
-	for(pass = 0; pass < 10 && retry; pass++) {
+	for(pass = 0; pass < 10 && (retry || thp_retry); pass++) {
 		retry = 0;
+		thp_retry = 0;
 
 		list_for_each_entry_safe(page, page2, from, lru) {
 retry:
+			is_thp = PageTransHuge(page);
 			cond_resched();
 
 			if (PageHuge(page))
@@ -1475,15 +1482,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					unlock_page(page);
 					if (!rc) {
 						list_safe_reset_next(page, page2, lru);
+						nr_thp_split++;
 						goto retry;
 					}
 				}
 				nr_failed++;
 				goto out;
 			case -EAGAIN:
+				if (is_thp)
+					thp_retry++;
 				retry++;
 				break;
 			case MIGRATEPAGE_SUCCESS:
+				if (is_thp)
+					nr_thp_succeeded++;
 				nr_succeeded++;
 				break;
 			default:
@@ -1493,18 +1505,27 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				 * removed from migration page list and not
 				 * retried in the next outer loop.
 				 */
+				if (is_thp)
+					nr_thp_failed++;
 				nr_failed++;
 				break;
 			}
 		}
 	}
 	nr_failed += retry;
+	nr_thp_failed += thp_retry;
 	rc = nr_failed;
 out:
 	if (nr_succeeded)
 		count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
 	if (nr_failed)
 		count_vm_events(PGMIGRATE_FAIL, nr_failed);
+	if (nr_thp_succeeded)
+		count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
+	if (nr_thp_failed)
+		count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
+	if (nr_thp_split)
+		count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
 	trace_mm_migrate_pages(nr_succeeded, nr_failed, mode, reason);
 
 	if (!swapwrite)


Best Regards,
Yan Zi

Attachment: signature.asc
Description: OpenPGP digital signature