Re: [PATCH v4 05/11] mm: thp: enable thp migration in generic path
From: Zi Yan
Date: Fri Mar 24 2017 - 11:30:45 EST
Hi Kirill,
Kirill A. Shutemov wrote:
> On Mon, Mar 13, 2017 at 11:45:01AM -0400, Zi Yan wrote:
>> From: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>>
>> This patch adds thp migration's core code, including conversions
>> between a PMD entry and a swap entry, setting PMD migration entry,
>> removing PMD migration entry, and waiting on PMD migration entries.
>>
>> This patch makes it possible to support thp migration.
>> If you fail to allocate a destination page as a thp, you just split
>> the source thp as we do now, and then enter the normal page migration.
>> If you succeed to allocate destination thp, you enter thp migration.
>> Subsequent patches actually enable thp migration for each caller of
>> page migration by allowing its get_new_page() callback to
>> allocate thps.
>>
>> ChangeLog v1 -> v2:
>> - support pte-mapped thp, doubly-mapped thp
>>
>> Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>>
>> ChangeLog v2 -> v3:
>> - use page_vma_mapped_walk()
>>
>> ChangeLog v3 -> v4:
>> - factor out the code of removing pte pgtable page in zap_huge_pmd()
>>
>> Signed-off-by: Zi Yan <zi.yan@xxxxxxxxxxxxxx>
>
> See few questions below.
>
> It would be nice to split it into few patches. Probably three or four.
This patch was two separate ones in v2:
1. introduce remove_migration_pmd(), set_pmd_migration_entry() and other
auxiliary functions,
2. enable THP migration in the migration path.
But the first of the two patches would have been dead code on its own,
since nothing else used it, and Michal also suggested merging the two
patches into one when he reviewed v2.
If you have a suggestion, I am OK with splitting this patch to make it
smaller.
<snip>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index cda4c2778d04..0bbad6dcf95a 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -211,6 +211,12 @@ static int remove_migration_pte(struct page *page, struct vm_area_struct *vma,
>> new = page - pvmw.page->index +
>> linear_page_index(vma, pvmw.address);
>>
>> + /* PMD-mapped THP migration entry */
>> + if (!PageHuge(page) && PageTransCompound(page)) {
>> + remove_migration_pmd(&pvmw, new);
>> + continue;
>> + }
>> +
>
> Any reason not to share PTE handling of non-THP with THP?
You mean PTE-mapped THPs? I was mostly reusing Naoya's patches here, but
at first look the PTE-mapped THP handling code is the same as the
existing PTE handling code.
This part of the code can be changed to:
+ /* PMD-mapped THP migration entry */
+ if (!pvmw.pte && pvmw.page) {
+ VM_BUG_ON_PAGE(!PageTransCompound(page), page);
+ remove_migration_pmd(&pvmw, new);
+ continue;
+ }
+
>
>> get_page(new);
>> pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));
>> if (pte_swp_soft_dirty(*pvmw.pte))
<snip>
>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>> index 4ed5908c65b0..9d550a8a0c71 100644
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
>> @@ -118,7 +118,8 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
>> {
>> pmd_t pmd;
>> VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>> - VM_BUG_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
>> + VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
>> + !pmd_devmap(*pmdp));
>
> How does this? _flush doesn't make sense for !present.
Right. It should be:
- VM_BUG_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+ VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
+ !pmd_devmap(*pmdp)) || !pmd_present(*pmdp));
>
>> pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
>> flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>> return pmd;
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 555cc7ebacf6..2c65abbd7a0e 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1298,6 +1298,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>> int ret = SWAP_AGAIN;
>> enum ttu_flags flags = (enum ttu_flags)arg;
>>
>> +
>> /* munlock has nothing to gain from examining un-locked vmas */
>> if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
>> return SWAP_AGAIN;
>> @@ -1308,6 +1309,14 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>> }
>>
>> while (page_vma_mapped_walk(&pvmw)) {
>> + /* THP migration */
>> + if (flags & TTU_MIGRATION) {
>> + if (!PageHuge(page) && PageTransCompound(page)) {
>> + set_pmd_migration_entry(&pvmw, page);
>
> Again, it would be nice share PTE handling. It should be rather similar,
> no?
At first look, sharing the PTE handling should work here too. I will
change it, and if it works the change will be included in the next
version. This should also shrink the patch size.
Thanks.
--
Best Regards,
Yan Zi