Re: [PATCH v3 08/14] mm: thp: enable thp migration in generic path

From: Zi Yan
Date: Thu Feb 09 2017 - 10:17:17 EST


On 9 Feb 2017, at 3:15, Naoya Horiguchi wrote:

> On Sun, Feb 05, 2017 at 11:12:46AM -0500, Zi Yan wrote:
>> From: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>>
>> This patch adds thp migration's core code, including conversions
>> between a PMD entry and a swap entry, setting PMD migration entry,
>> removing PMD migration entry, and waiting on PMD migration entries.
>>
>> This patch makes it possible to support thp migration.
>> If you fail to allocate a destination page as a thp, you just split
>> the source thp as we do now, and then enter the normal page migration.
>> If you succeed in allocating a destination thp, you enter thp migration.
>> Subsequent patches actually enable thp migration for each caller of
>> page migration by allowing its get_new_page() callback to
>> allocate thps.
>>
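For orientation: the get_new_page() changes land in later patches of the
series, but a thp-capable callback could look roughly like the sketch
below. The name new_thp_page and the exact gfp choices are illustrative
assumptions, not code from this series.

static struct page *new_thp_page(struct page *page, unsigned long private)
{
        /* Try to keep the huge page intact: allocate a huge destination. */
        if (PageTransHuge(page)) {
                struct page *thp;

                thp = alloc_pages(GFP_TRANSHUGE, HPAGE_PMD_ORDER);
                if (!thp)
                        return NULL;    /* caller splits and retries */
                prep_transhuge_page(thp);
                return thp;
        }
        /* Base page (or post-split) case: a plain movable allocation. */
        return alloc_page(GFP_HIGHUSER_MOVABLE);
}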
>> ChangeLog v1 -> v2:
>> - support pte-mapped thp, doubly-mapped thp
>>
>> Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>>
>> ChangeLog v2 -> v3:
>> - use page_vma_mapped_walk()
>>
>> Signed-off-by: Zi Yan <zi.yan@xxxxxxxxxxxxxx>
>> ---
>>  arch/x86/include/asm/pgtable_64.h |   2 +
>>  include/linux/swapops.h           |  70 +++++++++++++++++-
>>  mm/huge_memory.c                  | 151 ++++++++++++++++++++++++++++++++++----
>>  mm/migrate.c                      |  29 +++++++-
>>  mm/page_vma_mapped.c              |  13 +++-
>>  mm/pgtable-generic.c              |   3 +-
>>  mm/rmap.c                         |  14 +++-
>>  7 files changed, 259 insertions(+), 23 deletions(-)
>>
> ...
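(For orientation while the swapops.h hunk is elided above: the PMD <->
swap-entry conversion helpers would plausibly mirror the existing
PTE-level ones, along the lines of the sketch below; details may differ
from the actual patch.)

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
{
        swp_entry_t arch_entry;

        /* Unpack the arch-specific PMD swap layout into a generic entry. */
        arch_entry = __pmd_to_swp_entry(pmd);
        return swp_entry(__swp_type(arch_entry), __swp_offset(arch_entry));
}

static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
{
        swp_entry_t arch_entry;

        /* Re-encode the generic entry into the arch-specific PMD layout. */
        arch_entry = __swp_entry(swp_type(entry), swp_offset(entry));
        return __swp_entry_to_pmd(arch_entry);
}
#endif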
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 6893c47428b6..fd54bbdc16cf 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1613,20 +1613,51 @@ int __zap_huge_pmd_locked(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>                  atomic_long_dec(&tlb->mm->nr_ptes);
>>                  tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
>>          } else {
>> -                struct page *page = pmd_page(orig_pmd);
>> -                page_remove_rmap(page, true);
>> -                VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>> -                VM_BUG_ON_PAGE(!PageHead(page), page);
>> -                if (PageAnon(page)) {
>> -                        pgtable_t pgtable;
>> -                        pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
>> -                        pte_free(tlb->mm, pgtable);
>> -                        atomic_long_dec(&tlb->mm->nr_ptes);
>> -                        add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
>> +                struct page *page;
>> +                int migration = 0;
>> +
>> +                if (!is_pmd_migration_entry(orig_pmd)) {
>> +                        page = pmd_page(orig_pmd);
>> +                        VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>> +                        VM_BUG_ON_PAGE(!PageHead(page), page);
>> +                        page_remove_rmap(page, true);
>
>> +                        if (PageAnon(page)) {
>> +                                pgtable_t pgtable;
>> +
>> +                                pgtable = pgtable_trans_huge_withdraw(tlb->mm,
>> +                                                                      pmd);
>> +                                pte_free(tlb->mm, pgtable);
>> +                                atomic_long_dec(&tlb->mm->nr_ptes);
>> +                                add_mm_counter(tlb->mm, MM_ANONPAGES,
>> +                                               -HPAGE_PMD_NR);
>> +                        } else {
>> +                                if (arch_needs_pgtable_deposit())
>> +                                        zap_deposited_table(tlb->mm, pmd);
>> +                                add_mm_counter(tlb->mm, MM_FILEPAGES,
>> +                                               -HPAGE_PMD_NR);
>> +                        }
>
> This block is exactly equal to the one in the else block below,
> so you can factor it out into a helper function.

Of course, I will do that.
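Something like the sketch below (helper name hypothetical) should cover
both paths:

/*
 * Hypothetical helper: withdraw the deposited page table and fix up the
 * counters for a huge PMD mapping, shared by the present-page and
 * migration-entry paths.
 */
static void zap_huge_pmd_accounting(struct mmu_gather *tlb, pmd_t *pmd,
                                    struct page *page)
{
        if (PageAnon(page)) {
                pgtable_t pgtable;

                pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
                pte_free(tlb->mm, pgtable);
                atomic_long_dec(&tlb->mm->nr_ptes);
                add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
        } else {
                if (arch_needs_pgtable_deposit())
                        zap_deposited_table(tlb->mm, pmd);
                add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
        }
}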

>
>>                  } else {
>> -                        if (arch_needs_pgtable_deposit())
>> -                                zap_deposited_table(tlb->mm, pmd);
>> -                        add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
>> +                        swp_entry_t entry;
>> +
>> +                        entry = pmd_to_swp_entry(orig_pmd);
>> +                        page = pfn_to_page(swp_offset(entry));
>
>> +                        if (PageAnon(page)) {
>> +                                pgtable_t pgtable;
>> +
>> +                                pgtable = pgtable_trans_huge_withdraw(tlb->mm,
>> +                                                                      pmd);
>> +                                pte_free(tlb->mm, pgtable);
>> +                                atomic_long_dec(&tlb->mm->nr_ptes);
>> +                                add_mm_counter(tlb->mm, MM_ANONPAGES,
>> +                                               -HPAGE_PMD_NR);
>> +                        } else {
>> +                                if (arch_needs_pgtable_deposit())
>> +                                        zap_deposited_table(tlb->mm, pmd);
>> +                                add_mm_counter(tlb->mm, MM_FILEPAGES,
>> +                                               -HPAGE_PMD_NR);
>> +                        }
>
>> +                        free_swap_and_cache(entry); /* warning on failure? */
>> +                        migration = 1;
>>                  }
>>                  tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
>>          }
>> @@ -2634,3 +2665,97 @@ static int __init split_huge_pages_debugfs(void)
>>  }
>>  late_initcall(split_huge_pages_debugfs);
>>  #endif
>> +
>> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>> +void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>> +                struct page *page)
>> +{
>> +        struct vm_area_struct *vma = pvmw->vma;
>> +        struct mm_struct *mm = vma->vm_mm;
>> +        unsigned long address = pvmw->address;
>> +        pmd_t pmdval;
>> +        swp_entry_t entry;
>> +
>> +        if (pvmw->pmd && !pvmw->pte) {
>> +                pmd_t pmdswp;
>> +
>> +                mmu_notifier_invalidate_range_start(mm, address,
>> +                                address + HPAGE_PMD_SIZE);
>
> Don't you have to put mmu_notifier_invalidate_range_* outside this if block?

I think I need to add mmu_notifier_invalidate_page() in the else block.

Kirill's page_vma_mapped_walk() iterates over each PMD or PTE. In
set_pmd_migration_entry(), if the page is PMD-mapped, the function is
called once with the PMD, so mmu_notifier_invalidate_range_* can be used.
On the other hand, if the page is PTE-mapped, the function is called
1~512 times depending on how many PTEs are present, so
mmu_notifier_invalidate_range_* is not suitable there;
mmu_notifier_invalidate_page() on the corresponding subpage should work.
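In other words, the structure would be roughly (sketch, not the posted
code):

        if (pvmw->pmd && !pvmw->pte) {
                /*
                 * PMD-mapped: this function runs once for the whole huge
                 * page, so invalidating the full PMD range fits.
                 */
                mmu_notifier_invalidate_range_start(mm, address,
                                address + HPAGE_PMD_SIZE);
                /* ... install the PMD migration entry ... */
                mmu_notifier_invalidate_range_end(mm, address,
                                address + HPAGE_PMD_SIZE);
        } else {
                /*
                 * PTE-mapped: called once per present PTE (up to
                 * HPAGE_PMD_NR times), so invalidate just this subpage.
                 */
                mmu_notifier_invalidate_page(mm, address);
        }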



--
Best Regards
Yan Zi
