Re: [PATCH v3 2/2] mm, thp: avoid unnecessary swapin in khugepaged
From: Ebru Akagunduz
Date: Sun Mar 20 2016 - 14:06:37 EST
On Thu, Mar 17, 2016 at 12:07:44PM +0100, Vlastimil Babka wrote:
> On 03/14/2016 10:40 PM, Ebru Akagunduz wrote:
> >Currently khugepaged performs swapin readahead to improve
> >the THP collapse rate. This patch checks vm statistics to
> >avoid the swapin work when it is unnecessary, so that
> >khugepaged does not consume resources on swapin while the
> >system is under memory pressure.
> >
> >The patch was tested with a test program that allocates
> >800MB of memory, writes to it, and then sleeps. The system
> >was forced to swap it all out. Afterwards, the test program
> >touches the area again by writing to it, skipping one page
> >in every 20 pages of the area. While the remaining part of
> >the test area was waiting for swapin readahead, the system
> >was kept busy doing page reclaim. Although there was enough
> >free memory during the test, khugepaged did not do swapin
> >readahead because the system was busy.
> >
> >Test results:
> >
> > After swapped out
> >-------------------------------------------------------------------
> > | Anonymous | AnonHugePages | Swap | Fraction |
> >-------------------------------------------------------------------
> >With patch | 206608 kB | 204800 kB | 593392 kB | 99% |
> >-------------------------------------------------------------------
> >Without patch | 351308 kB | 350208 kB | 448692 kB | 99% |
> >-------------------------------------------------------------------
> >
> > After swapped in (waiting 10 minutes)
> >-------------------------------------------------------------------
> > | Anonymous | AnonHugePages | Swap | Fraction |
> >-------------------------------------------------------------------
> >With patch | 551992 kB | 368640 kB | 248008 kB | 66% |
> >-------------------------------------------------------------------
> >Without patch | 586816 kB | 464896 kB | 213184 kB | 79% |
> >-------------------------------------------------------------------
> >
> >Signed-off-by: Ebru Akagunduz <ebru.akagunduz@xxxxxxxxx>
>
> Looks like a step in a good direction. Still, it might be worthwhile
> to also wait for the swapin to complete and actually collapse
> immediately, no?
>
I'll send a follow-up patch to solve the mmap_sem issues (so the
collapse can be done right after swapin) once this patch series is
accepted.
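
Something like this is what I have in mind (only a sketch, not the
actual follow-up; the bool return value of
__collapse_huge_page_swapin() below is hypothetical):

	swap = get_mm_counter(mm, MM_SWAPENTS);
	curr_allocstall = sum_vm_event(ALLOCSTALL);
	if (allocstall == curr_allocstall && swap != 0) {
		/*
		 * Hypothetical: let __collapse_huge_page_swapin() return
		 * false when it had to drop mmap_sem while waiting for
		 * swapin; in that case the vma state may be stale, so
		 * bail out instead of collapsing immediately.
		 */
		if (!__collapse_huge_page_swapin(mm, vma, address, pmd))
			goto out;
	}
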
> >---
> >Changes in v2:
> > - Add a reference to specify which patch is fixed (Ebru Akagunduz)
>
> The reference is again missing in v3.
>
> > - Fix commit subject line (Ebru Akagunduz)
> >
> >Changes in v3:
> > - Remove default values of allocstall (Kirill A. Shutemov)
> >
> > mm/huge_memory.c | 13 +++++++++++--
> > 1 file changed, 11 insertions(+), 2 deletions(-)
> >
> >diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >index 86e9666..67a398c 100644
> >--- a/mm/huge_memory.c
> >+++ b/mm/huge_memory.c
> >@@ -102,6 +102,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
> > */
> > static unsigned int khugepaged_max_ptes_none __read_mostly;
> > static unsigned int khugepaged_max_ptes_swap __read_mostly;
> >+static unsigned long int allocstall;
>
> "int" here is unnecessary
>
> >
> > static int khugepaged(void *none);
> > static int khugepaged_slab_init(void);
> >@@ -2438,7 +2439,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> > struct page *new_page;
> > spinlock_t *pmd_ptl, *pte_ptl;
> > int isolated = 0, result = 0;
> >- unsigned long hstart, hend;
> >+ unsigned long hstart, hend, swap, curr_allocstall;
> > struct mem_cgroup *memcg;
> > unsigned long mmun_start; /* For mmu_notifiers */
> > unsigned long mmun_end; /* For mmu_notifiers */
> >@@ -2493,7 +2494,14 @@ static void collapse_huge_page(struct mm_struct *mm,
> > goto out;
> > }
> >
> >- __collapse_huge_page_swapin(mm, vma, address, pmd);
> >+ swap = get_mm_counter(mm, MM_SWAPENTS);
> >+ curr_allocstall = sum_vm_event(ALLOCSTALL);
> >+ /*
> >+ * Don't do swapin readahead when the system is under memory
> >+ * pressure, to avoid unnecessary resource consumption.
> >+ */
> >+ if (allocstall == curr_allocstall && swap != 0)
> >+ __collapse_huge_page_swapin(mm, vma, address, pmd);
> >
> > anon_vma_lock_write(vma->anon_vma);
> >
> >@@ -2790,6 +2798,7 @@ skip:
> > VM_BUG_ON(khugepaged_scan.address < hstart ||
> > khugepaged_scan.address + HPAGE_PMD_SIZE >
> > hend);
> >+ allocstall = sum_vm_event(ALLOCSTALL);
>
> Why here? Rik said in v2:
>
> >Khugepaged stores the allocstall value when it goes to sleep,
> >and checks it before calling (or not) __collapse_huge_page_swapin.
>
> But that's not true, this is not "when it goes to sleep".
> So AFAICS it only observes the allocstalls that happen between
> starting to scan a single pmd and trying to collapse that pmd. So
> the window is quite tiny, especially compared to I/O speeds, and
> this will IMHO catch only really frequent stalls. Placing it really
> at "when it goes to sleep" sounds better.
>
> > ret = khugepaged_scan_pmd(mm, vma,
> > khugepaged_scan.address,
> > hpage);
> >
>
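
Regarding the placement of the sampling: moving it to the sleep path,
as you suggest, would look roughly like this (untested sketch; only
the allocstall assignment is new, the rest is khugepaged_wait_work()
as it looks in my tree):

	static void khugepaged_wait_work(void)
	{
		/*
		 * Sketch: take the ALLOCSTALL baseline right before
		 * khugepaged goes to sleep, so the check in
		 * collapse_huge_page() covers the whole sleep + scan
		 * window instead of only the scan of a single pmd.
		 */
		allocstall = sum_vm_event(ALLOCSTALL);

		if (khugepaged_has_work()) {
			if (!khugepaged_scan_sleep_millisecs)
				return;

			wait_event_freezable_timeout(khugepaged_wait,
				kthread_should_stop(),
				msecs_to_jiffies(khugepaged_scan_sleep_millisecs));
			return;
		}

		if (khugepaged_enabled())
			wait_event_freezable(khugepaged_wait,
					     khugepaged_wait_event());
	}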