Re: [PATCH v2] mm: show proportional swap share of the mapping

From: Jerome Marchand
Date: Wed Jul 29 2015 - 04:34:12 EST


On 06/15/2015 03:06 PM, Minchan Kim wrote:
> We want to know the per-process working set size for smart memory management
> in userland, and we use swap (e.g., zram) heavily to maximize memory
> efficiency, so the working set includes swap as well as RSS.
>
> On such a system, if there are lots of shared anonymous pages (e.g., on
> Android), it's really hard to figure out exactly how much memory
> (i.e., RSS + swap) each process consumes.
>
> This patch introduces a SwapPss field in /proc/<pid>/smaps so we can get
> a more exact working set size per process.
>
> Bongkyu tested it; the results are below.
>
> 1. 50M used swap
> SwapTotal: 461976 kB
> SwapFree: 411192 kB
>
> $ adb shell cat /proc/*/smaps | grep "SwapPss:" | awk '{sum += $2} END {print sum}';
> 48236
> $ adb shell cat /proc/*/smaps | grep "Swap:" | awk '{sum += $2} END {print sum}';
> 141184

Hi Minchan,

I just found out about this patch. What kind of shared memory is that?
Since it's Android, I'm inclined to think of something specific like
ashmem. I'm asking because this patch won't help for more common types of
shared memory. See my comment below.

>
> 2. 240M used swap
> SwapTotal: 461976 kB
> SwapFree: 216808 kB
>
> $ adb shell cat /proc/*/smaps | grep "SwapPss:" | awk '{sum += $2} END {print sum}';
> 230315
> $ adb shell cat /proc/*/smaps | grep "Swap:" | awk '{sum += $2} END {print sum}';
> 1387744
>
snip
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 6dee68d013ff..d537899f4b25 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -446,6 +446,7 @@ struct mem_size_stats {
> unsigned long anonymous_thp;
> unsigned long swap;
> u64 pss;
> + u64 swap_pss;
> };
>
> static void smaps_account(struct mem_size_stats *mss, struct page *page,
> @@ -492,9 +493,20 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
> } else if (is_swap_pte(*pte)) {

This won't work for SysV shm, tmpfs, or MAP_SHARED | MAP_ANONYMOUS
mappings, whose pages are pte_none when paged out. They're currently not
accounted for at all while in swap.

Jerome

> swp_entry_t swpent = pte_to_swp_entry(*pte);
>
> - if (!non_swap_entry(swpent))
> + if (!non_swap_entry(swpent)) {
> + int mapcount;
> +
> mss->swap += PAGE_SIZE;
> - else if (is_migration_entry(swpent))
> + mapcount = swp_swapcount(swpent);
> + if (mapcount >= 2) {
> + u64 pss_delta = (u64)PAGE_SIZE << PSS_SHIFT;
> +
> + do_div(pss_delta, mapcount);
> + mss->swap_pss += pss_delta;
> + } else {
> + mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT;
> + }
> + } else if (is_migration_entry(swpent))
> page = migration_entry_to_page(swpent);
> }
>
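As a side note for readers, the pss_delta arithmetic above keeps sub-page
precision by scaling with PSS_SHIFT (the fixed-point shift already used for
Pss in task_mmu.c) before dividing by the map count. A minimal userspace
sketch of that accounting, where swap_pss_delta is a hypothetical name and
the page size is assumed to be 4 KiB:

```c
#include <stdint.h>

#define PSS_SHIFT 12              /* fixed-point shift used by smaps PSS */
#define EXAMPLE_PAGE_SIZE 4096ULL /* assumed page size for the sketch   */

/* Hypothetical helper mirroring the hunk above: the proportional
 * SwapPss contribution of one swapped-out page shared by `mapcount`
 * swap references, in PSS_SHIFT fixed point. */
static uint64_t swap_pss_delta(int mapcount)
{
	uint64_t pss_delta = EXAMPLE_PAGE_SIZE << PSS_SHIFT;

	if (mapcount >= 2)
		pss_delta /= mapcount;	/* stands in for do_div() */
	return pss_delta;
}
```

Shifting the accumulated sum right by PSS_SHIFT at print time, as
show_smap() does, charges e.g. 1024 bytes per process for a 4 KiB page
shared four ways, while an unshared page is charged in full.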
> @@ -638,6 +650,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
> "Anonymous: %8lu kB\n"
> "AnonHugePages: %8lu kB\n"
> "Swap: %8lu kB\n"
> + "SwapPss: %8lu kB\n"
> "KernelPageSize: %8lu kB\n"
> "MMUPageSize: %8lu kB\n"
> "Locked: %8lu kB\n",
> @@ -652,6 +665,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
> mss.anonymous >> 10,
> mss.anonymous_thp >> 10,
> mss.swap >> 10,
> + (unsigned long)(mss.swap_pss >> (10 + PSS_SHIFT)),
> vma_kernel_pagesize(vma) >> 10,
> vma_mmu_pagesize(vma) >> 10,
> (vma->vm_flags & VM_LOCKED) ?
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index cee108cbe2d5..afc9eb3cba48 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -432,6 +432,7 @@ extern unsigned int count_swap_pages(int, int);
> extern sector_t map_swap_page(struct page *, struct block_device **);
> extern sector_t swapdev_block(int, pgoff_t);
> extern int page_swapcount(struct page *);
> +extern int swp_swapcount(swp_entry_t entry);
> extern struct swap_info_struct *page_swap_info(struct page *);
> extern int reuse_swap_page(struct page *);
> extern int try_to_free_swap(struct page *);
> @@ -523,6 +524,11 @@ static inline int page_swapcount(struct page *page)
> return 0;
> }
>
> +static inline int swp_swapcount(swp_entry_t entry)
> +{
> + return 0;
> +}
> +
> #define reuse_swap_page(page) (page_mapcount(page) == 1)
>
> static inline int try_to_free_swap(struct page *page)
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index a7e72103f23b..7a6bd1e5a8e9 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -875,6 +875,48 @@ int page_swapcount(struct page *page)
> }
>
> /*
> + * How many references to @entry are currently swapped out?
> + * This considers COUNT_CONTINUED, so it returns the exact answer.
> + */
> +int swp_swapcount(swp_entry_t entry)
> +{
> + int count, tmp_count, n;
> + struct swap_info_struct *p;
> + struct page *page;
> + pgoff_t offset;
> + unsigned char *map;
> +
> + p = swap_info_get(entry);
> + if (!p)
> + return 0;
> +
> + count = swap_count(p->swap_map[swp_offset(entry)]);
> + if (!(count & COUNT_CONTINUED))
> + goto out;
> +
> + count &= ~COUNT_CONTINUED;
> + n = SWAP_MAP_MAX + 1;
> +
> + offset = swp_offset(entry);
> + page = vmalloc_to_page(p->swap_map + offset);
> + offset &= ~PAGE_MASK;
> + VM_BUG_ON(page_private(page) != SWP_CONTINUED);
> +
> + do {
> + page = list_entry(page->lru.next, struct page, lru);
> + map = kmap_atomic(page) + offset;
> + tmp_count = *map;
> + kunmap_atomic(map);
> +
> + count += (tmp_count & ~COUNT_CONTINUED) * n;
> + n *= (SWAP_CONT_MAX + 1);
> + } while (tmp_count & COUNT_CONTINUED);
> +out:
> + spin_unlock(&p->lock);
> + return count;
> +}
> +
> +/*
> * We can write to an anon page without COW if there are no other references
> * to it. And as a side-effect, free up its swap: because the old content
> * on disk will never be read, and seeking back there to write new content
>
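To unpack the continuation walk in swp_swapcount(): swap_count() yields the
low digit (which saturates at SWAP_MAP_MAX with COUNT_CONTINUED set), and
each continuation page supplies a further base-(SWAP_CONT_MAX + 1) digit
weighted by n. A standalone model of that arithmetic, where
model_swapcount is a hypothetical name and the continuation bytes are
passed as a flat array instead of kmapped pages:

```c
/* Flag and limits as defined in mm/swapfile.c. */
#define COUNT_CONTINUED 0x80
#define SWAP_MAP_MAX    0x3e
#define SWAP_CONT_MAX   0x7f

/* Hypothetical model of swp_swapcount(): `base` is the swap_count()
 * value from the main swap map; `cont` holds one byte per continuation
 * page and is only read while COUNT_CONTINUED stays set. */
static int model_swapcount(unsigned char base, const unsigned char *cont)
{
	int count = base & ~COUNT_CONTINUED;
	int n = SWAP_MAP_MAX + 1;
	unsigned char tmp;

	if (!(base & COUNT_CONTINUED))
		return count;
	do {
		tmp = *cont++;
		count += (tmp & ~COUNT_CONTINUED) * n;
		n *= SWAP_CONT_MAX + 1;
	} while (tmp & COUNT_CONTINUED);
	return count;
}
```

For example, a base digit of 10 plus one continuation digit of 3 gives
10 + 3 * (SWAP_MAP_MAX + 1) references, matching what the loop in the
patch accumulates before dropping the lock.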

