Re: [RFC PATCH 5/7] mm: Make /proc/pid/smaps use the new generic pagewalk API

From: Oscar Salvador

Date: Thu Apr 16 2026 - 04:58:28 EST


On Mon, Apr 13, 2026 at 04:31:37PM +0200, Oscar Salvador wrote:
> On Mon, Apr 13, 2026 at 07:18:00AM -0700, Usama Arif wrote:
>
> > The old smap_gather_stats had special handling for shmem swap
> > accounting. For shared or readonly shmem mappings it used
> > shmem_swap_usage() to efficiently account swapped-out shmem pages.
> > For private writable shmem mappings it used smaps_pte_hole() via
> > smaps_shmem_walk_ops to call shmem_partial_swap_usage() for each
> > PTE hole.
> >
> > The new code removes all of this. The pt_range_walk API does not
> > have pte_hole callbacks, so shmem pages that are swapped out (and
> > thus have no PTE) would not be counted in the Swap field of smaps?
>
> Yes, sorry, that is one of those parts which is incomplete.
> I am already working on that offline, but did not have the time to
> prepare it for this one.

So, I implemented it; a quick test shows that it works:

--- fs/proc/task_mmu.c 2026-04-16 10:54:54.440974482 +0200
+++ task_mmu.c 2026-04-16 10:53:36.465147406 +0200
@@ -1105,13 +1105,38 @@
 	enum pt_range_walk_type type;
 	pt_type_flags_t flags = PT_TYPE_ALL;
 
-	if (!start)
-		start = vma->vm_start;
+	if (start >= vma->vm_end)
+		return;
 
 	flags &= ~(PT_TYPE_NONE|PT_TYPE_PFN);
 
+	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
+		/*
+		 * For shared or readonly shmem mappings we know that all
+		 * swapped out pages belong to the shmem object, and we can
+		 * obtain the swap value much more efficiently. For private
+		 * writable mappings, we might have COW pages that are
+		 * not affected by the parent swapped out pages of the shmem
+		 * object, so we have to distinguish them during the page walk,
+		 * unless we know that the shmem object (or the part mapped by
+		 * our VMA) has no swapped out pages at all.
+		 */
+		unsigned long shmem_swapped = shmem_swap_usage(vma);
+
+		if (!start && (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
+		    !(vma->vm_flags & VM_WRITE))) {
+			mss->swap += shmem_swapped;
+		} else {
+			flags |= PT_TYPE_NONE;
+		}
+	}
+
+	if (!start)
+		start = vma->vm_start;
+
 	type = pt_range_walk_start(&ptw, vma, start, vma->vm_end, flags);
 	while (type != PTW_DONE) {
+		unsigned long curr_addr = ptw.curr_addr;
 		bool locked = !!(vma->vm_flags & VM_LOCKED);
 		bool compound = false, account = false;
 		unsigned long swap_size;
@@ -1168,6 +1193,19 @@
 				mss->swap_pss += (u64)swap_size << PSS_SHIFT;
 			}
 			break;
+		case PTW_NONE:
+#ifdef CONFIG_SHMEM
+		{
+			unsigned long addr = ptw.curr_addr;
+			unsigned long end = ptw.next_addr;
+
+			if (ptw.level == PTW_PMD_LEVEL || ptw.level == PTW_PTE_LEVEL)
+				mss->swap += shmem_partial_swap_usage(vma->vm_file->f_mapping,
+						linear_page_index(vma, addr),
+						linear_page_index(vma, end));
+		}
+#endif
+			break;
 		default:
 			/* Ooops */
 			break;



--
Oscar Salvador
SUSE Labs