Re: [RFC PATCH 0/1] pagemap: report swap location for shared pages
From: Peter Xu
Date: Wed Jul 14 2021 - 12:02:10 EST
On Wed, Jul 14, 2021 at 03:24:25PM +0000, Tiberiu Georgescu wrote:
> When a page allocated using the MAP_SHARED flag is swapped out, its pagemap
> entry is cleared. In many cases, there is no difference between swapped-out
> shared pages and newly allocated, non-dirty pages in the pagemap interface.
>
> Example pagemap-test code (Tested on Kernel Version 5.14-rc1):
>
> #define NPAGES (256)
> /* map 1MiB shared memory */
> size_t pagesize = getpagesize();
> char *p = mmap(NULL, pagesize * NPAGES, PROT_READ | PROT_WRITE,
> MAP_ANONYMOUS | MAP_SHARED, -1, 0);
> /* Dirty new pages. */
> for (size_t i = 0; i < NPAGES; i++)
> p[i * pagesize] = i;
>
> Run the above program in a small cgroup, which allows swapping:
>
> /* Initialise cgroup & run a program */
> $ echo 512K > foo/memory.limit_in_bytes
> $ echo 60 > foo/memory.swappiness
> $ cgexec -g memory:foo ./pagemap-test
>
> Check the pagemap report. This is an example of the current expected output:
>
> $ dd if=/proc/$PID/pagemap ibs=8 skip=$(($VADDR / $PAGESIZE)) count=$COUNT | hexdump -C
> 00000000 00 00 00 00 00 00 80 00 00 00 00 00 00 00 80 00 |................|
> *
> 00000710 e1 6b 06 00 00 00 80 a1 9e eb 06 00 00 00 80 a1 |.k..............|
> 00000720 6b ee 06 00 00 00 80 a1 a5 a4 05 00 00 00 80 a1 |k...............|
> 00000730 5c bf 06 00 00 00 80 a1 90 b6 06 00 00 00 80 a1 |\...............|
>
> The first pagemap entries are reported as zeroes, indicating that the pages
> have never been allocated, even though they have actually been swapped out.
> Bit 55 (PTE is Soft-Dirty) may be set on all pages of the shared VMA,
> indicating some access to the page, but nothing else is reported (frame
> location, presence in swap or otherwise).
>
> This patch addresses the behaviour by modifying pte_to_pagemap_entry() to
> make use of the XArray associated with the virtual memory area struct
> passed as an argument. The XArray records the location of virtual pages in
> the page cache, swap cache or on disk. If the page is in either of the
> caches, the original implementation still works; if not, the missing
> information is retrieved from the XArray.
>
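The lookup described above can be sketched roughly as follows. This is not the actual patch: the integration point, and the variables `pgoff`, `frame` and `flags`, are assumptions; the helpers (xa_load(), xa_is_value(), radix_to_swp_entry()) are existing kernel APIs used the same way as in mm/memcontrol.c, and the frame encoding mirrors the existing swap-PTE path in fs/proc/task_mmu.c:

```c
/* Sketch only: the PTE is none for a shmem-backed VMA, so consult the
 * page cache XArray directly. A value entry (rather than a page pointer)
 * means the page is neither in the page cache nor the swap cache, and
 * the entry itself encodes the swap location. */
void *xa_entry = xa_load(&vma->vm_file->f_mapping->i_pages, pgoff);

if (xa_is_value(xa_entry)) {
	swp_entry_t swp = radix_to_swp_entry(xa_entry);

	/* Encode swap type/offset as the existing swap-PTE path does. */
	frame = swp_type(swp) | (swp_offset(swp) << MAX_SWAPFILES_SHIFT);
	flags |= PM_SWAP;
}
```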
> The root cause of the missing functionality is that the PTE for the page
> itself is cleared when a swap out occurs on a shared page. Please take a
> look at the proposed patch. I would appreciate it if you could verify a
> couple of points:
>
> 1. Why do swappable and non-syncable shared pages have their PTEs cleared
> when they are swapped out? Why does the behaviour differ so much
> between MAP_SHARED and MAP_PRIVATE pages? What are the origins of the
> approach?
My understanding is that Linux mm treats file-backed memory differently, and
MAP_SHARED mappings are one such kind. For these memories, PTEs can be dropped
at any time because the contents can be reloaded from the page cache when
faulted in again. Anonymous private memory cannot do that, so it keeps
everything within the PTEs, including the swap entry.
>
> 2. PM_SOFT_DIRTY and PM_UFFD_WP are two flags that seem to get lost once
> the shared page is swapped out. Is there any other way to retrieve
> their value in the proposed patch, other than ensuring these flags are
> set, when necessary, in the PTE?
Dropping them is not a problem for uffd-wp because uffd-wp does not yet
support shmem. Shmem support has been posted upstream but is still under
review:
https://lore.kernel.org/lkml/20210527201927.29586-1-peterx@xxxxxxxxxx/
After that work lands they'll persist, so we won't have an issue using uffd-wp
with shmem swapping; the pagemap part is done in patch 25 of 27:
https://lore.kernel.org/lkml/20210527202340.32306-1-peterx@xxxxxxxxxx/
However I agree soft-dirty seems to be still broken with it.
(Cc Hugh and Andrea too)
Thanks,
--
Peter Xu