Re: [v2 PATCH] mm: shmem: allow split THP when truncating THP partially

From: Yang Shi
Date: Wed Feb 26 2020 - 13:01:06 EST

On 2/26/20 9:45 AM, David Hildenbrand wrote:
On 26.02.20 18:31, Yang Shi wrote:

On 2/21/20 4:24 PM, Alexander Duyck wrote:
On Fri, Feb 21, 2020 at 10:24 AM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:

On 2/20/20 10:16 AM, Alexander Duyck wrote:
On Tue, Dec 3, 2019 at 4:43 PM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
Currently, when truncating a shmem file, if the range covers only part of a THP
(its start or end falls in the middle of the THP), the affected subpages just
get cleared rather than freed; the THP is only freed when the range covers it
entirely. Even once all the subpages have been truncated (randomly or
sequentially), the THP may still be kept in the page cache. This might be fine
for some use cases that prefer preserving THPs.

But when doing balloon inflation, QEMU actually punches holes or calls
MADV_DONTNEED at base-page-size granularity if hugetlbfs is not used.
So when shmem THP is used as the memory backend, QEMU inflation doesn't
work as expected, since it doesn't free memory. But the inflation
use case really needs the memory freed. An anonymous THP is not freed
right away either, but it is freed eventually once all of its subpages are
unmapped; a shmem THP would stay in the page cache indefinitely.

Split the THP right away when doing a partial hole punch, and if the split
fails just clear the subpages so that reads from the hole-punched area
return zero.
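
A rough sketch of the intended truncate-path behavior (illustrative only;
shmem_punch_compound_sketch() is a made-up name and the real patch differs
in detail):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/huge_mm.h>
#include <linux/pagemap.h>

/*
 * Sketch: when a hole punch lands inside a THP, try to split it so the
 * covered subpages can really be freed; if the split fails (e.g. extra
 * references held on the page), fall back to clearing the subpage so
 * reads from the punched range return zero.
 */
static void shmem_punch_compound_sketch(struct page *page)
{
	if (PageTransCompound(page)) {
		lock_page(page);
		if (!split_huge_page(page)) {
			/* Split succeeded: subpages can now be truncated. */
			unlock_page(page);
			return;
		}
		/* Split failed, e.g. because of extra page references. */
		unlock_page(page);
	}
	/* Not split: just clear this subpage. */
	clear_highpage(page);
}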

Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
One question I would have is whether this is really the behavior we are
looking for.

By proactively splitting the THP you are likely to see a performance
regression with the virtio-balloon driver enabled in QEMU. I suspect the
response to that would be to update the QEMU code to identify the page
size of the shared-memory ramblock. At that point I suspect it would
start behaving the same way it currently handles anonymous memory, and
the work done here would essentially have been wasted, other than
prompting QEMU to resolve this in order to avoid a performance
regression.

The code for inflating the balloon in QEMU's virtio-balloon can be
found here:
https://github.com/qemu/qemu/blob/master/hw/virtio/virtio-balloon.c#L66

If there is a way for us to just populate the value obtained via
qemu_ram_pagesize with the THP page size instead of leaving it at 4K
(which is what I assume it is, since you indicated it is just freeing at
the base page size), then we could address the same issue and likely get
the desired outcome of freeing the entire THP page when it is no longer
used.
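
For illustration, a minimal self-contained sketch (hypothetical code, not
QEMU's; partial_page and balloon_subpage are invented names) of that
strategy for huge-page-backed memory: accumulate 4K balloon requests and
only discard the backing huge page once every subpage in it has been
ballooned.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define BASE_PAGE_SIZE	4096UL
#define HUGE_PAGE_SIZE	(2UL * 1024 * 1024)
#define SUBPAGES	(HUGE_PAGE_SIZE / BASE_PAGE_SIZE)

struct partial_page {
	uintptr_t base;			/* huge-page-aligned address */
	bool ballooned[SUBPAGES];	/* which 4K subpages arrived */
	unsigned long count;
};

/* Returns true once the whole huge page has been discarded. Tracks only
 * one huge page at a time, which is enough for sequential inflation. */
static bool balloon_subpage(struct partial_page *pp, uintptr_t addr)
{
	uintptr_t base = addr & ~(HUGE_PAGE_SIZE - 1);
	size_t idx = (addr - base) / BASE_PAGE_SIZE;

	if (pp->base != base) {		/* new huge page: reset tracking */
		pp->base = base;
		pp->count = 0;
		memset(pp->ballooned, 0, sizeof(pp->ballooned));
	}
	if (!pp->ballooned[idx]) {
		pp->ballooned[idx] = true;
		pp->count++;
	}
	if (pp->count == SUBPAGES) {	/* fully covered: free it all */
		madvise((void *)base, HUGE_PAGE_SIZE, MADV_DONTNEED);
		return true;
	}
	return false;
}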
If QEMU could punch holes (this is how QEMU frees file-backed memory) in
THP-sized units, then with or without this patch the THP won't get split,
since the whole THP gets truncated. But if QEMU has to free memory at
sub-THP granularity for whatever reason (for example, 1MB out of every
2MB section), then we have to split the THP, otherwise no memory is
actually freed with the current code. It is not about performance, it is
about really giving memory back to the host.
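
As a concrete (hypothetical) userspace demo of that 1MB-per-2MB pattern,
punching sub-THP holes in a shmem-backed memfd; with the current code
these punches only clear pages unless the THP gets split:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MB (1024UL * 1024UL)

int main(void)
{
	/* A memfd lives on shmem, so it can be backed by shmem THP. */
	int fd = memfd_create("balloon-demo", 0);
	size_t size = 64 * MB;

	if (fd < 0 || ftruncate(fd, size))
		return 1;

	/* Punch the first 1MB out of every 2MB section: each hole covers
	 * only half a THP, so without the patch no memory is freed. */
	for (off_t off = 0; off < (off_t)size; off += 2 * MB)
		if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			      off, 1 * MB))
			perror("fallocate");

	close(fd);
	return 0;
}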
I get that, but at the same time I am not sure everyone will be
happy with the trade-off. That is my concern.

You may want to change the patch description above if that is the case.
As written, it makes it sound as if the issue is that QEMU is using hole
punch or MADV_DONTNEED at the wrong granularity. Based on your comment
here, it sounds like you want the ability to break up the larger THP as
soon as you want to push out a single 4K page from it.
Yes, you are right. The commit log may be confusing. What I wanted to
convey is that QEMU has no idea whether THP is used or not, so it treats
memory at base page size unless hugetlbfs is used, since in that case
QEMU is aware huge pages are in use.
This may sound irrelevant to the problem, so I will just remove that part.

I am not sure the description of anonymous THP behavior with respect to
QEMU makes sense either. The description made it sound as if it somehow
used the same process as huge pages. That isn't the case, right? My
understanding is that an anonymous THP gets broken into 4K subpages
which are then freed individually. That should leave you with the same
performance regression I brought up earlier.
No, an anonymous THP won't get split immediately, and that memory also
won't get freed immediately, if QEMU does MADV_DONTNEED on a sub-THP range
(for example, a 1MB range within a THP). The THP gets freed when:
1. The host comes under memory pressure: the THP is split and the unmapped
pages are freed.
2. The other subpages in the same THP are MADV_DONTNEED'ed as well (so
eventually the whole THP is unmapped).

The difference between shmem and anonymous pages is that a shmem THP will
not get freed unless the hole punch covers the whole THP, while an
anonymous THP gets freed sooner or later.
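
A hypothetical demo of the anonymous case (whether the mapping actually
gets THP backing depends on system configuration): discarding 1MB of every
2MB does not split the THPs immediately; they are freed lazily, or once
the remaining subpages are discarded too.

#define _GNU_SOURCE
#include <sys/mman.h>

#define MB (1024UL * 1024UL)

int main(void)
{
	size_t size = 64 * MB;
	char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	madvise(buf, size, MADV_HUGEPAGE);	/* ask for THP backing */
	for (size_t i = 0; i < size; i++)	/* fault the range in */
		buf[i] = 1;

	/* Discard 1MB out of every 2MB. The backing THPs are not split
	 * right away; they are reclaimed under memory pressure, or once
	 * the remaining subpages are MADV_DONTNEED'ed as well. */
	for (size_t off = 0; off < size; off += 2 * MB)
		madvise(buf + off, 1 * MB, MADV_DONTNEED);

	return 0;
}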

As far as I understood Hugh, the "page size" we'll see in QEMU via
fstatfs() is 4k, not 2MB. IMHO, that's the block size of the "device",
and breaking up the THP is the right thing to do to obey the documented
semantics of FALLOC_FL_PUNCH_HOLE.

This is what the patch attempts to accomplish.
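
For reference, this is easy to check from userspace (a minimal sketch
using a shmem-backed memfd): f_bsize on tmpfs reports the base page size
even when THP is in use, so a FALLOC_FL_PUNCH_HOLE caller can reasonably
expect block-size granularity.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/vfs.h>

int main(void)
{
	int fd = memfd_create("blocksize-demo", 0);	/* lives on shmem */
	struct statfs sb;

	if (fd < 0 || fstatfs(fd, &sb))
		return 1;
	/* Typically prints 4096 on tmpfs, even with shmem THP enabled. */
	printf("f_bsize = %ld\n", (long)sb.f_bsize);
	return 0;
}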


IMHO THP is called "transparent" because it shouldn't have any such
visible side effects.

AFAICT, the lazy split is due to locking issues in the partial unmap paths. Please refer to the "Partial unmap and deferred_split_huge_page()" section in Documentation/vm/transhuge.rst.


As always, anybody correct me if I am wrong here.