On Tue, Dec 3, 2019 at 4:43 PM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
Currently, when truncating a shmem file, if the range covers only part of a THP
(the start or end falls in the middle of a THP), the affected subpages just get
cleared rather than freed, unless the range covers the whole THP.
Even after all of the subpages have been truncated (randomly or sequentially),
the THP may still be kept in the page cache. This might be fine for some
use cases which prefer preserving THPs.
But, when doing balloon inflation in QEMU, QEMU actually does the hole punch
or MADV_DONTNEED in base page size granularity if hugetlbfs is not used.
So, when using shmem THP as the memory backend, QEMU inflation doesn't
work as expected since it doesn't free memory. But the inflation
use case really needs the memory to get freed. Anonymous THP will not get
freed right away either, but it will be freed eventually when all subpages are
unmapped; shmem THP, however, would just stay in the page cache.
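
For context, the host-side operation QEMU issues per 4 KiB balloon page is
roughly the following. This is a hedged sketch, not the QEMU source:
discard_balloon_page() is a hypothetical helper, and whether fallocate() or
madvise() is used depends on how the guest RAM is backed.

#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate(), FALLOC_FL_PUNCH_HOLE */
#include <sys/mman.h>   /* madvise(), MADV_DONTNEED */
#include <sys/types.h>

#define BALLOON_PAGE_SIZE 4096  /* virtio-balloon works in 4 KiB units */

/*
 * Hypothetical helper: give one inflated balloon page back to the host.
 * For shmem/file-backed guest RAM this punches a 4 KiB hole in the
 * backing file; for anonymous guest RAM it drops the page mapping.
 */
static int discard_balloon_page(int ram_fd, void *host_addr, off_t file_offset)
{
    if (ram_fd >= 0)
        return fallocate(ram_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         file_offset, BALLOON_PAGE_SIZE);
    return madvise(host_addr, BALLOON_PAGE_SIZE, MADV_DONTNEED);
}
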
Split the THP right away when doing a partial hole punch, and if the split
fails, just clear the pages so that reads of the hole-punched area return
zero.
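
A minimal sketch of that approach, assuming the truncation path already holds
the locked head page of the THP; the helper name and its arguments are
illustrative, not the actual patch:

#include <linux/mm.h>
#include <linux/huge_mm.h>   /* split_huge_page() */
#include <linux/highmem.h>   /* clear_highpage() */

/*
 * Illustrative only: handle a hole punch that covers subpages
 * @first..@last of a shmem THP whose locked head page is @head.
 * Try to split the THP so the covered subpages can really be freed;
 * if the split fails, fall back to clearing them so reads of the
 * hole-punched range return zero.
 */
static void shmem_punch_partial_thp(struct page *head,
				    unsigned int first, unsigned int last)
{
	unsigned int i;

	if (split_huge_page(head) == 0)
		return;	/* split succeeded; normal truncation frees the pages */

	for (i = first; i <= last; i++)
		clear_highpage(head + i);
}
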
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
One question I would have is if this is really the desired behavior we
are looking for?
By proactively splitting the THP you are likely going to see a
performance regression with the virtio-balloon driver enabled in QEMU.
I would suspect the response to that would be to update the QEMU code
to identify the page size of the shared memory ramblock. At that
point I suspect it would start behaving the same way it currently
handles anonymous memory, and the work done here would essentially
have been wasted other than triggering the desire to resolve this in
QEMU to avoid a performance regression.
The code for inflating the balloon in virtio-balloon in QEMU can be
found here:
https://github.com/qemu/qemu/blob/master/hw/virtio/virtio-balloon.c#L66
If there is a way for us to just populate the value obtained via
qemu_ram_pagesize with the THP page size instead of leaving it at 4K,
which I assume is what it currently reports since you indicated that it
is just freeing the base page size, then we could address the same
issue and likely get the desired outcome of freeing the entire THP
page when it is no longer used.
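
To illustrate, here is a rough sketch of how the inflate path could key its
discard granularity off the ramblock page size. It is simplified from the
linked code: the bookkeeping for partially-inflated host pages and all error
handling are omitted, and reporting the THP size through qemu_ram_pagesize()
is exactly the assumption under discussion.

/*
 * Sketch only, not the actual QEMU code: discard immediately for
 * 4 KiB-backed RAM, but for huge-page-backed RAM wait until every
 * subpage of an rb_page_size chunk has been inflated by the guest.
 */
static void balloon_inflate_page_sketch(void *addr)
{
    RAMBlock *rb;
    ram_addr_t rb_offset;
    size_t rb_page_size;

    rb = qemu_ram_block_from_host(addr, false, &rb_offset);
    rb_page_size = qemu_ram_pagesize(rb);

    if (rb_page_size == BALLOON_PAGE_SIZE) {
        /* Base-page-backed RAM: discard each balloon page as it arrives. */
        ram_block_discard_range(rb, rb_offset, rb_page_size);
        return;
    }

    /*
     * Huge-page-backed RAM (which would include shmem THP if
     * qemu_ram_pagesize() reported the THP size): track which subpages
     * of this host page the guest has handed back, and only call
     * ram_block_discard_range() for the whole rb_page_size chunk once
     * all of them have been inflated.
     */
}
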
- Alex