Re: [PATCH -v3 00/10] THP swap: Delay splitting THP during swapping out
From: Minchan Kim
Date: Fri Sep 09 2016 - 01:44:04 EST
Hi Huang,
On Wed, Sep 07, 2016 at 09:45:59AM -0700, Huang, Ying wrote:
> From: Huang Ying <ying.huang@xxxxxxxxx>
>
> This patchset is to optimize the performance of Transparent Huge Page
> (THP) swap.
>
> Hi, Andrew, could you help me to check whether the overall design is
> reasonable?
>
> Hi, Hugh, Shaohua, Minchan and Rik, could you help me to review the
> swap part of the patchset? Especially [01/10], [04/10], [05/10],
> [06/10], [07/10], [10/10].
>
> Hi, Andrea and Kirill, could you help me to review the THP part of the
> patchset? Especially [02/10], [03/10], [09/10] and [10/10].
>
> Hi, Johannes, Michal and Vladimir, I am not very confident about the
> memory cgroup part, especially [02/10] and [03/10]. Could you help me
> to review it?
>
> And for all, any comment is welcome!
>
>
> Recently, the performance of storage devices has improved so fast that
> we cannot saturate the disk bandwidth when doing page swap out, even
> on a high-end server machine, because storage performance has improved
> faster than that of the CPU. And it seems that this trend will not
> change in the near future. On the other hand, THP is becoming more and
> more popular because of increased memory sizes. So it becomes
> necessary to optimize THP swap performance.
>
> The advantages of the THP swap support include:
>
> - Batch the swap operations for the THP to reduce lock
> acquiring/releasing, including allocating/freeing the swap space,
> adding/deleting to/from the swap cache, and writing/reading the swap
> space, etc. This will help improve the performance of the THP swap.
>
> - The THP swap space read/write will be 2M sequential IO. It is
> particularly helpful for swap reads, which are usually 4k random
> IO. This will improve the performance of THP swap too.
>
> - It will help with memory fragmentation, especially when THP is
> heavily used by applications. The 2M of contiguous pages will be
> freed up after the THP is swapped out.
I just read the patchset right now and still doubt why all the changes
should be coupled so tightly with THP. Many parts (e.g., the functions
you introduced or modified to make them THP specific) could just take a
page_list and the number of pages and then handle them without any THP
awareness.
For example, if nr_pages is larger than SWAPFILE_CLUSTER, we can try to
allocate a new cluster. With that, we could allocate new clusters to
meet the requested nr_pages, or bail out and fall back to 0-order page
swapout if the allocation fails. That way, the swap layer could support
batches of multiple order-0 pages, along the lines of the sketch below.
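(Just to make the idea concrete, a rough sketch and not code from the
patchset; alloc_swap_slots_batch() and the two helpers it calls are
hypothetical names.)

	/*
	 * Hypothetical sketch: allocate swap slots for a batch of pages
	 * without knowing whether they came from a THP.  Whole clusters
	 * are tried first for big batches, then we fall back to order-0
	 * slots.  swap_alloc_cluster() and get_swap_slot_order0() are
	 * made-up helpers for illustration.
	 */
	static int alloc_swap_slots_batch(struct swap_info_struct *si,
					  swp_entry_t *slots, int nr_pages)
	{
		int nr = 0;

		/* Grab whole clusters while the remaining batch is big enough. */
		while (nr_pages - nr >= SWAPFILE_CLUSTER &&
		       swap_alloc_cluster(si, &slots[nr]))
			nr += SWAPFILE_CLUSTER;

		/* Fall back to single order-0 slots for the remainder. */
		while (nr < nr_pages) {
			slots[nr] = get_swap_slot_order0(si);
			if (!slots[nr].val)
				break;
			nr++;
		}

		/*
		 * The caller swaps out 'nr' pages; a THP caller can split
		 * and retry if nr < nr_pages, but the allocator itself
		 * never needs to know about THP.
		 */
		return nr;
	}

A THP swapout and a batch of unrelated order-0 pages would then go
through exactly the same path.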
IMO, I really want to land Tim Chen's batching swapout work first.
With Tim Chen's work, I expect we can do better refactoring for
batching swap before adding more confusion to the swap layer.
(I expect it would share several pieces of code with, or serve as the
base for, batched allocation of swap cache and swap slots.)
After that, we could enhance swap for big contiguous batching like THP,
and finally we might make it THP-aware to enhance it further.
A thing I remember you argued: you want to swap in 512 pages all at
once unconditionally. It's really worth discussing whether your design
should go that way.
I doubt it's a generally good idea because, currently, we try to swap
in the swapped-out pages of a THP with a conservative approach, but
your direction goes the opposite way.
[mm, thp: convert from optimistic swapin collapsing to conservative]
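(Roughly, the conservative policy from that work, paraphrased rather
than quoted verbatim: the khugepaged scan bails out instead of swapping
everything back in once too many PTEs under the pmd are swapped out.)

	/* Paraphrased from the khugepaged scan loop: give up on the
	 * collapse instead of swapping in everything unconditionally
	 * when the number of swapped-out PTEs exceeds max_ptes_swap. */
	if (is_swap_pte(pteval)) {
		if (++unmapped > khugepaged_max_ptes_swap) {
			result = SCAN_EXCEED_SWAP_PTE;
			goto out_unmap;
		}
		continue;
	}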
I think the general approach (i.e., less effective than a targeted
implementation for your own specific goal, but less hacky and doing a
better job for many cases) is to rely on, and improve, swap readahead.
If most of the subpages of a THP are really part of the working set,
swap readahead could work well.
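(For context, swapin_readahead() in mm/swap_state.c already reads a
window of slots around the faulting entry, and swapin_nr_pages() scales
that window with the recent readahead hit rate. A heavily abbreviated
sketch of that flow, paraphrased from memory with error handling and
the special cases elided:)

	struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
				      struct vm_area_struct *vma,
				      unsigned long addr)
	{
		unsigned long offset = swp_offset(entry);
		/* Window size adapts to how useful readahead has been. */
		unsigned long mask = swapin_nr_pages(offset) - 1;
		unsigned long start = offset & ~mask, end = offset | mask;
		unsigned long i;

		/* Start async reads for the aligned window around the fault. */
		for (i = start; i <= end; i++) {
			struct page *page = read_swap_cache_async(
					swp_entry(swp_type(entry), i),
					gfp_mask, vma, addr);
			if (page)
				put_page(page);
		}
		/* Return the page actually faulted on. */
		return read_swap_cache_async(entry, gfp_mask, vma, addr);
	}

Improving this (window sizing, deciding when a THP-sized window is
worth it) would arguably benefit every swapin path, not only the THP
one.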
Yeah, it's fairly vague feedback, so sorry if I missed something obvious.