Re: [PATCH] mm: Cleanup - Reorganize the shrink_page_list code into smaller functions

From: Tim Chen
Date: Wed Jun 01 2016 - 14:23:59 EST


On Wed, 2016-06-01 at 16:12 +0900, Minchan Kim wrote:
>
> Hi Tim,
>
> To me, this reorganization is too limited and not good for me,
> frankly speaking. It works for only your goal which allocate batch
> swap slot, I guess. :)
>
> My goal is to make them work with batch page_check_references,
> batch try_to_unmap and batch __remove_mapping where we can avoid frequent
> mapping->lock(e.g., anon_vma or i_mmap_lock with hoping such batch locking
> help system performance) if batch pages has same inode or anon.

This is also my goal: to group together pages that are under the same
mapping, or are anonymous, so that we can reduce i_mmap_lock acquisitions.
One piece of logic that's not yet implemented in your patch is grouping
similar pages together so we only need a single i_mmap_lock acquisition
per group. Doing this efficiently is non-trivial.
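
To illustrate, here is a rough, untested sketch (my own, not from either
patch) of what such a grouping pass might look like: pages are bucketed by
their address_space into a small fixed-size array so that a later pass could
take each mapping's lock once per bucket. The struct and function names are
made up, and a real implementation would need to handle bucket overflow and
anonymous pages separately:

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/mm.h>

#define NR_MAPPING_BUCKETS 16

struct mapping_bucket {
        struct address_space *mapping;
        struct list_head pages;
};

/* First pass: collect pages sharing an address_space onto one sub-list. */
static void group_pages_by_mapping(struct list_head *page_list,
                                   struct mapping_bucket *buckets)
{
        struct page *page, *next;
        int i;

        for (i = 0; i < NR_MAPPING_BUCKETS; i++) {
                buckets[i].mapping = NULL;
                INIT_LIST_HEAD(&buckets[i].pages);
        }

        list_for_each_entry_safe(page, next, page_list, lru) {
                struct address_space *mapping = page_mapping(page);

                if (!mapping)
                        continue;       /* anon pages handled separately */

                for (i = 0; i < NR_MAPPING_BUCKETS; i++) {
                        if (buckets[i].mapping == mapping ||
                            !buckets[i].mapping) {
                                buckets[i].mapping = mapping;
                                list_move_tail(&page->lru, &buckets[i].pages);
                                break;
                        }
                }
                /* Pages that fit no bucket simply stay on page_list. */
        }
}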

I punted on the problem somewhat in my patch and elected to defer processing
of the anonymous pages to the end, so they are naturally grouped without
having to traverse the page_list more than once. So the anonymous pages are
batched, but the file-mapped pages are not grouped.
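
The shape of that deferral is roughly the following (a simplified sketch,
not the code from my patch): anonymous pages get moved onto a private list
during the main walk, so by the time that list is processed at the end they
are already adjacent, without a second traversal of page_list.

#include <linux/list.h>
#include <linux/mm.h>

/* Move anonymous pages to a private list; file-backed pages stay put. */
static void defer_anon_pages(struct list_head *page_list,
                             struct list_head *anon_list)
{
        struct page *page, *next;

        list_for_each_entry_safe(page, next, page_list, lru) {
                if (PageAnon(page))
                        list_move_tail(&page->lru, anon_list);
        }

        /* anon_list can now be processed as one batch at the end. */
}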

In your implementation, you may need to traverse the page_list in two
passes: the first to categorize and group the pages, and the second to do
the actual processing. Then lock batching can be implemented for the pages.
Otherwise the locking in your patch is still done page by page, and can only
be batched if the next page on page_list happens to have the same mapping.
Your idea of using spl_batch_pages is pretty neat. It may need some
enhancement so that it is known whether some locks are already held, for
lock batching purposes.
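
As a sketch of what that second pass could buy us once file pages sharing a
mapping sit next to each other on the list (again untested, and ignoring
anonymous pages, whose anon_vma lock would need the same treatment), the
i_mmap lock only changes hands when the mapping changes, and the "locked"
pointer is the kind of "lock already held" state the batching structure
would have to carry. process_one_page() is a made-up stand-in for the usual
per-page work (page_check_references, try_to_unmap, __remove_mapping).

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/mm.h>

static void process_one_page(struct page *page);  /* hypothetical per-page work */

static void shrink_grouped_file_pages(struct list_head *page_list)
{
        struct address_space *locked = NULL;
        struct page *page, *next;

        list_for_each_entry_safe(page, next, page_list, lru) {
                struct address_space *mapping = page_mapping(page);

                /* Re-take the i_mmap lock only when the mapping changes. */
                if (mapping != locked) {
                        if (locked)
                                i_mmap_unlock_read(locked);
                        if (mapping)
                                i_mmap_lock_read(mapping);
                        locked = mapping;
                }

                process_one_page(page);
        }

        if (locked)
                i_mmap_unlock_read(locked);
}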


Thanks.

Tim