Re: [PATCH v14 15/20] mm/swap: serialize memcg changes during pagevec_lru_move_fn

From: Alex Shi
Date: Sat Jul 04 2020 - 07:35:31 EST




On 2020/7/3 5:13 PM, Konstantin Khlebnikov wrote:
>> @@ -976,7 +983,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
>>   */
>>  void __pagevec_lru_add(struct pagevec *pvec)
>>  {
>> -	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
>> +	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn, true);
>>  }
> It seems better to open-code the version in lru_add than to add a bool
> argument which is true for just one user.

Right, I will rewrite this part as you suggested. Thanks!
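For my own reference, the open-coded version could look something like the
rough sketch below, assuming the relock_page_lruvec_irqsave()/
unlock_page_lruvec_irqrestore() helpers from earlier in this series
(untested, just a draft):

/* sketch: open-coded add path, so pagevec_lru_move_fn needs no bool flag */
void __pagevec_lru_add(struct pagevec *pvec)
{
	int i;
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];

		/* take, or switch to, the lru_lock of this page's lruvec */
		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
		__pagevec_lru_add_fn(page, lruvec);
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);
	release_pages(pvec->pages, pvec->nr);
	pagevec_reinit(pvec);
}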

>
> Also, with this new lru protection logic, lru_add could be optimized:
> it could prepare a list of pages and, under lru_lock, do only a list
> splice and a counter bump.
> Since PageLRU isn't set yet, nobody can touch these pages on the lru.
> After that, lru_add could iterate the pages from first to last without
> lru_lock to set PageLRU and drop the reference.
>
> So lru_add will do O(1) work under lru_lock regardless of the
> count of pages it added.
>
> Actually, the per-cpu vector for adding could be replaced with per-cpu
> lists and/or a per-lruvec atomic slist.
> Thus incoming pages would already be in a list structure rather than a page vector.
> This allows accumulating more pages and offloading the adding to kswapd or
> direct reclaim.
>

That's a great idea! I guess the new struct we need would look something like this?
I'd like to try it. :)


diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 081d934eda64..d62778c8c184 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -20,7 +20,7 @@
 struct pagevec {
 	unsigned char nr;
 	bool percpu_pvec_drained;
-	struct page *pages[PAGEVEC_SIZE];
+	struct list_head veclist;
 };
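Then __pagevec_lru_add could splice the whole veclist under lru_lock and set
PageLRU afterwards without the lock, as you described. A very rough sketch of
that idea, using the lock_page_lruvec_irqsave() helper from this series, and
under big assumptions: every page in one vector sits on the same lruvec/zone
and goes to the same lru list, with LRU_INACTIVE_ANON as a placeholder here
(untested):

void __pagevec_lru_add(struct pagevec *pvec)
{
	struct page *page, *first, *last;
	struct lruvec *lruvec;
	unsigned long flags;
	int nr = 0;

	if (list_empty(&pvec->veclist))
		return;

	first = list_first_entry(&pvec->veclist, struct page, lru);
	last = list_last_entry(&pvec->veclist, struct page, lru);
	list_for_each_entry(page, &pvec->veclist, lru)
		nr++;

	/* O(1) work under lru_lock: splice the list and bump the counter */
	lruvec = lock_page_lruvec_irqsave(first, &flags);
	list_splice_tail_init(&pvec->veclist, &lruvec->lists[LRU_INACTIVE_ANON]);
	update_lru_size(lruvec, LRU_INACTIVE_ANON, page_zonenum(first), nr);
	unlock_page_lruvec_irqrestore(lruvec, flags);

	/*
	 * PageLRU is still clear, so nobody can isolate these pages from
	 * the lru yet; set the flag and drop our reference without the
	 * lock. Fetch the next page before put_page() so we never touch
	 * a page we no longer hold a reference on.
	 */
	page = first;
	for (;;) {
		struct page *next = (page == last) ? NULL :
					list_next_entry(page, lru);

		SetPageLRU(page);
		put_page(page);
		if (!next)
			break;
		page = next;
	}
}

The single-lruvec assumption obviously doesn't hold in general, so the real
thing would have to sort the list or re-lock per lruvec, but the lock hold
time could still stay O(1) per lruvec.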