Re: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add

From: Vlastimil Babka
Date: Thu Nov 26 2020 - 10:44:08 EST


On 11/26/20 12:22 PM, Vlastimil Babka wrote:
> On 11/26/20 8:24 AM, Yu Zhao wrote:
>> On Thu, Nov 26, 2020 at 02:39:03PM +0800, Alex Shi wrote:
>>>
>>> On 2020/11/26 12:52 PM, Yu Zhao wrote:
>>>>>  */
>>>>> void __pagevec_lru_add(struct pagevec *pvec)
>>>>> {
>>>>> -        int i;
>>>>> -        struct lruvec *lruvec = NULL;
>>>>> +        int i, nr_lruvec;
>>>>>          unsigned long flags = 0;
>>>>> +        struct page *page;
>>>>> +        struct lruvecs lruvecs;
>>>>>
>>>>> -        for (i = 0; i < pagevec_count(pvec); i++) {
>>>>> -                struct page *page = pvec->pages[i];
>>>>> +        nr_lruvec = sort_page_lruvec(&lruvecs, pvec);
>>>> Simply looping pvec multiple times (15 at most) for different lruvecs
>>>> would be better because 1) it requires no extra data structures and
>>>> therefore has better cache locality (theoretically faster) 2) it only
>>>> loops once when !CONFIG_MEMCG and !CONFIG_NUMA and therefore has no
>>>> impact on Android and Chrome OS.
>>>>
>>> With multiple memcgs it does help a lot; I got a 30% gain in the readtwice
>>> case. But yes, w/o MEMCG and NUMA it's good to keep the old behavior. So
>>> would you like to propose something for this?

>> Oh, no, I'm not against your idea. I was saying it doesn't seem
>> necessary to sort -- a nested loop would just do the job given
>> pagevec is small.

> Yeah, that could work. The worst case doesn't look nice (O(n^2)), but it
> should be rather rare to have pages from many different memcgs/nodes?

However, Matthew wanted to increase the pagevec size [1], and once 15^2 becomes 63^2 (225 vs. 3969 worst-case inner-loop comparisons), it starts to be somewhat more worrying.

[1] https://lore.kernel.org/linux-mm/20201105172651.2455-1-willy@xxxxxxxxxxxxx/

Maybe with the following change? Avoids going through the nulls if we got lucky
(or have !MEMCG !NUMA).

> diff --git a/mm/swap.c b/mm/swap.c
> index cb3794e13b48..1d238edc2907 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -996,15 +996,26 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
>    */
>  void __pagevec_lru_add(struct pagevec *pvec)
>  {
> -        int i;
> +        int i, j;

int i, j, n;

>          struct lruvec *lruvec = NULL;
>          unsigned long flags = 0;
>

n = pagevec_count(pvec);

>          for (i = 0; i < pagevec_count(pvec); i++) {

for (i = 0; n; i++) {

>                  struct page *page = pvec->pages[i];
> +
> +                if (!page)
> +                        continue;
> +
>                  lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
> -                __pagevec_lru_add_fn(page, lruvec);
> +
> +                for (j = i; j < pagevec_count(pvec); j++) {
> +                        if (!pvec->pages[j] ||
> +                            page_to_nid(pvec->pages[j]) != page_to_nid(page) ||
> +                            page_memcg(pvec->pages[j]) != page_memcg(page))
> +                                continue;
> +
> +                        __pagevec_lru_add_fn(pvec->pages[j], lruvec);
> +                        pvec->pages[j] = NULL;

n--;

> +                }
>          }
>          if (lruvec)
>                  unlock_page_lruvec_irqrestore(lruvec, flags);
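
For reference, here is a minimal userspace sketch of the same batching scheme:
group pagevec entries by (node, memcg) key, take the "lock" once per group,
NULL out consumed slots, and keep a remaining-page counter so the outer loop
stops as soon as everything has been handled. All names below (fake_page,
lru_add_one, lock_acquisitions) are hypothetical stand-ins for illustration,
not the mm/swap.c API.

#include <stdio.h>

#define PAGEVEC_SIZE 15        /* current pagevec capacity; [1] proposes 63 */

/* Hypothetical stand-in for struct page: just the lruvec key. */
struct fake_page {
        int nid;        /* stands in for page_to_nid() */
        int memcg;      /* stands in for page_memcg() */
};

static int lock_acquisitions;   /* counts "relock_page_lruvec_irqsave" calls */

static void lru_add_one(struct fake_page *page)
{
        /* stands in for __pagevec_lru_add_fn() */
        (void)page;
}

static void pagevec_lru_add(struct fake_page **pages, int count)
{
        int i, j, n = count;

        for (i = 0; n; i++) {
                struct fake_page *page = pages[i];

                if (!page)
                        continue;       /* consumed by an earlier batch */

                lock_acquisitions++;    /* "take the lruvec lock" once */

                /* batch every remaining page that maps to the same lruvec */
                for (j = i; j < count; j++) {
                        if (!pages[j] || pages[j]->nid != page->nid ||
                            pages[j]->memcg != page->memcg)
                                continue;

                        lru_add_one(pages[j]);
                        pages[j] = NULL;
                        n--;
                }
        }
}

int main(void)
{
        struct fake_page pg[PAGEVEC_SIZE];
        struct fake_page *vec[PAGEVEC_SIZE];
        int i;

        /* pages alternating between two memcgs: 15 locks without batching */
        for (i = 0; i < PAGEVEC_SIZE; i++) {
                pg[i] = (struct fake_page){ .nid = 0, .memcg = i % 2 };
                vec[i] = &pg[i];
        }

        pagevec_lru_add(vec, PAGEVEC_SIZE);
        printf("lock acquisitions: %d\n", lock_acquisitions);  /* prints 2 */
        return 0;
}

With a single lruvec (the !MEMCG && !NUMA case, or simply a lucky pagevec),
the first outer iteration drains everything and n reaches zero immediately,
so the NULLed slots are never revisited; that early exit is what the counter
buys over the plain nested loop.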