Re: [PATCH] mm/list_lru.c: use cond_resched_lock() for nlru->lock

From: Sahitya Tummala
Date: Fri Jun 16 2017 - 10:44:18 EST


On 6/16/2017 2:35 AM, Andrew Morton wrote:

> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index 5d8dffd..1af0709 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -249,6 +249,8 @@ restart:
> >  		default:
> >  			BUG();
> >  		}
> > +		if (cond_resched_lock(&nlru->lock))
> > +			goto restart;
> >  	}
> >  	spin_unlock(&nlru->lock);
>
> This is rather worrying.
>
> a) Why are we spending so long holding that lock that this is occurring?

At the time of the crash I see that __list_lru_walk_one() shows the
number of entries isolated as 1774475, with nr_items still pending at
130748. On my system, I see that for 100000 dentries it takes around
75ms for __list_lru_walk_one() to complete. So for a total of 1900000
dentries, as in the issue scenario, it will take up to 1425ms, which
explains why the spin lockup condition was hit on the other CPU.
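
To make that estimate explicit (assuming the per-dentry cost stays
roughly constant, i.e. the walk time scales linearly with the number
of dentries):

    (1,900,000 / 100,000) * 75 ms = 19 * 75 ms = 1,425 ms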

It looks like __list_lru_walk_one() is expected to take more time as
the number of dentries present grows, and I think having that many
dentries is a valid scenario.

> b) With this patch, we're restarting the entire scan. Are there
> situations in which this loop will never terminate, or will take a
> very long time? Suppose that this process is getting rescheds
> blasted at it for some reason?

In the above scenario, I observed that the dentry entries are removed
from the lru list all the time, i.e. LRU_REMOVED is returned from the
isolate callback (dentry_lru_isolate()). I don't know of a case where we
would skip several entries in the lru list and restart several times due
to this cond_resched_lock(). The same can happen even with the existing
code if LRU_RETRY is returned often from the isolate callback.
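
For reference, below is a simplified, hypothetical sketch of the walk
loop with the proposed change applied (walk_one_sketch() mirrors the
structure of __list_lru_walk_one() as I read it, but is not the exact
upstream code; the memcg lookup and LRU_REMOVED_RETRY handling are
omitted). It illustrates that when the callback keeps returning
LRU_REMOVED the list shrinks on every pass, so a restart after
cond_resched_lock() still makes forward progress, while LRU_RETRY
already forces a restart with the existing code:

/*
 * Simplified sketch, not the exact upstream __list_lru_walk_one().
 */
static unsigned long
walk_one_sketch(struct list_lru_node *nlru, struct list_lru_one *l,
		list_lru_walk_cb isolate, void *cb_arg,
		unsigned long *nr_to_walk)
{
	struct list_head *item, *n;
	unsigned long isolated = 0;

	spin_lock(&nlru->lock);
restart:
	list_for_each_safe(item, n, &l->list) {
		enum lru_status ret;

		if (!*nr_to_walk)
			break;
		--*nr_to_walk;

		ret = isolate(item, l, &nlru->lock, cb_arg);
		switch (ret) {
		case LRU_REMOVED:	/* common case for dentries */
			isolated++;
			nlru->nr_items--;	/* list shrinks, so a restart rescans less */
			break;
		case LRU_ROTATE:
			list_move_tail(item, &l->list);
			break;
		case LRU_SKIP:
			break;
		case LRU_RETRY:
			/* callback dropped the lock: restart (existing behaviour) */
			goto restart;
		default:
			BUG();
		}
		/* proposed: drop the lock if a resched is due, then rescan from the head */
		if (cond_resched_lock(&nlru->lock))
			goto restart;
	}
	spin_unlock(&nlru->lock);
	return isolated;
}
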
> IOW this looks like a bit of a band-aid and a deeper analysis and
> understanding might be needed.

--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.