[PATCH 10/11] list_lru: don't need node lock in list_lru_count_node

From: Dave Chinner
Date: Wed Jul 31 2013 - 00:17:51 EST

From: Dave Chinner <dchinner@xxxxxxxxxx>

The overall count of objects on a node might be accurate, but the
moment it is returned to the caller it is no longer perfectly
accurate. Hence we don't really need to hold the node lock to
protect the read of the count. The count is a long, so it can be
read atomically on all platforms and the lock is not needed there,
either. And the cost of the lock is not trivial, as it is showing
up in profiles on 16-way lookup workloads like so:

- 15.44% [kernel] [k] __ticket_spin_trylock
   - 46.59% _raw_spin_lock
      + 69.40% list_lru_add
        17.65% list_lru_del
         5.70% list_lru_count_node

IOWs, while the LRU locking scales, it is still costly. The locking
doesn't provide any real advantage for counting, so just kill the
locking in list_lru_count_node().

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 mm/list_lru.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 7246791..9aadb6c 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -51,15 +51,9 @@ EXPORT_SYMBOL_GPL(list_lru_del);
 unsigned long
 list_lru_count_node(struct list_lru *lru, int nid)
 {
-	unsigned long count = 0;
 	struct list_lru_node *nlru = &lru->node[nid];
 
-	spin_lock(&nlru->lock);
 	WARN_ON_ONCE(nlru->nr_items < 0);
-	count += nlru->nr_items;
-	spin_unlock(&nlru->lock);
-
-	return count;
+	return nlru->nr_items;
 }
 EXPORT_SYMBOL_GPL(list_lru_count_node);
