On Tue, 21 Feb 2012, Konstantin Khlebnikov wrote:
On lumpy/compaction isolate you do:
if (!PageLRU(page))
        continue
__isolate_lru_page()
page_relock_rcu_vec()
        rcu_read_lock()
        rcu_dereference()...
        spin_lock()...
        rcu_read_unlock()
You protect page_relock_rcu_vec() by switching the pointers back to root.
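In C, I read that sequence roughly as below; the __rcu slot, the lock
name and the recheck-retry are my guesses at your scheme, not your
actual code:

#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/* Minimal stand-in: the real lruvec carries the lru lists too. */
struct lruvec {
        spinlock_t      lru_lock;
};

/*
 * Take the lru lock of whatever lruvec the page's __rcu slot points
 * to.  The old lruvec is freed only after a grace period (pointers
 * switched back to root first), so the dereferenced lruvec stays
 * valid across the spin_lock(); the recheck catches a switch that
 * happened before we acquired the lock.
 */
static struct lruvec *page_relock_rcu_vec_sketch(struct lruvec __rcu **slot)
{
        struct lruvec *lruvec;

        rcu_read_lock();
again:
        lruvec = rcu_dereference(*slot);
        spin_lock(&lruvec->lru_lock);
        if (unlikely(lruvec != rcu_access_pointer(*slot))) {
                spin_unlock(&lruvec->lru_lock);
                goto again;
        }
        rcu_read_unlock();
        return lruvec;          /* lru_lock held on return */
}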
I do:
catch_page_lru()
        rcu_read_lock()
        if (!PageLRU(page))
                return false
        rcu_dereference()...
        spin_lock()...
        rcu_read_unlock()
        if (PageLRU())
                return true

if true
        __isolate_lru_page()
I protect my catch_page_lru() with the PageLRU() checks, done together
with the locking inside a single rcu interval.
Thus my code is better, because it does not require switching the
pointers back to the root memcg.
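Spelled out in C as I read the scheme, with the "..." parts filled by
guesses (page_lruvec_slot() is a placeholder helper, the failure path
after the second PageLRU() check is elided above, and I return the
locked lruvec instead of a bool for the caller's convenience):

#include <linux/mm.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/* struct lruvec as in the sketch above; this helper is hypothetical:
 * it returns the page's __rcu lruvec pointer, however it is stored. */
struct lruvec __rcu **page_lruvec_slot(struct page *page);

/*
 * Returns the locked lruvec, or NULL if the page is not (or no
 * longer) on the lru.  The idea: PageLRU is cleared under the lru
 * lock before a page can be moved, so a PageLRU still set once we
 * hold the lock is taken to mean the lruvec sampled under rcu is
 * still the page's lruvec.
 */
static struct lruvec *catch_page_lru_sketch(struct page *page)
{
        struct lruvec *lruvec;

        rcu_read_lock();
        if (!PageLRU(page)) {
                rcu_read_unlock();
                return NULL;
        }
        lruvec = rcu_dereference(*page_lruvec_slot(page));
        spin_lock(&lruvec->lru_lock);
        rcu_read_unlock();
        if (PageLRU(page))
                return lruvec;          /* lru_lock held */
        spin_unlock(&lruvec->lru_lock);
        return NULL;
}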
That sounds much better, yes - if it does work reliably.
I'll have to come back to think about your locking later too;
or maybe that's exactly where I need to look, when investigating
the mm_inline.h:41 BUG.
But at first sight, I have to say I'm very suspicious: I've never found
PageLRU a good enough test of whether we need such a lock, because of
races with those pages sitting in a per-cpu pagevec, about to be put on
the lru.
But maybe once I look closer, I'll find that's handled by your changes
away from pagevec; though I'd have thought the same issue exists,
independent of whether the pending pages are in a vector or a list.
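To spell out the interleaving I have in mind, with lru_cache_add()'s
usual per-cpu batching:

#include <linux/mm.h>

/*
 * Sketch only.  CPU1 has done lru_cache_add(page): the page sits in
 * a per-cpu pagevec with PageLRU still clear, until the drain does
 * SetPageLRU() and links it in, under the lru lock.  So an unlocked
 * test on CPU0 tells us nothing stable:
 */
static bool page_on_lru_sketch(struct page *page)
{
        /*
         * Can flip to true the instant after we look, when CPU1
         * drains its pagevec; can flip to false when someone else
         * isolates the page.  Nothing stabilizes the answer before
         * we act on it.
         */
        return PageLRU(page);
}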
Hugh
Meanwhile, after seeing your patches, I realized that this rcu
protection is required only for the lock-by-pfn case in lumpy/compaction
isolation.
Thus my locking can be simplified and optimized.
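That is, only walkers of this kind, which pick up pages by pfn without
holding a reference, need the rcu dance; roughly (a simplified
compaction-style scan, reusing catch_page_lru_sketch() from above):

#include <linux/mm.h>
#include <linux/mmzone.h>

static void isolate_by_pfn_sketch(unsigned long pfn, unsigned long end_pfn)
{
        struct page *page;
        struct lruvec *lruvec;

        for (; pfn < end_pfn; pfn++) {
                if (!pfn_valid(pfn))
                        continue;
                page = pfn_to_page(pfn);
                /*
                 * No reference is held on the page: it can be freed
                 * and recharged under us, so the lruvec has to be
                 * caught under rcu, as in catch_page_lru_sketch().
                 */
                lruvec = catch_page_lru_sketch(page);
                if (!lruvec)
                        continue;
                /* ... __isolate_lru_page() etc. would go here ... */
                spin_unlock(&lruvec->lru_lock);
        }
}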