Re: [patch 8/8] mm: make per-memcg lru lists exclusive
From: Hiroyuki Kamezawa
Date: Thu Jun 02 2011 - 11:54:46 EST
2011/6/2 Johannes Weiner <hannes@xxxxxxxxxxx>:
> On Thu, Jun 02, 2011 at 10:16:59PM +0900, Hiroyuki Kamezawa wrote:
>> 2011/6/1 Johannes Weiner <hannes@xxxxxxxxxxx>:
>> > All lru list walkers have been converted to operate on per-memcg
>> > lists, the global per-zone lists are no longer required.
>> > This patch makes the per-memcg lists exclusive and removes the global
>> > lists from memcg-enabled kernels.
>> > The per-memcg lists now string up page descriptors directly, which
>> > unifies/simplifies the list isolation code of page reclaim as well as
>> > it saves a full double-linked list head for each page in the system.
>> > At the core of this change is the introduction of the lruvec
>> > structure, an array of all lru list heads. It exists for each zone
>> > globally, and for each zone per memcg. All lru list operations are
>> > now done in generic code against lruvecs, with the memcg lru list
>> > primitives only doing accounting and returning the proper lruvec for
>> > the currently scanned memcg on isolation, or for the respective page
>> > on putback.
>> > Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
>> could you divide this into
>> - introduce lruvec
>> - don't record section? information into pc->flags, because we see
>> "page" on the memcg LRU and there is no need to get the page from "pc"
>> - remove pc->lru completely
> Yes, that makes sense. It shall be fixed in the next version.
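[Editorial note: the lruvec described in the quoted patch can be sketched roughly as below. This is a standalone userspace model for illustration, not the kernel code itself; the list helpers are minimal stand-ins for the kernel's list.h, and the enum values mirror the kernel's lru_list names.]

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly-linked list, in the style of the kernel's list.h */
struct list_head {
	struct list_head *prev, *next;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->prev = h->next = h;
}

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

/* The five LRU lists, as in the kernel's enum lru_list */
enum lru_list {
	LRU_INACTIVE_ANON,
	LRU_ACTIVE_ANON,
	LRU_INACTIVE_FILE,
	LRU_ACTIVE_FILE,
	LRU_UNEVICTABLE,
	NR_LRU_LISTS,
};

/*
 * The core of the patch: an array of all LRU list heads.  One of
 * these exists per zone globally, and per zone per memcg.
 */
struct lruvec {
	struct list_head lists[NR_LRU_LISTS];
};

/*
 * A toy page: with exclusive per-memcg lists, page->lru itself is
 * strung onto the lruvec; no separate list head in page_cgroup.
 */
struct page {
	struct list_head lru;
};

static void lruvec_init(struct lruvec *lruvec)
{
	for (int i = 0; i < NR_LRU_LISTS; i++)
		INIT_LIST_HEAD(&lruvec->lists[i]);
}
```

With this layout, generic reclaim code operates on a lruvec regardless of whether it came from the zone or from a memcg, which is the unification the patch description refers to.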
BTW, IIUC, transparent hugepage has code that links a page to another
page's page->lru, and Minchan's recent work does the same kind of trick.
But linking a page to another page's page->lru may put it onto the wrong
memcg's list, because the two pages may belong to different cgroups.
Could you check whether there are more places that link a page->lru to a
nearby page's page->lru? I'm not sure there are others... but we need to
be careful.
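[Editorial note: the hazard raised here can be illustrated with the following hedged sketch. It is a standalone model, not kernel code: the `memcg` field on the toy page and the `misplaced()` helper are hypothetical bookkeeping added for the demonstration. The point is that if the per-memcg lists are exclusive and a page is linked onto a neighbour page's page->lru, it ends up on whatever memcg list that neighbour is on.]

```c
#include <assert.h>

/* Minimal doubly-linked list, in the style of the kernel's list.h */
struct list_head {
	struct list_head *prev, *next;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->prev = h->next = h;
}

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

/* Toy page; the memcg field records which cgroup the page belongs to */
struct page {
	struct list_head lru;	/* must stay first for the cast below */
	int memcg;
};

/* One LRU head per memcg, modelling exclusive per-memcg lists */
struct list_head memcg_lru[2];

/* Count pages on a memcg's list that actually belong to another memcg */
static int misplaced(int memcg_id)
{
	int wrong = 0;
	struct list_head *p;

	for (p = memcg_lru[memcg_id].next; p != &memcg_lru[memcg_id];
	     p = p->next) {
		/* lru is the first member, so this cast is valid */
		struct page *pg = (struct page *)p;
		if (pg->memcg != memcg_id)
			wrong++;
	}
	return wrong;
}
```

Linking a memcg-1 page onto a memcg-0 page's lru, the way a huge-page split links tail pages after the head, leaves the new page on memcg 0's list; `misplaced(0)` then reports one stray page.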