Re: [RFC][PATCH 0/6] memcg updates (05/Nov)
From: Balbir Singh
Date: Thu Nov 06 2008 - 01:58:20 EST
KAMEZAWA Hiroyuki wrote:
> Weekly (RFC) update for memcg.
>
> This set includes
>
> 1. change force_empty to do move account rather than forget all
I would like this to be selectable, please. We don't want to break the existing
behaviour, and not everyone will want to pay the cost of moving accounts.
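If it does become selectable, even a simple per-cgroup flag would do. A minimal
sketch, assuming a hypothetical move_on_force_empty flag on struct mem_cgroup
(set, say, through a new control file) and made-up helper names (none of this
is from your patches):
==
static int mem_cgroup_force_empty(struct mem_cgroup *mem)
{
        if (mem->move_on_force_empty)   /* hypothetical flag */
                return mem_cgroup_move_accounts_to_parent(mem); /* assumed helper */
        return mem_cgroup_forget_all_accounts(mem);             /* assumed helper */
}
==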
> 2. swap cache handling
> 3. mem+swap controller kconfig
> 4. swap_cgroup to remember swap accounting information
> 5. mem+swap controller core
> 6. synchronize memcg's LRU and global LRU.
>
> "1" is already sent, "6" is a newcomer.
> I'd like to push out "2" or "2-5" next week (if no bugs turn up).
>
> After "6", the next candidates are
> - dirty_ratio handler
> - account move at task move.
>
> Some more explanation about the purpose of "6" (see details in the patch itself).
> Right now, one of the most complicated pieces of logic in memcg is LRU handling.
> Because the lru_head a page sits on depends on the page_cgroup->mem_cgroup
> pointer, we have to take locks as follows, even while already holding
> zone->lru_lock:
> ==
> pc = lookup_page_cgroup(page);
> if (!trylock_page_cgroup(pc))
>         return -EBUSY;
>
> if (PageCgroupUsed(pc)) {
>         struct mem_cgroup_per_zone *mz = page_cgroup_zoneinfo(pc);
>
>         spin_lock_irqsave(&mz->lru_lock, flags);
>         /* ... some operation on the LRU ... */
>         spin_unlock_irqrestore(&mz->lru_lock, flags);
> }
> unlock_page_cgroup(pc);
> ==
> Sigh..
>
> After "6", page_cgroup's LRU management can be done independently to some extent.
> == as
> (zone->lru_lock is held here)
> pc = lookup_page_cgroup(page);
> list operation on pc.
> (unlock zone->lru_lock)
> ==
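> For example, the add-side hook could shrink to something like this (function
> and field names below are tentative, just to show the shape):
> ==
> /* called with zone->lru_lock held; no lock_page_cgroup() needed */
> void mem_cgroup_add_lru_list(struct page *page, enum lru_list lru)
> {
>         struct page_cgroup *pc = lookup_page_cgroup(page);
>         struct mem_cgroup_per_zone *mz;
>
>         if (!PageCgroupUsed(pc))
>                 return;
>         mz = page_cgroup_zoneinfo(pc);
>         list_add(&pc->lru, &mz->lists[lru]);
> }
> ==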
> Maybe this is good for maintainability, and as a bonus we can make use of
> isolate_lru_page() when doing racy operations:
>
> isolate_lru_page(page);
> pc = lookup_page_cgroup(page);
> do some jobs.
> putback_lru_page(page);
>
> Maybe this will help with implementing "account move at task move".
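> To flesh that pattern out a little (illustrative only; "do some jobs" stands
> in for whatever racy operation must not race with LRU scanning):
> ==
> int memcg_fix_up_page(struct page *page)        /* hypothetical */
> {
>         struct page_cgroup *pc;
>
>         if (isolate_lru_page(page))     /* returns 0 on success */
>                 return -EBUSY;
>         pc = lookup_page_cgroup(page);
>         /* ... do some jobs on pc, free from LRU movement ... */
>         putback_lru_page(page);         /* back to LRU, drops isolation ref */
>         return 0;
> }
> ==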
Sounds promising!
--
Balbir