Re: [PATCH 0/4] Memory controller soft limit patches (v3)

From: KAMEZAWA Hiroyuki
Date: Mon Mar 02 2009 - 00:34:21 EST


On Mon, 2 Mar 2009 10:10:43 +0530
Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:

> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> [2009-03-02 09:24:04]:
>
> > On Sun, 01 Mar 2009 11:59:59 +0530
> > Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > >
> > > From: Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>

> >
> > First of all, it's often said that "when the cgroup people add something, the kernel gets slower".
> > That is my starting point for this review. Below are my comments on this version of the patch set.
> >
> > 1. I think it's bad to add more hooks to res_counter. It's already slow enough that we
> > should give up on adding more fancy things.
>
> res_counter was designed to be extensible. Why would adding anything to
> it make it slow, unless we turn on soft_limits?
>
You inserted new "if" logic into the core loop.
(What I want to say here is not that this is definitely bad, but rather
"isn't there an alternative with less overhead?")
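
To make the point concrete, here is a hypothetical sketch (not your actual
patch) of what the charge loop ends up looking like once a soft_fail
parameter and a soft_limit field are wired in; the exact names are
illustrative:

==
/*
 * Hypothetical sketch, not the actual patch: the new "if" runs under
 * c->lock for every level of the hierarchy on every single charge.
 */
int res_counter_charge(struct res_counter *counter, unsigned long val,
		       struct res_counter **limit_fail_at,
		       struct res_counter **soft_fail_at)
{
	int ret = 0;
	unsigned long flags;
	struct res_counter *c, *u;

	*limit_fail_at = NULL;
	if (soft_fail_at)
		*soft_fail_at = NULL;
	local_irq_save(flags);
	for (c = counter; c != NULL; c = c->parent) {
		spin_lock(&c->lock);
		ret = res_counter_charge_locked(c, val);
		/* the new branch in the hot loop */
		if (soft_fail_at && !*soft_fail_at &&
		    c->usage > c->soft_limit)
			*soft_fail_at = c;
		spin_unlock(&c->lock);
		if (ret < 0) {
			*limit_fail_at = c;
			goto undo;
		}
	}
	ret = 0;
	goto done;
undo:
	for (u = counter; u != c; u = u->parent) {
		spin_lock(&u->lock);
		res_counter_uncharge_locked(u, val);
		spin_unlock(&u->lock);
	}
done:
	local_irq_restore(flags);
	return ret;
}
==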


> >
> > 2. Please avoid adding hooks to the hot path. In your patch, the hook in
> > mem_cgroup_uncharge_common() in particular is annoying me.
>
> If soft limits are not enabled, the function does a small check and
> leaves.
>
&soft_fail_res is always passed, even if memory.soft_limit==ULONG_MAX.
res_counter_soft_limit_excess() adds one more function call, a spinlock, and an irq-off section.
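
A minimal sketch of what that helper has to do (my reading, assuming
soft_limit sits next to usage under cnt->lock; not the exact patch code):

==
/*
 * Sketch only: the cost on every uncharge is this extra call plus the
 * spin_lock_irqsave/irqrestore pair around reading two fields.
 */
static unsigned long long
res_counter_soft_limit_excess(struct res_counter *cnt)
{
	unsigned long long excess;
	unsigned long flags;

	spin_lock_irqsave(&cnt->lock, flags);
	if (cnt->usage <= cnt->soft_limit)
		excess = 0;
	else
		excess = cnt->usage - cnt->soft_limit;
	spin_unlock_irqrestore(&cnt->lock, flags);
	return excess;
}
==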

> >
> > 3. Please avoid further use of a global spinlock.
> > No lock is best; a mutex is better, maybe.
> >
>
> No lock to update a tree which is updated concurrently?
>
Using a tree/sort at all is nonsense, I believe.


> > 4. The RB-tree seems broken. The following is an example. (Please note that you do
> > all ops in a lazy manner (once per HZ/4).)
> >
> > i). While running, the tree is constructed as follows:
> >
> >          R            R=exceed=300M
> >         / \
> >        A   B          A=exceed=200M  B=exceed=400M
> >
> > ii) A process in B exits, and usage goes down.
>
> That is why we have the hook in uncharge. Even if we update and the
> usage goes down, the tree is ordered by usage_in_excess which is
> updated only when the tree is updated. So what you show below does not
> occur. I think I should document the design better.
>

time_check==true, so the update of the tree at uncharge() only happens once per HZ/4:
==
@@ -1422,6 +1520,7 @@ __mem_cgroup_uncharge_common(struct page *page, enum charge_type ctype)
 	mz = page_cgroup_zoneinfo(pc);
 	unlock_page_cgroup(pc);
 
+	mem_cgroup_check_and_update_tree(mem, true);
 	/* at swapout, this memcg will be accessed to record to swap */
 	if (ctype != MEM_CGROUP_CHARGE_TYPE_SWAPOUT)
 		css_put(&mem->css);
==
Then, an RB-tree that is no longer sorted can result.
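
To spell the scenario out, here is a sketch of the update path as I read it
(the last_tree_update field and mem_cgroup_update_tree() name are
illustrative, not from your patch):

==
/*
 * Illustrative sketch: with time_check == true the tree is touched at
 * most once per HZ/4, so uncharges inside that window change usage
 * without re-sorting the node.
 */
static void mem_cgroup_check_and_update_tree(struct mem_cgroup *mem,
					     bool time_check)
{
	unsigned long long excess;

	if (time_check &&
	    time_before(jiffies, mem->last_tree_update + HZ / 4))
		return;		/* usage changed, tree position did not */

	excess = res_counter_soft_limit_excess(&mem->res);
	mem_cgroup_update_tree(mem, excess);	/* re-insert keyed by excess */
	mem->last_tree_update = jiffies;
}
==

So between two updates, B's key in the tree can still say exceed=400M while
its real usage is already far below A's.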

BTW,
time_after(jiffies, 0)
is buggy (see its definition). If you want to make this always true, use
time_after(jiffies, jiffies + 1)
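
For reference, the definition in include/linux/jiffies.h is essentially:

==
#define time_after(a,b)		\
	(typecheck(unsigned long, a) && \
	 typecheck(unsigned long, b) && \
	 ((long)(b) - (long)(a) < 0))
==

With b == 0 the test reduces to (long)jiffies > 0, which on 32-bit is false
right after boot (jiffies starts at INITIAL_JIFFIES, roughly -300*HZ as a
signed value) and false again for half of every wrap period, so it is not
"always true".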

Thanks,
-Kame


