Re: Low overhead patches for the memory cgroup controller (v5)

From: Balbir Singh
Date: Tue Jun 23 2009 - 00:54:11 EST


* KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> [2009-06-23 09:01:16]:

> On Mon, 22 Jun 2009 15:43:43 -0700
> Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> > On Mon, 15 Jun 2009 10:09:00 +0530
> > Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > >
> > > ...
> > >
> > > This patch changes the memory cgroup and removes the overhead associated
> > > with accounting all pages in the root cgroup. As a side-effect, we can
> > > no longer set a memory hard limit in the root cgroup.
> > >
> > > A new flag to track whether the page has been accounted or not
> > > has been added as well. Flags are now set atomically for page_cgroup,
> > > pcg_default_flags is now obsolete and removed.
> > >
> > > ...
> > >
> > > @@ -1114,9 +1121,22 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *mem,
> > > css_put(&mem->css);
> > > return;
> > > }
> > > +
> > > pc->mem_cgroup = mem;
> > > smp_wmb();
> > > - pc->flags = pcg_default_flags[ctype];
> > > + switch (ctype) {
> > > + case MEM_CGROUP_CHARGE_TYPE_CACHE:
> > > + case MEM_CGROUP_CHARGE_TYPE_SHMEM:
> > > + SetPageCgroupCache(pc);
> > > + SetPageCgroupUsed(pc);
> > > + break;
> > > + case MEM_CGROUP_CHARGE_TYPE_MAPPED:
> > > + ClearPageCgroupCache(pc);
> > > + SetPageCgroupUsed(pc);
> > > + break;
> > > + default:
> > > + break;
> > > + }
> >
> > Do we still need the smp_wmb()?
> >
> > It's hard to say, because we forgot to document it :(
> >
> Sorry for lack of documentation.
>
> pc->mem_cgroup should be visible before SetPageCgroupUsed(). Otherwise,
> a routine that trusts the USED bit may see a bad pc->mem_cgroup.
>
> I'd like to add a comment later (against the new mmotm.)
>

Thanks, Kamezawa! Andrew, we do still need the barrier; an easy way to
find the affected code is to look at the matching smp_rmb()s on the read
side. But you are right that it should be documented.
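
For reference, a minimal sketch of the pairing Kamezawa describes (the
reader-side helper pc_to_mem_cgroup() is hypothetical, for illustration
only, and not part of the patch):

	/*
	 * Writer: commit the charge. pc->mem_cgroup must be visible
	 * before the USED bit so that a reader which observes USED
	 * never sees a stale pc->mem_cgroup.
	 */
	pc->mem_cgroup = mem;
	smp_wmb();			/* pairs with smp_rmb() below */
	SetPageCgroupUsed(pc);

	/* Reader: only trust pc->mem_cgroup after seeing USED. */
	static struct mem_cgroup *pc_to_mem_cgroup(struct page_cgroup *pc)
	{
		if (!PageCgroupUsed(pc))
			return NULL;
		smp_rmb();		/* pairs with the smp_wmb() above */
		return pc->mem_cgroup;
	}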

--
Balbir