Re: [RFC 0/3] Implementation of cgroup isolation

From: KAMEZAWA Hiroyuki
Date: Mon Mar 28 2011 - 22:52:32 EST


On Mon, 28 Mar 2011 19:46:41 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Mon, Mar 28, 2011 at 5:47 PM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:

> >
> >> That said, memcg simplified the per-cgroup memory accounting, but
> >> memory isolation is broken. This is one example of pages being
> >> shared between the global LRU and the per-memcg LRU: it is easy to
> >> get cgroup-A's pages evicted by adding memory pressure to cgroup-B.
> >>
> > If you overcommit....Right ?
>
> Yes, we want to support configurations that over-commit the machine
> with limit_in_bytes.
>
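
(For readers following the thread, here is a minimal sketch of the double
bookkeeping the quoted paragraph describes. It assumes the ~2.6.38-era
layout with a separate struct page_cgroup; both structures are trimmed to
the fields relevant here and are not the full kernel definitions.)

        /* Sketch only: fields trimmed, not the real definitions. */
        struct page {
                struct list_head lru;   /* links the page on the zone's global LRU */
                /* ... */
        };

        struct page_cgroup {
                unsigned long flags;
                struct mem_cgroup *mem_cgroup;  /* the cgroup charged for this page */
                struct page *page;
                struct list_head lru;   /* links the same page on the memcg's LRU */
        };

        /*
         * Global reclaim walks the zone LRU through page->lru without
         * looking at pc->mem_cgroup, so pressure generated by cgroup B
         * can evict pages charged to cgroup A.  Making page->lru
         * "exclusive", as the RFC proposes, would keep each page on
         * exactly one of these lists.
         */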

Then soft_limit is the feature for fixing that problem. If you have a
problem with soft_limit, let's fix it.
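
As a concrete illustration, a hypothetical configuration along these lines
(the group names, sizes and the /sys/fs/cgroup/memory mount point are made
up; it only assumes the v1 memory controller knobs limit_in_bytes and
soft_limit_in_bytes, written as root):

        #include <stdio.h>

        static void set_knob(const char *path, const char *value)
        {
                FILE *f = fopen(path, "w");

                if (!f) {
                        perror(path);
                        return;
                }
                fputs(value, f);
                fclose(f);
        }

        int main(void)
        {
                /* On an 8G machine, both groups may grow to 6G: overcommit. */
                set_knob("/sys/fs/cgroup/memory/A/memory.limit_in_bytes", "6G");
                set_knob("/sys/fs/cgroup/memory/B/memory.limit_in_bytes", "6G");

                /* Under global pressure, soft_limit reclaim pushes back the
                 * group that exceeds its soft limit the most, toward these
                 * values, so the hard limits can safely over-commit. */
                set_knob("/sys/fs/cgroup/memory/A/memory.soft_limit_in_bytes", "5G");
                set_knob("/sys/fs/cgroup/memory/B/memory.soft_limit_in_bytes", "2G");
                return 0;
        }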


> >
> >
> >> The approach we are thinking of, making page->lru exclusive, solves
> >> the problem, and it should also let us stop sharing the
> >> zone->lru_lock.
> >>
> > Is zone->lru_lock a problem even with the help of pagevecs?
> >
> > If the LRU management guys ack you to isolate the LRUs and to make
> > kswapd etc. more complex, okay, we'll go that way.
>
> I would assume the change only applies to memcg users; otherwise
> everything is left on the global LRU list.
>
> > This will _change_ the whole memcg design and concepts. Maybe memcg
> > should have some kind of balloon driver to work happily with an
> > isolated LRU.
>
> We have soft_limit hierarchical reclaim for system memory pressure,
> and we will also add per-memcg background reclaim. Both of them do
> targeted reclaim on the per-memcg LRUs, so where is the balloon
> driver needed?
>

Only if soft_limit is _not_ enough. And I think your background reclaim
should work with soft_limit and be triggered by global memory pressure.

As I wrote in another mail, it is not called via direct reclaim.
Maybe that is the first point to shoot at, rather than trying a big change.
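
A simplified sketch of that call ordering, assuming the 2.6.38-era
mm/vmscan.c structure (bodies heavily trimmed, zone and watermark checks
elided; not the verbatim kernel code):

        /* Soft-limit reclaim is driven from kswapd, i.e. from background
         * reclaim under global memory pressure; the direct reclaim path
         * does not call it. */
        static unsigned long balance_pgdat(pg_data_t *pgdat, int order)
        {
                struct scan_control sc = {
                        .gfp_mask = GFP_KERNEL,
                        .order = order,
                };
                int priority, i;

                for (priority = DEF_PRIORITY; priority >= 0; priority--) {
                        for (i = 0; i < pgdat->nr_zones; i++) {
                                struct zone *zone = pgdat->node_zones + i;

                                /* zone population and watermark checks elided */

                                /*
                                 * Push back the memcgs that exceed their soft
                                 * limit the most before scanning the zone's
                                 * global LRU.
                                 */
                                mem_cgroup_soft_limit_reclaim(zone, order,
                                                              sc.gfp_mask);
                                shrink_zone(priority, zone, &sc);
                        }
                }
                return sc.nr_reclaimed;
        }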




Thanks,
-Kame
