Re: [patch v3 -mm 1/3] memcg: integrate soft reclaim tighter with zone shrinking code

From: Michal Hocko
Date: Thu May 30 2013 - 04:46:17 EST


On Wed 29-05-13 16:01:54, Johannes Weiner wrote:
> On Wed, May 29, 2013 at 05:57:56PM +0200, Michal Hocko wrote:
> > On Wed 29-05-13 15:05:38, Michal Hocko wrote:
> > > On Mon 27-05-13 19:13:08, Michal Hocko wrote:
> > > [...]
> > > > Nevertheless I have encountered an issue while testing the huge number
> > > > of groups scenario, and unfortunately the issue is not limited to that
> > > > scenario. As memcg iterators use a per node-zone-priority cache to
> > > > prevent over-reclaim, it might quite easily happen that the walk will
> > > > not visit all groups and will either terminate the loop prematurely or
> > > > skip some groups. An example is direct reclaim racing with kswapd.
> > > > This can cause the loop to miss over-limit groups, so no pages are
> > > > scanned and we fall back to all-groups reclaim.
> > >
> > > And after some more testing and head scratching it turned out that
> > > the fallbacks to pass#2 I was seeing are caused by something else. It is
> > > not a race between iterators but rather reclaim from the DMA zone, which
> > > has trouble scanning anything even though there are pages on its LRUs,
> > > and so we fall back. I have to look into that more, but whatever the
> > > issue is, it shouldn't be related to this patch series.
> >
> > I think I know what is going on. get_scan_count sees a relatively small
> > number of pages on the lists (around 2k). This means that get_scan_count
> > will tell us to scan nothing at DEF_PRIORITY (as DMA32 is usually ~16M),
> > so DEF_PRIORITY is basically a no-op and we have to wait and fall down to
> > a priority which actually lets us scan something.
> >
> > Hmm, maybe ignoring soft reclaim for the DMA zone would help to reduce
> > one pointless loop over groups.
>
> If you have a small group in excess of its soft limit and bigger
> groups that are not, you may reclaim something in the regular reclaim
> cycle before reclaiming anything in the soft limit cycle with the way
> the code is structured.

Yes, the way get_scan_count works might really cause this. Although
targeted reclaim is protected from it, global reclaim can really suffer
from it. I am not sure this is necessarily a problem, though. If we are
under global reclaim, then a small group which doesn't have at least
1<<DEF_PRIORITY pages probably doesn't matter that much. The soft limit
is not a guarantee anyway, so we can sacrifice some pages from all
groups in such a case.
I also think that the force_scan logic should be enhanced a bit,
especially for cases like the DMA zone. The zone is clearly under its
watermarks, but we have to wait a few priority cycles to reclaim
anything. This is a different issue, though, independent of the soft
reclaim rework.

> The soft limit cycle probably needs to sit outside of the priority
> loop, not inside the loop, so that the soft limit reclaim cycle
> descends priority levels until it makes progress BEFORE it exits to
> the regular reclaim cycle.

I do not like this, to be honest. shrink_zone is the ideal place as it
is shared among all reclaimers, and we really want to obey the priority
in soft reclaim as well. The corner case mentioned above is probably
fixable at the get_scan_count layer, and even if not, I wouldn't call
it a disaster.

--
Michal Hocko
SUSE Labs