Re: [PATCH] prevent lumpy reclaim from reclaiming anon pages when there is no swap space
From: Lee Schermerhorn
Date: Thu Jun 25 2009 - 10:54:38 EST
On Thu, 2009-06-25 at 23:44 +0900, Minchan Kim wrote:
> On Thu, Jun 25, 2009 at 11:14 PM, KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
> >> This patch prevents reclaiming anon pages when there is no swap space.
> >> The VM already avoids reclaiming anon pages in various places,
> >> but it does not do so for lumpy reclaim.
> >>
> >> Lumpy reclaim shuffles the LRU list unnecessarily, which is pointless.
> >
> > NAK.
> >
> > 1. If the system has no swap, add_to_swap() never gets a swap entry,
> > so an early check doesn't improve performance much.
>
> Hmm. I mean the swap space is exhausted, not that there is no swap device.
> add_to_swap()? You mean what Rik pointed out?
> If the system has a swap device, Rik's point is right.
> I will apply his suggestion.
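For reference, the guard in question is roughly the following, condensed from get_swap_page() in mm/swapfile.c of this era. The device-scan details are elided, so treat it as a sketch rather than the actual function:

#include <linux/swap.h>

/*
 * Condensed sketch of get_swap_page() (mm/swapfile.c, ~2.6.30).
 * When swap space is exhausted it returns a null entry, and
 * add_to_swap() bails out on entry.val == 0, so the anon page is
 * kept even without an earlier check in the isolation path.
 */
swp_entry_t get_swap_page(void)
{
        swp_entry_t entry = (swp_entry_t) { 0 };

        spin_lock(&swap_lock);          /* swap_lock lives in mm/swapfile.c */
        if (nr_swap_pages > 0) {
                /* ... scan the swap devices for a free slot and
                 * decrement nr_swap_pages on success ... */
        }
        spin_unlock(&swap_lock);
        return entry;                   /* entry.val == 0: no swap slot */
}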
>
> > 2. __isolate_lru_page() is called not only in the lumpy reclaim case,
> > but also in normal reclaim.
>
> You mean performance degradation?
> I think most systems have enough swap space, so checking one condition
> variable (nr_swap_pages) is trivial.
> We could also use [un]likely(), but I am not sure it would help us.
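Concretely, the check under discussion would amount to something like the sketch below. The placement inside __isolate_lru_page(), the hypothetical helper name, and the use of unlikely() are assumptions for illustration, not the patch text:

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/swap.h>

/*
 * Sketch only: refuse to isolate an anon page once swap space is
 * exhausted, so lumpy reclaim does not shuffle it around the LRU.
 * A helper like this would be called from __isolate_lru_page();
 * the actual patch may simply inline the test.
 */
static inline int anon_page_unreclaimable(struct page *page)
{
        if (unlikely(PageAnon(page) && nr_swap_pages <= 0))
                return -EBUSY;  /* caller leaves the page on its list */
        return 0;
}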
>
>
> > 3. If the system has no swap, shuffling anon pages doesn't cause any problem.
>
> Again, I mean the swap space is exhausted, not that there is no swap device.
> Separately, I have a plan to remove anon_vma on systems with no swap device.
>
> As you point out, it is pointless on a no-swap-device system.
> I don't like the unnecessary memory footprint and locking overhead of the
> structure. I think the no-swap-device case is a problem in server
> environments as well as embedded ones, but I am not sure when I will do it. :)
>
How will we walk the reverse map for try_to_unmap() during page migration,
or for try_to_munlock(), without anon_vma? Perhaps one could remove anon_vma
when there is no swap device and neither migration nor the unevictable LRU
is configured -- e.g., on embedded systems.
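For context, the walk in question looks roughly like this, condensed from try_to_unmap_anon() in mm/rmap.c of this era; both unmap-for-migration and munlock iterate the vma list hanging off the page's anon_vma:

#include <linux/mm.h>
#include <linux/rmap.h>

/*
 * Condensed from try_to_unmap_anon() (mm/rmap.c, ~2.6.30): the anon
 * reverse-map walk iterates the vmas chained on the page's anon_vma.
 * Migration and munlock both depend on this walk, so anon_vma cannot
 * simply disappear while either is enabled.
 */
static int walk_anon_rmap(struct page *page)
{
        struct anon_vma *anon_vma;
        struct vm_area_struct *vma;

        anon_vma = page_lock_anon_vma(page);
        if (!anon_vma)
                return 0;
        list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
                /* try_to_unmap_one(page, vma, ...) runs here */
        }
        page_unlock_anon_vma(anon_vma);
        return 0;
}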
Lee