Re: dentries: dentry defragmentation
From: Nick Piggin
Date: Mon Feb 01 2010 - 08:36:19 EST
On Mon, Feb 01, 2010 at 02:25:27PM +0100, Andi Kleen wrote:
> >
> > > > Right, but as you can see it is complex to do it this way. And I
> > > > think for reclaim-driven targeted reclaim it needn't be so
> > > > inefficient, because you aren't restricted to just one page, but
> > > > can take any page which is heavily fragmented (and by definition
> > > > there should be a lot of them in the system).
> > >
> > > Assuming you can identify them quickly.
> >
> > Well, because there are a large number of them, you are likely
> > to encounter one very quickly just off the LRU list.
>
> There were some cases in the past where this wasn't the case.
> But yes some uptodate numbers on this would be good.
>
> Also it doesn't address the second case here quoted again.
>
> > > There are really two different cases here:
> > > - Run out of memory: in this case I just want to find all the objects
> > > of any page, ideally on not-that-recently-used pages.
> > > - I am very fragmented and want a specific page freed to get a 2MB
> > > region back or for hwpoison: same, but do it for a specific page.
> > >
> >
> >
> > I still don't think it adds much weight. Especially if you can just
> > try an inefficient scan.
>
> Also see second point below.
> >
> >
> > > But soft hwpoison isn't the only user. The other big one would
> > > be for large pages or other large page allocations.
Well yes, it's possible that it could help there.
But it is always possible to do the same reclaim work via the LRU; in
the worst case it just requires reclaiming most objects. So it
probably doesn't fundamentally enable something we can't do already.
It's more a matter of performance, so again, numbers are needed.