Alexander Viro <viro@math.psu.edu> said:
> On Tue, 29 Jan 2002, Hans Reiser wrote:
> > This fails to recover an object (e.g. dcache entry) which is used once,
> > and then spends a year in cache on the same page as an object which is
> > hot all the time. This means that the hot set of objects becomes
> > diffused over an order of magnitude more pages than if garbage
> > collection squeezes them all together. That makes for very poor caching.
>
> Any GC that is going to move active dentries around is out of question.
> It would need a locking of such strength that you would be the first
> to cry bloody murder - about 5 seconds after you look at the scalability
> benchmarks.
Then you'd need some other way to ensure that "hot" objects end up in pages
with other "hot" objects... perhaps an LRU list of pages (as was proposed
here), used not just for deciding what to clean but also for choosing where
to place new objects? Keep that list (roughly) sorted by fullness, so that
new objects preferentially go into mostly-full pages rather than mostly-empty
ones (perhaps the slab allocator is doing this already)? A rough sketch of
what I mean is below.
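Just to illustrate the idea, here is a minimal sketch in C of "prefer the
fullest partial page" placement. It is not the real slab code; the structure
and function names are made up for the example.

/* Hypothetical sketch: place new objects into the fullest non-full page so
 * that long-lived/hot objects tend to get packed together.  Not the actual
 * slab allocator; names and layout are illustrative only. */
#include <stddef.h>

#define OBJS_PER_PAGE 32

struct cache_page {
	struct cache_page *next;
	unsigned int used;              /* objects currently allocated here */
	void *slots[OBJS_PER_PAGE];
};

struct object_cache {
	struct cache_page *partial;     /* kept roughly sorted, fullest first */
};

/* Pick the page to allocate from: the head of the partial list, which the
 * re-insertion below keeps as the fullest page with free slots. */
static struct cache_page *pick_page(struct object_cache *c)
{
	return c->partial;              /* NULL means a fresh page is needed */
}

/* Re-insert a page after its fullness changed, keeping fuller pages first
 * so they fill up before emptier ones are touched (and nearly empty pages
 * can eventually drain and be freed). */
static void reinsert_sorted(struct object_cache *c, struct cache_page *p)
{
	struct cache_page **link = &c->partial;

	while (*link && (*link)->used > p->used)
		link = &(*link)->next;
	p->next = *link;
	*link = p;
}

Whether the extra list maintenance is worth it is exactly the locking/
scalability question Al raises above.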
--
Horst von Brand                             http://counter.li.org # 22616