Re: [PATCH] anobjrmap 1/6 objrmap

From: Andrea Arcangeli
Date: Wed Mar 24 2004 - 12:08:55 EST


On Wed, Mar 24, 2004 at 08:35:46AM -0800, Martin J. Bligh wrote:
> > here it's not the fastest (though a 0.03 difference should be in the
> > error range with an unlimited -j)
> >
> > overall I think for the fast path we can conclude they're at least
> > equally fast.
>
> Yup, I think they're all within the noise level between anon_mm and anon_vma,
> though both are faster than partial by something at least statistically
> significant (though maybe not even enough to care about for these
> workloads).

they are, and those workloads aren't anonymous memory intensive, which is
why the difference is not significant (see the benchmark I posted a few
days ago comparing anon_vma with mainline: the boost is huge on a 1G
box, and on a 64bit multi-gigabyte box the boost will be even bigger,
close to a 100% speedup in fact).

> Yeah, it does 5 passes, and throws away the fastest and slowest. So it's
> only calculating off 3 ... I think you just got lucky with a 0.0% ;-)

I guess so ;)

> > it's probably one of the -mm patches that boosts those bits (the
> > cost of page_add_rmap and the page faults should be the same with both
> > anon_vma and anonmm). As for the regression, the pgd_alloc slowdown is
> > the unslabify one from Andrew that saves 8 bytes per page on 32bit
> > archs and 16 bytes per page on 64bit archs.
>
> OK, great ... thanks for the info. I think I'd happily pay that cost in
> pgd_alloc for the space gain - kernbench & SDET are about as bad as it

that's Andrew's point too, and I couldn't agree more ;)

> gets on pgd_alloc, so that seems like a good tradeoff.

pgd_alloc is in the fork path, so yes, it's a very good tradeoff
(admittedly I didn't check whether any memory is being wasted by the
bigger allocations, but that is solvable even without the slab and
without page->list, by just chaining the pages the way poll does, as
sketched below).
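
To illustrate what I mean, here is a minimal sketch (hypothetical names,
not code from any patch): pack the small objects into whole pages and
chain the pages through a header stored in the page itself, the way
fs/select.c chains its poll_table_page allocations, so neither the slab
nor page->list is needed (freeing is omitted for brevity):

	/* illustrative only: grow-only allocator that chains pages
	 * through their first words instead of using page->list */
	#include <linux/mm.h>

	#define OBJ_SIZE	32	/* e.g. a PAE pgd: 4 entries * 8 bytes */

	struct obj_page {
		struct obj_page *next;	/* chain of allocated pages */
		unsigned int used;	/* objects carved out of this page */
		char objs[0];		/* OBJ_SIZE-sized objects follow */
	};

	#define OBJS_PER_PAGE \
		((PAGE_SIZE - sizeof(struct obj_page)) / OBJ_SIZE)

	static struct obj_page *obj_chain;	/* would need a lock on SMP */

	static void *obj_alloc(void)
	{
		struct obj_page *p = obj_chain;

		if (!p || p->used == OBJS_PER_PAGE) {
			p = (struct obj_page *) __get_free_page(GFP_KERNEL);
			if (!p)
				return NULL;
			p->next = obj_chain;	/* chain without page->list */
			p->used = 0;
			obj_chain = p;
		}
		return p->objs + p->used++ * OBJ_SIZE;
	}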

However, Wli suggested that we shouldn't stop using the slab for those
minuscule allocations, and that instead the slab itself should stop
using the lists. You may want to ask Wli for details. For now I'm
enjoying the 36-byte page_t ;)

> > My current page_t is now 36 bytes (compared to the 48 bytes of 2.4) on
> > 32bit archs, and 56 bytes on 64bit archs (hope I counted right this
> > time; Hugh says I'm counting the page_t wrong, methinks we were looking
> > at different source trees, but maybe I really was counting wrong ;).
>
> IIRC, with PAE etc on, mainline is 44 bytes. So if we saved 8 from Andrew's

nitpick: it's not PAE but highmem that makes it worse (it grows even
with PAE off).

> change, and 4 from objrmap, I'd be hoping for 32?

I give up counting by hand; I used the compiler this time, and yes, it's
32 bytes for every page_t of 2.6-aa, compared to the 48 bytes of 2.4.
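
For reference, this is the kind of compile-time check I mean (an
illustrative trick with a hypothetical symbol name, not the exact thing
I ran): drop it into any .c file of the tree being measured and the
build breaks unless the count is right:

	/* the negative array size makes the build fail if struct page
	 * isn't really 32 bytes with this tree and config */
	#include <linux/mm.h>
	static char page_t_size_check[sizeof(struct page) == 32 ? 1 : -1];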