Re: [PATCH 1/3] slub: set a criteria for slub node partial adding

From: Shaohua Li
Date: Tue Dec 06 2011 - 23:59:29 EST


On Wed, 2011-12-07 at 05:06 +0800, David Rientjes wrote:
> On Mon, 5 Dec 2011, Alex,Shi wrote:
>
> > Previous testing was based on 3.2-rc1, which showed no clear change
> > in hackbench performance and some benefit for netperf. But it seems
> > the results have changed after the irqsafe_cpu_cmpxchg patch. I am
> > collecting those results.
> >
>
> netperf will also degrade with this change on some machines, there's no
> clear heuristic that can be used to benefit all workloads when deciding
> where to add a partial slab into the list. Cache hotness is great but
> your patch doesn't address situations where frees happen to a partial slab
> such that they may be entirely free (or at least below your 1:4 inuse to
> nr_objs threshold) at the time you want to deactivate the cpu slab.
>
> I had a patchset that iterated the partial list and found the "most free"
> partial slab (and terminated prematurely if a threshold had been reached,
> much like yours) and selected that one, and it helped netperf 2-3% in my
> testing. So I disagree with deciding where to add a partial slab to
> the list at the time of free, because that doesn't reflect its state at
> the time of cpu slab deactivation.
Interesting. I did a similar experiment before (trying to sort the pages
according to their free object counts), but it turned out to be quite
hard. The free count of a page is dynamic, e.g. more objects can be
freed while the page sits in the partial list. And in the netperf test,
the partial list can get very long. Can you post your patch? I
definitely want to look at it.
What I have found about the partial list is that it wastes a lot of
memory; my test shows about 50% of the memory is wasted. I'm thinking of
not always fetching the oldest page from the partial list, because the
chance that all objects of the oldest page can be freed is high. I
haven't done any testing yet, so I'm wondering if it could be helpful.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/