Re: tbench regression - Why does the process scheduler impact tbench, and why does a small per-cpu slab (SLUB) cache create the scenario?

From: Zhang, Yanmin
Date: Wed Sep 05 2007 - 20:53:21 EST


On Wed, 2007-09-05 at 03:45 -0700, Christoph Lameter wrote:
> On Wed, 5 Sep 2007, Zhang, Yanmin wrote:
>
> > > > However, the approach treats all slabs with the same policy. Could we
> > > > implement a per-slab specific approach like direct b)?
> > >
> > > I am not sure what you mean by same policy. Same configuration for all
> > > slabs?
> > Yes.
>
> Ok. I could add the ability to specify parameters for some slabs.
Thanks. That will be more flexible.
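
Just to sketch what I mean by "more flexible": each cache could carry its own
overrides and fall back to the global slub_max_order/slub_min_objects only when
none are given. The structure and names below are purely hypothetical, a small
user-space illustration rather than anything that exists in the kernel:

#include <stdio.h>

/*
 * Purely hypothetical sketch: per-cache overrides that take precedence
 * over the global slub_max_order / slub_min_objects defaults.  None of
 * these names exist in the kernel; they only illustrate the idea.
 */
struct cache_tuning {
	const char *name;
	int max_order;		/* -1 = use the global default */
	int min_objects;	/* -1 = use the global default */
};

static int effective(int override, int global_default)
{
	return override >= 0 ? override : global_default;
}

int main(void)
{
	int global_max_order = 1, global_min_objects = 4;
	struct cache_tuning kmalloc_4096 = { "kmalloc-4096", 3, 8 };
	struct cache_tuning dentry = { "dentry", -1, -1 };

	printf("%s: max order %d, min objects %d\n", kmalloc_4096.name,
	       effective(kmalloc_4096.max_order, global_max_order),
	       effective(kmalloc_4096.min_objects, global_min_objects));
	printf("%s: max order %d, min objects %d\n", dentry.name,
	       effective(dentry.max_order, global_max_order),
	       effective(dentry.min_objects, global_min_objects));
	return 0;
}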

>
> > > Would it be possible to try the two other approaches that I suggested? I
> > > think both of those may also solve the issue. Try booting with
> > > slub_max_order=0
> > 1) I tried slub_max_order=0 and the regression became 12.5%, which is still
> > not good.
> >
> > 2) I applied the patch
> > slub-direct-pass-through-of-page-size-or-higher-kmalloc.patch to kernel
> > 2.6.23-rc4. The new test result is much better, only 1% below 2.6.22.
I retested 2.6.22, booting the kernel with "slub_max_order=3 slub_min_objects=8".
The result is about 8.7% better than without the boot parameters.

So with both kernels booted with "slub_max_order=3 slub_min_objects=8", 2.6.22 is
about 5.8% better than 2.6.23-rc4. I suspect the process scheduler is responsible
for this 5.8% regression.
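
For reference, my rough understanding of what these two knobs control, as a
simplified user-space illustration (assuming 4 KiB pages; this is not the real
calculate_order() in mm/slub.c): the slab page order is raised until at least
slub_min_objects objects fit in one slab, but never beyond slub_max_order, so a
larger order means more objects per slab and fewer trips to the page allocator.

#include <stdio.h>

#define PAGE_SIZE 4096	/* assuming 4 KiB pages */

/*
 * Simplified illustration of what the two knobs control: raise the
 * slab page order until at least min_objects objects fit in one slab,
 * but never beyond max_order.  This is NOT the real calculate_order()
 * from mm/slub.c, just the rough idea.
 */
static int pick_order(int object_size, int min_objects, int max_order)
{
	int order;

	for (order = 0; order <= max_order; order++) {
		int slab_bytes = PAGE_SIZE << order;

		if (slab_bytes / object_size >= min_objects)
			return order;
	}
	return max_order;	/* best we are allowed to do */
}

int main(void)
{
	/* 4096-byte objects with "slub_max_order=3 slub_min_objects=8" */
	printf("order = %d\n", pick_order(4096, 8, 3));
	/* the same objects with slub_max_order=0 */
	printf("order = %d\n", pick_order(4096, 8, 0));
	return 0;
}

With slub_max_order=3 slub_min_objects=8 a 4096-byte object ends up in order-3
slabs, while slub_max_order=0 forces order 0 with a single object per slab.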

>
> Ok. That seems to indicate that we should improve the alloc path in the
> page allocator. The page allocator's performance needs to be competitive for
> page-sized allocations. The problem will largely go away when we
> merge the pass-through patch in 2.6.24.
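
For anyone following the thread, my understanding of the pass-through idea is
that kmalloc() requests of one page or more skip the slab layer entirely and go
straight to the page allocator, which is why the alloc path there matters so
much. A rough user-space sketch of the dispatch (the helper names are made up;
this is not the actual patch code):

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096	/* assuming 4 KiB pages */

/*
 * Rough sketch of the "direct pass-through" idea: allocations of one
 * page or more skip the slab layer and are handed straight to the page
 * allocator.  Helper names are made up; malloc() stands in for both
 * back ends.
 */
static void *alloc_from_slab(size_t size)
{
	return malloc(size);	/* stand-in for the SLUB fastpath */
}

static void *alloc_from_page_allocator(size_t size)
{
	return malloc(size);	/* stand-in for __get_free_pages() */
}

static void *my_kmalloc(size_t size)
{
	if (size >= PAGE_SIZE)
		return alloc_from_page_allocator(size);
	return alloc_from_slab(size);
}

int main(void)
{
	void *small = my_kmalloc(128);		/* stays in the slab layer */
	void *big = my_kmalloc(PAGE_SIZE);	/* passed straight through */

	printf("small=%p big=%p\n", small, big);
	free(small);
	free(big);
	return 0;
}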