Re: [PATCH 5/5] slub: Only IPI CPUs that have per cpu obj to flush

From: Gilad Ben-Yossef
Date: Mon Sep 26 2011 - 04:07:49 EST


Hi,

On Mon, Sep 26, 2011 at 10:36 AM, Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
> On Mon, 2011-09-26 at 09:54 +0300, Pekka Enberg wrote:
>>
>> AFAICT, flush_all() isn't all that performance sensitive. Why do we
>> want to reduce IPIs here?
>
> Because it can wake up otherwise idle CPUs, wasting power. Or for the
> case I care more about, unnecessarily perturb a CPU that didn't actually
> have anything to flush but was running something, introducing jitter.
>
> on_each_cpu() things are bad when you have a ton of CPUs (which is
> pretty normal these days).
>


Peter basically already answered better than I could :-)

All I have to add is an example:

flush_all() is called for each kmem_cache_destroy(). So every cache
being destroyed dynamically ends up sending an IPI to each CPU in the
system, regardless of whether the cache has ever been used there.

For example, if you close the InfiniBand ipath driver char device file,
the close file op calls kmem_cache_destroy(). So, if I understand
correctly, running some InfiniBand config tool on a single CPU
dedicated to system tasks might interrupt the other 127 CPUs I
dedicated to some CPU intensive task. This is the scenario I'm
trying to avoid.

I suspect there is a good chance that every line in the output of
"git grep kmem_cache_destroy linux/ | grep '\->'" leads to a similar
scenario (there are 42 of them).

I hope this sheds some light on the motive of the work.

Thanks!
Gilad
--
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@xxxxxxxxxxxxx
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"I've seen things you people wouldn't believe. Goto statements used to
implement co-routines. I watched C structures being stored in
registers. All those moments will be lost in time... like tears in
rain... Time to die. "
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/