Re: [RFC] [PATCH] x86: avoid per_cpu for APIC id tables

From: Peter Zijlstra
Date: Thu Jul 18 2013 - 04:56:06 EST


On Thu, Jul 18, 2013 at 08:52:49AM +0200, Ingo Molnar wrote:
>
> * Andrew Hunter <ahh@xxxxxxxxxx> wrote:
>
> > Hi, I have a patch (following) that modifies handling of APIC id tables,
> > trading a small amount of space in the (NR_CPUS - nr_cpu_ids) >> 0 case for
> > faster accesses and slightly better cache layout (as APIC ids are mostly used
> > cross-cpu.) I'm not an APIC expert so I'd appreciate some eyes on this, but
> > it shouldn't change any behavior whatsoever. Thoughts? (We're likely to merge
> > this internally even if upstream judges the space loss too much of a cost, so
> > I'd like to know if there's some other problem I've missed that this causes.)
> >
> > I've tested this cursorily in most of our internal configurations but not in
> > any particularly exotic hardware/config.
> >
> >
> > From e6bf354c05d98651e8c27f96582f0ab56992e58a Mon Sep 17 00:00:00 2001
> > From: Andrew Hunter <ahh@xxxxxxxxxx>
> > Date: Tue, 16 Jul 2013 16:50:36 -0700
> > Subject: [PATCH] x86: avoid per_cpu for APIC id tables
> >
> > DEFINE_PER_CPU(var) and friends go to lengths to arrange all of cpu
> > i's per cpu variables as contiguous with each other; this requires a
> > double indirection to reference a variable.
> >
> > For data that is logically per-cpu but
> >
> > a) rarely modified
> > b) commonly accessed from other CPUs
> >
> > this is bad: with no writes there is no cache ping-pong to worry
> > about, and since accesses mostly come from other CPUs anyway, the
> > per-cpu layout saves no cache by keeping remote entries out.
> > (Actually, it's worse than "no" cache savings: instead of one cache
> > line containing 32 useful APIC ids, it contains 3 useful APIC ids
> > plus unrelated per-cpu data from the remote CPU that we don't
> > want.) It's also slower to access, due to the double indirection.
> >
> > So instead use a flat array for APIC ids, most commonly used for IPIs
> > and the like. This makes a measurable improvement (up to 10%) in some
> > benchmarks that heavily stress remote wakeups.
> >
> > The one disadvantage is that we waste 8 bytes per unused CPU (NR_CPUS
> > - actual). But this is a fairly small amount of memory for reasonable
> > values of NR_CPUS.
> >
> > Tested: builds and boots, runs a suite of wakeup-intensive tests without failure.
>
> 1)
>
> To make it easier to merge such patches it would also be nice to integrate
> a remote wakeup performance test into 'perf bench sched pipe', so that we
> can measure it more easily. You can also cite the results in your
> changelog.

While one could base the code on (or even share it with) the pipe
benchmark, I'd like it to appear as a separate benchmark from the
outside. Also, I'm fairly sure they already have a benchmark for this
internally. Venki started this work, and it looks like Andrew is taking
over; good! :-)
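
In case it helps, a remote-wakeup ping-pong in the spirit of 'perf
bench sched pipe' can be quite small. The sketch below pins the two
tasks to different CPUs so that every wakeup crosses CPUs; it is only
an illustration (the CPU numbers 0 and 1 and all names are my own
assumptions, not the benchmark referred to above):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

#define LOOPS	1000000

static int ping[2], pong[2];

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *partner(void *arg)
{
	char c;
	int i;

	(void)arg;
	pin_to_cpu(1);			/* assumed: second CPU */
	for (i = 0; i < LOOPS; i++) {
		read(ping[0], &c, 1);	/* sleep until woken remotely */
		write(pong[1], &c, 1);	/* wake the main task back */
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	char c = 'x';
	int i;

	pipe(ping);
	pipe(pong);
	pin_to_cpu(0);			/* assumed: first CPU */
	pthread_create(&t, NULL, partner, NULL);
	for (i = 0; i < LOOPS; i++) {
		write(ping[1], &c, 1);	/* remote wakeup -> IPI */
		read(pong[0], &c, 1);
	}
	pthread_join(t, NULL);
	return 0;
}

Timing this under perf stat, once with both tasks pinned to the same
CPU and once to different CPUs, isolates the remote-wakeup cost.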

> 2)

> I'm wondering, why does this result in a 10% difference? Is the
> indirection cost perhaps the bigger effect? Is there maybe some other
> effect from changing the layout?

I suspect they came to this through measurements; it would indeed be
good to have those shared.
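
For reference, the access-path difference in question looks roughly
like this (a sketch, not the actual patch; cpu_apicid() is a made-up
helper name):

/* Per-cpu placement: a remote read goes through the target CPU's
 * per-cpu offset and then the variable inside that CPU's chunk, so
 * the fetched line is mostly unrelated per-cpu data. */
DEFINE_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
#define cpu_apicid(cpu)	per_cpu(x86_cpu_to_apicid, (cpu))

/* Flat array: one direct load, and neighbouring entries are other
 * CPUs' APIC ids, so a 64-byte line holds 32 of them. */
u16 x86_cpu_to_apicid[NR_CPUS] __read_mostly;
#define cpu_apicid(cpu)	(x86_cpu_to_apicid[(cpu)])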

> Also, if the goal is to pack better then we could do even better than
> that: we could create a 'struct x86_apic_ids':
>
> struct x86_apic_ids {
> u16 bios_apicid;
> u16 apicid;
> u32 logical_apicid; /* NOTE: does this really have to be 32-bit? */
> };
>
> and put that into an explicit, [NR_CPUS] array. This preserves the tight
> coupling between fields that PER_CPU offered, requiring only a single
> cacheline fetch in the cache-cold case, while also giving efficient,
> packed caching for cache-hot remote wakeups.
>
> [ Assuming remote wakeups access all of these fields in the hot path to
> generate an IPI. Do they? ]
>
> Also, this NR_CPUS array should be cache-aligned and read-mostly, to avoid
> false sharing artifacts. Your current patch does neither.

Agreed.
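
Concretely, something along these lines should do (a sketch; the array
name is made up):

struct x86_apic_ids {
	u16 bios_apicid;
	u16 apicid;
	u32 logical_apicid;	/* 8 bytes per CPU in total */
};

/* Cache-aligned and read-mostly: a cache-cold remote wakeup pays one
 * line fetch for all three ids, a hot 64-byte line still covers 8
 * CPUs, and writers elsewhere can't false-share the array. */
static struct x86_apic_ids apic_id_map[NR_CPUS]
		____cacheline_aligned __read_mostly;

Accessors would then index apic_id_map[cpu] directly, as with the flat
arrays in the patch.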
