Re: [PATCH] x86: simplify interrupt dispatch table

From: Ingo Molnar
Date: Sat Apr 04 2015 - 03:07:03 EST

* Denys Vlasenko <dvlasenk@xxxxxxxxxx> wrote:

> Interrupt entry points are handled with the following code:
> Each 32-byte code block contains seven entry points
> ...
> [push][jump 22] // 4 bytes
> [push][jump 18] // 4 bytes
> [push][jump 14] // 4 bytes
> [push][jump 10] // 4 bytes
> [push][jump 6] // 4 bytes
> [push][jump 2] // 4 bytes
> [push][jump common_interrupt][padding] // 8 bytes
> [push][jump]
> [push][jump]
> [push][jump]
> [push][jump]
> [push][jump]
> [push][jump]
> [push][jump common_interrupt][padding]
> [padding_2]
> common_interrupt:
> And there is a table which holds pointers to every entry point,
> IOW: to every push.
> In cold cache, two jumps are still costlier than one, even though we get
> the benefit of them residing in the same cacheline.
> This change replaces short jumps with near ones to common_interrupt, and pads
> every push+jump pair to 8 bytes. This way, each interrupt takes only one jump.
> This change replaces ".p2align CONFIG_X86_L1_CACHE_SHIFT" before dispatch table
> with ".align 8" - we do not need anything stronger than that.
> The table of entry addresses (the interrupt[] array) is no longer
> necessary, the address of entries can be easily calculated as
> (irq_entries_start + i*8).
> text data bss dec hex filename
> 12546 0 0 12546 3102 entry_64.o.before
> 11626 0 0 11626 2d6a entry_64.o
> The size decrease is because 1656 bytes of .init.rodata are gone.
> That's initdata, though. The resident size does go up a bit.

So I like this a lot, as it's straight, simple and obvious, both to
hardware and to humans. (This is btw. quite close to the irq entry
code layout we used to have historically.)

We could do three other changes that would probably help a lot more in
practice than the addition or elimination of a single instruction:


1) We could try to not spread vectors, as modern APICs seem to handle
clustered vectors a lot better, and since we don't actually use irq
priority levels like other OSs do, we are free to choose our vectors.

This compresses the I$ footprint a bit more if lots of related
irq sources are firing towards the same CPUs that share one or more
caches (HT threads, cores, node local siblings).

Even on single-node systems this would still compress the IDT and
the entry code cache footprint a bit.


2) We could allocate the IDT per CPU (or per node), lowering the D$
cache miss costs on NUMA systems. (This, if we allowed the IDTs to
diverge, would also allow more irq sources to be sent to separate CPUs.)

The simplest model of this, where each IDT is just a copy of each
other, is relatively easy to implement, as the IDT is page aligned
and ro mapped already.


3) We could allocate the entry code itself per CPU (or per node) too,
lowering the I$ cache miss costs on NUMA systems. This would be a
bit trickier to implement, as that part of the image has to be
relinked during bootup, but it is doable.

I'd do 3) only once we are done with the current audit/cleanup/rewrite
of the entry code.

