Re: [2.6.36.1] IGB driver handles all ethX interrupts on single cpu core.

From: Robert Hancock
Date: Wed Dec 08 2010 - 10:22:32 EST


2010/12/8 Paweł Sikora <pluto@xxxxxxxx>:
> On Wednesday 08 of December 2010 02:44:25 Robert Hancock wrote:
>> On 12/07/2010 12:06 PM, Paweł Sikora wrote:
>> > hi,
>> >
>> > I'm currently testing a new server (2x Opteron 6128, dual gigabit ports)
>> > and I'm observing that the igb driver uses only a single core for all
>> > ethX interrupts. Is that the correct behaviour for this driver?
>> >
>> > BR,
>> > Pawel.
>>
>> The CPU affinity for the IRQ isn't really under the driver's control. It
>> looks like all your interrupts are being handled on CPU0. You likely
>> need to run the irqbalance daemon.
>
> OK, so why does one machine (dual AMD Opteron) require the irqbalance daemon
> while on a second machine (single Intel quad-core) the IRQs are balanced
> without it? This looks inconsistent to me.
>
> $ cat /proc/interrupts
>           CPU0       CPU1       CPU2       CPU3
>  0:         49          2          0          0   IO-APIC-edge      timer
>  1:          0          1          0          1   IO-APIC-edge      i8042
>  8:         13         11         12         13   IO-APIC-edge      rtc0
>  9:          0          0          0          0   IO-APIC-fasteoi   acpi
>  12:          0          0          3          1   IO-APIC-edge      i8042
>  16:        224        236        235        232   IO-APIC-fasteoi   pata_marvell, uhci_hcd:usb3
>  17:          0          0          0          0   IO-APIC-fasteoi   saa7133[0], saa7133[0]
>  18:          0          0          0          0   IO-APIC-fasteoi   ehci_hcd:usb1, uhci_hcd:usb5, uhci_hcd:usb8
>  19:        695        672        660        630   IO-APIC-fasteoi   uhci_hcd:usb7
>  21:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb4
>  23:          1          1          1          0   IO-APIC-fasteoi   ehci_hcd:usb2, uhci_hcd:usb6
>  40:       3409       3446       3441       3403   PCI-MSI-edge      ahci
>  41:         63         60         60         61   PCI-MSI-edge      hda_intel
>  42:       3219       3180       3237       3192   PCI-MSI-edge      radeon
>  43:        505        487        496        498   PCI-MSI-edge      eth0
> NMI:         11          7         12          8   Non-maskable interrupts
> LOC:      14822      15293      17577      14404   Local timer interrupts
> SPU:          0          0          0          0   Spurious interrupts
> PMI:         11          7         12          8   Performance monitoring interrupts
> PND:          0          0          0          0   Performance pending work
> RES:        493        498        525        495   Rescheduling interrupts
> CAL:       3975        248       2729        299   Function call interrupts
> TLB:        937       1727        913       1600   TLB shootdowns
> TRM:          0          0          0          0   Thermal event interrupts
> THR:          0          0          0          0   Threshold APIC interrupts
> MCE:          0          0          0          0   Machine check exceptions
> MCP:          2          2          2          2   Machine check polls
> ERR:          3
> MIS:          0

Is the kernel configuration the same on both machines? Also, I think some
chipsets handle IRQ distribution differently from others (I believe that when
an IRQ line's affinity mask allows more than one CPU, some chipsets will
distribute the interrupts across those CPUs while others deliver them all to
the same CPU).
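
For what it's worth, if irqbalance doesn't help (or you'd rather not run it),
you can also steer an IRQ by hand through /proc/irq/<N>/smp_affinity. A rough
sketch; the IRQ number below (43, taken from the /proc/interrupts you posted)
and the CPU mask are only examples, so substitute whatever the eth interrupts
actually are on the Opteron box:

$ cat /proc/irq/43/smp_affinity        # current allowed-CPU bitmask
# echo 4 > /proc/irq/43/smp_affinity   # as root: restrict IRQ 43 to CPU2 (bit 2 = 0x4)
$ grep eth /proc/interrupts            # the per-CPU counts should now grow on CPU2 only

If the igb ports come up with MSI-X they may expose several vectors per port
(one per queue); each of those lines in /proc/interrupts has its own
/proc/irq/<N>/smp_affinity and can be pinned the same way.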