[PATCH] powerpc 2.6.21-rt6: replace preempt_schedule w/ preempt_schedule_irq

From: Tsutomu OWA
Date: Tue May 22 2007 - 07:48:26 EST



Hi Ingo and Thomas,

Please apply.

Replace preempt_schedule() with preempt_schedule_irq() in the IRQ return path,
to avoid IRQ-entry recursion and stack overflow problems on powerpc64.
The problem hits when running netperf from another machine against the machine running the -rt kernel.

This patch applies on top of linux-2.6.21 + patch-2.6.21-rt6.

Compile, boot and netperf tested on celleb.
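
To make the failure mode in the trace below concrete: the old do_work path
re-enables MSR_EE before calling preempt_schedule(), so once interrupts are on
and the preempt count has dropped back to zero, a new interrupt can be taken
inside preempt_schedule() itself and its return path preempts again, nesting
frames until the stack overflows (the repeating .raw_local_irq_restore /
.preempt_schedule / do_work frames below). A minimal C sketch of the idea, not
the actual code -- the real path is powerpc assembly in entry_64.S, and the two
helper names are made up for illustration:

/* Sketch only: the real code is the do_work path in entry_64.S. */

/* Old path: interrupts are re-enabled before scheduling.  While
 * preempt_schedule() runs with the preempt count back at zero, a new
 * interrupt can arrive and its own exit path can land here again,
 * recursing until the stack overflows. */
static void old_irq_exit_preempt(void)
{
	local_irq_enable();	/* was: stb ...IRQEN; ori r10,r10,MSR_EE; mtmsrd r10,1 */
	preempt_schedule();	/* was: bl .preempt_schedule */
}

/* New path: interrupts stay hard-disabled across the call.
 * preempt_schedule_irq() raises the preempt count (PREEMPT_ACTIVE) before
 * it re-enables interrupts, so a nested interrupt's exit path sees a
 * non-zero preempt count and does not try to preempt again. */
static void new_irq_exit_preempt(void)
{
	preempt_schedule_irq();	/* was: bl .preempt_schedule_irq */
}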

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
~ $ uname -a
Linux Linux 2.6.21-rt6 #1 SMP PREEMPT RT Tue May 22 19:18:00 JST 2007 ppc64 unknown
~ $ Unable to handle kernel paging request for data at address 0xc0000180004cd9b0
Faulting instruction address: 0xc00000000003da48
cpu 0x0: Vector: 300 (Data Access) at [c00000000fffba00]
pc: c00000000003da48: .resched_task+0x34/0xc4
lr: c0000000000410b4: .try_to_wake_up+0x4cc/0x5a8
sp: c00000000fffbc80
msr: 9000000000001032
dar: c0000180004cd9b0
dsisr: 40000000
current = 0xc00000000244ed20
paca = 0xc0000000004cd980
pid = 425, comm = netserver
enter ? for help
[c00000000fffbd00] c0000000000410b4 .try_to_wake_up+0x4cc/0x5a8
[c00000000fffbde0] c0000000000880c8 .redirect_hardirq+0x68/0x88
[c00000000fffbe60] c00000000008aec8 .handle_level_irq+0x13c/0x220
[c00000000fffbf00] c000000000032538 .spider_irq_cascade+0x98/0xec
[c00000000fffbf90] c000000000022280 .call_handle_irq+0x1c/0x2c
[c0000000025abea0] c00000000000c33c .do_IRQ+0xc8/0x17c
[c0000000025abf30] c00000000000444c hardware_interrupt_entry+0x18/0x4c
--- Exception: 501 (Hardware Interrupt) at c00000000000afec .raw_local_irq_restore+0x3c/0x40
[link register ] c0000000003baaac .preempt_schedule+0xac/0xd0
[c0000000025ac220] c0000000003baa88 .preempt_schedule+0x88/0xd0 (unreliable)
[c0000000025ac2b0] c000000000008bd8 do_work+0x38/0x5c
--- Exception: 501 (Hardware Interrupt) at c00000000000afec .raw_local_irq_restore+0x3c/0x40

<snip><snip><snip><snip><snip><snip><snip><snip><snip><snip><snip><snip><snip>

--- Exception: 501 (Hardware Interrupt) at c00000000000afec .raw_local_irq_restore+0x3c/0x40
[link register ] c0000000003baaac .preempt_schedule+0xac/0xd0
[c0000000025af320] c0000000003baa88 .preempt_schedule+0x88/0xd0 (unreliable)
[c0000000025af3b0] c000000000008bd8 do_work+0x38/0x5c
--- Exception: 501 (Hardware Interrupt) at c00000000002eb64 .__copy_tofrom_user+0x164/0x580
[link register ] c00000000002f26c .copy_to_user+0x34/0x50
[c0000000025af6a0] 00000000000005a8 (unreliable)
[c0000000025af710] c000000000318674 .memcpy_toiovec+0x58/0xc4
[c0000000025af7b0] c000000000318d4c .skb_copy_datagram_iovec+0x90/0x2d0
[c0000000025af870] c000000000349bf4 .tcp_recvmsg+0x534/0x8d8
[c0000000025af960] c000000000310fd8 .sock_common_recvmsg+0x5c/0x84
[c0000000025af9f0] c00000000030ea58 .sock_recvmsg+0x110/0x15c
[c0000000025afc00] c0000000003101b0 .sys_recvfrom+0xf0/0x174
[c0000000025afd90] c00000000032e20c .compat_sys_socketcall+0x178/0x214
[c0000000025afe30] c000000000008608 syscall_exit+0x0/0x40
--- Exception: c01 (System Call) at 000000000fe86df0
SP (ffe709c0) is in userspace
0:mon>
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Thanks in advance.
Signed-off-by: Tsutomu OWA <tsutomu.owa@xxxxxxxxxxxxx>
-- owa

--- mcount/arch/powerpc/kernel/entry_64.S 2007-05-15 18:46:38.000000000 +0900
+++ send/rt/arch/powerpc/kernel/entry_64.S 2007-05-15 15:44:40.000000000 +0900
@@ -559,14 +559,9 @@ do_work:
 	cmpdi	r0,0
 	crandc	eq,cr1*4+eq,eq
 	bne	restore
-	/* here we are preempting the current task */
 1:
-	li	r0,1
-	stb	r0,PACASOFTIRQEN(r13)
-	stb	r0,PACAHARDIRQEN(r13)
-	ori	r10,r10,MSR_EE
-	mtmsrd	r10,1		/* reenable interrupts */
-	bl	.preempt_schedule
+	/* preempt_schedule_irq() expects interrupts disabled. */
+	bl	.preempt_schedule_irq
 	mfmsr	r10
 	clrrdi	r9,r1,THREAD_SHIFT
 	rldicl	r10,r10,48,1	/* disable interrupts again */
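
For reference on the comment added above: preempt_schedule_irq() is meant to be
called from an interrupt-return path with interrupts hard-disabled, and it
re-enables them itself only after raising the preempt count, which is why the
explicit PACASOFTIRQEN/PACAHARDIRQEN stores and the mtmsrd can be dropped.
Roughly, paraphrased and simplified from the 2.6.21-era kernel/sched.c, not
verbatim:

asmlinkage void __sched preempt_schedule_irq(void)
{
	/* Callers must have interrupts hard-disabled and a zero preempt
	 * count; the real function checks this with a BUG_ON(). */
	do {
		add_preempt_count(PREEMPT_ACTIVE);	/* blocks nested preemption */
		local_irq_enable();
		schedule();
		local_irq_disable();
		sub_preempt_count(PREEMPT_ACTIVE);
	} while (need_resched());
}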