Re: [patch] Real-Time Preemption, -RT-2.6.9-mm1-V0
From: K.R. Foley
Date: Tue Oct 26 2004 - 22:39:50 EST
Ingo Molnar wrote:
> * K.R. Foley <kr@xxxxxxxxxx> wrote:
>
>> Several things in regard to V0.2:
>>
>> 1) Interactive responsiveness seems to be noticeably sluggish at times
>> on all three of the systems I have tested this on.
>>
>> 2) My 450MHz UP system is definitely the worst by far. Scrolling
>> through the syslog in a telnet session produces pauses every few
>> seconds, each lasting about a second, and that is while it is still
>> responding. These problems seem to be network related, but there are
>> no indications of what the problem is. This system also at times
>> simply stops responding to network requests.
>>
>> 3) Both of the SMP systems are lacking the snappy responsiveness in X
>> that I have become accustomed to with previous patches, but the 2.6GHz
>> Xeon (w/HT) is worse than the 933MHz Xeon. Again, no indications of
>> problems in the logs.
>>
>> 4) Using amlat to run the RTC at 1kHz will kill any of these systems
>> very quickly.
>
> could you try this with -V0.3 too? I believe most of these problems
> should be solved.
>
> 	Ingo
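
(For context on item 4 above: amlat drives the RTC periodic interrupt at
roughly 1kHz from user space. The loop below is only a minimal sketch of
that kind of load, using the standard /dev/rtc ioctls; it is my own
illustration, not amlat's actual source.)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
	unsigned long data;
	int fd = open("/dev/rtc", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/rtc");
		return 1;
	}

	/* RTC rates are powers of two; 1024Hz is the closest to 1kHz.
	   Needs root (or a raised /proc/sys/dev/rtc/max-user-freq). */
	if (ioctl(fd, RTC_IRQP_SET, 1024) < 0)
		perror("RTC_IRQP_SET");
	if (ioctl(fd, RTC_PIE_ON, 0) < 0)
		perror("RTC_PIE_ON");

	for (;;) {
		/* blocks until the next RTC interrupt fires;
		   a latency tester would timestamp here */
		if (read(fd, &data, sizeof(data)) < 0) {
			perror("read");
			break;
		}
	}

	ioctl(fd, RTC_PIE_OFF, 0);
	close(fd);
	return 0;
}
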
I've repeated the above on the dual 933 Xeon:
There are still problems with interactive behavior. Running KDE, with top
in an xterm, I get some pauses while scrolling through the menus. When the
pauses occur I see kdeinit hit the top of the list, sometimes consuming 90%
or more of a CPU, while idle usage drops to 30-40%. I do see some latency
traces (not really high ones) in the log that were generated by kdeinit,
but I think they were generated before these pauses occurred, most likely
while logging in.
Running amlat still hard locks the system. The last time this happened I
got this in the log:
Oct 26 21:43:56 porky kernel: BUG: sleeping function called from invalid context amlat(3963) at kernel/mutex.c:28
Oct 26 21:43:56 porky kernel: in_atomic():1 [00000001], irqs_disabled():1
Oct 26 21:43:56 porky kernel: [<c011c7da>] __might_sleep+0xca/0xe0 (12)
Oct 26 21:43:56 porky kernel: [<c0137d89>] _mutex_lock+0x39/0x50 (36)
Oct 26 21:43:56 porky kernel: [<c0137df6>] _mutex_lock_irqsave+0x16/0x20 (24)
Oct 26 21:43:56 porky kernel: [<c012977d>] __mod_timer+0x4d/0x1f0 (12)
Oct 26 21:43:56 porky kernel: [<c01f6535>] rtc_do_ioctl+0x185/0x970 (44)
Oct 26 21:43:56 porky kernel: [<c013838d>] __mcount+0x1d/0x30 (136)
Oct 26 21:43:56 porky kernel: [<c01f6d2b>] rtc_ioctl+0xb/0x30 (4)
Oct 26 21:43:56 porky kernel: [<c0179367>] sys_ioctl+0xe7/0x250 (4)
Oct 26 21:43:56 porky kernel: [<c01131f8>] mcount+0x14/0x18 (8)
Oct 26 21:43:56 porky kernel: [<c01f6d2b>] rtc_ioctl+0xb/0x30 (20)
Oct 26 21:43:56 porky kernel: [<c0179367>] sys_ioctl+0xe7/0x250 (20)
Oct 26 21:43:56 porky kernel: [<c0106739>] sysenter_past_esp+0x52/0x71 (48)
Oct 26 21:43:56 porky kernel: preempt count: 00000002
Oct 26 21:43:56 porky kernel: . 2-level deep critical section nesting:
Oct 26 21:43:56 porky kernel: .. entry 1: _spin_lock_irqsave+0x22/0x80 [<c02c71c2>] / (rtc_do_ioctl+0x158/0x970 [<c01f6508>])
Oct 26 21:43:56 porky kernel: .. entry 2: print_traces+0x1d/0x60 [<c01394bd>] / (dump_stack+0x23/0x30 [<c0107613>])
Oct 26 21:43:56 porky kernel:
Oct 26 21:43:56 porky kernel: BUG: scheduling while atomic: IRQ 8/0x00000001/672
Oct 26 21:43:56 porky kernel: caller is schedule+0x30/0xe0
Oct 26 21:43:57 porky kernel: [<c02c58c1>] __schedule+0x771/0x7d0 (12)
Oct 26 21:43:57 porky kernel: [<c02c5950>] schedule+0x30/0xe0 (8)
Oct 26 21:43:57 porky kernel: [<c013838d>] __mcount+0x1d/0x30 (60)
Oct 26 21:43:57 porky kernel: [<c02c592e>] schedule+0xe/0xe0 (4)
Oct 26 21:43:57 porky kernel: [<c02c6c4d>] down_write_mutex+0x12d/0x1e0 (4)
Oct 26 21:43:57 porky kernel: [<c01131f8>] mcount+0x14/0x18 (8)
Oct 26 21:43:57 porky kernel: [<c02c5950>] schedule+0x30/0xe0 (20)
Oct 26 21:43:57 porky kernel: [<c01131f8>] mcount+0x14/0x18 (4)
Oct 26 21:43:57 porky kernel: [<c02c74ea>] _spin_unlock+0x1a/0x40 (20)
Oct 26 21:43:57 porky kernel: [<c02c6c4d>] down_write_mutex+0x12d/0x1e0 (12)
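
My reading of the first trace (a rough sketch only, written with today's
spelling of the lock primitives rather than the actual V0.3 rtc/timer
code): rtc_do_ioctl() takes a raw spinlock with interrupts disabled, and
__mod_timer() then tries to acquire a lock that the -RT patch has turned
into a sleeping mutex, which is what trips __might_sleep():

#include <linux/spinlock.h>
#include <linux/mutex.h>

static DEFINE_RAW_SPINLOCK(rtc_lock);  /* stays a real spinlock under -RT */
static DEFINE_MUTEX(timer_lock);       /* converted lock: may sleep under -RT */

static void rtc_arm_periodic(void)     /* hypothetical helper, for illustration */
{
	unsigned long flags;

	/* atomic context begins: preemption and local interrupts are off */
	raw_spin_lock_irqsave(&rtc_lock, flags);

	/*
	 * Anything that can sleep is illegal from here on.  In the trace
	 * this is __mod_timer() -> _mutex_lock_irqsave(), which calls
	 * __might_sleep() and prints "BUG: sleeping function called from
	 * invalid context"; the subsequent schedule() then triggers
	 * "BUG: scheduling while atomic".
	 */
	mutex_lock(&timer_lock);
	/* ... re-arm the timer ... */
	mutex_unlock(&timer_lock);

	raw_spin_unlock_irqrestore(&rtc_lock, flags);
}
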
Working on booting the 450 right now.
kr