Re: [patch] Real-Time Preemption, -RT-2.6.10-rc1-mm3-V0.7.25-1

From: Mark_H_Johnson
Date: Mon Nov 15 2004 - 13:42:39 EST

Didn't see an announcement for V0.7.26, so I'll post this summary
under this title.

Built today with V0.7.26-4 without any problems. The system booted up
and telinit 5 was uneventful as well.

Ran two series of tests with latencytest and my stress tests with
the following results:

[1] Appear to have a new symptom of 200 usec delays at raw_read_unlock,
which doesn't make any sense to me. I have included a latency trace
at the end of this message with an example. An occasional 100 usec
hit I understand (disk DMA), but I don't recall seeing this symptom
before.

[2] Still get the symptoms with truncated trace output and bad
ping responses. Refer to my previous messages for examples.

[3] The logging script (sleep for 5 seconds, record data if the sleep
took over 10 seconds) was triggered about 30 times in an hour of
testing. None have the huge load average values reported last time,
but several have 1 minute load averages above 15 (expected 6-8).

[4] All of the tests have bursts of long application level delays.
I'll be running another test program to see if I can find anything
with the user level tracing. Disk activity seems to make it worse
but all the tests had at least one CPU delay over a millisecond.
There seems to be a "short" (well > 500 usec) delay related to
disk reads and a longer one for disk writes.

[5] After testing was done, the system had a major "time shift",
as noted in the system log.

Nov 15 12:33:55 dws77 ntpd[2359]: synchronized to, stratum=3
Nov 15 12:33:57 dws77 ntpd[2359]: synchronized to, stratum=2
Nov 15 12:33:36 dws77 ntpd[2359]: time reset -21.466037 s
Nov 15 12:33:36 dws77 ntpd[2359]: frequency error -512 PPM exceeds
tolerance 500 PPM

No crashes nor any major stability problems.

--- 200 usec latency example ---
preemption latency trace v1.0.7 on 2.6.10-rc1-mm3-RT-V0.7.26-4
latency: 206 us, entries: 12 (12) | [VP:0 KP:1 SP:1 HP:1 #CPUS:2]
| task: kjournald/1209, uid:0 nice:0 policy:0 rt_prio:0
=> started at: __down_mutex+0x3f/0x300 <c032d9af>
=> ended at: __down_mutex+0x1a6/0x300 <c032db16>
1209 80000000 0.000ms (+0.002ms): __down_mutex (__spin_lock)
1209 80000000 0.002ms (+0.001ms): _raw_spin_lock (__down_mutex)
1209 80000000 0.004ms (+0.000ms): _raw_spin_lock (__down_mutex)
1209 80000000 0.004ms (+0.000ms): do_nmi (__down_mutex)
1209 80000000 0.005ms (+0.000ms): do_nmi (mcount)
1209 80000000 0.005ms (+0.000ms): do_nmi (<00200286>)
1209 80000000 0.006ms (+0.000ms): profile_hook (profile_tick)
1209 80000000 0.006ms (+0.000ms): _raw_read_lock (profile_hook)
1209 80000000 0.007ms (+0.196ms): _raw_read_unlock (profile_tick)
1209 80000000 0.204ms (+0.001ms): set_new_owner (__down_mutex)
1209 80000000 0.205ms (+0.000ms): _raw_spin_unlock (__down_mutex)
1209 80000000 0.205ms (+0.000ms): _raw_spin_unlock (__down_mutex)
