Re: [patch] voluntary-preempt-2.6.8.1-P7

From: K.R. Foley
Date: Sun Aug 22 2004 - 07:58:43 EST


Ingo Molnar wrote:
> * Lee Revell <rlrevell@xxxxxxxxxxx> wrote:
>
>
>>On Sat, 2004-08-21 at 20:06, K.R. Foley wrote:
>>
>>>I just posted a similar trace of ~4141 usec from P6 here:
>>>
>>>http://www.cybsft.com/testresults/2.6.8.1-P6/latency-trace1.txt
>>>
>>
>>This looks wrong:
>>
>>00000003 0.008ms (+0.001ms): dummy_socket_sock_rcv_skb (tcp_v4_rcv)
>>00000004 0.008ms (+0.000ms): tcp_v4_do_rcv (tcp_v4_rcv)
>>00000004 0.009ms (+0.000ms): tcp_rcv_established (tcp_v4_do_rcv)
>>00010004 3.998ms (+3.989ms): do_IRQ (tcp_rcv_established)
>>00010005 3.999ms (+0.000ms): mask_and_ack_8259A (do_IRQ)
>>00010005 4.001ms (+0.002ms): generic_redirect_hardirq (do_IRQ)
>>00010004 4.002ms (+0.000ms): generic_handle_IRQ_event (do_IRQ)
>>
>>Probably a false positive, Ingo would know better. What kind of
>>stress testing were you doing?
>
>
> indeed this looks suspect. Is this an SMP system?
>
> Ingo
>

Actually, no. It is an SMP-ready system, but with a single PII 450. As I
said in my reply to Lee, I am not sure that I completely trust the
results of this trace anyway.

I would like to know why you guys think this may be a false positive. Is
it just the extremely long latency? Or is there something else that
makes it look suspect?
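For what it's worth, here is how I read the delta column (this is just an
illustrative sketch I put together to convince myself, not the actual tracer
code, and the timestamps are made up to mimic the trace above): each entry
carries an absolute timestamp and the "(+x.xxxms)" figure is simply the
difference from the previous entry, so the whole ~3.989ms gap gets charged to
do_IRQ, the first thing traced after the gap, regardless of where the time
actually went. I'm guessing that is also why the SMP question came up, since
unsynchronized TSCs between CPUs could make such a delta bogus.

/*
 * Illustrative only -- not the actual latency tracer code.  Shows how
 * per-entry deltas in a trace like the one above are derived: the gap
 * is blamed on the first entry traced after it, whether the time was
 * really spent in the previous function, in untraced code, in an SMI,
 * or is an artifact of timestamps taken on different CPUs.
 */
#include <stdio.h>

struct trace_entry {
	unsigned long long ts_usec;	/* absolute timestamp, microseconds */
	const char *func;		/* function being entered */
	const char *caller;		/* its caller */
};

int main(void)
{
	/* Hypothetical timestamps mimicking the suspect trace above. */
	static const struct trace_entry trace[] = {
		{    8, "dummy_socket_sock_rcv_skb", "tcp_v4_rcv" },
		{    8, "tcp_v4_do_rcv",             "tcp_v4_rcv" },
		{    9, "tcp_rcv_established",       "tcp_v4_do_rcv" },
		{ 3998, "do_IRQ",                    "tcp_rcv_established" },
		{ 3999, "mask_and_ack_8259A",        "do_IRQ" },
	};
	unsigned long long prev = trace[0].ts_usec;
	size_t i;

	for (i = 0; i < sizeof(trace) / sizeof(trace[0]); i++) {
		/* Delta is charged to this entry, not to where the time went. */
		printf("%7.3fms (+%6.3fms): %s (%s)\n",
		       trace[i].ts_usec / 1000.0,
		       (trace[i].ts_usec - prev) / 1000.0,
		       trace[i].func, trace[i].caller);
		prev = trace[i].ts_usec;
	}
	return 0;
}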

By the way, I just posted two more traces: one that I caught last night
and one from this morning.

This is another one similar to the last, but with a much more reasonable latency:

http://www.cybsft.com/testresults/2.6.8.1-P7/2.6.8.1-P7-1.txt

And this one, from this morning, appears to be from updatedb running while
the tests were running. It's worth noting that this one appears to have
happened at about the same time today that the other ~4100+ usec one happened
yesterday. Also worth noting is that the system was probably swapping
pretty heavily when this occurred.

http://www.cybsft.com/testresults/2.6.8.1-P7/2.6.8.1-P7-2.txt

kr