Re: scale sysctl_sched_shares_ratelimit with nr_cpus

From: Ingo Molnar
Date: Mon Aug 18 2008 - 04:42:24 EST



* Zhang, Yanmin <yanmin.zhang@xxxxxxxxx> wrote:

> >>Does a scheduler trace show anything about why that drop happens? Do
> >>something like this to trace the scheduler:
> >>
> >>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
> >>
> >> echo 1 > /debug/tracing/tracing_cpumask
> >> echo sched_switch > /debug/tracing/current_tracer
> >> cat /debug/tracing/trace_pipe > trace.txt
> [YM] Thanks for the pointer. I collected the data and didn't find
> anything abnormal except the waker pid.
>
> Receiver-197-13665 [00] 1369.966423: 13665:120:R + 13607:120:S
> Receiver-197-13665 [00] 1369.966440: 13665:120:R + 13611:120:S
> Receiver-197-13665 [00] 1369.966458: 13665:120:R + 13615:120:S
> Receiver-197-13665 [00] 1369.966463: 13665:120:R + 13619:120:S
> Receiver-197-13665 [00] 1369.966466: 13665:120:R + 13623:120:S
> Receiver-197-13665 [00] 1369.966469: 13665:120:R + 13627:120:S
> Receiver-197-13665 [00] 1369.966475: 13665:120:R + 13631:120:S
> Receiver-197-13665 [00] 1369.966480: 13665:120:R + 13635:120:S
> Receiver-197-13665 [00] 1369.966485: 13665:120:R + 13639:120:S
> Receiver-197-13665 [00] 1369.966495: 13665:120:R + 13643:120:S
> Receiver-197-13665 [00] 1369.966507: 13871:120:R + 13647:120:S
> The waker pid above is 13871 while the current pid is 13665. I found
> lots of entries with such a mismatch.
>
> Receiver-197-13665 [00] 1369.966513: 13465:120:R + 13651:120:S
> Receiver-197-13665 [00] 1369.966516: 13665:120:R + 13655:120:S
> Receiver-197-13665 [00] 1369.966521: 13665:120:R + 13659:120:S
> Receiver-197-13665 [00] 1369.966530: 13665:120:R + 13667:120:S
> Receiver-197-13665 [00] 1369.966544: 13883:120:R + 13663:120:S
> Receiver-197-13665 [00] 1369.966549: 13665:120:R ==> 13667:120:R
> Sender-140-13667 [00] 1369.966573: 13351:120:R + 13668:120:S
> Sender-140-13667 [00] 1369.966578: 13667:120:R ==> 13659:120:R
>
>
> BTW, I analyzed the schedstat data and found that wake_affine and
> load_balance_newidle seem abnormal: 2.6.27-rc shows more task pulls. I
> set CONFIG_GROUP_SCHED=n for the above testing.
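
( side note, decoding the sched_switch format above, in case it helps
  reading the log: each entry is

   comm-pid           [cpu] timestamp:   pid:prio:state  op  pid:prio:state
   Receiver-197-13665 [00]  1369.966423: 13665:120:R     +   13607:120:S

  where '+' marks a wakeup [left side wakes up the right side], '==>'
  marks the actual context switch, and prio 120 is the default nice-0
  priority. )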

hm, does this mean there's too much idle time during the testrun,
because we don't load-balance aggressively enough?
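
to quantify that, something along these lines could work (a rough
sketch - mpstat comes from the sysstat package, ./run-testcase is a
placeholder for your actual workload, and the /proc/schedstat field
layout depends on its version header, so it's safer to diff the raw
snapshots than to hardcode field positions):

  cat /proc/schedstat > schedstat.before   # per-domain lb_*/ttwu_* counters
  mpstat -P ALL 1 > mpstat.txt &           # per-cpu idle% while the test runs
  ./run-testcase
  kill %1
  cat /proc/schedstat > schedstat.after

diffing the NEWLY_IDLE lb_* counters between the two snapshots should
show whether newidle balancing really pulls more tasks on 2.6.27-rc,
and the mpstat log will show whether cpus actually go idle meanwhile.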

Ingo