Hiya. I just had an initial look at fair.c--
There seems to be a 10ms averager in there?
You are aware that that means you work on delayed values?
Isn't that counter to the principle of fair sharing?
That means short bursts of CPU use will be filtered out and given less CPU time.
Starting applications won't have their CPU usage registered before ~5 ms, which is quite a bit on modern machines. That is, if you use a linear-phase filter; I don't know what kind of averager you use. The best would of course be a minimum-phase Gaussian averager, which might be overkill. At least use a one-pole IIR: buf = buf + (-buf + in) * cut; One-pole IIRs also have a better frequency response.
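
To be concrete, here is a minimal sketch of the one-pole IIR averager I mean. The names, the double type, and the sample values are my own illustration, not anything taken from fair.c:

    #include <stdio.h>

    /* One-pole IIR averager: buf += (in - buf) * cut. */
    struct onepole {
        double buf;  /* running average (filter state) */
        double cut;  /* smoothing coefficient in (0, 1]; larger = faster tracking */
    };

    static double onepole_update(struct onepole *f, double in)
    {
        f->buf += (in - f->buf) * f->cut;
        return f->buf;
    }

    int main(void)
    {
        struct onepole f = { .buf = 0.0, .cut = 0.25 };
        /* Feed a short burst of "CPU use": it shows up in the average
         * immediately instead of being delayed by a whole window. */
        double samples[] = { 1, 1, 1, 0, 0, 0, 0, 0 };
        for (int i = 0; i < 8; i++)
            printf("in=%.0f avg=%.3f\n", samples[i],
                   onepole_update(&f, samples[i]));
        return 0;
    }

The point is that the most recent sample always contributes right away, unlike a boxcar average where new activity only slowly displaces the old window.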
When you are working with low latencies, wouldn't it be better if such things were tuned to the target latency? I think few care about latency beyond 0.2 ms, so the filter should be set to at most, say, 0.4 ms.
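
For what it's worth, tuning the coefficient to a target time constant is straightforward; this little sketch assumes an update interval of 0.1 ms, which is just an example, not anything the scheduler actually uses:

    #include <math.h>
    #include <stdio.h>

    /* One-pole coefficient for a target time constant tau_s,
     * given the interval dt_s between filter updates. */
    static double onepole_coeff(double tau_s, double dt_s)
    {
        return 1.0 - exp(-dt_s / tau_s);
    }

    int main(void)
    {
        /* tau = 0.4 ms, updates every 0.1 ms -> cut is about 0.22 */
        printf("cut = %.3f\n", onepole_coeff(0.0004, 0.0001));
        return 0;
    }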
Why would you want to filter CPU usage at all, really?
Peace Be With You.
(please CC me.)