Re: frequent lockups in 3.18rc4
From: John Stultz
Date: Mon Jan 05 2015 - 21:05:37 EST
On Mon, Jan 5, 2015 at 5:25 PM, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Mon, Jan 5, 2015 at 5:17 PM, John Stultz <john.stultz@xxxxxxxxxx> wrote:
>>
>> Anyway, it may be worth keeping the 50% margin (and dropping the 12%
>> reduction to simplify things)
>
> Again, the 50% margin is only on the multiplication overflow. Not on the mask.
Right, but we calculate the mult value based on the interval the mask
can cover (or 10 minutes, whichever is shorter).
So when we then go back and calculate max_cycles/max_idle_ns using
that mult, we end up with a value smaller than the mask. The
scheduler shouldn't push idle times out beyond that, and the debug
logic in my patch should be able to catch strangely large values.
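To make that concrete, the setup side is roughly this (a simplified
sketch of the 3.18-era clocksource_max_deferment(), ignoring maxadj
and with the struct accesses flattened into plain parameters):

static u64 calc_max_idle_ns(u32 mult, u32 shift, u64 mask)
{
	u64 max_cycles, max_nsecs;

	/*
	 * 50% margin against multiplication overflow: one bit less
	 * than what 2^63 / mult would allow.
	 */
	max_cycles = 1ULL << (63 - (ilog2(mult) + 1));

	/* Never defer further than the counter can actually count. */
	max_cycles = min(max_cycles, mask);

	max_nsecs = (max_cycles * mult) >> shift;

	/* The extra ~12% reduction we're talking about dropping. */
	return max_nsecs - (max_nsecs >> 3);
}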
> So it won't do anything at all for the case we actually care about,
> namely a broken HPET, afaik.
Yea, the case my code doesn't catch that yours did is slightly broken
clocksources (I'm thinking two CPUs with virtual HPETs embedded in
them that are slightly off from each other), where you can get
negative deltas right after the update. In that case the capping on
read is really needed, since by the next update the stale value has
grown large enough to look like a reasonable offset. The TSC has a
similar issue, but there it's easier to check for negative values,
because the TSC won't reasonably ever overflow.
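So the read-side cap I'm testing amounts to something like this
(illustrative sketch only; tk->max_cycles is a made-up field name):

	cycle_now = tk->clock->read(tk->clock);
	delta = (cycle_now - tk->cycle_last) & tk->clock->mask;

	/*
	 * A counter that reads slightly behind cycle_last shows up
	 * here as a huge mask-wrapped delta, not a negative one. Cap
	 * it so one bad read can't inject a bogus offset.
	 */
	if (delta > tk->max_cycles)
		delta = tk->max_cycles;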
>
> I'd much rather limit to 50% of the mask too.
Ok, I'll try to rework the code to make this choice explicit and
clear.
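Concretely, I'd expect that to be a one-liner on top of the setup
calculation above (sketch):

	/* Only ever use half the counter's range, so a wrapped or
	 * slightly negative delta can't masquerade as a valid one. */
	max_cycles = min(max_cycles, mask >> 1);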
> Also, why do we actually play games with ilog2 for that overflow
> calculation? It seems pointless. This is for the setup code, doing a
> real division there would seem to be a whole lot more straightforward,
> and not need that big comment. And there's no performance issue. Am I
> missing something?
I feel like there was a time when this may have been called by some of
the clocksource code when a clocksource changed frequency (I think
over suspend/resume), but I'm not seeing it in the current source. So
yea, likely something to simplify.
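And since it only runs at setup, a plain division should do (sketch,
using do_div() so it still works on 32-bit):

	u64 max_cycles = ULLONG_MAX;

	/* Real division instead of the ilog2 estimate. */
	do_div(max_cycles, mult);

	/* Keep the 50% overflow margin. */
	max_cycles >>= 1;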
>> I've also got a capping patch that I'm testing that keeps time reads
>> from passing that interval. The only thing I'm really cautious about
>> with that change is that we have to make sure the hrtimer that
>> triggers update_wall_time is always set to expire within that cap (I
>> need to review it again) or else we'll hang ourselves.
>
> Yeah, that thing is fragile. And quite possibly part of the problem.
"Time is a flat circle..." and thus unfortunately requires some
circular logic. :)
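Less cryptically: the nohz idle code already bounds its sleep length
via timekeeping_max_deferment(), so the constraint is just that the
read-side cap agrees with it, roughly:

	/* In the idle path: never sleep past what a capped read can
	 * still account for before the next update_wall_time. */
	time_delta = min(time_delta, timekeeping_max_deferment());

As long as update_wall_time always runs inside that window, the cap
can't wedge us.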
thanks
-john