Re: Crashes with 874bbfe600a6 in 3.18.25
From: Thomas Gleixner
Date: Wed Feb 03 2016 - 13:47:37 EST
On Wed, 3 Feb 2016, Tejun Heo wrote:
> On Wed, Feb 03, 2016 at 01:28:56PM +0100, Michal Hocko wrote:
> > > The CPU was 168, and that one was offlined in the meantime. So
> > > __queue_work fails at:
> > >     if (!(wq->flags & WQ_UNBOUND))
> > >             pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
> > >     else
> > >             pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
> > >             ^^^                           ^^^^ NODE is -1
> > >              \ pwq is NULL
> > >
> > >     if (last_pool && last_pool != pwq->pool) {          <--- BOOM
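For reference, unbound_pwq_by_node() in those kernels is nothing more
than a per-node table lookup, roughly (sketch abbreviated from the
3.18-era workqueue code, comments are mine):

static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
                                                  int node)
{
        assert_rcu_or_wq_mutex(wq);
        /* numa_pwq_tbl[] is indexed by the node id, so node == -1 reads
         * outside of the table and the resulting pwq is NULL or garbage,
         * which the caller then dereferences via pwq->pool. */
        return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}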
>
> So, the proper fix here is keeping the cpu <-> node mapping stable across
> cpu on/offlining, which has been worked on for a long time now. The
> patchset is pending and it fixes other issues too.
>
> > So I think 874bbfe600a6 is really bogus. It should be reverted. We
> > already have a proper fix for vmstat, 176bed1de5bf ("vmstat: explicitly
> > schedule per-cpu work on the CPU we need it to run on"), which should
> > be used for the stable trees as a replacement.
>
> It's not bogus. We can't flip a property that has been guaranteed
> without any provision for verification. Why do you think vmstat blew
> up in the first place? vmstat would be the canary case as it runs
> frequently on all systems. It's exactly the sign that we can't break
> this guarantee willy-nilly.
You're in complete failure denial mode once again.
Fact is:
That patch breaks stuff because there is no stable cpu -> node mapping
across cpu on/offlining. As a result this ends up calling
unbound_pwq_by_node() with node -1.
The reason why you need that work->cpu assignment might be legitimate, but
that does not justify exposing systems to a lurking out-of-bounds access
which results in a NULL pointer dereference.
As long as cpu_to_node(cpu) can return -1, we need a sanity check there. And
we need that now and not at some point in the future when the patches
establishing a stable cpu -> node mapping are finished.
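A minimal sketch of what I mean (whether the fallback is the default pwq
or the node of the queueing CPU is a detail, and this is not a tested
patch):

static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
                                                  int node)
{
        assert_rcu_or_wq_mutex(wq);
        /* cpu_to_node() can hand us -1 for a CPU which went away while a
         * delayed work item was still pending.  Do not index the table
         * with that; fall back to the default pwq instead. */
        if (unlikely(node == NUMA_NO_NODE))
                return wq->dfl_pwq;
        return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}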
Stop arguing around a bug which really exists and was exposed by this patch.
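As far as the 3.18 stable tree is concerned, the vmstat side does not need
874bbfe600a6 at all. The fix Michal points at boils down to naming the
target CPU explicitly when the per-cpu work is (re)armed, i.e. something
along the lines of (paraphrased, not a verbatim backport of 176bed1de5bf):

        /* queue the per-cpu vmstat work on the CPU it is meant for,
         * instead of on whichever CPU happens to queue it */
        schedule_delayed_work_on(cpu, &per_cpu(vmstat_work, cpu), 0);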
Thanks,
tglx