Re: [PATCH 1/2] sched/wait: Break up long wake list walk

From: Linus Torvalds
Date: Mon Aug 14 2017 - 22:52:22 EST


On Mon, Aug 14, 2017 at 7:27 PM, Andi Kleen <ak@xxxxxxxxxxxxxxx> wrote:
>
> We could try it and it may even help in this case and it may
> be a good idea in any case on such a system, but:
>
> - Even with a large hash table it might be that by chance all CPUs
> will be queued up on the same page
> - There are a lot of other wait queues in the kernel and they all
> could run into a similar problem
> - I suspect it's even possible to construct it from user space
> as a kind of DoS attack

Maybe. Which is why I didn't NAK the patch outright.

But I don't think it's the solution for the scalability issue you guys
found. It's just a workaround, and it's likely a bad one at that.

> Now in one case (on a smaller system) we debugged we had
>
> - 4S system with 208 logical threads
> - during the test the wait queue length was 3700 entries.
> - the last CPUs queued had to wait roughly 0.8s
>
> This gives a budget of roughly 1us per wake up.

I'm not at all convinced that follows.

When bad scaling happens, you often end up hitting quadratic (or
worse) behavior. So if you are able to fix the scaling by some fixed
amount, it's possible that almost _all_ the problems just go away.

The real issue is that "3700 entries" part. What was it that actually
triggered them? In particular, if it's just a hashing issue, and we
can trivially just make the hash table be bigger (256 entries is
*tiny*) then the whole thing goes away.

Which is why I really want to hear what happens if you just change
PAGE_WAIT_TABLE_BITS to 16. The right fix would be to just make it
scale by memory, but before we even do that, let's just look at what
happens when you increase the size the stupid way.

Maybe those 3700 entries will just shrink down to 14 entries because
the hash just works fine and 256 entries was just much much too small
when you have hundreds of thousands of threads or whatever

But it is *also* possible that it's actually all waiting on the exact
same page, and there's some way to do a thundering herd on the page
lock bit, for example. But then it would be really good to hear what
it is that triggers that.

The thing is, the reason we perform well on many loads in the kernel
is that I have *always* pushed back against bad workarounds.

We do *not* do lock back-off in our locks, for example, because I told
people that lock contention gets fixed by not contending, not by
trying to act better when things have already become bad.

This is the same issue. We don't "fix" things by papering over some
symptom. We try to fix the _actual_ underlying problem. Maybe there is
some caller that can simply be rewritten. Maybe we can do other tricks
than just make the wait tables bigger. But we should not say "3700
entries is ok, let's just make that sh*t be interruptible".

That is what the patch does now, and that is why I dislike the patch.

So I _am_ NAK'ing the patch if nobody is willing to even try alternatives.

Because a band-aid is ok for "some theoretical worst-case behavior".

But a band-aid is *not* ok for "we can't even be bothered to try to
figure out the right thing, so we're just adding this hack and leaving
it".

Linus