Re: [RFC PATCH] mm: silence soft lockups from unlock_page

From: Linus Torvalds
Date: Wed Jul 22 2020 - 14:29:41 EST


On Tue, Jul 21, 2020 at 8:33 AM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> More likely, it's actually *caused* by that commit 11a19c7b099f, and
> what might be happening is that other CPU's are just adding new
> waiters to the list *while* we're waking things up, because somebody
> else already got the page lock again.
>
> Humor me.. Does something like this work instead?

I went back and looked at this, because it bothered me.

And I'm no longer convinced it can possibly make a difference.

Why?

Because __wake_up_locked_key_bookmark() just calls __wake_up_common(),
and that one checks the return value of the wakeup function:

        ret = curr->func(curr, mode, wake_flags, key);
        if (ret < 0)
                break;

and will not add the bookmark back to the list if this triggers.
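
For context, the walk in __wake_up_common() looks roughly like this
(paraphrased from kernel/sched/wait.c, so don't hold me to the exact
details). The point is that the bookmark only gets re-inserted at the
bottom of the loop body, after the "ret < 0" break:

        /* Paraphrased sketch of the __wake_up_common() walk, not verbatim */
        list_for_each_entry_safe_from(curr, next, &wq_head->head, entry) {
                unsigned flags = curr->flags;
                int ret;

                if (flags & WQ_FLAG_BOOKMARK)
                        continue;

                ret = curr->func(curr, mode, wake_flags, key);
                if (ret < 0)
                        break;          /* wakeup function said "stop walking" */
                if (ret && (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
                        break;

                /* Only if we get this far is the bookmark re-added */
                if (bookmark && (++cnt > WAITQUEUE_WALK_BREAK_CNT) &&
                                (&next->entry != &wq_head->head)) {
                        bookmark->flags = WQ_FLAG_BOOKMARK;
                        list_add_tail(&bookmark->entry, &next->entry);
                        break;
                }
        }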

And the wakeup function does that same "stop walking" thing:

        if (test_bit(key->bit_nr, &key->page->flags))
                return -1;

So if somebody else took the page lock, I think we should already have
stopped walking the list.

Of course, the page waitqueue hash table is very small. It's only 256
entries. So maybe the list is basically full of aliased entries for
another page entirely, one that is being hammered by that load, and
we're just unlucky.
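
As a reminder, the bucketing in mm/filemap.c is roughly this
(paraphrased, so the details are approximate): every page in the
system hashes down to one of those 256 wait queues:

        #define PAGE_WAIT_TABLE_BITS 8
        #define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
        static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;

        static wait_queue_head_t *page_waitqueue(struct page *page)
        {
                return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
        }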

Because the wakeup function only does that "stop walking" thing if the
page and bit in the key actually matched. Wait queue entries for
another page that just hashes to the same bucket (or even for the same
page, but a different bit in the page flags) return 0 instead, so the
walk keeps going right past them.
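
In other words, wake_page_function() is roughly this (again paraphrased
from mm/filemap.c, not the exact source), and only a matching page and
bit can ever return the -1 that stops the walk:

        static int wake_page_function(wait_queue_entry_t *wait, unsigned mode,
                        int sync, void *arg)
        {
                struct wait_page_key *key = arg;
                struct wait_page_queue *wait_page
                        = container_of(wait, struct wait_page_queue, wait);

                if (wait_page->page != key->page)
                        return 0;       /* alias: keep walking */
                key->page_match = 1;

                if (wait_page->bit_nr != key->bit_nr)
                        return 0;       /* same page, different bit: keep walking */

                /* Stop walking if it's locked again */
                if (test_bit(key->bit_nr, &key->page->flags))
                        return -1;

                return autoremove_wake_function(wait, mode, sync, key);
        }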

Hmm.

I still can't see how you'd get so many entries (without re-adding
them) that you'd hit the softlockup timer.

So I wonder if maybe we really do hit the "aliasing with a really hot
page that gets re-added in the page wait table" case, but it seems a
bit contrived.

So I think that patch is still worth testing, but I'm not quite as
hopeful about it as I was originally.

I do wonder if we should make that PAGE_WAIT_TABLE_SIZE be larger. 256
entries seems potentially ridiculously small. Aliasing not only
increases the waitqueue length, it also potentially causes more
contention on the waitqueue spinlock, which is probably already seeing
some false sharing on a cacheline basis: the array of waitqueue heads
is fairly dense, because wait_queue_head is intentionally small unless
you have lots of spinlock debugging options enabled.
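
Purely as a back-of-the-envelope sketch (my numbers, not a proposal):
bumping the table from 8 to, say, 12 bits would look something like

        /*
         * Hypothetical only: 4096 buckets instead of 256. Without spinlock
         * debugging a wait_queue_head_t is ~24 bytes, so this is roughly
         * 96kB of static table instead of ~6kB.
         */
        #define PAGE_WAIT_TABLE_BITS 12
        #define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
        static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;

which would cut down both the aliasing and the cacheline sharing, at
the cost of a bigger (and mostly idle) static array.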

That hashed wait-queue size is an independent issue, though. But it
might be part of "some loads can get into some really nasty behavior
in corner cases".

Linus