Re: frequent lockups in 3.18rc4

From: Linus Torvalds
Date: Mon Nov 17 2014 - 16:22:19 EST

On Fri, Nov 14, 2014 at 5:59 PM, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> Judging by the code disassembly, it's the "csd_lock_wait(csd)" at the
> end.

Btw, looking at this, I grew really suspicious of this code in csd_unlock():

WARN_ON((csd->flags & CSD_FLAG_WAIT) && !(csd->flags & CSD_FLAG_LOCK));

because that makes no sense at all. It basically removes a sanity
check, yet that sanity check makes a hell of a lot of sense. Unlocking
a CSD that is not locked is *wrong*.

The crazy code comes from commit c84a83e2aaab ("smp: don't warn
about csd->flags having CSD_FLAG_LOCK cleared for !wait") by Jens, but
the explanation and the code are pure crap.

There is no way in hell that it is ever correct to unlock an entry
that isn't locked, so that whole CSD_FLAG_WAIT thing is buggy as hell.

The explanation in commit c84a83e2aaab says that "blk-mq reuses the
request potentially immediately" and claims that that is somehow ok,
but that's utter BS. Even if you don't ever wait for it, the CSD lock
bit fundamentally also protects the "csd->llist" pointer. So what that
commit actually does is to just remove a safety check, and do so in a
very unsafe manner. And apparently blk-mq re-uses something THAT IS
STILL ACTIVELY IN USE. That's just horrible.

Now, I think we might do this differently, by doing the "csd_unlock()"
after we have loaded everything from the csd, but *before* actually
calling the callback function. That would seem to be equivalent
(interrupts are disabled, so this will not result in the func()
possibly being called twice), more efficient, _and_ not remove a useful
sanity check.

Hmm? Completely untested patch attached. Jens, does this still work for you?

Am I missing something?

kernel/smp.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index f38a1e692259..fbeb9827bdae 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -19,7 +19,6 @@
 
 enum {
 	CSD_FLAG_LOCK		= 0x01,
-	CSD_FLAG_WAIT		= 0x02,
 };
 
 struct call_function_data {
@@ -126,7 +125,7 @@ static void csd_lock(struct call_single_data *csd)
 
 static void csd_unlock(struct call_single_data *csd)
 {
-	WARN_ON((csd->flags & CSD_FLAG_WAIT) && !(csd->flags & CSD_FLAG_LOCK));
+	WARN_ON(!(csd->flags & CSD_FLAG_LOCK));
 
 	/*
 	 * ensure we're all done before releasing data:
@@ -173,9 +172,6 @@ static int generic_exec_single(int cpu, struct call_single_data *csd,
 	csd->func = func;
 	csd->info = info;
 
-	if (wait)
-		csd->flags |= CSD_FLAG_WAIT;
-
 	/*
 	 * The list addition should be visible before sending the IPI
 	 * handler locks the list to pull the entry off it because of
@@ -250,8 +246,11 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
 
 	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
-		csd->func(csd->info);
+		smp_call_func_t func = csd->func;
+		void *info = csd->info;
+
 		csd_unlock(csd);
+		func(info);