Re: workqueue, pci: INFO: possible recursive locking detected

From: Lai Jiangshan
Date: Mon Jul 22 2013 - 21:19:37 EST


On 07/23/2013 05:32 AM, Tejun Heo wrote:
> On Mon, Jul 22, 2013 at 07:52:34PM +0800, Lai Jiangshan wrote:
>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>> index f02c4a4..b021a45 100644
>> --- a/kernel/workqueue.c
>> +++ b/kernel/workqueue.c
>> @@ -4731,6 +4731,7 @@ struct work_for_cpu {
>>  	long (*fn)(void *);
>>  	void *arg;
>>  	long ret;
>> +	struct completion done;
>>  };
>>
>>  static void work_for_cpu_fn(struct work_struct *work)
>> @@ -4738,6 +4739,7 @@ static void work_for_cpu_fn(struct work_struct *work)
>>  	struct work_for_cpu *wfc = container_of(work, struct work_for_cpu, work);
>>
>>  	wfc->ret = wfc->fn(wfc->arg);
>> +	complete(&wfc->done);
>>  }
>>
>>  /**
>> @@ -4755,8 +4757,9 @@ long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
>>  	struct work_for_cpu wfc = { .fn = fn, .arg = arg };
>>
>>  	INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
>> +	init_completion(&wfc.done);
>>  	schedule_work_on(cpu, &wfc.work);
>> -	flush_work(&wfc.work);
>> +	wait_for_completion(&wfc.done);
>
> Hmmm... it's kinda nasty. Given how infrequently work_on_cpu() users
> nest, I think it'd be cleaner to have work_on_cpu_nested() which takes
> @subclass. It requires extra work on the caller's part but I think
> that actually is useful as nested work_on_cpu()s are pretty weird
> things.
>

The problem is that users may not know that their work_on_cpu() calls nest,
especially when the calls sit in different subsystems, the call chain is
deep, and the inner work_on_cpu() is only reached under certain conditions,
as in the sketch below.
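
To make it concrete, the report that started this thread has roughly this
shape. pci_call_probe()/local_pci_probe() are the real PCI-side path
(condensed, with error handling and the node == -1 fallback dropped); the
foo_* driver side is made up for illustration:

	static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
				  const struct pci_device_id *id)
	{
		struct drv_dev_and_id ddi = { drv, dev, id };
		int cpu = cpumask_any_and(cpumask_of_node(dev_to_node(&dev->dev)),
					  cpu_online_mask);

		/* outer work_on_cpu(): run the driver's probe on the device's node */
		return work_on_cpu(cpu, local_pci_probe, &ddi);
	}

	/* made-up driver side: the inner call may sit several layers down
	 * and only fire under some conditions, so the driver author never
	 * sees the nesting */
	static int foo_probe(struct pci_dev *dev, const struct pci_device_id *id)
	{
		long ret = 0;

		if (foo_needs_node_local_setup(dev))
			/* inner work_on_cpu(): lockdep sees the (&wfc.work)
			 * class acquired again while the outer one is held */
			ret = work_on_cpu(foo_node_cpu(dev), foo_setup, dev);

		return (int)ret;
	}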

I prefer changing the caller over introducing work_on_cpu_nested(), and I
can accept changing only the caller rather than work_on_cpu() itself, since
only one nested-call case has been found so far.
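
For reference, I understand the suggestion as something like the sketch
below; the way @subclass reaches the on-stack work's lockdep map is my
guess (untested), not existing workqueue API:

	long work_on_cpu_nested(int cpu, long (*fn)(void *), void *arg,
				int subclass)
	{
		static struct lock_class_key key;	/* sketch only */
		struct work_for_cpu wfc = { .fn = fn, .arg = arg };

		INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
	#ifdef CONFIG_LOCKDEP
		/* guessed plumbing: put each nesting level in its own
		 * lockdep subclass so the flush_work() below is not
		 * reported as recursion */
		lockdep_init_map(&wfc.work.lockdep_map, "work_on_cpu",
				 &key, subclass);
	#endif
		schedule_work_on(cpu, &wfc.work);
		flush_work(&wfc.work);
		return wfc.ret;
	}

A caller that knows it can nest would then use
work_on_cpu_nested(cpu, fn, arg, 1) at the inner call site.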

But I keep wondering: since nested work_on_cpu() calls are not a real
problem, why shouldn't workqueue.c offer a friendlier API/behavior?
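
After all, with the patch applied, work_on_cpu() as a whole would read as
below (the final return of wfc.ret is surrounding context not shown in the
hunk). The false positive comes from flush_work(), which acquires the
work's lockdep_map: every on-stack wfc.work shares the one lock class that
INIT_WORK_ONSTACK() creates at this call site, and process_one_work() holds
that class while work_for_cpu_fn() runs, so the inner flush_work() looks
like recursive locking. wait_for_completion() carries no such annotation:

	long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
	{
		struct work_for_cpu wfc = { .fn = fn, .arg = arg };

		INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
		init_completion(&wfc.done);
		schedule_work_on(cpu, &wfc.work);
		/* no lockdep annotation here, unlike flush_work(), so a
		 * nested work_on_cpu() no longer looks like recursion */
		wait_for_completion(&wfc.done);
		return wfc.ret;
	}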

Thanks,
Lai