Re: nfsd oops on Linus' current tree.

From: Adamson, Dros
Date: Fri Jan 04 2013 - 12:10:59 EST

On Jan 3, 2013, at 6:26 PM, "Myklebust, Trond" <Trond.Myklebust@xxxxxxxxxx> wrote:

> On Thu, 2013-01-03 at 18:11 -0500, Trond Myklebust wrote:
>> On Thu, 2013-01-03 at 17:26 -0500, Tejun Heo wrote:
>>> Ooh, BTW, there was a bug where workqueue code created a false
>>> dependency between two work items. Workqueue currently considers two
>>> work items to be the same if they're on the same address and won't
>>> execute them concurrently - i.e. it makes a work item that is queued
>>> again while being executed wait for the previous execution to
>>> complete.
>>> If a work function frees the work item, and then waits for an event
>>> which should be performed by another work item and *that* work item
>>> recycles the freed work item, it can create a false dependency loop.
>>> There really is no reliable way to detect this short of verifying
>>> every memory free. A patch is queued to make such occurrences less
>>> likely (work functions should also match for two work items considered
>>> the same), but if you're seeing this, the best thing to do is freeing
>>> the work item at the end of the work function.
>> That's interesting... I wonder if we may have been hitting that issue.
>> From what I can see, we do actually free the write RPC task (and hence
>> the work_struct) before we call the asynchronous unlink completion...
>> Dros, can you see if reverting commit
>> 324d003b0cd82151adbaecefef57b73f7959a469 + commit
>> 168e4b39d1afb79a7e3ea6c3bb246b4c82c6bdb9 and then applying the attached
>> patch also fixes the hang on a pristine 3.7.x kernel?
> Actually, we probably also need to look at rpc_free_task, so the
> following patch, instead...

Yes, this patch fixes the hang!

Thank you for the explanation, Tejun - that makes a lot of sense and explains the workqueue behavior we were seeing.
