Re: [RFC][PATCH 0/7] lockdep: Support recursive-read locks

From: Yong Zhang
Date: Fri Apr 22 2011 - 03:27:34 EST


On Fri, Apr 22, 2011 at 3:19 PM, Yong Zhang <yong.zhang0@xxxxxxxxx> wrote:
> 2011/4/18 Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>:
>> (Continued from https://lkml.org/lkml/2011/3/31/240 "Re: [RFC] seqlock,lockdep: Add lock primitives to read_seqbegin().")
>> Test results for above program:
>>
>> "cat /proc/locktest1 /proc/locktest2" => Detect fail
>> "cat /proc/locktest2 /proc/locktest1" => Detect fail
>> "cat /proc/locktest3 /proc/locktest4" => Detect fail
>> "cat /proc/locktest4 /proc/locktest3" => Detect fail
>>
>> If I change from rwlock_acquire_read() to spin_acquire() in read_seqbegin2()
>> and from rwlock_release() to spin_release() in read_seqretry2():
>>
>> "cat /proc/locktest1 /proc/locktest2" => Detect fail
>> "cat /proc/locktest2 /proc/locktest1" => Detect OK (shown below)
>> "cat /proc/locktest3 /proc/locktest4" => Detect fail
>> "cat /proc/locktest4 /proc/locktest3" => Detect OK (shown below)
>>
>> Judging from my test cases, read_seqbegin2() fails to detect the problem
>> unless we use spin_acquire()/spin_release() rather than
>> rwlock_acquire_read()/rwlock_release().
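For reference, a sketch of what such a read_seqbegin2()/read_seqretry2() pair could look like. This is an assumption reconstructed from the earlier thread, not the patch as posted; the dep_map plumbing follows the 2.6.39-era seqlock_t layout (a spinlock_t embedded in the seqlock):

```c
/*
 * Hypothetical sketch: annotate the seqlock read side with the writer's
 * spinlock class via spin_acquire()/spin_release(), so lockdep records
 * reader-side ordering against the write path.
 */
static inline unsigned read_seqbegin2(seqlock_t *sl)
{
	unsigned ret = read_seqbegin(sl);

	/* trylock-style (trylock = 1): the read side never blocks here */
	spin_acquire(&sl->lock.dep_map, 0, 1, _RET_IP_);
	return ret;
}

static inline int read_seqretry2(seqlock_t *sl, unsigned start)
{
	spin_release(&sl->lock.dep_map, 1, _RET_IP_);
	return read_seqretry(sl, start);
}
```

Because spin_acquire() maps to an exclusive lock_acquire(), these acquisitions take part in stored dependency chains like any spinlock, whereas rwlock_acquire_read() marks them as recursive reads (read == 2), which, as discussed below, are never stored.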
>>
>> Also, something is still wrong because lockdep fails to detect the problem
>> for "cat /proc/locktest1 /proc/locktest2" and
>> "cat /proc/locktest3 /proc/locktest4" cases.
>
> It fails because we never add the recursive read to prev->after even if
> it passes the validation.
>
> check_prev_add()::1671
>         /*
>          * For recursive read-locks we do all the dependency checks,
>          * but we dont store read-triggered dependencies (only
>          * write-triggered dependencies). This ensures that only the
>          * write-side dependencies matter, and that if for example a
>          * write-lock never takes any other locks, then the reads are
>          * equivalent to a NOP.
>          */
>         if (next->read == 2 || prev->read == 2)

And the semantics don't change in Peter's patch.

>                 return 1;
>
> So we have no chain after opening locktest1 or locktest3.
>
> But adding recursive reads to prev->after would introduce spurious
> lockdep warnings.
>
> Thanks,
> Yong
>
>>
>> [   83.551455]
>> [   83.551457] =======================================================
>> [   83.555293] [ INFO: possible circular locking dependency detected ]
>> [   83.555293] 2.6.39-rc3-00228-gd733ed6-dirty #259
>> [   83.555293] -------------------------------------------------------
>> [   83.555293] cat/2768 is trying to acquire lock:
>> [   83.555293]  (brlock1_lock_dep_map){++++..}, at: [<e08150b0>] brlock1_local_lock+0x0/0x90 [locktest]
>> [   83.555293]
>> [   83.555293] but task is already holding lock:
>> [   83.555293]  (&(&(&seqlock1)->lock)->rlock){+.+...}, at: [<e08154dd>] locktest_open1+0xd/0x40 [locktest]
>> [   83.555293]
>> [   83.555293] which lock already depends on the new lock.
>> [   83.555293]
>> [   83.555293]
>> [   83.555293] the existing dependency chain (in reverse order) is:
>> [   83.555293]
>> [   83.555293] -> #1 (&(&(&seqlock1)->lock)->rlock){+.+...}:
>> [   83.635281]        [<c106c499>] check_prevs_add+0xb9/0x110
>> [   83.635281]        [<c106c840>] validate_chain+0x320/0x5a0
>> [   83.635281]        [<c106e927>] __lock_acquire+0x2a7/0x8f0
>> [   83.635281]        [<c107001a>] lock_acquire+0x7a/0xa0
>> [   83.635281]        [<e0815555>] locktest_open2+0x45/0x70 [locktest]
>> [   83.635281]        [<c1118355>] proc_reg_open+0x65/0xe0
>> [   83.635281]        [<c10ce78f>] __dentry_open+0x16f/0x2e0
>> [   83.635281]        [<c10ce9fe>] nameidata_to_filp+0x5e/0x70
>> [   83.635281]        [<c10dc1d8>] do_last+0xf8/0x6c0
>> [   83.635281]        [<c10dc846>] path_openat+0xa6/0x340
>> [   83.635281]        [<c10dcb10>] do_filp_open+0x30/0x80
>> [   83.635281]        [<c10cefa1>] do_sys_open+0x101/0x1a0
>> [   83.635281]        [<c10cf069>] sys_open+0x29/0x40
>> [   83.635281]        [<c13b43c1>] syscall_call+0x7/0xb
>> [   83.635281]
>> [   83.635281] -> #0 (brlock1_lock_dep_map){++++..}:
>> [   83.635281]        [<c106c34c>] check_prev_add+0x78c/0x820
>> [   83.635281]        [<c106c499>] check_prevs_add+0xb9/0x110
>> [   83.635281]        [<c106c840>] validate_chain+0x320/0x5a0
>> [   83.635281]        [<c106e927>] __lock_acquire+0x2a7/0x8f0
>> [   83.635281]        [<c107001a>] lock_acquire+0x7a/0xa0
>> [   83.635281]        [<e08150e3>] brlock1_local_lock+0x33/0x90 [locktest]
>> [   83.635281]        [<e08154e8>] locktest_open1+0x18/0x40 [locktest]
>> [   83.635281]        [<c1118355>] proc_reg_open+0x65/0xe0
>> [   83.635281]        [<c10ce78f>] __dentry_open+0x16f/0x2e0
>> [   83.635281]        [<c10ce9fe>] nameidata_to_filp+0x5e/0x70
>> [   83.635281]        [<c10dc1d8>] do_last+0xf8/0x6c0
>> [   83.635281]        [<c10dc846>] path_openat+0xa6/0x340
>> [   83.635281]        [<c10dcb10>] do_filp_open+0x30/0x80
>> [   83.635281]        [<c10cefa1>] do_sys_open+0x101/0x1a0
>> [   83.635281]        [<c10cf069>] sys_open+0x29/0x40
>> [   83.635281]        [<c13b43c1>] syscall_call+0x7/0xb
>> [   83.635281]
>> [   83.635281] other info that might help us debug this:
>> [   83.635281]
>> [   83.635281] 1 lock held by cat/2768:
>> [   83.635281]  #0:  (&(&(&seqlock1)->lock)->rlock){+.+...}, at: [<e08154dd>] locktest_open1+0xd/0x40 [locktest]
>> [   83.635281]
>> [   83.635281] stack backtrace:
>> [   83.635281] Pid: 2768, comm: cat Not tainted 2.6.39-rc3-00228-gd733ed6-dirty #259
>> [   83.635281] Call Trace:
>> [   83.635281]  [<c106ade6>] print_circular_bug+0xc6/0xd0
>> [   83.635281]  [<c106c34c>] check_prev_add+0x78c/0x820
>> [   83.635281]  [<c1005d3b>] ? print_context_stack+0x3b/0xa0
>> [   83.635281]  [<c1004fa1>] ? dump_trace+0x81/0xe0
>> [   83.635281]  [<c106c499>] check_prevs_add+0xb9/0x110
>> [   83.635281]  [<c106c840>] validate_chain+0x320/0x5a0
>> [   83.635281]  [<c106df7c>] ? mark_lock+0x21c/0x3c0
>> [   83.635281]  [<c106e927>] __lock_acquire+0x2a7/0x8f0
>> [   83.635281]  [<c107001a>] lock_acquire+0x7a/0xa0
>> [   83.635281]  [<e08150b0>] ? brlock1_lock_init+0xb0/0xb0 [locktest]
>> [   83.635281]  [<e08154d0>] ? brlock1_global_unlock+0xa0/0xa0 [locktest]
>> [   83.635281]  [<e08150e3>] brlock1_local_lock+0x33/0x90 [locktest]
>> [   83.635281]  [<e08150b0>] ? brlock1_lock_init+0xb0/0xb0 [locktest]
>> [   83.635281]  [<e08154e8>] locktest_open1+0x18/0x40 [locktest]
>> [   83.635281]  [<c1118355>] proc_reg_open+0x65/0xe0
>> [   83.635281]  [<c10ce78f>] __dentry_open+0x16f/0x2e0
>> [   83.635281]  [<c10ce9fe>] nameidata_to_filp+0x5e/0x70
>> [   83.635281]  [<c11182f0>] ? proc_reg_mmap+0x80/0x80
>> [   83.635281]  [<c10dc1d8>] do_last+0xf8/0x6c0
>> [   83.635281]  [<c10db00c>] ? path_init+0x2cc/0x3c0
>> [   83.635281]  [<c10dc846>] path_openat+0xa6/0x340
>> [   83.635281]  [<c106d80b>] ? trace_hardirqs_off+0xb/0x10
>> [   83.635281]  [<c10dcb10>] do_filp_open+0x30/0x80
>> [   83.635281]  [<c13b3a5d>] ? _raw_spin_unlock+0x1d/0x20
>> [   83.635281]  [<c10e9f11>] ? alloc_fd+0xe1/0x1a0
>> [   83.635281]  [<c10cefa1>] do_sys_open+0x101/0x1a0
>> [   83.635281]  [<c10cfc8b>] ? vfs_write+0x10b/0x130
>> [   83.635281]  [<c10cf069>] sys_open+0x29/0x40
>> [   83.635281]  [<c13b43c1>] syscall_call+0x7/0xb
>>
>>
>>
>> [   82.758647]
>> [   82.758649] =======================================================
>> [   82.762520] [ INFO: possible circular locking dependency detected ]
>> [   82.762520] 2.6.39-rc3-00228-gd733ed6-dirty #259
>> [   82.762520] -------------------------------------------------------
>> [   82.762520] cat/2768 is trying to acquire lock:
>> [   82.762520]  (rwlock1){++++..}, at: [<e081559d>] locktest_open3+0x1d/0x40 [locktest]
>> [   82.762520]
>> [   82.762520] but task is already holding lock:
>> [   82.762520]  (&(&(&seqlock1)->lock)->rlock){+.+...}, at: [<e081558d>] locktest_open3+0xd/0x40 [locktest]
>> [   82.762520]
>> [   82.762520] which lock already depends on the new lock.
>> [   82.762520]
>> [   82.762520]
>> [   82.762520] the existing dependency chain (in reverse order) is:
>> [   82.762520]
>> [   82.762520] -> #1 (&(&(&seqlock1)->lock)->rlock){+.+...}:
>> [   82.841627]        [<c106c499>] check_prevs_add+0xb9/0x110
>> [   82.841627]        [<c106c840>] validate_chain+0x320/0x5a0
>> [   82.841627]        [<c106e927>] __lock_acquire+0x2a7/0x8f0
>> [   82.841627]        [<c107001a>] lock_acquire+0x7a/0xa0
>> [   82.841627]        [<e081560a>] locktest_open4+0x4a/0x90 [locktest]
>> [   82.841627]        [<c1118355>] proc_reg_open+0x65/0xe0
>> [   82.841627]        [<c10ce78f>] __dentry_open+0x16f/0x2e0
>> [   82.841627]        [<c10ce9fe>] nameidata_to_filp+0x5e/0x70
>> [   82.841627]        [<c10dc1d8>] do_last+0xf8/0x6c0
>> [   82.841627]        [<c10dc846>] path_openat+0xa6/0x340
>> [   82.841627]        [<c10dcb10>] do_filp_open+0x30/0x80
>> [   82.841627]        [<c10cefa1>] do_sys_open+0x101/0x1a0
>> [   82.841627]        [<c10cf069>] sys_open+0x29/0x40
>> [   82.841627]        [<c13b43c1>] syscall_call+0x7/0xb
>> [   82.841627]
>> [   82.841627] -> #0 (rwlock1){++++..}:
>> [   82.841627]        [<c106c34c>] check_prev_add+0x78c/0x820
>> [   82.841627]        [<c106c499>] check_prevs_add+0xb9/0x110
>> [   82.841627]        [<c106c840>] validate_chain+0x320/0x5a0
>> [   82.841627]        [<c106e927>] __lock_acquire+0x2a7/0x8f0
>> [   82.841627]        [<c107001a>] lock_acquire+0x7a/0xa0
>> [   82.841627]        [<c13b3ba9>] _raw_read_lock+0x39/0x70
>> [   82.841627]        [<e081559d>] locktest_open3+0x1d/0x40 [locktest]
>> [   82.841627]        [<c1118355>] proc_reg_open+0x65/0xe0
>> [   82.841627]        [<c10ce78f>] __dentry_open+0x16f/0x2e0
>> [   82.841627]        [<c10ce9fe>] nameidata_to_filp+0x5e/0x70
>> [   82.841627]        [<c10dc1d8>] do_last+0xf8/0x6c0
>> [   82.841627]        [<c10dc846>] path_openat+0xa6/0x340
>> [   82.841627]        [<c10dcb10>] do_filp_open+0x30/0x80
>> [   82.841627]        [<c10cefa1>] do_sys_open+0x101/0x1a0
>> [   82.841627]        [<c10cf069>] sys_open+0x29/0x40
>> [   82.841627]        [<c13b43c1>] syscall_call+0x7/0xb
>> [   82.841627]
>> [   82.841627] other info that might help us debug this:
>> [   82.841627]
>> [   82.841627] 1 lock held by cat/2768:
>> [   82.841627]  #0:  (&(&(&seqlock1)->lock)->rlock){+.+...}, at: [<e081558d>] locktest_open3+0xd/0x40 [locktest]
>> [   82.841627]
>> [   82.841627] stack backtrace:
>> [   82.841627] Pid: 2768, comm: cat Not tainted 2.6.39-rc3-00228-gd733ed6-dirty #259
>> [   82.841627] Call Trace:
>> [   82.841627]  [<c106ade6>] print_circular_bug+0xc6/0xd0
>> [   82.841627]  [<c106c34c>] check_prev_add+0x78c/0x820
>> [   82.841627]  [<c1005d3b>] ? print_context_stack+0x3b/0xa0
>> [   82.841627]  [<c1004fa1>] ? dump_trace+0x81/0xe0
>> [   82.841627]  [<c106c499>] check_prevs_add+0xb9/0x110
>> [   82.841627]  [<c106c840>] validate_chain+0x320/0x5a0
>> [   82.841627]  [<c106df7c>] ? mark_lock+0x21c/0x3c0
>> [   82.841627]  [<c106e927>] __lock_acquire+0x2a7/0x8f0
>> [   82.841627]  [<c107001a>] lock_acquire+0x7a/0xa0
>> [   82.841627]  [<e081559d>] ? locktest_open3+0x1d/0x40 [locktest]
>> [   82.841627]  [<e0815580>] ? locktest_open2+0x70/0x70 [locktest]
>> [   82.841627]  [<c13b3ba9>] _raw_read_lock+0x39/0x70
>> [   82.841627]  [<e081559d>] ? locktest_open3+0x1d/0x40 [locktest]
>> [   82.841627]  [<e081559d>] locktest_open3+0x1d/0x40 [locktest]
>> [   82.841627]  [<c1118355>] proc_reg_open+0x65/0xe0
>> [   82.841627]  [<c10ce78f>] __dentry_open+0x16f/0x2e0
>> [   82.841627]  [<c10ce9fe>] nameidata_to_filp+0x5e/0x70
>> [   82.841627]  [<c11182f0>] ? proc_reg_mmap+0x80/0x80
>> [   82.841627]  [<c10dc1d8>] do_last+0xf8/0x6c0
>> [   82.841627]  [<c10db00c>] ? path_init+0x2cc/0x3c0
>> [   82.841627]  [<c10dc846>] path_openat+0xa6/0x340
>> [   82.841627]  [<c106d80b>] ? trace_hardirqs_off+0xb/0x10
>> [   82.841627]  [<c10dcb10>] do_filp_open+0x30/0x80
>> [   82.841627]  [<c13b3a5d>] ? _raw_spin_unlock+0x1d/0x20
>> [   82.841627]  [<c10e9f11>] ? alloc_fd+0xe1/0x1a0
>> [   82.841627]  [<c10cefa1>] do_sys_open+0x101/0x1a0
>> [   82.841627]  [<c10cfc8b>] ? vfs_write+0x10b/0x130
>> [   82.841627]  [<c10cf069>] sys_open+0x29/0x40
>> [   82.841627]  [<c13b43c1>] syscall_call+0x7/0xb



--
Only stand for myself
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/