Re: [PATCH 0/3] dma-debug: add additional checks
From: Ingo Molnar
Date: Wed Mar 18 2009 - 08:45:36 EST
* Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
> On Wed, 2009-03-18 at 13:19 +0100, Joerg Roedel wrote:
> > On Wed, Mar 18, 2009 at 12:38:47PM +0100, Peter Zijlstra wrote:
> > > On Wed, 2009-03-18 at 12:23 +0100, Ingo Molnar wrote:
> > > > another -tip testbox started triggering:
> > > >
> > > > BUG: MAX_LOCKDEP_ENTRIES too low!
> > > >
> > > > it triggers due to CONFIG_DMA_API_DEBUG=y. Config attached.
> > >
> > >
> > > I still have this lying around... could be we're just at the limit due to
> > > lock bloat in the kernel, or it could be that dma_api_debug is doing
> > > something altogether iffy.
> >
> > I had a look and the maximum locking depth in dma-debug code was two.
> > Attached patch reduces this to one.
> >
> > From d28fc4a308bf66ed98c68e1db18e4e1434206541 Mon Sep 17 00:00:00 2001
> > From: Joerg Roedel <joerg.roedel@xxxxxxx>
> > Date: Wed, 18 Mar 2009 13:15:20 +0100
> > Subject: [PATCH] dma-debug: serialize locking in unmap path
> >
> > Impact: reduce maximum lock depth to one
> >
> > This patch reduces the maximum spin lock depth from two to one in the
> > dma-debug code.
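
The change described above boils down to the locking pattern sketched here.
The helper names (get_hash_bucket(), put_hash_bucket(), hash_bucket_find(),
hash_bucket_del(), dma_entry_free()) are modeled on lib/dma-debug.c, but this
is a simplified illustration of the before/after pattern, not the actual
patch hunk:

/*
 * Simplified sketch -- not the literal lib/dma-debug.c code.
 *
 * Before: the per-bucket lock is still held when dma_entry_free()
 * takes free_entries_lock, so two spinlocks nest on the unmap path.
 */
static void check_unmap_nested(struct dma_debug_entry *ref)
{
	struct hash_bucket *bucket;
	struct dma_debug_entry *entry;
	unsigned long flags;

	bucket = get_hash_bucket(ref, &flags);	/* takes bucket->lock */
	entry  = hash_bucket_find(bucket, ref);
	if (entry) {
		/* ... sanity checks against *ref ... */
		hash_bucket_del(entry);
		dma_entry_free(entry);		/* takes free_entries_lock */
	}
	put_hash_bucket(bucket, &flags);	/* drops bucket->lock */
}

/*
 * After: drop bucket->lock before freeing the entry, so at most one
 * spinlock is held at any time on this path.
 */
static void check_unmap_flat(struct dma_debug_entry *ref)
{
	struct hash_bucket *bucket;
	struct dma_debug_entry *entry;
	unsigned long flags;

	bucket = get_hash_bucket(ref, &flags);	/* takes bucket->lock */
	entry  = hash_bucket_find(bucket, ref);
	if (entry) {
		/* ... sanity checks against *ref ... */
		hash_bucket_del(entry);
	}
	put_hash_bucket(bucket, &flags);	/* drops bucket->lock */

	if (entry)
		dma_entry_free(entry);		/* free_entries_lock, unnested */
}
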
>
> While appreciated, this failure is not about lock depth, but about
> lock entries, that is, items in the dependency chains.
>
> Of course, the two are not unrelated: deeper lock hierarchies
> lead to longer chains -> more entries.
>
> Assuming dma api debug doesn't do anything spectacularly odd, I'd
> say we've just lock-bloated the kernel and might need to increase
> this static array a bit.
Appears to be the case:
BUG: MAX_LOCKDEP_ENTRIES too low!
turning off the locking correctness validator.
Pid: 7508, comm: sshd Not tainted 2.6.29-rc8-tip-02759-g4bb5a10-dirty #21037
Call Trace:
[<ffffffff802679aa>] add_lock_to_list+0x53/0xba
[<ffffffff8065b0e9>] ? add_dma_entry+0x2f/0x5d
[<ffffffff80269398>] check_prev_add+0x14b/0x1c7
[<ffffffff8026985d>] validate_chain+0x449/0x4f7
[<ffffffff80269b96>] __lock_acquire+0x28b/0x302
[<ffffffff80269d07>] lock_acquire+0xfa/0x11e
[<ffffffff8065b0e9>] ? add_dma_entry+0x2f/0x5d
[<ffffffff80c9cf77>] _spin_lock_irqsave+0x4c/0x84
[<ffffffff8065b0e9>] ? add_dma_entry+0x2f/0x5d
[<ffffffff8065b0e9>] add_dma_entry+0x2f/0x5d
[<ffffffff8065bbd6>] debug_dma_map_page+0x110/0x11f
[<ffffffff807f2775>] pci_map_single+0xb5/0xc7
[<ffffffff807f36d7>] nv_start_xmit_optimized+0x174/0x49c
[<ffffffff80269fbd>] ? __lock_acquired+0x182/0x1a7
[<ffffffff80af7d20>] dev_hard_start_xmit+0xd4/0x147
[<ffffffff80b11c08>] __qdisc_run+0xf4/0x200
[<ffffffff80af80b0>] dev_queue_xmit+0x21f/0x32a
[<ffffffff80b3051f>] ip_finish_output2+0x205/0x24e
[<ffffffff80b305c9>] ip_finish_output+0x61/0x63
[<ffffffff80b3066d>] ip_output+0xa2/0xab
[<ffffffff80b2dfaf>] ip_local_out+0x65/0x67
[<ffffffff80b30122>] ip_queue_xmit+0x2f0/0x37b
[<ffffffff80267542>] ? register_lock_class+0x20/0x304
[<ffffffff80b40320>] tcp_transmit_skb+0x655/0x694
[<ffffffff80b42924>] tcp_write_xmit+0x2e2/0x3b6
[<ffffffff80b42a48>] __tcp_push_pending_frames+0x2f/0x61
[<ffffffff80b35320>] tcp_push+0x86/0x88
[<ffffffff80b377a5>] tcp_sendmsg+0x7a4/0x8aa
[<ffffffff80ae9206>] __sock_sendmsg+0x5e/0x67
[<ffffffff80ae92fc>] sock_aio_write+0xed/0xfd
[<ffffffff802d89e6>] do_sync_write+0xec/0x132
[<ffffffff8025d09b>] ? autoremove_wake_function+0x0/0x3d
[<ffffffff8026a688>] ? __lock_release+0xba/0xd3
[<ffffffff80241369>] ? get_parent_ip+0x16/0x46
[<ffffffff80242fef>] ? sub_preempt_count+0x67/0x7a
[<ffffffff80604287>] ? security_file_permission+0x16/0x18
[<ffffffff802d9275>] vfs_write+0xbf/0xe6
[<ffffffff802d936a>] sys_write+0x4c/0x74
[<ffffffff8020bd6b>] system_call_fastpath+0x16/0x1b
So we need to bump up the limits some more.
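
For reference, the limit in question is a compile-time constant sizing a
static array in the lockdep core. The snippet below is paraphrased from the
2.6.29-era kernel/lockdep_internals.h and kernel/lockdep.c (the exact value
differs between kernel versions); it shows where the message in the trace
above comes from and what bumping the limit amounts to:

#define MAX_LOCKDEP_ENTRIES	8192UL	/* the limit to bump; version-dependent */

static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
static unsigned long nr_list_entries;

/*
 * Each lock dependency recorded via add_lock_to_list() consumes one
 * slot from this static pool; once it is exhausted, lockdep turns
 * itself off:
 */
static struct lock_list *alloc_list_entry(void)
{
	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
		if (!debug_locks_off_graph_unlock())
			return NULL;

		printk("BUG: MAX_LOCKDEP_ENTRIES too low!\n");
		printk("turning off the locking correctness validator.\n");
		dump_stack();
		return NULL;
	}

	return list_entries + nr_list_entries++;
}
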
Ingo