Re: kvm+nouveau induced lockdep gripe
From: Sebastian Andrzej Siewior
Date: Mon Oct 26 2020 - 13:31:13 EST
On 2020-10-24 13:00:00 [+0800], Hillf Danton wrote:
>
> Hmm...curious how that word went into your mind. And when?
> > [ 30.457363]
> > other info that might help us debug this:
> > [ 30.457369] Possible unsafe locking scenario:
> >
> > [ 30.457375] CPU0
> > [ 30.457378] ----
> > [ 30.457381] lock(&mgr->vm_lock);
> > [ 30.457386] <Interrupt>
> > [ 30.457389] lock(&mgr->vm_lock);
> > [ 30.457394]
> > *** DEADLOCK ***
> >
> > <snips 999 lockdep lines and zillion ATOMIC_SLEEP gripes>
The backtrace contained the "normal" (process-context) acquisition of vm_lock. What should follow is the backtrace of the in-softirq usage of the same lock.
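For illustration, a minimal sketch of the pattern lockdep is pointing at; the function names here are made up, only the two contexts matter:

#include <linux/spinlock.h>

static DEFINE_RWLOCK(vm_lock);		/* stands in for mgr->vm_lock */

/* process context: takes the lock with BHs still enabled */
static void process_context_user(void)
{
	write_lock(&vm_lock);
	/* a softirq may fire on this CPU right here */
	write_unlock(&vm_lock);
}

/* softirq context (e.g. a tasklet): same lock, same CPU */
static void softirq_user(void)
{
	write_lock(&vm_lock);	/* spins forever if it interrupted the holder above */
	write_unlock(&vm_lock);
}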
>
> Dunno if blocking softint is a right cure.
>
> --- a/drivers/gpu/drm/drm_vma_manager.c
> +++ b/drivers/gpu/drm/drm_vma_manager.c
> @@ -229,6 +229,7 @@ EXPORT_SYMBOL(drm_vma_offset_add);
> void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
> struct drm_vma_offset_node *node)
> {
> + local_bh_disable();
There is write_lock_bh(). However, converting only this one caller will just produce the same backtrace somewhere else unless all other vm_lock users already run in a BH-disabled region; see the sketch below the quoted hunk.
> write_lock(&mgr->vm_lock);
>
> if (drm_mm_node_allocated(&node->vm_node)) {
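Roughly what I have in mind, as an untested sketch; every other vm_lock user (drm_vma_offset_add(), the lookup paths, ...) would need the same _bh conversion, and the body of the function is left as it is today:

void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
			   struct drm_vma_offset_node *node)
{
	write_lock_bh(&mgr->vm_lock);	/* disables BHs and takes the lock */

	if (drm_mm_node_allocated(&node->vm_node)) {
		/* ... existing body unchanged ... */
	}

	write_unlock_bh(&mgr->vm_lock);	/* re-enables BHs on unlock */
}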
Sebastian