Re: Exiting with locks still held (was Re: [PATCH] kmemleak: Fix scheduling-while-atomic bug)

From: Ingo Molnar
Date: Fri Jul 03 2009 - 03:05:36 EST



* Catalin Marinas <catalin.marinas@xxxxxxx> wrote:

> Hi Ingo,
>
> On Wed, 2009-07-01 at 13:04 +0200, Ingo Molnar wrote:
> > * Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> > > Since we are on the subject of locking, I just noticed this on my
> > > x86 laptop when running cat /sys/kernel/debug/kmemleak (I haven't
> > > seen it on an ARM board):
> > >
> > > ================================================
> > > [ BUG: lock held when returning to user space! ]
> > > ------------------------------------------------
> > > cat/3687 is leaving the kernel with locks still held!
> > > 1 lock held by cat/3687:
> > > #0: (scan_mutex){+.+.+.}, at: [<c01e0c5c>] kmemleak_open+0x3c/0x70
> > >
> > > kmemleak_open() acquires scan_mutex and unconditionally releases
> > > it in kmemleak_release(). The mutex does get released, since a
> > > subsequent acquisition works fine.
> > >
> > > Is this caused just because cat may have exited without closing
> > > the file descriptor (which should be done automatically anyway)?
> >
> > This lockdep warning has a 0% false positives track record so
> > far: all previous cases it triggered showed some real (and
> > fatal) bug in the underlying code.
>
> In this particular case, there is no fatal problem as the mutex is
> released shortly after this message.
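
For reference, the pattern lockdep is flagging looks roughly like the
sketch below. This is not the actual kmemleak source; the seq_file
usage is an assumption and error handling is abbreviated. The point is
that the lock taken in .open is only dropped in .release, i.e. it stays
held while the task runs in user space between the two syscalls:

#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/seq_file.h>

static DEFINE_MUTEX(scan_mutex);

/* the real seq_operations would be defined elsewhere */
extern const struct seq_operations kmemleak_seq_ops;

static int kmemleak_open(struct inode *inode, struct file *file)
{
	int ret;

	/* taken here, on open(2)... */
	ret = mutex_lock_interruptible(&scan_mutex);
	if (ret < 0)
		return ret;

	ret = seq_open(file, &kmemleak_seq_ops);
	if (ret < 0)
		mutex_unlock(&scan_mutex);
	return ret;
}

static int kmemleak_release(struct inode *inode, struct file *file)
{
	/*
	 * ...and only dropped here, on close(2), so the task returns
	 * to user space with scan_mutex still held in between.
	 */
	mutex_unlock(&scan_mutex);
	return seq_release(inode, file);
}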

Maybe - but holding locks in user-space is almost always bad.

What happens if user-space opens a second file descriptor before
closing the first one? Either we lock up (which is bad and fatal), or
there is already some _other_ exclusion mechanism that prevents this
case, which calls into question the need to hold this particular
mutex in user-space to begin with.
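
One way to avoid the problem entirely is to take scan_mutex only for
the duration of a single read(2), in the seq_file start/stop
callbacks, instead of across open()/close(). This is a sketch only,
under the assumption that the file is backed by seq_file; object_list
and the iteration helpers are placeholders, not a claim about how
kmemleak should actually be fixed:

static LIST_HEAD(object_list);	/* placeholder for whatever is iterated */

static void *kmemleak_seq_start(struct seq_file *seq, loff_t *pos)
{
	int err;

	/* held only while this read(2) is being served */
	err = mutex_lock_interruptible(&scan_mutex);
	if (err < 0)
		return ERR_PTR(err);

	return seq_list_start(&object_list, *pos);
}

static void *kmemleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	return seq_list_next(v, &object_list, pos);
}

static void kmemleak_seq_stop(struct seq_file *seq, void *v)
{
	/* ->start() may have failed before taking the mutex */
	if (!IS_ERR(v))
		mutex_unlock(&scan_mutex);
}

That way nothing ever returns to user space with scan_mutex held, and
a second open() no longer blocks behind the first descriptor.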

I've yet to see a valid 'need to hold this kernel lock in
user-space' case, and this does not seem to be such a case either.

Ingo