Re: false positive lockdep splat with loop device

From: Amir Goldstein
Date: Thu Sep 21 2017 - 02:44:37 EST


On Thu, Sep 21, 2017 at 1:22 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> [cc lkml, PeterZ and Byungchul]
...
> The thing is, this IO completion has nothing to do with the lower
> filesystem - it's the IO completion for the filesystem on the loop
> device (the upper filesystem) and is not in any way related to the
> IO completion from the dax device the lower filesystem is waiting
> on.
>
> IOWs, this is a false positive.
>
> Peter, this is the sort of false positive I mentioned was likely to
> occur without some serious work to annotate the IO stack to prevent
> them. We can nest multiple layers of IO completions and locking in
> the IO stack via things like loop and RAID devices. They can be
> nested to arbitrary depths, too (e.g. loop on fs on loop on fs on
> dm-raid on n * (loop on fs) on bdev) so this new completion lockdep
> checking is going to be a source of false positives until there is
> an effective (and simple!) way of providing context based completion
> annotations to avoid them...
>
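
FWIW, the conflation is easy to see in the completion annotations
themselves: with crossrelease, init_completion() sets up one lockdep
class per call site, so every submit_bio_wait() completion shares a
single class no matter which layer of the loop stack it belongs to.
Roughly, from block/bio.c (trimmed, comments mine):

	int submit_bio_wait(struct bio *bio)
	{
		struct submit_bio_ret ret;

		/* one static lock_class_key per init_completion() call
		 * site, shared by the upper and the lower fs alike */
		init_completion(&ret.event);
		bio->bi_private = &ret;
		bio->bi_end_io = submit_bio_wait_endio;
		bio->bi_opf |= REQ_SYNC;
		submit_bio(bio);
		wait_for_completion_io(&ret.event);

		return ret.error;
	}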

IMO, the way to handle this is to add 'nesting_depth' information
to the blockdev (or to the bdi?), 'nesting' in the sense of
blockdev->fs->blockdev->fs. AFAIK, the only blockdev drivers that
would need to bump nesting_depth are loop and nbd?
I am not sure whether the kernel should also limit loop blockdev
nesting depth. One problem that the lack of loop blockdev nesting
information already causes is incorrect emergency remount dependencies:
https://patchwork.kernel.org/patch/6266791/
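
Something like the below is what I have in mind. Just a sketch:
bd_nesting_depth is a made-up field on struct block_device, and I am
hand-waving over the elided parts of loop_set_fd(), but the surrounding
structs and the call chain are real. Capping the nesting depth, if we
want that, would be a one-liner here too.

	/* drivers/block/loop.c, loop_set_fd(), with a hypothetical
	 * bd_nesting_depth field added to struct block_device
	 * (0 for physical devices): */
	static int loop_set_fd(struct loop_device *lo, fmode_t mode,
			       struct block_device *bdev, unsigned int arg)
	{
		struct file *file;
		struct inode *inode;
		...
		file = fget(arg);
		inode = file->f_mapping->host;
		...
		/* the loop blockdev sits one level above the blockdev
		 * (if any) backing the fs holding the backing file */
		bdev->bd_nesting_depth = inode->i_sb->s_bdev ?
				inode->i_sb->s_bdev->bd_nesting_depth + 1 : 1;
		...
	}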

Once the blockdev carries this nesting_depth information, it should be
trivial to annotate "nested fs" inode locks the same way we handled
"stackable fs" inode locks in overlayfs:
https://patchwork.kernel.org/patch/9460919/
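
i.e. something along the lines of the overlayfs annotation, one lockdep
class per nesting level (sketch only: FS_MAX_NESTING and the key arrays
are made up, bd_nesting_depth is the hypothetical field from above;
lockdep_set_class() on i_rwsem is exactly what overlayfs does today):

	#ifdef CONFIG_LOCKDEP
	/* arbitrary cap for the sketch; real code would enforce it
	 * wherever bd_nesting_depth gets bumped */
	#define FS_MAX_NESTING 8

	static struct lock_class_key nested_i_mutex_key[FS_MAX_NESTING];
	static struct lock_class_key nested_i_mutex_dir_key[FS_MAX_NESTING];

	static void nested_fs_annotate_inode_mutex_key(struct inode *inode)
	{
		struct block_device *bdev = inode->i_sb->s_bdev;
		int depth = bdev ? bdev->bd_nesting_depth : 0;

		if (WARN_ON_ONCE(depth >= FS_MAX_NESTING))
			depth = 0;

		/* one lockdep class per nesting level, so an inode lock
		 * taken on the upper fs can never look recursive against
		 * the inode locks of the fs below the loop device */
		if (S_ISDIR(inode->i_mode))
			lockdep_set_class(&inode->i_rwsem,
					  &nested_i_mutex_dir_key[depth]);
		else
			lockdep_set_class(&inode->i_rwsem,
					  &nested_i_mutex_key[depth]);
	}
	#endif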

Cheers,
Amir.