Re: [PATCH v2 2/2] xfs: Fix false positive lockdep warning with sb_internal & fs_reclaim

From: Darrick J. Wong
Date: Wed Jun 17 2020 - 21:37:21 EST


On Thu, Jun 18, 2020 at 10:45:05AM +1000, Dave Chinner wrote:
> On Wed, Jun 17, 2020 at 01:53:10PM -0400, Waiman Long wrote:
> > fs/xfs/xfs_log.c | 9 +++++++++
> > fs/xfs/xfs_trans.c | 31 +++++++++++++++++++++++++++----
> > 2 files changed, 36 insertions(+), 4 deletions(-)
> >
> > diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
> > index 00fda2e8e738..33244680d0d4 100644
> > --- a/fs/xfs/xfs_log.c
> > +++ b/fs/xfs/xfs_log.c
> > @@ -830,8 +830,17 @@ xlog_unmount_write(
> > xfs_lsn_t lsn;
> > uint flags = XLOG_UNMOUNT_TRANS;
> > int error;
> > + unsigned long pflags;
> >
> > + /*
> > + * xfs_log_reserve() allocates memory. This can lead to fs reclaim
> > + * which may conflict with the unmount process. To avoid that,
> > + * disable fs reclaim for this allocation.
> > + */
> > + current_set_flags_nested(&pflags, PF_MEMALLOC_NOFS);
> > error = xfs_log_reserve(mp, 600, 1, &tic, XFS_LOG, 0);
> > + current_restore_flags_nested(&pflags, PF_MEMALLOC_NOFS);
> > +
> > if (error)
> > goto out_err;
>
> The more I look at this, the more I think Darrick is right and I
> somewhat misinterpreted what he meant by "the top of the freeze
> path".
>
> i.e. setting PF_MEMALLOC_NOFS here is out of place - only one caller
> of xlog_unmount_write() requires PF_MEMALLOC_NOFS context, so the
> flag should be set in that caller, and in this case that caller is
> xfs_fs_freeze(). This is the top of the final freeze state
> processing (what I think Darrick meant), not the top of the freeze
> syscall call chain (what I thought he meant).

Aha! Yes, that's exactly what I meant. Sorry we all kinda muddled
around for a few days. :/
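
Something like this completely untested sketch is what I had in mind,
reusing the nested-flags helpers from the patch above (the
xfs_fs_freeze() body below is approximate, just to show where the
flag would go):

STATIC int
xfs_fs_freeze(
	struct super_block	*sb)
{
	struct xfs_mount	*mp = XFS_M(sb);
	unsigned long		pflags;
	int			error;

	/*
	 * The filesystem is now frozen far enough that memory reclaim
	 * cannot safely operate on the filesystem.  Set task flags so
	 * that all allocations in this context are implicitly GFP_NOFS
	 * and fs reclaim recursion is avoided.
	 */
	current_set_flags_nested(&pflags, PF_MEMALLOC_NOFS);
	xfs_stop_block_reaping(mp);
	xfs_save_resvblks(mp);
	xfs_quiesce_attr(mp);
	error = xfs_sync_sb(mp, true);
	current_restore_flags_nested(&pflags, PF_MEMALLOC_NOFS);
	return error;
}

That way the NOFS context covers the whole freeze path, including the
unmount record write, and the PF_MEMALLOC_NOFS fiddling in
xlog_unmount_write() can go away.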

--D

> So if we set PF_MEMALLOC_NOFS in xfs_fs_freeze(), it covers all
> the allocations in this problematic path, and it should obviate
> the need for the first patch in the series altogether.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx