Re: [PATCH for rc] mm/shmem: Ensure proper fallback if page faults
From: Matthew Wilcox
Date: Mon Oct 24 2022 - 17:00:35 EST
On Mon, Oct 24, 2022 at 09:54:30AM -0700, Ira Weiny wrote:
> On Sun, Oct 23, 2022 at 09:33:05PM -0700, Ira wrote:
> > From: Ira Weiny <ira.weiny@xxxxxxxxx>
> >
> > The kernel test robot flagged a recursive lock as a result of the
> > conversion from kmap_atomic() to kmap_local_folio() [Link].
> >
> > The cause was the code depending on the kmap_atomic() side effect
> > of disabling page faults. In that case the code expects the fault to
> > fail and the fallback path to be taken.
> >
> > git archaeology implied that the recursion may not be an actual bug.[1]
> > However, the mmap_lock needed by the fault may be the one already held.[2]
> >
> > Add an explicit pagefault_disable() and a big comment to explain this
> > for future souls looking at this code.
> >
> > [1] https://lore.kernel.org/all/Y1MymJ%2FINb45AdaY@iweiny-desk3/
> > [2] https://lore.kernel.org/all/Y1M2p9OtBGnKwGUE@x1n/
> >
> > Fixes: 7a7256d5f512 ("shmem: convert shmem_mfill_atomic_pte() to use a folio")
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Cc: Randy Dunlap <rdunlap@xxxxxxxxxxxxx>
> > Cc: Peter Xu <peterx@xxxxxxxxxx>
> > Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> > Reported-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> > Reported-by: kernel test robot <yujie.liu@xxxxxxxxx>
> > Link: https://lore.kernel.org/r/202210211215.9dc6efb5-yujie.liu@xxxxxxxxx
> > Signed-off-by: Ira Weiny <ira.weiny@xxxxxxxxx>
> >
> > ---
> > Thanks to Matt and Andrew for initial diagnosis.
> > Thanks to Randy for pointing out C code needs ';' :-D
> > Thanks to Andrew for suggesting an elaborate comment
> > Thanks to Peter for pointing out that the mm's may be the same.
> > ---
> > mm/shmem.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 8280a5cb48df..c1bca31cd485 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2424,9 +2424,16 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> >
> > if (!zeropage) { /* COPY */
> > page_kaddr = kmap_local_folio(folio, 0);
> > + /*
> > + * The mmap_lock is held here. Disable page faults to
> > + * prevent deadlock should copy_from_user() fault. The
> > + * copy will be retried outside the mmap_lock.
> > + */
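[ For context: after this comment the patch wraps the copy in
pagefault_disable()/pagefault_enable(). The surrounding code then looks
roughly like the sketch below; the fallback details are paraphrased from
the commit message and nearby shmem code rather than quoted from the
diff itself. ]

	pagefault_disable();
	ret = copy_from_user(page_kaddr,
			     (const void __user *)src_addr, PAGE_SIZE);
	pagefault_enable();
	kunmap_local(page_kaddr);

	/* fall back to copy_from_user() outside the mmap_lock */
	if (unlikely(ret)) {
		*pagep = &folio->page;
		ret = -ENOENT;		/* caller retries the copy unlocked */
		/* don't free the page */
		goto out_unacct_blocks;
	}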
>
> Offline, Dave Hansen and I were discussing this, and he was concerned that the
> comment implies a deadlock would always occur rather than that one might.
>
> I was not clear on this, as I was thinking the mmap_lock read side was non-recursive.
>
> So I think we have 3 cases, only 1 of which will actually deadlock, and that
> one is, as Dave puts it, currently theoretical.
>
> 1) Different mm's are in play (no issue)
> 2) Read lock implementation is recursive and the same mm is in play (no issue)
> 3) Read lock implementation is _not_ recursive (issue)
>
> In both 1 and 2, lockdep is incorrectly flagging the issue, but 3 is a real
> problem, and I think this is what Andrea was thinking of.
The read lock implementation is only recursive if nobody else has taken,
or is queued waiting for, the write lock. AIUI, no other process can take a write lock on the
mmap_lock (other processes can take read locks by examining
/proc/$pid/maps, for example), although maybe ptrace can take the
mmap_lock for write?
But if you have a multithreaded process, one of the other threads can
call mmap(), and that will prevent recursion (due to fairness). Even if
it's a different process whose mmap_lock you're trying to acquire for
read, you can still get into a deadly embrace, e.g.:
process A thread 1 takes read lock on its own mmap_lock
process A thread 2 calls mmap, blocks taking write lock
process B thread 1 takes page fault, read lock on its own mmap_lock
process B thread 2 calls mmap, blocks taking write lock
process A thread 1 blocks taking read lock on process B's mmap_lock
process B thread 1 blocks taking read lock on process A's mmap_lock
Now all four threads are blocked waiting for each other.
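[ If the fairness point is easier to see concretely: here is a
self-contained userspace sketch, using glibc's writer-preferring rwlock
kind as a stand-in for the mmap_lock. The _np attribute and constant
are glibc-specific, and this is only an analogy to the kernel's rwsem
behaviour, not the kernel code. Build with gcc -pthread. ]

#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_rwlock_t lock;

static void *writer(void *arg)
{
	pthread_rwlock_wrlock(&lock);	/* queues behind the held read lock */
	pthread_rwlock_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_rwlockattr_t attr;
	struct timespec ts;
	pthread_t t;

	pthread_rwlockattr_init(&attr);
	/* glibc-specific: a queued writer blocks new readers */
	pthread_rwlockattr_setkind_np(&attr,
			PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
	pthread_rwlock_init(&lock, &attr);

	pthread_rwlock_rdlock(&lock);	/* first read lock succeeds */
	pthread_create(&t, NULL, writer, NULL);
	sleep(1);			/* give the writer time to queue */

	clock_gettime(CLOCK_REALTIME, &ts);
	ts.tv_sec += 2;
	/* the "recursive" read now waits behind the queued writer */
	if (pthread_rwlock_timedrdlock(&lock, &ts) == ETIMEDOUT)
		printf("second rdlock blocked; recursion is not safe\n");

	pthread_rwlock_unlock(&lock);	/* lets the writer, then join, proceed */
	pthread_join(t, NULL);
	return 0;
}

Once the writer is queued, the second rdlock times out instead of
recursing, which is exactly the window the scenario above exploits.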