Re: [PATCH 2/2 v2] mm/zsmalloc.c: Fix race condition in zs_destroy_pool
From: Henry Burns
Date: Fri Aug 23 2019 - 04:10:25 EST
On Thu, Aug 22, 2019 at 7:23 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Tue, 20 Aug 2019 11:59:39 +0900 Sergey Senozhatsky <sergey.senozhatsky.work@xxxxxxxxx> wrote:
> > On (08/09/19 11:17), Henry Burns wrote:
> > > In zs_destroy_pool() we call flush_work(&pool->free_work). However, we
> > > have no guarantee that migration isn't happening in the background
> > > at that time.
> > >
> > > Since migration can't directly free pages, it relies on free_work
> > > being scheduled to free the pages. But there's nothing preventing an
> > > in-progress migrate from queuing the work *after*
> > > zs_unregister_migration() has called flush_work(), which would leave
> > > pages still pointing at the inode after we free it.
> > >
> > > Since we know at destroy time all objects should be free, no new
> > > migrations can come in (since zs_page_isolate() fails for fully-free
> > > zspages). This means it is sufficient to track a "# isolated zspages"
> > > count by class, and have the destroy logic ensure all such pages have
> > > drained before proceeding. Keeping that state under the class
> > > spinlock keeps the logic straightforward.
> > >
> > > Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
> > > Signed-off-by: Henry Burns <henryburns@xxxxxxxxxx>
> > Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>
> Thanks. So we have a couple of races which result in memory leaks? Do
> we feel this is serious enough to justify a -stable backport of the
> fixes?
In this case a memory leak could lead to an eventual crash if
compaction hits the leaked page. I don't know what a -stable
backport entails, but this crash would only occur if people are
changing their zswap backend at runtime (which is what eventually
triggers pool destruction).
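
To make the ordering concrete, here is a rough condensation of the
racy teardown. This is an illustrative sketch, not the actual
mm/zsmalloc.c source: the stand-in struct carries only the fields the
race touches, and the function body is heavily simplified.

#include <linux/fs.h>
#include <linux/workqueue.h>

/* Illustrative stand-in for the real zs_pool. */
struct zs_pool {
	struct inode *inode;
	struct work_struct free_work;
};

static void zs_unregister_migration(struct zs_pool *pool)
{
	flush_work(&pool->free_work);
	/*
	 * RACE WINDOW: a migrate already in flight when we flushed can
	 * still call schedule_work(&pool->free_work) at this point.
	 */
	iput(pool->inode);
	/*
	 * The late free_work then runs against zspages whose
	 * page->mapping still points at the inode we just dropped.
	 */
}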
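
And a minimal sketch of the fix described in the changelog: a
per-class isolated-zspage count kept under the class spinlock, which
the destroy path waits to drain before tearing anything down. Field,
macro, and function names here are illustrative, not necessarily what
the patch itself uses.

#include <linux/sched.h>
#include <linux/spinlock.h>

#define NR_CLASSES 255			/* stand-in for the real class count */

/* Illustrative per-class state; only what the drain logic needs. */
struct size_class {
	spinlock_t lock;
	int isolated;			/* # currently-isolated zspages */
};

struct zs_pool {
	struct size_class *size_class[NR_CLASSES];
};

/*
 * Called from zs_destroy_pool() before flush_work(). Because
 * zs_page_isolate() fails for fully-free zspages, no new isolations
 * can begin once all objects are freed, so each count can only fall
 * and the loop terminates.
 */
static void wait_for_isolated_drain(struct zs_pool *pool)
{
	int i;

	for (i = 0; i < NR_CLASSES; i++) {
		struct size_class *class = pool->size_class[i];

		spin_lock(&class->lock);
		while (class->isolated) {
			spin_unlock(&class->lock);
			cond_resched();	/* let migration and free_work finish */
			spin_lock(&class->lock);
		}
		spin_unlock(&class->lock);
	}
}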