Re: [PATCH v2] zsmalloc: fix migrate_zspage-zs_free race condition
From: Junil Lee
Date: Sun Jan 17 2016 - 20:02:43 EST
On 2016-01-16 7:05 PM, Sergey Senozhatsky wrote:
> On (01/16/16 09:16), Vlastimil Babka wrote:
> [..]
> > BTW, couldn't the correct fix also just look like this?
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 9f15bdd9163c..43f743175ede 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1635,8 +1635,8 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> >  		free_obj = obj_malloc(d_page, class, handle);
> >  		zs_object_copy(free_obj, used_obj, class);
> >  		index++;
> > +		/* This also effectively unpins the handle */
> >  		record_obj(handle, free_obj);
> > -		unpin_tag(handle);
> >  		obj_free(pool, class, used_obj);
> >  	}
>
> I think this will work.
>
I agree.
I have also been testing the previous patch I sent, and the problem has
not occurred for the last two days.
I will resend a v3 using Babka's approach.
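For reference, with that change the relevant part of the migrate_zspage()
loop would look roughly like the sketch below (based on the hunk quoted
above, not the literal v3):

		used_obj = handle_to_obj(handle);
		free_obj = obj_malloc(d_page, class, handle);
		zs_object_copy(free_obj, used_obj, class);
		index++;
		/*
		 * Overwriting the handle word clears HANDLE_PIN_BIT, so
		 * record_obj() already releases the pin here.  With the old
		 * code, a zs_free() spinning in pin_tag() could take the pin
		 * right after this store, and the explicit unpin_tag() that
		 * followed would then drop zs_free()'s pin, letting both
		 * paths use the object at the same time.
		 */
		record_obj(handle, free_obj);
		obj_free(pool, class, used_obj);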
Thanks.
>
> > But I'd still recommend WRITE_ONCE in record_obj(). And I'm not even
> > sure it's safe on all architectures to do a simple overwrite of a word
> > against somebody else trying to lock a bit there?
>
> hm... for example, generic bitops from include/asm-generic/bitops/atomic.h
> use _atomic_spin_lock_irqsave()
>
> #define test_and_set_bit_lock(nr, addr) test_and_set_bit(nr, addr)
>
> static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
> {
> 	unsigned long mask = BIT_MASK(nr);
> 	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> 	unsigned long old;
> 	unsigned long flags;
>
> 	_atomic_spin_lock_irqsave(p, flags);
> 	old = *p;
> 	*p = old | mask;
> 	_atomic_spin_unlock_irqrestore(p, flags);
>
> 	return (old & mask) != 0;
> }
>
> so overwriting it from the outside world (w/o taking
> _atomic_spin_lock_irqsave(p)) can theoretically be tricky in some cases.
>
> -ss
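I see; so on a configuration that uses these spinlock-based generic
bitops, a plain store from record_obj() racing with the locked
read-modify-write could in theory be lost, e.g. (hypothetical
interleaving, not real code):

	CPU A (pin_tag -> test_and_set_bit_lock)     CPU B (record_obj)
	----------------------------------------     ------------------
	_atomic_spin_lock_irqsave(p, flags);
	old = *p;
	                                             *p = obj;  /* plain store */
	*p = old | mask;   /* CPU B's store is overwritten */
	_atomic_spin_unlock_irqrestore(p, flags);

So WRITE_ONCE() alone would not close that window on such configurations;
it only prevents the compiler from tearing the store.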