Re: [PATCH 04/14] mm,migration: Allow the migration of PageSwapCache pages

From: Minchan Kim
Date: Thu Apr 22 2010 - 10:24:16 EST


On Thu, 2010-04-22 at 19:51 +0900, KAMEZAWA Hiroyuki wrote:
> On Thu, 22 Apr 2010 19:31:06 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>
> > On Thu, 22 Apr 2010 19:13:12 +0900
> > Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
> >
> > > On Thu, Apr 22, 2010 at 6:46 PM, KAMEZAWA Hiroyuki
> > > <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >
> > > > Hmm.. in my test, the case was:
> > > >
> > > > Before try_to_unmap:
> > > > mapcount=1, SwapCache, remap_swapcache=1
> > > > After remap:
> > > > mapcount=0, SwapCache, rc=0.
> > > >
> > > > So, I think there may be some race in rmap_walk() and vma handling
> > > > or anon_vma handling; a migration_entry isn't found by rmap_walk().
> > > >
> > > > Hmm.. it seems this kind of patch will be required for debugging.
> > >
>
> OK, here is my patch for a _fix_. But I'm still testing...
> It has been running well for at least 30 minutes, where I used to see
> the bug within 10 minutes. But this patch is too naive; please think
> about a better fix.
>
> ==
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>
> At vma_adjust(), the vma's start address and pgoff are updated under
> the write lock of mmap_sem. This means an update of the vma's rmap
> information is atomic only with respect to readers holding mmap_sem
> for read.
>
>
> Even when the update is not seen atomically, in the usual case
> try_to_unmap() etc. just fails to decrease the mapcount to 0.
> No problem.
>
> But page migration's rmap_walk() has to find all migration entries
> in the page tables and recover the mapcount.
>
> So, this race on the vma's address is critical. When rmap_walk()
> hits the race, it mistakenly gets -EFAULT and doesn't call
> rmap_one(). This patch adds a lock for the vma's rmap information.
> But this is _very slow_.
> We need some sophisticated, light-weight update scheme for this.
>
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> ---
> include/linux/mm_types.h | 1 +
> kernel/fork.c | 1 +
> mm/mmap.c | 11 ++++++++++-
> mm/rmap.c | 3 +++
> 4 files changed, 15 insertions(+), 1 deletion(-)
>
> Index: linux-2.6.34-rc4-mm1/include/linux/mm_types.h
> ===================================================================
> --- linux-2.6.34-rc4-mm1.orig/include/linux/mm_types.h
> +++ linux-2.6.34-rc4-mm1/include/linux/mm_types.h
> @@ -183,6 +183,7 @@ struct vm_area_struct {
> #ifdef CONFIG_NUMA
> struct mempolicy *vm_policy; /* NUMA policy for the VMA */
> #endif
> + spinlock_t adjust_lock;
> };
>
> struct core_thread {
> Index: linux-2.6.34-rc4-mm1/mm/mmap.c
> ===================================================================
> --- linux-2.6.34-rc4-mm1.orig/mm/mmap.c
> +++ linux-2.6.34-rc4-mm1/mm/mmap.c
> @@ -584,13 +584,20 @@ again: remove_next = 1 + (end > next->
> if (adjust_next)
> vma_prio_tree_remove(next, root);
> }
> -
> + /*
> + * Change all parameters atomically; otherwise, vma_address()
> + * in rmap.c can see an inconsistent result.
> + */
> + spin_lock(&vma->adjust_lock);
> vma->vm_start = start;
> vma->vm_end = end;
> vma->vm_pgoff = pgoff;
> + spin_unlock(&vma->adjust_lock);
> if (adjust_next) {
> + spin_lock(&next->adjust_lock);
> next->vm_start += adjust_next << PAGE_SHIFT;
> next->vm_pgoff += adjust_next;
> + spin_unlock(&next->adjust_lock);
> }
>
> if (root) {
> @@ -1939,6 +1946,7 @@ static int __split_vma(struct mm_struct
> *new = *vma;
>
> INIT_LIST_HEAD(&new->anon_vma_chain);
> + spin_lock_init(&new->adjust_lock);
>
> if (new_below)
> new->vm_end = addr;
> @@ -2338,6 +2346,7 @@ struct vm_area_struct *copy_vma(struct v
> if (IS_ERR(pol))
> goto out_free_vma;
> INIT_LIST_HEAD(&new_vma->anon_vma_chain);
> + spin_lock_init(&new_vma->adjust_lock);
> if (anon_vma_clone(new_vma, vma))
> goto out_free_mempol;
> vma_set_policy(new_vma, pol);
> Index: linux-2.6.34-rc4-mm1/kernel/fork.c
> ===================================================================
> --- linux-2.6.34-rc4-mm1.orig/kernel/fork.c
> +++ linux-2.6.34-rc4-mm1/kernel/fork.c
> @@ -350,6 +350,7 @@ static int dup_mmap(struct mm_struct *mm
> goto fail_nomem;
> *tmp = *mpnt;
> INIT_LIST_HEAD(&tmp->anon_vma_chain);
> + spin_lock_init(&tmp->adjust_lock);
> pol = mpol_dup(vma_policy(mpnt));
> retval = PTR_ERR(pol);
> if (IS_ERR(pol))
> Index: linux-2.6.34-rc4-mm1/mm/rmap.c
> ===================================================================
> --- linux-2.6.34-rc4-mm1.orig/mm/rmap.c
> +++ linux-2.6.34-rc4-mm1/mm/rmap.c
> @@ -332,11 +332,14 @@ vma_address(struct page *page, struct vm
> pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
> unsigned long address;
>
> + spin_lock(&vma->adjust_lock);
> address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
> + spin_unlock(&vma->adjust_lock);
> /* page should be within @vma mapping range */
> return -EFAULT;
> }
> + spin_unlock(&vma->adjust_lock);
> return address;
> }
>

Nice catch, Kame. :)

As a further optimization, we could take vma->adjust_lock only when
the unlocked vma_address() computation falls out of range, and recheck
under the lock. But I hope we can redesign it without new locking.
I don't have a good idea right now, though. :(
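
Something like this untested sketch of that idea, reusing the
adjust_lock from your patch above: compute the address without the
lock first, and fall back to the lock only when the result looks out
of range, so the common case stays lock-free. (It assumes an in-range
result from the racy read is usable; whether a torn vm_start/vm_pgoff
read could still land in range needs more thought.)

static unsigned long
vma_address(struct page *page, struct vm_area_struct *vma)
{
	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
	unsigned long address;

	/* Fast path: unlocked computation, OK when the result is in range. */
	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
	if (likely(address >= vma->vm_start && address < vma->vm_end))
		return address;

	/* Slow path: recompute under the lock to rule out the race. */
	spin_lock(&vma->adjust_lock);
	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
	if (address < vma->vm_start || address >= vma->vm_end)
		address = -EFAULT;	/* page really outside @vma */
	spin_unlock(&vma->adjust_lock);

	return address;
}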

--
Kind regards,
Minchan Kim

