Re: [PATCH v3 01/17] mm: support madvise(MADV_FREE)
From: Minchan Kim
Date: Fri Nov 13 2015 - 01:16:54 EST
On Thu, Nov 12, 2015 at 01:26:20PM +0200, Kirill A. Shutemov wrote:
> On Thu, Nov 12, 2015 at 01:32:57PM +0900, Minchan Kim wrote:
> > @@ -256,6 +260,125 @@ static long madvise_willneed(struct vm_area_struct *vma,
> > return 0;
> > }
> >
> > +static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > +				unsigned long end, struct mm_walk *walk)
> > +{
> > + struct mmu_gather *tlb = walk->private;
> > + struct mm_struct *mm = tlb->mm;
> > + struct vm_area_struct *vma = walk->vma;
> > + spinlock_t *ptl;
> > + pte_t *pte, ptent;
> > + struct page *page;
> > +
> > + split_huge_page_pmd(vma, addr, pmd);
> > + if (pmd_trans_unstable(pmd))
> > + return 0;
> > +
> > + pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
> > + arch_enter_lazy_mmu_mode();
> > + for (; addr != end; pte++, addr += PAGE_SIZE) {
> > + ptent = *pte;
> > +
> > + if (!pte_present(ptent))
> > + continue;
> > +
> > + page = vm_normal_page(vma, addr, ptent);
> > + if (!page)
> > + continue;
> > +
> > + if (PageSwapCache(page)) {
>
> Could you put VM_BUG_ON_PAGE(PageTransCompound(page), page) here?
> Just in case.
No problem.
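Something like this in the next spin (just a sketch; the comment wording
may change):

	if (PageSwapCache(page)) {
		/*
		 * split_huge_page_pmd() above should have split any
		 * THP, so seeing a compound page here would be a bug.
		 */
		VM_BUG_ON_PAGE(PageTransCompound(page), page);

		if (!trylock_page(page))
			continue;
		...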
>
> > + if (!trylock_page(page))
> > + continue;
> > +
> > + if (!try_to_free_swap(page)) {
> > + unlock_page(page);
> > + continue;
> > + }
> > +
> > + ClearPageDirty(page);
> > + unlock_page(page);
>
> Hm. Do we handle pages shared over fork() here?
> Shouldn't we ignore pages with mapcount > 0?
It was handled in a later patch for historical reasons, but it's
better to fold that patch into this one.
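Roughly like this once folded (untested sketch; the exact bail-out
condition may differ in the next spin):

	if (PageSwapCache(page)) {
		if (!trylock_page(page))
			continue;

		/*
		 * A page shared over fork() must keep its swap slot
		 * and dirty state for the other processes mapping it,
		 * so only proceed when we are the sole mapper.
		 */
		if (page_mapcount(page) != 1) {
			unlock_page(page);
			continue;
		}

		if (!try_to_free_swap(page)) {
			unlock_page(page);
			continue;
		}

		ClearPageDirty(page);
		unlock_page(page);
	}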
Thanks for review!