Re: [patch][v2] swap: virtual swap readahead

From: Rik van Riel
Date: Wed Jun 03 2009 - 10:53:53 EST


Johannes Weiner wrote:
> On Wed, Jun 03, 2009 at 01:34:57AM +0200, Andi Kleen wrote:
> > On Wed, Jun 03, 2009 at 12:37:39AM +0200, Johannes Weiner wrote:

> > > +	pgd = pgd_offset(vma->vm_mm, pos);
> > > +	if (!pgd_present(*pgd))
> > > +		continue;
> > > +	pud = pud_offset(pgd, pos);
> > > +	if (!pud_present(*pud))
> > > +		continue;
> > > +	pmd = pmd_offset(pud, pos);
> > > +	if (!pmd_present(*pmd))
> > > +		continue;
> > > +	pte = pte_offset_map_lock(vma->vm_mm, pmd, pos, &ptl);
> > You could be more efficient here by using the standard mm/* nested loop
> > pattern that avoids relooking everything up in each iteration.  I suppose
> > it would mainly make a difference with 32-bit highpte, where mapping a pte
> > can be somewhat costly.  And you would take fewer locks this way.
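
For reference, the nested loop pattern being referred to looks roughly like
the sketch below (only the two innermost levels are shown; the pgd/pud levels
follow the same shape).  The function names and the per-pte readahead hook are
illustrative, not from the patch; the point is that the pte page is mapped and
its lock taken once per pmd entry rather than once per pte, which is where the
32-bit highpte and locking savings would come from.

#include <linux/mm.h>	/* page table helpers, pte_offset_map_lock() */

/* Sketch only: names and the per-pte hook are made up for illustration. */
static void ra_walk_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
			      unsigned long addr, unsigned long end)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* Map the pte page and take its lock once for the whole range. */
	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	do {
		/* A non-present, non-empty pte is a readahead candidate. */
		if (pte_none(*pte) || pte_present(*pte))
			continue;
		/* ... queue readahead for the swap entry behind *pte ... */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	pte_unmap_unlock(pte - 1, ptl);
}

static void ra_walk_pmd_range(struct vm_area_struct *vma, pud_t *pud,
			      unsigned long addr, unsigned long end)
{
	unsigned long next;
	pmd_t *pmd;

	pmd = pmd_offset(pud, addr);
	do {
		next = pmd_addr_end(addr, end);
		if (pmd_none_or_clear_bad(pmd))
			continue;
		ra_walk_pte_range(vma, pmd, addr, next);
	} while (pmd++, addr = next, addr != end);
}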

> I ran into weird problems here.  The above version is actually faster
> in the benchmarks than writing a nested-level walker or using
> walk_page_range().  Still digging, but it can take some time.  Busy
> week :(
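
walk_page_range() is the generic walker in mm/pagewalk.c; used from here it
would look roughly like the sketch below.  This assumes the mm_walk interface
as it looked around 2.6.30, and the callback name and the readahead hook are
again illustrative rather than anything from the patch.

#include <linux/mm.h>	/* struct mm_walk, walk_page_range() */

/* Sketch only: callback name and the readahead hook are illustrative. */
static int ra_pte_entry(pte_t *pte, unsigned long addr,
			unsigned long next, struct mm_walk *walk)
{
	if (!pte_none(*pte) && !pte_present(*pte)) {
		/* ... queue readahead for the swap entry behind *pte ... */
	}
	return 0;
}

static void ra_walk(struct vm_area_struct *vma,
		    unsigned long start, unsigned long end)
{
	struct mm_walk walk = {
		.pte_entry	= ra_pte_entry,
		.mm		= vma->vm_mm,
	};

	/* Caller is expected to hold mmap_sem. */
	walk_page_range(start, end, &walk);
}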

I'm not too worried about the efficiency of the page table walk,
because swap is an extreme slow path anyway.

--
All rights reversed.