Rik van Riel wrote:
>
> On Fri, 2 Aug 2002, Andrew Morton wrote:
> > Daniel Phillips wrote:
> > >
> > > This patch eliminates about 35% of the raw rmap setup/teardown overhead by
> > > adopting a new locking interface that allows the add_rmaps to be batched in
> > > copy_page_range.
> >
> > Well that's fairly straightforward, thanks. Butt-ugly though ;)
>
> It'd be nice if the code would be a bit more beautiful and the
> reverse mapping scheme more modular.
I changed it to, essentially:

foo()
{
	spinlock_t *rmap_lock = NULL;
	unsigned rmap_lockno = -1;
	...
	for (stuff) {
		cached_rmap_lock(page, &rmap_lock, &rmap_lockno);
		__page_add_rmap(page, ptep);
		...
	}
	drop_rmap_lock(&rmap_lock, &rmap_lockno);
}
See http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.30/daniel-rmap-speedup.patch
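For reference, the helpers would look something like this (a sketch
only: the rmap_locks[] array and the page_rmap_hash() bucket function
are assumed names, see the patch for the real code):

	/*
	 * Cache the hashed rmap lock across loop iterations: only drop
	 * and retake it when the page hashes to a different bucket.
	 */
	static inline void cached_rmap_lock(struct page *page,
			spinlock_t **lock, unsigned *lockno)
	{
		unsigned hash = page_rmap_hash(page);	/* assumed hash */

		if (*lockno != hash) {
			if (*lock)
				spin_unlock(*lock);
			*lock = &rmap_locks[hash];	/* assumed array */
			*lockno = hash;
			spin_lock(*lock);
		}
		/* else: same bucket as the last page, lock already held */
	}

	static inline void drop_rmap_lock(spinlock_t **lock, unsigned *lockno)
	{
		if (*lock) {
			spin_unlock(*lock);
			*lock = NULL;
			*lockno = -1;
		}
	}

Consecutive pages in a pte walk usually hash to the same bucket, which
is why batching the add_rmaps in copy_page_range pays off: one lock
round trip covers a whole run of ptes instead of one per pte.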
Fixing zap_pte_range pretty much requires the pagemap_lru_lock
rework; otherwise we couldn't hold the rmap lock across
tlb_remove_page().
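The zap side would then take the same shape (again only a sketch:
__page_remove_rmap() as the lock-free counterpart of __page_add_rmap()
is an assumed name, and the three-argument tlb_remove_page() is the
2.5.30-era form):

	spinlock_t *rmap_lock = NULL;
	unsigned rmap_lockno = -1;
	...
	for (each pte in the range) {
		struct page *page = pte_page(pte);

		cached_rmap_lock(page, &rmap_lock, &rmap_lockno);
		__page_remove_rmap(page, ptep);
		/*
		 * tlb_remove_page() can end up flushing and freeing
		 * pages, presumably taking pagemap_lru_lock on the way;
		 * holding the rmap lock across it is what needs the
		 * pagemap_lru_lock rework.
		 */
		tlb_remove_page(tlb, ptep, address);
	}
	drop_rmap_lock(&rmap_lock, &rmap_lockno);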
> Remember that we're planning to go to an object-based scheme
> later on, turning the code into a big monolithic mesh really
> makes long-term maintenance a pain...
We have short-term rmap problems:
1) Unexplained pte chain state with ntpd
2) 10-20% increased CPU load in fork/exec/exit loads
3) system lock under heavy mmap load
4) ZONE_NORMAL pte_chain consumption
Daniel and I are on 2); Bill is on 4) (I think).