Re: [PATCH v2] memcg: reduce lock time at move charge (Was Re: [PATCH 04/10] memcg: disable local interrupts in lock_page_cgroup())

From: Andrew Morton
Date: Fri Oct 08 2010 - 00:55:37 EST


On Fri, 8 Oct 2010 13:37:12 +0900 KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:

> On Thu, 7 Oct 2010 16:14:54 -0700
> Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> > On Thu, 7 Oct 2010 17:04:05 +0900
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >
> > > Currently, at task migration between cgroups, memory cgroup scans the page
> > > tables and moves the accounting if the relevant flags are set.
> > >
> > > The core code, mem_cgroup_move_charge_pte_range(), does:
> > >
> > >   pte_offset_map_lock();
> > >   for all ptes in a page table:
> > >       1. look into the page table, find_and_get a page
> > >       2. remove it from the LRU
> > >       3. move the charge
> > >       4. put it back on the LRU, put_page()
> > >   pte_offset_map_unlock();
> > >
> > > This is done for all pte entries in one last-level (pte) page table.
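In rough C, the flow described above looks something like the sketch below. This is a simplified illustration rather than the actual mem_cgroup_move_charge_pte_range(): find_and_get_target_page() and move_one_charge() are made-up stand-ins for the real memcg helpers (is_target_pte_for_mc(), mem_cgroup_move_account() and friends), and error handling is omitted.

	#include <linux/mm.h>
	#include <linux/swap.h>
	#include <linux/memcontrol.h>

	/* Illustrative stand-ins, not real kernel functions. */
	static struct page *find_and_get_target_page(struct mm_struct *mm,
						     unsigned long addr, pte_t ptent);
	static void move_one_charge(struct page *page);

	/*
	 * Sketch of the pre-patch flow: lookup, LRU isolation, charge move
	 * and putback all happen with the pte lock held.
	 */
	static int move_charge_pte_range_sketch(struct mm_struct *mm, pmd_t *pmd,
						unsigned long addr, unsigned long end)
	{
		spinlock_t *ptl;
		pte_t *pte, *orig_pte;

		orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		for (; addr != end; addr += PAGE_SIZE, pte++) {
			struct page *page;

			/* 1. look into the page table, find_and_get a page */
			page = find_and_get_target_page(mm, addr, *pte);
			if (!page)
				continue;

			/* 2. remove it from the LRU (returns 0 on success) */
			if (!isolate_lru_page(page)) {
				/* 3. move the charge to the destination memcg */
				move_one_charge(page);
				/* 4. put it back on the LRU */
				putback_lru_page(page);
			}
			/* drop the reference taken in step 1 */
			put_page(page);
		}
		pte_offset_map_unlock(orig_pte, ptl);
		return 0;
	}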
> > >
> > > This pte_offset_map_lock() hold time seems a bit long, so this patch modifies the routine as follows:
> > >
> > >   repeat, in batches of up to 32 ptes:
> > >       pte_offset_map_lock()
> > >       for each pte in the batch:
> > >           find_and_get a page
> > >           record it
> > >       pte_offset_map_unlock()
> > >       for all recorded pages:
> > >           isolate it from LRU
> > >           move charge
> > >           putback to LRU
> > >       for all recorded pages:
> > >           put_page()
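Again as a rough sketch, using the same made-up helpers and includes as the sketch above, and an illustrative 32-entry batch taken from the description (not a real kernel constant), one batch of the proposed restructuring would look roughly like this:

	#define MC_MOVE_BATCH	32	/* illustrative batch size from the description */

	static int move_charge_pte_range_batched_sketch(struct mm_struct *mm,
					pmd_t *pmd, unsigned long addr,
					unsigned long end)
	{
		struct page *pages[MC_MOVE_BATCH];
		spinlock_t *ptl;
		pte_t *pte, *orig_pte;
		int i, nr = 0;

		/* Pass 1: hold the pte lock only while collecting page references */
		orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		for (; addr != end && nr < MC_MOVE_BATCH; addr += PAGE_SIZE, pte++) {
			struct page *page = find_and_get_target_page(mm, addr, *pte);

			if (page)
				pages[nr++] = page;	/* record it */
		}
		pte_offset_map_unlock(orig_pte, ptl);

		/* Pass 2: isolate, move the charge and put back, with the lock dropped */
		for (i = 0; i < nr; i++) {
			if (!isolate_lru_page(pages[i])) {
				move_one_charge(pages[i]);
				putback_lru_page(pages[i]);
			}
		}

		/* Pass 3: drop the references taken in pass 1 */
		for (i = 0; i < nr; i++)
			put_page(pages[i]);

		return 0;
	}

Note that the recorded pages are now touched in three separate passes (collect, move, release), which is what the reply below takes issue with.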
> >
> > The patch makes the code larger, more complex and slower!
> >
>
> Slower ?

Sure. It walks the same data three times, potentially causing
thrashing in the L1 cache. It takes and releases locks at a higher
frequency. It increases the text size.
