Re: [COUNTERPATCH] mm: avoid overflowing preempt_count() in mm_take_all_locks()
From: Peter Zijlstra
Date: Thu Apr 01 2010 - 07:27:55 EST
On Thu, 2010-04-01 at 14:17 +0300, Avi Kivity wrote:
> On 04/01/2010 02:13 PM, Avi Kivity wrote:
> >> Anyway, I don't see a reason why we can't convert those locks to
> >> mutexes and get rid of the whole preempt disabled region.
> > If someone is willing to audit all code paths to make sure these locks
> > are always taken in schedulable context I agree that's a better fix.
> From mm/rmap.c:
> > /*
> > * Lock ordering in mm:
> > *
> > * inode->i_mutex (while writing or truncating, not reading or
> > faulting)
> > * inode->i_alloc_sem (vmtruncate_range)
> > * mm->mmap_sem
> > * page->flags PG_locked (lock_page)
> > * mapping->i_mmap_lock
> > * anon_vma->lock
> > *
> > * (code doesn't rely on that order so it could be switched around)
> > * ->tasklist_lock
> > * anon_vma->lock (memory_failure, collect_procs_anon)
> > * pte map lock
> > */
> i_mmap_lock is a spinlock, and tasklist_lock is a rwlock, so some
> changes will be needed.
i_mmap_lock will need to change as well; mm_take_all_locks() uses
both anon_vma->lock and mapping->i_mmap_lock.
I've almost got a patch done that converts those two; I still need to
look at where that tasklist_lock muck happens.