Re: [linux-next: Tree for Jun 1] __khugepaged_exit rwsem_down_write_failed lockup
From: Michal Hocko
Date: Fri Jun 03 2016 - 06:05:15 EST
On Fri 03-06-16 11:55:49, Michal Hocko wrote:
> On Fri 03-06-16 17:43:47, Sergey Senozhatsky wrote:
> > On (06/03/16 09:25), Michal Hocko wrote:
> > > > it's quite hard to trigger the bug (somehow), so I can't
> > > > follow up with more information as of now.
> > either I did something very silly fixing up the patch, or the
> > patch may be causing general protection faults on my system.
> > RIP collect_mm_slot() + 0x42/0x84
> > khugepaged
> So is this really collect_mm_slot called directly from khugepaged or is
> some inlining going on there?
> > prepare_to_wait_event
> > maybe_pmd_mkwrite
> > kthread
> > _raw_spin_unlock_irq
> > ret_from_fork
> > kthread_create_on_node
> > collect_mm_slot() + 0x42/0x84 is
> I guess the problem is that I missed that __khugepaged_exit
> doesn't clear the cached khugepaged_scan.mm_slot. Does the following on
> top fix that?
That wouldn't be sufficient after a closer look. We need to do the same
from khugepaged_scan_mm_slot when atomic_inc_not_zero fails. So I guess
it would be better to stick it into collect_mm_slot.
Thanks for your testing!
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6574c62ca4a3..0432581fb87c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2011,6 +2011,9 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
+	/* khugepaged_mm_lock actually not necessary for the below */
+	if (khugepaged_scan.mm_slot == mm_slot)
+		khugepaged_scan.mm_slot = NULL;