Re: [PATCH V4] mm: fix kernel crash in khugepaged thread

From: Vlastimil Babka
Date: Fri Nov 13 2015 - 05:47:41 EST

On 11/12/2015 03:29 PM, Steven Rostedt wrote:
On Thu, 12 Nov 2015 16:21:02 +0800
yalin wang <yalin.wang2010@xxxxxxxxx> wrote:

This crash is caused by a NULL pointer dereference in the page_to_pfn() macro,
when page == NULL:

[ 182.639154 ] Unable to handle kernel NULL pointer dereference at virtual address 00000000

Add the tracepoint with TP_CONDITION(page),

I wonder if we still want to trace even if page is NULL?

I'd say we want to. There's even a "SCAN_PAGE_NULL" result defined for that case, and otherwise we would only have to guess why collapsing failed, which is the thing that the tracepoint should help us find out in the first place :)

to avoid tracing a NULL page.

Signed-off-by: yalin wang <yalin.wang2010@xxxxxxxxx>
include/trace/events/huge_memory.h | 20 ++++++++++++--------
mm/huge_memory.c | 6 +++---
2 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 11c59ca..727647b 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -45,12 +45,14 @@ SCAN_STATUS
#define EM(a, b) {a, b},
#define EMe(a, b) {a, b}


- TP_PROTO(struct mm_struct *mm, unsigned long pfn, bool writable,
+ TP_PROTO(struct mm_struct *mm, struct page *page, bool writable,
bool referenced, int none_or_zero, int status, int unmapped),

- TP_ARGS(mm, pfn, writable, referenced, none_or_zero, status, unmapped),
+ TP_ARGS(mm, page, writable, referenced, none_or_zero, status, unmapped),

__field(struct mm_struct *, mm)
@@ -64,7 +66,7 @@ TRACE_EVENT(mm_khugepaged_scan_pmd,

__entry->mm = mm;
- __entry->pfn = pfn;
+ __entry->pfn = page_to_pfn(page);

Instead of the condition, we could have:

__entry->pfn = page ? page_to_pfn(page) : -1;

I agree. Please do it like this.
