Re: [discuss] mmap, mbind and write to mmap'ed memory crashes 2.6.16-rc1[2] on 2 node X86_64

From: Bharata B Rao
Date: Wed Feb 15 2006 - 00:39:36 EST


On Tue, Feb 14, 2006 at 11:33:00AM -0800, Christoph Lameter wrote:
> I just took another look at this issue and I cannot see anything wrong. An
> empty zone should be ignored by the page allocator since nr_free == 0. My
> patch should not be needed.

There is a check for list_empty(&area->free_list) in __rmqueue(), which
I think is one of the points where the page allocator tests whether a
free_area list is empty. The zone in use when the crash happens slips past
this test: as the dump below shows, its free_list heads were never
initialized (next == prev == 0x0), so list_empty() -- which only reports a
list as empty when the head points back to itself -- returns false, and
__rmqueue() goes on to dereference a NULL list pointer.

>
> Could you get us the contents of the struct zone that the page allocator
> is trying to get memory from?

The zone looks like this:

crash> p *(struct zone *)0xffff81000000e700
$1 = {
free_pages = 0,
pages_min = 0,
pages_low = 0,
pages_high = 0,
lowmem_reserve = {0, 0, 0, 0},
pageset = {0xffff81000c013740, 0xffff81013fc42f40, 0xffffffff8071d600,
0xffffffff8071d680, 0xffffffff8071d700, 0xffffffff8071d780,
0xffffffff8071d800, 0xffffffff8071d880, 0xffffffff8071d900,
0xffffffff8071d980, 0xffffffff8071da00, 0xffffffff8071da80,
0xffffffff8071db00, 0xffffffff8071db80, 0xffffffff8071dc00,
0xffffffff8071dc80, 0xffffffff8071dd00, 0xffffffff8071dd80,
0xffffffff8071de00, 0xffffffff8071de80, 0xffffffff8071df00,
0xffffffff8071df80, 0xffffffff8071e000, 0xffffffff8071e080,
0xffffffff8071e100, 0xffffffff8071e180, 0xffffffff8071e200,
0xffffffff8071e280, 0xffffffff8071e300, 0xffffffff8071e380,
0xffffffff8071e400, 0xffffffff8071e480},
lock = {
raw_lock = {
slock = 0
},
break_lock = 1
},
free_area = {{
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}, {
free_list = {
next = 0x0,
prev = 0x0
},
nr_free = 0
}},
_pad1_ = {
x = 0xffff81000000e980 "\001"
},
lru_lock = {
raw_lock = {
slock = 1
},
break_lock = 0
},
active_list = {
next = 0xffff81000000e988,
prev = 0xffff81000000e988
},
inactive_list = {
next = 0xffff81000000e998,
prev = 0xffff81000000e998
},
nr_scan_active = 0,
nr_scan_inactive = 0,
nr_active = 0,
nr_inactive = 0,
pages_scanned = 0,
all_unreclaimable = 0,
reclaim_in_progress = {
counter = 0
},
last_unsuccessful_zone_reclaim = 0,
temp_priority = 12,
prev_priority = 12,
_pad2_ = {
x = 0xffff81000000ea00 ""
},
wait_table = 0x0,
wait_table_size = 0,
wait_table_bits = 0,
zone_pgdat = 0xffff81000000e000,
zone_mem_map = 0x0,
zone_start_pfn = 0,
spanned_pages = 0,
present_pages = 0,
name = 0xffffffff804a858c "Normal"
}

Regards,
Bharata.