Roger, thanks for the information.

I have seen RH3.0 crash on 32GB systems because it has too
much memory tied up in write cache. It required Update 2 (this
was a while ago) and a change of a parameter in /proc to prevent
the crash; the crash was an OOM condition caused by an
overaggressive write-caching change Red Hat implemented in the
kernel. To duplicate the bug, you booted the machine and ran a dd
to create a very large file, filling the disk.
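The reproduction recipe above can be sketched as a bounded test. This is only a sketch: the file path and size are placeholders, and the original report ran dd with no count so it filled the whole disk. The `Dirty`/`Cached` field names also vary by kernel version.

```shell
# Bounded sketch of the repro: one large sequential write pushes a
# burst of dirty data into the write cache. The original recipe
# omitted "count=" so dd ran until the disk filled.
BIGFILE=/tmp/bigfile            # placeholder; use the filesystem under test
dd if=/dev/zero of="$BIGFILE" bs=1M count=256 2>/dev/null
grep -E '^(Dirty|Cached):' /proc/meminfo   # watch write cache grow while dd runs
rm -f "$BIGFILE"
```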
We did test and determined that the issue did not appear if
you had less than 28GB of RAM. This was on an Itanium machine,
so I don't know whether it occurs on other arches, or whether
it occurs at the same memory limits on other arches.
Roger
-----Original Message-----
From: linux-kernel-owner@xxxxxxxxxxxxxxx [mailto:linux-kernel-owner@xxxxxxxxxxxxxxx] On Behalf Of Márcio Oliveira
Sent: Friday, July 22, 2005 2:42 PM
To: Neil Horman
Cc: arjanv@xxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
Subject: Re: Memory Management
Neil Horman wrote:
On Fri, Jul 22, 2005 at 11:32:52AM -0300, Márcio Oliveira wrote:
Neil Horman wrote:
On Thu, Jul 21, 2005 at 10:40:54AM -0300, Márcio Oliveira wrote:
http://people.redhat.com/nhorman/papers/rhel3_vm.pdf
I wrote this with norm awhile back. It may help you out.
Regards
Neil

Neil,
Thanks.
How can /proc virtual memory parameters like overcommit_memory,
overcommit_ratio and page_cache help me to solve / reduce Out Of
Memory conditions on servers with 16GB RAM and lots of GB of swap?

I wouldn't touch memory overcommit if you are already seeing out of
memory issues. If you are using lots of pagecache, I would suggest
increasing inactive_clean_percent, reducing the pagecache.max value,
and modifying the bdflush parameters in the above document such that
bdflush runs sooner, more often, and does more work per iteration.
This will help you move data in pagecache back to disk more
aggressively, so that memory will be available for other purposes,
like heap allocations. Also, if you're using a Red Hat kernel and you
have 16GB of RAM in your system, you're a good candidate for the
hugemem kernel. Rather than a straightforward out of memory condition,
you may be seeing an exhaustion of your kernel's address space (check
LowFree in /proc/meminfo). In this event the hugemem kernel will help
you, in that it increases your Low Memory address space from 1GB to
4GB, preventing some OOM conditions.
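The suggestions above map onto /proc writes roughly like the following. This is a sketch for RHEL3's 2.4-based kernel only (these files are not present on mainline kernels), and the values are illustrative assumptions, not recommendations; the field meanings are documented in the paper linked above.

```shell
# RHEL3 2.4 kernel VM tunables (sketch; values are illustrative).
echo 30 > /proc/sys/vm/inactive_clean_percent   # keep more clean, instantly-reclaimable pages
echo 1 15 30 > /proc/sys/vm/pagecache           # min/borrow/max: cap pagecache at 30% of RAM
# bdflush fields tuned so it runs sooner, more often, and flushes more
# buffers per iteration (first field: % dirty buffers before bdflush activates)
echo 10 1500 0 0 100 3000 60 20 0 > /proc/sys/vm/bdflush
```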
The kernel does not free cached memory (~10-12GB of total RAM (16GB)
are under the cached line). Is there some way to force the kernel to
free cached memory?

Cached memory is freed on demand. Just because it's listed under the
cached line below doesn't mean it can't be freed and used for another
purpose. Implement the tunings above, and your situation should improve.
Regards
Neil
/proc/meminfo:
        total:       used:       free:  shared: buffers:     cached:
Mem:  16603488256 16523333632  80154624       0 70651904 13194563584
Swap: 17174257664    11771904 17162485760
MemTotal:  16214344 kB
MemFree:      78276 kB
Buffers:      68996 kB
Cached:    12874808 kB
Thanks to all.
Marcio.

Neil,
Thanks for the answers!
The following lines are the Out Of Memory log:
Jul 20 13:45:44 server kernel: Out of Memory: Killed process 23716 (oracle).
Jul 20 13:45:44 server kernel: Fixed up OOM kill of mm-less task
Jul 20 13:45:45 server su(pam_unix)[3848]: session closed for user root
Jul 20 13:45:48 server kernel: Mem-info:
Jul 20 13:45:48 server kernel: Zone:DMA freepages: 1884 min: 0 low: 0 high: 0
Jul 20 13:45:48 server kernel: Zone:Normal freepages: 1084 min: 1279 low: 4544 high: 6304
Jul 20 13:45:48 server kernel: Zone:HighMem freepages: 386679 min: 255 low: 61952 high: 92928
Jul 20 13:45:48 server kernel: Free pages: 389647 (386679 HighMem)
Jul 20 13:45:48 server kernel: ( Active: 2259787/488777, inactive_laundry: 244282, inactive_clean: 244366, free: 389647 )
Jul 20 13:45:48 server kernel: aa:0 ac:0 id:0 il:0 ic:0 fr:1884
Jul 20 13:45:48 server kernel: aa:1620 ac:1801 id:231 il:15 ic:0 fr:1085
Jul 20 13:45:48 server kernel: aa:1099230 ac:1157136 id:488536 il:244277 ic:244366 fr:386679
Jul 20 13:45:48 server kernel: 0*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 1*4096kB = 7536kB)
Jul 20 13:45:48 server kernel: 55*4kB 9*8kB 19*16kB 9*32kB 0*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 4340kB)
Jul 20 13:45:48 server kernel: 291229*4kB 46179*8kB 711*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 1546716kB)
Jul 20 13:45:48 server kernel: Swap cache: add 192990, delete 189665, find 21145/90719, race 0+0
Jul 20 13:45:48 server kernel: 139345 pages of slabcache
Jul 20 13:45:48 server kernel: 1890 pages of kernel stacks
Jul 20 13:45:48 server kernel: 0 lowmem pagetables, 274854 highmem pagetables
Jul 20 13:45:48 server kernel: Free swap: 16749720kB
Jul 20 13:45:49 server kernel: 4194304 pages of RAM
Jul 20 13:45:49 server kernel: 3899360 pages of HIGHMEM
Jul 20 13:45:49 server kernel: 140718 reserved pages
Jul 20 13:45:49 server kernel: 35350398 pages shared
Jul 20 13:45:49 server kernel: 3325 pages swap cached
/proc/meminfo LowFree info:
LowFree: 17068 kB ------> Do you think this value is too low?

No, that should be plenty of LowFree, but that number can change
quickly depending on workload.

Zone:Normal freepages: 1084 min: 1279 low: 4544 high: 6304 ----> (freepages < min) Is that normal?
Zone:HighMem freepages: 386679 min: 255 low: 61952 high: 92928 ----> (freepages < min) Is that normal?

You're beneath your low water mark in the normal (lowmem) zone for
free pages, so your kernel is likely trying to get lots of data moved
to disk. Although, given that your largest buddy list has a 2048K
chunk free, I'm hard pressed to see how you aren't able to get memory
when you need it. Do you have a module loaded in your kernel that
might require such large memory allocations?
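The buddy-list reading above can be checked mechanically. This sketch pulls the largest free chunk size out of the Zone:Normal line from the dump (the awk field splitting assumes the exact `N*SIZEkB` term format of the 2.4 Mem-info dump):

```shell
# Largest free contiguous chunk in the Zone:Normal buddy line above.
# Each "N*SIZEkB" term is a count of free blocks at one allocation order.
line='55*4kB 9*8kB 19*16kB 9*32kB 0*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB'
echo "$line" | tr ' ' '\n' | awk -F'[*k]' '$1 > 0 { max = $2 } END { print max "kB" }'
# prints 2048kB: the largest chunk referred to above
```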
Neil

Neil,
Thanks a lot, Neil! Thanks for the help.
I have a storage array attached to the server. Maybe the storage module requires lots of memory.
Maybe the "LowFree" value is wrong (it was captured outside the OOM event), so it is possible that "LowFree" was too small during the OOM condition.
Is there a way to identify if the Low Memory is too small? (some program, command, daemon...)
The server has 16GB RAM and 16GB swap. When the OOM kill condition happens, the system has ~6GB RAM used, ~10GB RAM cached and 16GB free swap. Does that indicate that the server can't allocate Low Memory and starts OOM killing? Because the High Memory is OK, right?
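One way to answer the "is Low Memory too small?" question is to watch LowFree against LowTotal over time. A sketch, using the LowFree value from this thread and an assumed LowTotal of ~896MB (a placeholder for a typical lowmem zone on a non-hugemem kernel; on a live box both fields come from /proc/meminfo):

```shell
# Snapshot check of lowmem headroom. LowTotal below is an assumed
# placeholder; read both fields from /proc/meminfo on the real server.
cat > /tmp/meminfo.snap <<'EOF'
LowTotal:   917504 kB
LowFree:     17068 kB
EOF
awk '/^LowTotal:/ { t = $2 }
     /^LowFree:/  { f = $2 }
     END { printf "LowFree is %.1f%% of LowTotal\n", 100 * f / t }' /tmp/meminfo.snap
# on a live system, track it over time: watch -n5 'grep -i low /proc/meminfo'
```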
Thanks again!
Márcio.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/