Re: Behaviour of the VM on an embedded Linux

From: Christopher Snook
Date: Fri Aug 22 2008 - 17:39:17 EST


Wappler Marcel wrote:
> Alex Riesen wrote:
>>> I'm trying to figure out what's going on on an embedded system I have
>>> to deal with. It's running a 2.6.24.7 kernel on 32 MBytes of RAM. There
>>> is no swapping. There are some daemons and shells running and - a big
>>> monolithic C++ application.
>>>
>>> The application runs a lot of pthreads at different real-time priority
>>> levels. It looks like the application consumes a huge amount of real
>>> memory, in contrast to the assumption that large code size is no
>>> problem because pages of unused code can be paged out.
>>
>> Maybe the kernel won't page anything if the paging support is compiled
>> out. IOW, you still need the paging code even if there are no swap
>> partitions.
>
> Alex, this is the case - I do observe normal operation of the VM
> subsystem; it moves memory pages dynamically throughout the system. But
> when I create a large file on the tmpfs, a kernel OOM occurs and kills
> the big monolithic application instead of stealing pages from it. This
> is what I'm wondering about. In the past everyone told me that code size
> is no problem on systems with MMUs, because in low-memory situations the
> system can steal pages that contain the application's code. But in my
> situation this is not the case.
>
> Any ideas?
>
> Marcel
>
> PS: please CC me on replies

All these things you're doing in userspace have a memory footprint in kernelspace as well, and that memory can't be swapped. Page tables for your tmpfs mappings aren't free. Kernel stacks and task_structs for your threads aren't free.
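
You can see most of that from /proc even on 2.6.24. Something along these
lines should show where it goes (big_app is just a stand-in for your
monolithic binary):

  # per-process page-table size, stack size and thread count (each thread
  # also costs a kernel stack, typically 8 KB, plus a task_struct)
  grep -E 'VmPTE|VmStk|Threads' /proc/$(pidof big_app)/status

  # system-wide: page tables, slab, and what is actually left
  grep -E 'MemFree|Slab|PageTables|Mapped' /proc/meminfo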

Also, there are many places in the kernel where a thread may not go to sleep to wait for memory to be freed. The kernel has asynchronous tasks that try to keep memory free to avoid this problem, but if you're churning through your big monolithic binary, it's getting paged in as fast as the kernel can page it out.
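
You can usually watch that churn happen: major faults against the binary
mean its text is being dropped and read straight back in under pressure.
A rough way to sample it, assuming vmstat (procps or busybox) is on the
box:

  # no swap here, so watch free, buff/cache and the bi/bo columns
  vmstat 1

  # or look at the raw reclaim counters directly
  grep -E 'pgmajfault|pgscan|pgsteal' /proc/vmstat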

That said, the modern VM is tuned with larger systems in mind, so you may be able to improve the situation by tweaking the vm.* sysctls, particularly vm.min_free_kbytes. You can also change oom-killer settings for your process via the /proc/$PID/oom_* parameters. It might help, or it might replace a recoverable userspace oom-kill with an unrecoverable kernel oom panic.
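
For what it's worth, both knobs are plain file writes on 2.6.24. A minimal
sketch - the 4096 and big_app are only illustrative, and -17 (OOM_DISABLE)
exempts the process from the OOM killer completely, which is exactly the
trade-off above:

  # keep a larger free-page reserve so kswapd starts reclaiming earlier
  echo 4096 > /proc/sys/vm/min_free_kbytes

  # bias the OOM killer away from the big application
  # (oom_adj runs from -16 to +15 on this kernel; -17 means "never kill")
  echo -17 > /proc/$(pidof big_app)/oom_adj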

Either way, I'd be a little more conservative about code size on very small systems with no swap.

-- Chris