Re: lowmemory android driver not needed?

From: Arve Hjønnevåg
Date: Wed Jan 28 2009 - 21:51:22 EST


On Wed, Jan 28, 2009 at 5:48 PM, KOSAKI Motohiro
<kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
>> To indicate that it is more expensive to restart a killed process than
>> a regular cache page.
>
> I think you already know that this isn't a real answer.
> Everyone knows that >1 means "more expensive"; a reviewer wants to know why DEFAULT_SEEKS * 16
> is best, rather than *10 or *20.

DEFAULT_SEEKS * 16 is probably not optimal; *10 or *20 would probably work
just as well. I don't remember which values I tried, but at some point
lowmem_shrink did not get called often enough to be effective.
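For context, the driver hooks into reclaim through the cache shrinker
interface, and the seeks value is what tells the VM how expensive it is to
recreate what the shrinker frees. Roughly, the registration looks like this
(a minimal sketch using the struct shrinker fields of this kernel
generation, not the exact driver code):

#include <linux/mm.h>
#include <linux/module.h>

static int lowmem_shrink(int nr_to_scan, gfp_t gfp_mask);

static struct shrinker lowmem_shrinker = {
	.shrink = lowmem_shrink,
	/* killing a process is far costlier than dropping a cache page;
	 * the exact multiplier is a tuning guess, not a derived value */
	.seeks = DEFAULT_SEEKS * 16
};

static int __init lowmem_init(void)
{
	register_shrinker(&lowmem_shrinker);
	return 0;
}

static void __exit lowmem_exit(void)
{
	unregister_shrinker(&lowmem_shrinker);
}

module_init(lowmem_init);
module_exit(lowmem_exit);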

>> >> static uint32_t lowmem_debug_level = 2;
>> >> static int lowmem_adj[6] = {
>> >
>> > Why did you choose [6]?
>>
>> We use six levels.
>
> If you don't consider other users and other use cases,
> this file can never be merged.

I never expected it to be merged. I wrote it to allow us to ship a product.
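For reference, the six array slots correspond to the six levels mentioned
above: each adj entry pairs with a minfree entry, and userspace can retune
both at boot through module parameters. A rough sketch of the tables (the
values shown are illustrative defaults, not a recommendation):

/* Sketch: when free memory drops below lowmem_minfree[i] pages,
 * processes with oom_adj >= lowmem_adj[i] become candidates to be
 * killed.  Lower oom_adj means more important. */
static int lowmem_adj[6] = {
	0,
	1,
	6,
	12,
};
static int lowmem_adj_size = 4;
static int lowmem_minfree[6] = {
	3 * 512,	/*  6 MB, in 4 KB pages */
	2 * 1024,	/*  8 MB */
	4 * 1024,	/* 16 MB */
	16 * 1024,	/* 64 MB */
};
static int lowmem_minfree_size = 4;

/* Android init can overwrite the levels via
 * /sys/module/lowmemorykiller/parameters/{adj,minfree}. */
module_param_array_named(adj, lowmem_adj, int, &lowmem_adj_size,
			 S_IRUGO | S_IWUSR);
module_param_array_named(minfree, lowmem_minfree, int, &lowmem_minfree_size,
			 S_IRUGO | S_IWUSR);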

> The biggest problem is that this file lives in the wrong place,
> so the MM folks don't review it at all.
>
> I think this patch needs to be reviewed by the MM folks.

This patch solves two problems for us:
1. It gives us more control over which process gets killed when we run
out of memory. This can easily be fixed in the regular oom killer
instead.
2. It prevents the system from becoming unusable due to excessive
demand paging. Before we added the low memory killer, we had devices
stuck for many minutes before the system managed to allocate enough
memory for the oom killer to kick in.
The second problem is the more critical one; if it is solved, we will
not need this patch.

>> Another problem we run into is that allocations of contiguous pages
>> can cause kswapd to empty all the caches. This will not trigger this
>> lowmemorykiller or the regular oom killer.
>
> That is strange usage.
> You shouldn't use high-order allocations on an embedded machine.

The first-level page table on ARM needs four contiguous pages, which is
enough to trigger this problem. I suspect the lack of swap makes the
problem much more apparent.
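To make the contiguity requirement concrete: the classic ARM first-level
page table is 16 KB, so every new mm needs an order-2 allocation, i.e. four
physically contiguous 4 KB pages. A rough sketch, modelled loosely on
arch/arm/mm/pgd.c of this kernel generation (details simplified):

#include <linux/mm.h>
#include <asm/pgalloc.h>

pgd_t *get_pgd_slow(struct mm_struct *mm)
{
	pgd_t *new_pgd;

	/* order 2: 1 << 2 = 4 pages = 16 KB, physically contiguous */
	new_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 2);
	if (!new_pgd)
		return NULL;

	/* ... clear the user entries, copy the kernel mappings from
	 * init_mm, etc. ... */
	return new_pgd;
}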

>> How about this:
>> ----
>> diff --git a/drivers/misc/lowmemorykiller.c b/drivers/misc/lowmemorykiller.c
>> index 3715d56..3f4e41a 100644
>> --- a/drivers/misc/lowmemorykiller.c
>> +++ b/drivers/misc/lowmemorykiller.c
>> @@ -71,6 +71,12 @@ static int lowmem_shrink(int nr_to_scan, gfp_t gfp_mask)
>>  	}
>>  	if(nr_to_scan > 0)
>>  		lowmem_print(3, "lowmem_shrink %d, %x, ofree %d, ma %d\n",
>>  			     nr_to_scan, gfp_mask, other_free, min_adj);
>> +	rem = global_page_state(NR_ACTIVE_ANON) +
>> +		global_page_state(NR_ACTIVE_FILE);
>> +	if (!nr_to_scan || min_adj == OOM_ADJUST_MAX + 1) {
>> +		lowmem_print(5, "lowmem_shrink %d, %x, return %d\n",
>> +			     nr_to_scan, gfp_mask, rem);
>> +		return rem;
>> +	}
>> +
>
> I don't understand your intention.
> If a file server machine has tons of NR_ACTIVE_FILE pages, this logic will
> kill many processes.
>
> Am I misunderstanding something?

It only kills one process at a time, and only if the amount of memory
available is below the selected threshold. The number returned is a
very rough estimate of how much memory can be freed. The old code used
the sum of rss for the killable processes, which is not accurate
either.
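To illustrate the one-victim-per-call behaviour: each time the shrinker
decides to act, it scans the task list once, picks a single victim with the
highest oom_adj at or above the current threshold (largest task size breaks
ties) and sends it SIGKILL. A simplified sketch in the spirit of the driver
(lowmem_kill_one is a made-up name for illustration, not the real code):

#include <linux/sched.h>
#include <linux/mm.h>

static int lowmem_kill_one(int min_adj)
{
	struct task_struct *p;
	struct task_struct *selected = NULL;
	int selected_adj = 0, selected_size = 0;

	read_lock(&tasklist_lock);
	for_each_process(p) {
		int adj, size;

		if (!p->mm)			/* skip kernel threads */
			continue;
		adj = p->oomkilladj;		/* per-task oom_adj of this era */
		if (adj < min_adj)
			continue;
		size = get_mm_rss(p->mm);
		/* prefer the highest adj; break ties on the largest RSS */
		if (selected && (adj < selected_adj ||
		    (adj == selected_adj && size <= selected_size)))
			continue;
		selected = p;
		selected_adj = adj;
		selected_size = size;
	}
	if (selected)
		force_sig(SIGKILL, selected);	/* one victim per invocation */
	read_unlock(&tasklist_lock);
	return selected ? selected_size : 0;
}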

--
Arve Hjønnevåg