Re: Avoid high-order memory allocation with kmalloc when reading a large seq file

From: Andrew Morton
Date: Tue Jan 29 2013 - 19:24:39 EST


On Tue, 29 Jan 2013 14:14:14 +0800
xtu4 <xiaobing.tu@xxxxxxxxx> wrote:

> @@ -209,8 +209,17 @@ ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos)
> if (m->count < m->size)
> goto Fill;
> m->op->stop(m, p);
> - kfree(m->buf);
> - m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
> + if (m->size > 2 * PAGE_SIZE)
> + vfree(m->buf);
> + else
> + kfree(m->buf);
> + m->size <<= 1;
> + if (m->size > 2 * PAGE_SIZE)
> + m->buf = vmalloc(m->size);
> + else
> + m->buf = kmalloc(m->size, GFP_KERNEL);
> if (!m->buf)
> goto Enomem;
> m->count = 0;
> @@ -325,7 +334,10 @@ EXPORT_SYMBOL(seq_lseek);

The conventional way of doing this is to attempt the kmalloc with
__GFP_NOWARN and, if that fails, fall back to vmalloc().
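A minimal sketch of that pattern, for illustration only (the helper name
seq_buf_alloc() is hypothetical, not something from the quoted patch):

```c
/*
 * Try a quiet kmalloc first; fall back to vmalloc when the
 * high-order allocation cannot be satisfied. __GFP_NOWARN
 * suppresses the page-allocation-failure warning on the
 * kmalloc attempt, since failure is expected and handled.
 */
static void *seq_buf_alloc(unsigned long size)
{
	void *buf;

	buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (!buf && size > PAGE_SIZE)
		buf = vmalloc(size);
	return buf;
}
```

Note the matching free path then has to distinguish the two cases,
e.g. with is_vmalloc_addr(), choosing vfree() or kfree() accordingly.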

Using vmalloc is generally not a good thing, mainly because it
fragments the vmalloc address space, but for a short-lived allocation
like this it shouldn't be too bad.

But really, the binder code is being obnoxious here and it would be
best to fix it up. Please identify with some care which part of the
binder code is causing this problem. binder_stats_show(), at a
guess? It looks like that function's output size is proportional to
the number of processes on binder_procs? If so, there is no upper
bound, is there? Problem!

btw, binder_debug_no_lock should just go away. That list needs
locking.