Re: Is it safe to kmalloc a large size of memory in interrupt handler?

From: Michal Hocko
Date: Mon Aug 05 2019 - 08:13:20 EST


On Mon 05-08-19 19:57:54, Fuqian Huang wrote:
> In the implementation of kmalloc, when the requested size is larger
> than KMALLOC_MAX_CACHE_SIZE, it calls kmalloc_large to allocate the
> memory:
> kmalloc_large -> kmalloc_order_trace -> kmalloc_order ->
> alloc_pages -> alloc_pages_current -> __alloc_pages_nodemask ->
> get_page_from_freelist -> node_reclaim -> __node_reclaim ->
> shrink_node -> shrink_node_memcg -> get_scan_count

You shouldn't really get there when using GFP_NOWAIT/GFP_ATOMIC.
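For illustration, a minimal sketch of an allocation done directly from an
interrupt handler (the foo_* names and the 64k size are made up for the
example). GFP_ATOMIC does not allow blocking, so the node_reclaim path
quoted above is skipped; the price is that the allocation can simply fail
and the handler has to cope with that:

#include <linux/interrupt.h>
#include <linux/slab.h>

/* Hypothetical IRQ handler.  GFP_ATOMIC never sleeps and never enters
 * the reclaim path, so the spin_unlock_irq in get_scan_count is not
 * reached.  A request above KMALLOC_MAX_CACHE_SIZE still goes through
 * kmalloc_large and can easily fail for higher orders. */
static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
	void *buf = kmalloc(64 * 1024, GFP_ATOMIC);

	if (!buf) {
		/* Drop the work; do not retry in interrupt context. */
		return IRQ_HANDLED;
	}

	/* ... process data into buf ... */

	kfree(buf);
	return IRQ_HANDLED;
}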

> get_scan_count calls spin_unlock_irq, which re-enables local interrupts.
> Since local interrupts should stay disabled in an interrupt handler,
> is it safe to use kmalloc to allocate a large amount of memory in an
> interrupt handler?

It will work very unreliably, because larger physically contiguous memory
is generally not available without compaction once the system has been
running for a while. In general I would recommend using pre-allocated
buffers or deferring the actual handling to a less restricted context if
possible.
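To make that concrete, a rough sketch of the pre-allocation plus deferral
approach (the foo_* structure and functions are invented for the example):
the large buffer is allocated once in process context, and the interrupt
handler only hands off to a workqueue, where GFP_KERNEL allocations and
sleeping are allowed.

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/workqueue.h>

struct foo_dev {
	void *big_buf;			/* pre-allocated in process context */
	struct work_struct work;
};

static void foo_work_fn(struct work_struct *work)
{
	struct foo_dev *fd = container_of(work, struct foo_dev, work);

	/* Heavy processing runs here, in process context, using the
	 * buffer allocated at probe time (placeholder only). */
	memset(fd->big_buf, 0, 64 * 1024);
}

static int foo_probe(struct foo_dev *fd)
{
	/* Allocate the large buffer up front, where sleeping, reclaim
	 * and compaction are all allowed. */
	fd->big_buf = kmalloc(64 * 1024, GFP_KERNEL);
	if (!fd->big_buf)
		return -ENOMEM;

	INIT_WORK(&fd->work, foo_work_fn);
	return 0;
}

static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
	struct foo_dev *fd = dev_id;

	/* No allocation in the handler; just schedule the work. */
	schedule_work(&fd->work);
	return IRQ_HANDLED;
}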

--
Michal Hocko
SUSE Labs