Re: [PATCH] zram: Using GFP_ATOMIC instead of GFP_KERNEL to allocate bitmap memory in backing_dev_store

From: Sergey Senozhatsky
Date: Fri Dec 01 2023 - 10:40:13 EST


On (23/11/30 23:20), Dongyun Liu wrote:
> INFO: task init:331 blocked for more than 120 seconds. "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:init state:D stack: 0 pid: 1 ppid: 0 flags:0x04000000
> Call trace:
> __switch_to+0x244/0x4e4
> __schedule+0x5bc/0xc48
> schedule+0x80/0x164
> rwsem_down_read_slowpath+0x4fc/0xf9c
> __down_read+0x140/0x188
> down_read+0x14/0x24
> try_wakeup_wbd_thread+0x78/0x1ec [zram]
> __zram_bvec_write+0x720/0x878 [zram]
> zram_bvec_rw+0xa8/0x234 [zram]
> zram_submit_bio+0x16c/0x268 [zram]
> submit_bio_noacct+0x128/0x3c8
> submit_bio+0x1cc/0x3d0
> __swap_writepage+0x5c4/0xd4c
> swap_writepage+0x130/0x158
> pageout+0x1f4/0x478
> shrink_page_list+0x9b4/0x1eb8
> shrink_inactive_list+0x2f4/0xaa8
> shrink_lruvec+0x184/0x340
> shrink_node_memcgs+0x84/0x3a0
> shrink_node+0x2c4/0x6c4
> shrink_zones+0x16c/0x29c
> do_try_to_free_pages+0xe4/0x2b4
> try_to_free_pages+0x388/0x7b4
> __alloc_pages_direct_reclaim+0x88/0x278
> __alloc_pages_slowpath+0x4ec/0xf6c
> __alloc_pages_nodemask+0x1f4/0x3dc
> kmalloc_order+0x54/0x338
> kmalloc_order_trace+0x34/0x1bc
> __kmalloc+0x5e8/0x9c0
> kvmalloc_node+0xa8/0x264
> backing_dev_store+0x1a4/0x818 [zram]
> dev_attr_store+0x38/0x8c
> sysfs_kf_write+0x64/0xc4

Hmm, I'm not really following this backtrace. Backing device
configuration is only possible on an un-initialized zram device.
If the device is uninitialized, then why is it being used for
swapout later in the call stack?
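For context, the deadlock pattern the backtrace suggests is: a GFP_KERNEL
allocation inside backing_dev_store() enters direct reclaim, reclaim swaps
pages out to the same zram device, and the write path then blocks on a lock
already held by backing_dev_store(). A sketch of one conventional way to
break that recursion without resorting to GFP_ATOMIC is a NOIO allocation
scope (this is only an illustration of the technique, not the posted patch;
the helper name backing_dev_alloc_bitmap is hypothetical):

```c
/* Hypothetical helper, sketching a memalloc_noio scope around the
 * bitmap allocation in backing_dev_store().
 */
static unsigned long *backing_dev_alloc_bitmap(size_t bitmap_sz)
{
	unsigned int noio_flags;
	unsigned long *bitmap;

	/*
	 * Forbid I/O during reclaim for this allocation: direct
	 * reclaim may still run, but it cannot write pages out
	 * (no swapout through zram_submit_bio()), so it cannot
	 * re-enter zram and block on locks the caller holds.
	 * The allocation stays sleepable, unlike GFP_ATOMIC.
	 */
	noio_flags = memalloc_noio_save();
	bitmap = kvzalloc(bitmap_sz, GFP_KERNEL);
	memalloc_noio_restore(noio_flags);

	return bitmap;
}
```

memalloc_noio_save()/memalloc_noio_restore() mask __GFP_IO for all
allocations in the scope, which is generally preferred over sprinkling
GFP_ATOMIC (which cannot sleep and draws from atomic reserves).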