Re: [PATCH 1/1] mm/page_alloc: add scheduling point to free_unref_page_list

From: Vlastimil Babka
Date: Tue Mar 08 2022 - 11:05:48 EST


On 3/2/22 02:38, wangjianxing wrote:
> Freeing a large list of pages may cause rcu_sched starvation on
> non-preemptible kernels:
>
> rcu: rcu_sched kthread starved for 5359 jiffies! g454793 f0x0
> RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=19
> [...]
> Call Trace:
> free_unref_page_list+0x19c/0x270
> release_pages+0x3cc/0x498
> tlb_flush_mmu_free+0x44/0x70
> zap_pte_range+0x450/0x738
> unmap_page_range+0x108/0x240
> unmap_vmas+0x74/0xf0
> unmap_region+0xb0/0x120
> do_munmap+0x264/0x438
> vm_munmap+0x58/0xa0
> sys_munmap+0x10/0x20
> syscall_common+0x24/0x38
>
> Signed-off-by: wangjianxing <wangjianxing@xxxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
> mm/page_alloc.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6..1b96421c8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3479,6 +3479,9 @@ void free_unref_page_list(struct list_head *list)
>  		 */
>  		if (++batch_count == SWAP_CLUSTER_MAX) {
>  			local_unlock_irqrestore(&pagesets.lock, flags);
> +
> +			cond_resched();
> +
>  			batch_count = 0;
>  			local_lock_irqsave(&pagesets.lock, flags);
>  		}