Re: [PATCH v2 1/2] mm,page_owner: Fix refcount imbalance
From: Vlastimil Babka
Date: Thu Mar 21 2024 - 06:37:08 EST
On 3/19/24 19:32, Oscar Salvador wrote:
> Current code does not contemplate scenarios where an allocation and a
> free operation on the same pages do not handle the same number of
> pages at once.
> To give an example, alloc_pages_exact(), where we allocate a page of
> high enough order to satisfy the size request, but free the remaining
> pages right away.
>
> In the above example, we increment the stack_record refcount only
> once, but we decrement it as many times as the number of unused pages
> we have to free.
> This leads to a warning because of the refcount imbalance.
>
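[ For anyone following along, a toy userspace model of the old
  accounting, with made-up numbers (5 pages requested, an order-3
  block allocated), just to illustrate the imbalance: ]

	#include <stdio.h>

	int main(void)
	{
		int refcount = 0;
		int requested = 5, allocated = 1 << 3;	/* order-3 block */

		/* old code: one increment per allocation call */
		refcount += 1;
		/* ...but one decrement per unused tail page freed */
		refcount -= allocated - requested;

		printf("refcount = %d\n", refcount);	/* -2, imbalanced */
		return 0;
	}
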
> Fix this by recording the number of base pages in the refcount field.
>
> Reported-by: syzbot+41bbfdb8d41003d12c0f@xxxxxxxxxxxxxxxxxxxxxxxxx
> Closes: https://lore.kernel.org/linux-mm/00000000000090e8ff0613eda0e5@xxxxxxxxxx
> Fixes: 217b2119b9e2 ("mm,page_owner: implement the tracking of the stacks count")
> Signed-off-by: Oscar Salvador <osalvador@xxxxxxx>
With the fixup,
Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
But I think you'll need to resend with the missing hunk already applied; it
had broken whitespace in your email and IIUC this was dropped from the mm tree.
Also I'd suggest a change:
> +++ b/mm/page_owner.c
> @@ -196,9 +196,11 @@ static void add_stack_record_to_list(struct stack_record *stack_record,
> spin_unlock_irqrestore(&stack_list_lock, flags);
> }
>
> -static void inc_stack_record_count(depot_stack_handle_t handle, gfp_t gfp_mask)
> +static void inc_stack_record_count(depot_stack_handle_t handle, gfp_t gfp_mask,
> + int nr_base_pages)
> {
> struct stack_record *stack_record = __stack_depot_get_stack_record(handle);
> + int old = REFCOUNT_SATURATED;
>
> if (!stack_record)
> return;
> @@ -210,22 +212,18 @@ static void inc_stack_record_count(depot_stack_handle_t handle, gfp_t gfp_mask)
> * Since we do not use STACK_DEPOT_FLAG_GET API, let us
> * set a refcount of 1 ourselves.
> */
> - if (refcount_read(&stack_record->count) == REFCOUNT_SATURATED) {
> - int old = REFCOUNT_SATURATED;
I think this was a useful optimization: in most cases the count is not
REFCOUNT_SATURATED, so we don't have to go for the expensive cmpxchg all the
time. Or am I missing a reason why this was changed?
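Something like this (untested, on top of your patch) would keep that fast
path while still adding nr_base_pages:

	if (refcount_read(&stack_record->count) == REFCOUNT_SATURATED) {
		int old = REFCOUNT_SATURATED;

		if (atomic_try_cmpxchg_relaxed(&stack_record->count.refs, &old, 1))
			/* Add the new stack_record to our list */
			add_stack_record_to_list(stack_record, gfp_mask);
	}
	refcount_add(nr_base_pages, &stack_record->count);
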
> -
> - if (atomic_try_cmpxchg_relaxed(&stack_record->count.refs, &old, 1))
> - /* Add the new stack_record to our list */
> - add_stack_record_to_list(stack_record, gfp_mask);
> - }
> - refcount_inc(&stack_record->count);
> + if (atomic_try_cmpxchg_relaxed(&stack_record->count.refs, &old, 1))
> + add_stack_record_to_list(stack_record, gfp_mask);
> + refcount_add(nr_base_pages, &stack_record->count);
> }
>