Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop

From: Lorenzo Stoakes

Date: Fri Oct 24 2025 - 10:03:35 EST


On Thu, Oct 23, 2025 at 02:59:13PM +0800, Kairui Song wrote:
> From: Kairui Song <kasong@xxxxxxxxxxx>
>
> The order check and fallback loop updates the index value on every
> iteration; this causes the index to be wrongly aligned to a larger
> value while the loop shrinks the order.
>
> This may result in inserting and returning a folio at the wrong index,
> causing data corruption with some userspace workloads [1].
>
> Cc: stable@xxxxxxxxxxxxxxx
> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@xxxxxxxxxxxxxx/ [1]
> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Kairui Song <kasong@xxxxxxxxxxx>

Yikes... LGTM so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>

See below for a small nit.

>
> ---
>
> Changes from V2:
> - Introduce a temporary variable to improve code,
> no behavior change, generated code is identical.
> - Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@xxxxxxxxx/
>
> Changes from V1:
> - Remove unnecessary cleanup and simplify the commit message.
> - Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@xxxxxxxxx/
>
> ---
> mm/shmem.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index b50ce7dbc84a..e1dc2d8e939c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
> struct shmem_inode_info *info = SHMEM_I(inode);
> unsigned long suitable_orders = 0;
> struct folio *folio = NULL;
> + pgoff_t aligned_index;

Nit, but can't we just declare this in the loop? That makes it even clearer
that we don't reuse the value.
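
I.e. something like this (untested, just to show the shape):

```c
	while (suitable_orders) {
		pgoff_t aligned_index;

		pages = 1UL << order;
		aligned_index = round_down(index, pages);
		folio = shmem_alloc_folio(gfp, order, info, aligned_index);
		if (folio) {
			index = aligned_index;
			goto allocated;
		}
```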

> long pages;
> int error, order;
>
> @@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
> order = highest_order(suitable_orders);
> while (suitable_orders) {
> pages = 1UL << order;
> - index = round_down(index, pages);
> - folio = shmem_alloc_folio(gfp, order, info, index);
> - if (folio)
> + aligned_index = round_down(index, pages);
> + folio = shmem_alloc_folio(gfp, order, info, aligned_index);
> + if (folio) {
> + index = aligned_index;
> goto allocated;
> + }
>
> if (pages == HPAGE_PMD_NR)
> count_vm_event(THP_FILE_FALLBACK);
> --
> 2.51.0
>