[PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts
From: Baolin Wang
Date: Wed Jul 31 2024 - 01:46:43 EST
In the shmem_suitable_orders() function, xa_find() is used to check for
conflicts in the pagecache when selecting suitable huge orders. However,
when checking each huge order in the loop, the aligned index is
calculated from the index left over from the previous iteration, which
has already been rounded down to the previous (larger) order's boundary,
so suitable huge orders may be missed.
We should instead derive a new aligned index from the original index on
each iteration of the loop when checking for conflicts, to avoid this
issue.
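For illustration, here is a minimal userspace sketch (not kernel code;
the fault index and orders are made-up values, and round_down() below
mirrors the kernel's power-of-two helper) showing how reusing the
rounded-down index drifts the checked range:

/* Minimal userspace sketch of the bug; the index and orders are
 * made-up values, not taken from a real fault. */
#include <stdio.h>

/* mirrors the kernel's round_down() for a power-of-two 'y' */
#define round_down(x, y)	((x) & ~((y) - 1))

int main(void)
{
	const unsigned long index = 30;	/* hypothetical fault index */
	unsigned long idx = index;	/* what the buggy loop mutates */
	int orders[] = { 4, 2 };	/* huge orders to try, high to low */
	int i;

	for (i = 0; i < 2; i++) {
		unsigned long pages = 1UL << orders[i];
		unsigned long aligned_index;

		/* buggy: rounds down the value mutated last iteration */
		idx = round_down(idx, pages);
		/* fixed: a fresh aligned index from the original 'index' */
		aligned_index = round_down(index, pages);

		printf("order %d: buggy [%lu, %lu], fixed [%lu, %lu]\n",
		       orders[i], idx, idx + pages - 1,
		       aligned_index, aligned_index + pages - 1);
	}
	return 0;
}

With a fault index of 30 and orders {4, 2}, both loops check [16, 31]
for order 4, but for order 2 the buggy loop checks [16, 19] while the
fixed loop checks [28, 31], the range that actually covers index 30: a
conflict in [28, 31] would be missed, and a conflict in [16, 19] could
wrongly reject a suitable order.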
Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
mm/shmem.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index a4332a97558c..6e9836b1bd1d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1686,6 +1686,7 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
unsigned long orders)
{
struct vm_area_struct *vma = vmf->vma;
+ pgoff_t aligned_index;
unsigned long pages;
int order;

@@ -1697,9 +1698,9 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
order = highest_order(orders);
while (orders) {
pages = 1UL << order;
- index = round_down(index, pages);
- if (!xa_find(&mapping->i_pages, &index,
- index + pages - 1, XA_PRESENT))
+ aligned_index = round_down(index, pages);
+ if (!xa_find(&mapping->i_pages, &aligned_index,
+ aligned_index + pages - 1, XA_PRESENT))
break;
order = next_order(&orders, order);
}
--
2.39.3