[PATCH RESEND v3 0/2] Minimize xa_node allocation during xarray split

From: Zi Yan
Date: Fri Mar 14 2025 - 18:21:38 EST


Hi Andrew,

This series is on top of mm-unstable with the old v3 (plus a fixup)
reverted, so that you can replace the old one with this. The patch 1/2 in
the mm-unstable tree is not the same as my original one: it caused a
compilation issue and would confuse people because a comment was relocated
incorrectly.

Thanks.

When splitting a multi-index entry in XArray from order-n to order-m, the
existing xas_split_alloc()+xas_split() approach requires
2^(n % XA_CHUNK_SHIFT) xa_node allocations. But its callers,
__filemap_add_folio() and shmem_split_large_entry(), use at most 1 xa_node.
To minimize xa_node allocation and to lift the limitation that an order-12
(or above) entry cannot be split to order-0 (or any order between 0 and
5)[1], xas_try_split() was added[2], which allocates
(n / XA_CHUNK_SHIFT - m / XA_CHUNK_SHIFT) xa_nodes. It is used for the
non-uniform folio split, but can also be used by __filemap_add_folio()
and shmem_split_large_entry().

xas_split_alloc() and xas_split() split an order-9 entry to order-0:

---------------------------------
|   |   |   |   |   |   |   |   |
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|   |   |   |   |   |   |   |   |
---------------------------------
     |     |             |     |
     |     |     ...     |     |
     V     V             V     V
----------- -----------     ----------- -----------
| xa_node | | xa_node | ... | xa_node | | xa_node |
----------- -----------     ----------- -----------

xas_try_split() splits an order-9 entry to order-0:

---------------------------------
|   |   |   |   |   |   |   |   |
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|   |   |   |   |   |   |   |   |
---------------------------------
                |
                |
                V
           -----------
           | xa_node |
           -----------

xas_try_split() is designed to be called iteratively with n = m + 1.
xas_try_split_min_order() is added to minimize the number of calls to
xas_try_split() by telling the caller the next minimal order to split to,
instead of n - 1. Splitting order-n to order-m requires no xa_node
allocation when m = l * XA_CHUNK_SHIFT, and requires 1 xa_node when
n = l * XA_CHUNK_SHIFT and m = n - 1, so it is OK to use xas_try_split()
with n > m + 1 when no new xa_node is needed.

xfstests quick group test passed on xfs and tmpfs.

Changelog
===
From V2[3]:
1. Fixed shmem_split_large_entry() by setting the swap offset correctly.
(Thanks to Baolin for the detailed review.)
2. Used the updated xas_try_split() to avoid a bug when the xa_node is
allocated by xas_nomem() instead of by xas_try_split() itself.

Let me know your comments.


[1] https://lore.kernel.org/linux-mm/Z6YX3RznGLUD07Ao@xxxxxxxxxxxxxxxxxxxx/
[2] https://lore.kernel.org/linux-mm/20250226210032.2044041-1-ziy@xxxxxxxxxx/
[3] https://lore.kernel.org/linux-mm/20250218235444.1543173-1-ziy@xxxxxxxxxx/


Zi Yan (2):
  mm/filemap: use xas_try_split() in __filemap_add_folio()
  mm/shmem: use xas_try_split() in shmem_split_large_entry()

 include/linux/xarray.h |  7 +++++
 lib/xarray.c           | 25 ++++++++++++++++++
 mm/filemap.c           | 45 +++++++++++++-------------------
 mm/shmem.c             | 59 ++++++++++++++++++++----------------------
 4 files changed, 78 insertions(+), 58 deletions(-)

--
2.47.2