On Fri, Oct 25, 2024 at 10:33:20AM +0800, Chi Zhiling wrote:
> From: Chi Zhiling <chizhiling@xxxxxxxxxx>
>
> Recently, we found that the CPU spent a lot of time in
> xfs_alloc_ag_vextent_size when the filesystem has millions of
> fragmented free extents.
>
> The reason is that we conducted much extra searching for extents that
> could not yield a better result, and these searches would cost a lot
> of time when there were millions of extents to search through. Even if
> we find another extent of the same length as the current best, we
> don't switch our choice to the new one, so we can safely terminate the
> search early.
>
> Since the result length cannot exceed the found length, when the found
> length equals the best result length we already have, we can conclude
> the search.
>
> We ran a test on that filesystem:
> [root@localhost ~]# xfs_db -c freesp /dev/vdb
>    from      to extents  blocks    pct
>       1       1     215     215   0.01
>       2       3  994476 1988952  99.99

Ok, so you have *badly* fragmented free space. That's going to cause
lots more problems than just "allocation searches take a long
time". e.g. you can't allocate inodes in an AG that is fragmented
this badly - not even sparse inode clusters....

> Thanks!
>
> Before this patch:
>  0)               |  xfs_alloc_ag_vextent_size [xfs]() {
>  0) * 15597.94 us |  }
>
> After this patch:
>  0)               |  xfs_alloc_ag_vextent_size [xfs]() {
>  0)   19.176 us   |  }

Yup, that's a good improvement.

> Signed-off-by: Chi Zhiling <chizhiling@xxxxxxxxxx>
> ---
>  fs/xfs/libxfs/xfs_alloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index 04f64cf9777e..22bdbb3e9980 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -1923,7 +1923,7 @@ xfs_alloc_ag_vextent_size(
>  			error = -EFSCORRUPTED;
>  			goto error0;
>  		}
> -		if (flen < bestrlen)
> +		if (flen <= bestrlen)
>  			break;
>  		busy = xfs_alloc_compute_aligned(args, fbno, flen,
>  				&rbno, &rlen, &busy_gen);

Yup, I think that works fine. We aren't caring about using locality
as a secondary search key, so as soon as we have a candidate extent
of a length that the remaining extents in the free space btree
can't improve on, we are done.
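
As an illustration only, here is a minimal standalone sketch of that
argument (this is not the XFS code: struct candidate, usable_len() and
best_fit() are invented names, and the sorted array stands in for the
by-size btree records). The search visits candidates largest-first,
and the usable length of a record never exceeds its raw length, so
flen <= bestrlen means nothing later can do better:

#include <stdio.h>

struct candidate {
	unsigned long	bno;	/* start block of the free extent */
	unsigned long	len;	/* raw length of the free extent */
};

/* Hypothetical alignment trim: the usable length never exceeds len. */
static unsigned long
usable_len(const struct candidate *c, unsigned long align)
{
	return (c->len / align) * align;
}

/*
 * Return the index of the best candidate, or -1 if none is usable.
 * Candidates must be sorted by len, largest first.
 */
static int
best_fit(const struct candidate *cand, int n, unsigned long align)
{
	unsigned long	bestrlen = 0;
	unsigned long	flen, rlen;
	int		best = -1;
	int		i;

	for (i = 0; i < n; i++) {
		flen = cand[i].len;

		/*
		 * Every later candidate has a raw length <= flen, and
		 * its usable length is <= its raw length, so once
		 * flen <= bestrlen nothing later can beat the current
		 * best.  With '<' here (the old behaviour) we would
		 * keep scanning equal-length candidates that can
		 * never win.
		 */
		if (flen <= bestrlen)
			break;

		rlen = usable_len(&cand[i], align);
		if (rlen > bestrlen) {
			bestrlen = rlen;
			best = i;
		}
	}
	return best;
}

int main(void)
{
	/* A few tiny, equal-length extents, as in the freesp report. */
	struct candidate cand[] = {
		{ .bno = 100, .len = 3 },
		{ .bno = 200, .len = 3 },
		{ .bno = 300, .len = 2 },
		{ .bno = 400, .len = 1 },
	};

	/* Stops after the first len == 3 record instead of scanning all. */
	printf("best index: %d\n", best_fit(cand, 4, 1));
	return 0;
}
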
Nice work!
Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>