Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls

From: Zhihao Cheng

Date: Tue Mar 17 2026 - 00:06:14 EST


On 2026/3/16 3:39, Kuan-Wei Chiu wrote:
Historically, list_sort() implemented a hack in merge_final():
	if (unlikely(!++count))
		cmp(priv, b, b);

This was designed specifically so that callers could periodically
invoke cond_resched() within their comparison functions when merging
highly unbalanced lists.

However, an audit of the kernel tree reveals that only fs/ubifs/ relies
on this mechanism. For the vast majority of list_sort() users (such as
block layer IO schedulers and file systems), this results in completely
wasted function calls. In the worst-case scenario (merging an already
sorted list where 'a' is exhausted quickly), this amounts to
approximately (N/2)/256 unnecessary cmp() calls.

To clean up this API while ensuring behavioral compatibility:
1. Introduce list_sort_nonatomic(), which explicitly calls
cond_resched() internally when count overflows.
2. Remove the dummy cmp(priv, b, b) fallback for standard list_sort(),
saving unnecessary function calls and improving determinism.
3. Convert the sole user (fs/ubifs/) to the new API.

Note that ubifs still maintains cond_resched() inside its own
comparison functions. This patch does not alter the frequency or timing
of those scheduling points, guaranteeing no regressions for ubifs,
while benefiting all other kernel users.

Signed-off-by: Kuan-Wei Chiu <visitorckw@xxxxxxxxx>
---
fs/ubifs/gc.c | 4 +-
fs/ubifs/replay.c | 2 +-
include/linux/list_sort.h | 3 +
lib/list_sort.c | 166 +++++++++++++++++++++-----------------
4 files changed, 100 insertions(+), 75 deletions(-)

lgtm for UBIFS.

Reviewed-by: Zhihao Cheng <chengzhihao1@xxxxxxxxxx>

one small nit below.


[...]
diff --git a/include/linux/list_sort.h b/include/linux/list_sort.h
index 453105f74e05..f7af29073d48 100644
--- a/include/linux/list_sort.h
+++ b/include/linux/list_sort.h
@@ -11,4 +11,7 @@ typedef int __attribute__((nonnull(2,3))) (*list_cmp_func_t)(void *,
__attribute__((nonnull(2,3)))
void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp);
+
+__attribute__((nonnull(2, 3)))
+void list_sort_nonatomic(void *priv, struct list_head *head, list_cmp_func_t cmp);
#endif
diff --git a/lib/list_sort.c b/lib/list_sort.c
index a310ecb7ccc0..788bfc26cf7b 100644
--- a/lib/list_sort.c
+++ b/lib/list_sort.c
@@ -3,6 +3,7 @@
#include <linux/export.h>
#include <linux/list_sort.h>
#include <linux/list.h>
+#include <linux/sched.h>
/*
* Returns a list organized in an intermediate format suited
@@ -47,7 +48,7 @@ static struct list_head *merge(void *priv, list_cmp_func_t cmp,
*/
__attribute__((nonnull(2,3,4,5)))
static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
- struct list_head *a, struct list_head *b)
+ struct list_head *a, struct list_head *b, bool may_schedule)
{
struct list_head *tail = head;
u8 count = 0;
@@ -79,12 +80,11 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
/*
* If the merge is highly unbalanced (e.g. the input is
* already sorted), this loop may run many iterations.
- * Continue callbacks to the client even though no
- * element comparison is needed, so the client's cmp()
- * routine can invoke cond_resched() periodically.
+ * If may_schedule is true, periodically invoke cond_resched()
+ * to avoid soft lockups.
*/
- if (unlikely(!++count))
- cmp(priv, b, b);
+ if (may_schedule && unlikely(!++count))
+ cond_resched();
cond_resched() already decides internally whether to schedule out, so could the 'count' throttle be removed?
b->prev = tail;
tail = b;
b = b->next;