On 1/23/2024 8:37 AM, Keith Busch wrote:
I believe the only reason the async scanning should take any longer than
On Mon, Jan 22, 2024 at 11:13:15AM +0200, Sagi Grimberg wrote:
On 1/18/24 23:03, Stuart Hayes wrote:
@@ -3901,19 +3932,25 @@ static int nvme_scan_ns_list(struct nvme_ctrl *ctrl)
goto free;
}
+ /*
+ * scan list starting at list offset 0
+ */
+ atomic_set(&scan_state.count, 0);
for (i = 0; i < nr_entries; i++) {
u32 nsid = le32_to_cpu(ns_list[i]);
if (!nsid) /* end of the list? */
goto out;
- nvme_scan_ns(ctrl, nsid);
+ async_schedule_domain(nvme_scan_ns, &scan_state, &domain);
while (++prev < nsid)
nvme_ns_remove_by_nsid(ctrl, prev);
}
+ async_synchronize_full_domain(&domain);
You mentioned async scanning was an improvement if you have 1000
namespaces, but wouldn't this be worse if you have very few namespaces?
IOW, the decision to use the async schedule should be based on
nr_entries, right?
Perhaps it's also helpful to document the data for a small number of
namespaces; we could collect data something like this:-
NR Namespaces    Seq Scan    Async Scan
    2
    4
    8
   16
   32
   64
  128
  256
  512
 1024
If we find that the difference is not that much, then we can go ahead with
this patch; if the difference is unacceptable to the point that it
would regress common setups, then we can make it configurable?
-ck