But sometimes we just need to allocate for a specific HW
queue...
For my use case of interest, it should not matter if the cpumask of the HW
queue goes offline after selecting the cpu in blk_mq_alloc_request_hctx(),
so any race is ok ... I think.
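For reference, as I understand it the cpu selection being discussed boils
down to something like the sketch below (an illustration only, not verbatim
kernel source; the helper name is made up, while cpumask_first_and() and
nr_cpu_ids are existing kernel symbols). The hctx's cpumask is intersected
with cpu_online_mask once, so a cpu that goes offline (or comes online)
afterwards is simply not seen:

#include <linux/blk-mq.h>
#include <linux/cpumask.h>

/*
 * Sketch of the one-shot cpu selection: pick the first cpu of this HW
 * queue that is online right now, or fail if the whole mask is offline.
 * (Hypothetical helper for illustration, not kernel source.)
 */
static int pick_hctx_cpu(struct blk_mq_hw_ctx *hctx)
{
        int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);

        return cpu < nr_cpu_ids ? cpu : -ENODEV;
}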
However, it should still be possible to make blk_mq_alloc_request_hctx()
more robust. How about using something like work_on_cpu_safe() to allocate
and execute the request with blk_mq_alloc_request() on a cpu associated
with the HW queue, such that we know the cpu is online and stays online
until we execute it? Or we could also extend this to a
work_on_cpumask_safe() variant, so that we don't need to try all cpus in
the mask (to see if online)?
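A very rough sketch of that idea, just to make it concrete (the wrapper
alloc_request_on_hctx_cpu() and the work struct are made-up names;
work_on_cpu_safe(), cpumask_first_and() and blk_mq_alloc_request() are
existing kernel interfaces, and executing the request is left out here
since the blk_execute_rq() signature differs between kernel versions):

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/workqueue.h>
#include <linux/err.h>

struct hctx_alloc_work {
        struct request_queue *q;
        unsigned int op;
        blk_mq_req_flags_t flags;
        struct request *rq;
};

/* Runs on the chosen cpu; work_on_cpu_safe() keeps that cpu online */
static long hctx_alloc_fn(void *data)
{
        struct hctx_alloc_work *w = data;

        w->rq = blk_mq_alloc_request(w->q, w->op, w->flags);
        return PTR_ERR_OR_ZERO(w->rq);
}

static struct request *alloc_request_on_hctx_cpu(struct request_queue *q,
                                                 struct blk_mq_hw_ctx *hctx,
                                                 unsigned int op)
{
        struct hctx_alloc_work w = { .q = q, .op = op, .flags = 0 };
        long ret;
        int cpu;

        /* Pick any cpu of this HW queue that is online right now */
        cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
        if (cpu >= nr_cpu_ids)
                return ERR_PTR(-ENODEV);        /* whole cpumask is offline */

        /*
         * The cpu could still go offline between the check above and the
         * call below; a work_on_cpumask_safe() style variant could retry
         * the other cpus in the mask instead of failing here.
         */
        ret = work_on_cpu_safe(cpu, hctx_alloc_fn, &w);
        if (ret)
                return ERR_PTR(ret);
        return w.rq;
}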
But all cpus in this hctx->cpumask could become offline.

If all cpus in hctx->cpumask are offline then we should not allocate a
request, and this is acceptable. Maybe I am missing your point.
As you saw, this API has the above problem too, but any one of the CPUs
may become online later, maybe just during blk_mq_alloc_request_hctx(),
and it is easy to cause an inconsistency.
You didn't share your use case, but for the nvme connection request, if
the mapping is 1:1, then if any one CPU becomes offline the controller
initialization could fail, which isn't good from the user's viewpoint at
all.
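To make the 1:1 mapping concern concrete: with one HW queue per cpu, a
single offline cpu leaves its HW queue with no online cpu at all, so the
connect request for that queue cannot be allocated and the controller
setup has to give up. A small sketch of such a check (the helper name is
made up; queue_for_each_hw_ctx() and cpumask_intersects() are existing
kernel interfaces):

#include <linux/blk-mq.h>
#include <linux/cpumask.h>

static bool every_hctx_has_online_cpu(struct request_queue *q)
{
        struct blk_mq_hw_ctx *hctx;
        unsigned long i;

        queue_for_each_hw_ctx(q, hctx, i)
                if (!cpumask_intersects(hctx->cpumask, cpu_online_mask))
                        return false;   /* this hctx's cpu(s) are all offline */

        return true;
}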