> This patch looks not good: why do you switch to initializing the three
> fields twice in the fast path?

Can you please show me where they are initialized twice?

> blk_mq_bio_to_request() is one place which sets up these fields, then you
> add another one in blk_mq_rq_ctx_init():
>
>	rq->bio = rq->biotail = NULL;
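To make the duplication concrete, here is a rough sketch (not the actual
patch; the function names below are placeholders, and in mainline the
fast-path assignment happens via a helper such as blk_rq_bio_prep()):

	/* Sketch only: the fields are cleared once at request initialisation
	 * and then written again when the bio is attached on the fast path. */
	static void rq_ctx_init_sketch(struct request *rq)
	{
		rq->bio = rq->biotail = NULL;	/* init added by the patch */
	}

	static void bio_to_request_sketch(struct request *rq, struct bio *bio)
	{
		rq->bio = rq->biotail = bio;	/* written again for every I/O */
	}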
If there is a real concern with this then we can go with my original idea,
which was to copy the init method of blk_mq_alloc_request() (in
blk_mq_alloc_request_hctx()).

> BTW, we know blk_mq_alloc_request_hctx() has big trouble, so please
> avoid extending it to other use cases.

Yeah, I know this, but sometimes we just need to allocate for a specific HW
queue... Do you know the exact issue with blk_mq_alloc_request_hctx() on
nvme-tcp, nvme-rdma or nvme-fc, maybe?

> But all cpus on this hctx->cpumask could become offline.
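For context, the fragile CPU selection in blk_mq_alloc_request_hctx() looks
roughly like this (paraphrased from memory of mainline; details differ
across kernel versions):

	cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		return ERR_PTR(-EINVAL);	/* no online CPU mapped to this hctx */
	data.ctx = __blk_mq_get_ctx(q, cpu);	/* nothing keeps the CPU online after this */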
For my usecase of interest, it should not matter if the cpumask of the HW
queue goes offline after the cpu has been selected in
blk_mq_alloc_request_hctx(), so any race is ok ... I think.
However it should still be possible to make blk_mq_alloc_request_hctx() more
robust. How about using something like work_on_cpu_safe() to allocate and
execute the request with blk_mq_alloc_request() on a cpu associated with the
HW queue, such that we know the cpu is online and stays online until we
execute it? Or also extend this to a work_on_cpumask_safe() variant, so that
we don't need to try all cpus in the mask (to see if online)?
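A rough, untested sketch of that idea (the struct and helper names here are
made up for illustration, and the blk_mq_alloc_request()/work_on_cpu_safe()
signatures are approximated from mainline):

	struct rq_alloc_args {
		struct request_queue *q;
		unsigned int opf;	/* REQ_OP_* plus flags */
		struct request *rq;
	};

	/* Runs on the chosen CPU; work_on_cpu_safe() holds the hotplug lock,
	 * so that CPU stays online for the duration and the normal per-cpu
	 * ctx -> hctx mapping selects the HW queue we want. */
	static long rq_alloc_on_cpu(void *data)
	{
		struct rq_alloc_args *args = data;

		args->rq = blk_mq_alloc_request(args->q, args->opf, 0);
		return IS_ERR(args->rq) ? PTR_ERR(args->rq) : 0;
	}

	/* caller, with 'cpu' being any CPU from hctx->cpumask: */
	ret = work_on_cpu_safe(cpu, rq_alloc_on_cpu, &args);

The execute step (e.g. blk_execute_rq()) could then be issued from the same
callback while the CPU is still guaranteed to be online.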