blk-mq uses preempt_disable/enable in order to ensure that the
queue runners are placed on the right CPU. This does not work with
the RT patches, because __blk_mq_run_hw_queue takes a non-raw
spinlock within the preemption-disabled region. The RT patches turn
such spinlocks into sleeping locks, so if there is contention on the
lock the task sleeps, which violates the rules for
preemption-disabled regions.
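
Concretely, the pattern in question looks more or less like this (a
simplified sketch, not the actual code; __blk_mq_run_hw_queue takes
hctx->lock, a plain spinlock_t, internally):

        preempt_disable();
        /*
         * __blk_mq_run_hw_queue() takes hctx->lock.  With the RT
         * patches, spinlock_t is a sleeping lock, so blocking on it
         * under contention means sleeping with preemption disabled.
         */
        if (cpumask_test_cpu(smp_processor_id(), hctx->cpumask))
                __blk_mq_run_hw_queue(hctx);
        preempt_enable();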
While this could be fixed easily within the RT patches just by doing
migrate_disable/enable instead (note: this was not tested :)), we
can do better.
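
For reference, that RT-only fix would be something along these lines
(still untested; on RT, migrate_disable keeps the task on its
current CPU while leaving it preemptible, so sleeping on the
converted spinlock inside the region is permitted):

        migrate_disable();
        if (cpumask_test_cpu(smp_processor_id(), hctx->cpumask))
                __blk_mq_run_hw_queue(hctx);    /* may sleep; now OK */
        migrate_enable();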
The first patch concentrates the preempt_disable/enable in a single
place, blk_mq_run_hw_queue, and also avoids useless calls when the
caller wants to start __blk_mq_run_hw_queue asynchronously. (There
is already a call to __blk_mq_run_hw_queue in blk-flush.c that does
not disable preemption; not coincidentally, it passes async=true.)
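
After the first patch, blk_mq_run_hw_queue has roughly the following
shape (a sketch of the idea rather than the patch itself; the
stopped-queue checks are omitted, and the asynchronous path is
assumed to keep using the existing kblockd_schedule_delayed_work_on
and blk_mq_hctx_next_cpu helpers):

        void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
        {
                if (!async) {
                        /* only the synchronous case must pin the CPU */
                        preempt_disable();
                        if (cpumask_test_cpu(smp_processor_id(),
                                             hctx->cpumask)) {
                                __blk_mq_run_hw_queue(hctx);
                                preempt_enable();
                                return;
                        }
                        preempt_enable();
                }

                /* asynchronous path: no preemption games needed */
                kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
                                                 &hctx->run_work, 0);
        }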
Once this is done, it is trivial to use get/put_cpu instead of
preempt_disable/smp_processor_id/preempt_enable, which is what the
second patch does. The RT patches can then change this single spot
to use get_cpu_light.
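
The synchronous path then becomes (again a sketch; get_cpu returns
the current CPU id with preemption disabled, so the separate
smp_processor_id call disappears):

        if (!async) {
                int cpu = get_cpu();    /* get_cpu_light() on RT */

                if (cpumask_test_cpu(cpu, hctx->cpumask)) {
                        __blk_mq_run_hw_queue(hctx);
                        put_cpu();
                        return;
                }
                put_cpu();
        }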
With these changes (and the additional switch to get_cpu_light),
virtio-blk can be used again with RT kernels.