blk-mq vs cpu hotplug performance (due to percpu_ref_put performance)
From: Christian Borntraeger
Date: Tue Oct 28 2014 - 15:35:51 EST
Tejun,
When going from 3.17 to 3.18-rc2, CPU hotplug became horribly slow on some KVM guests on s390.
I was able to bisect this to
commit 9eca80461a45177e456219a9cd944c27675d6512
("Revert "blk-mq, percpu_ref: implement a kludge for SCSI blk-mq stall during probe")
It seems that this is due to all the RCU grace periods incurred by percpu_ref_put during the CPU hotplug notifiers.
This is barely noticeable on small guests (let's say one virtio disk), but on guests with 20 disks a hotplug takes 2 or 3 seconds instead of around 0.1.
There are three things that make this especially noticeable on s390:
- s390 uses HZ=100, which makes waiting for grace periods slower
- s390 does not yet implement context tracking which would speed up RCU
- s390 systems usually have a larger number of disks (e.g. twenty 7GB disks instead of one 140GB disk)
Any idea how to improve the situation? I think we could accept an expedited variant on CPU hotplug, since stop_machine_run will cause hiccups anyway, but there are probably other callers.
Christian
PS: on the plus side, this makes CPU hotplug races less likely....