On 2013-06-08 03:53, Vivek Goyal wrote:
> On Fri, Jun 07, 2013 at 11:09:54AM +0800, sanbai wrote:
>> On 2013-06-05 21:30, Vivek Goyal wrote:
>>> On Wed, Jun 05, 2013 at 10:09:31AM +0800, Robin Dong wrote:
>>>> We want to use blkio.cgroup on high-speed devices (like fusionio) for
>>>> our mysql clusters. After testing different io-schedulers, we found
>>>> that cfq is too slow and deadline can't run on cgroups.
>>> So why not enhance deadline to be able to be used with cgroups instead
>>> of coming up with a new scheduler?
>> I think if we add cgroups support into deadline, it will not be
>> suitable to call it "deadline" anymore...so a new ioscheduler and a new
>> name may not confuse users.
> Nobody got confused when we added cgroup support to CFQ. Not that
> I am saying go add support to deadline. I am just saying that the need
> for cgroup support does not sound like it justifies the need for a new
> IO scheduler.

> [..]
>>> Can you give more details. Do you idle? Idling kills performance. If not,
>>> then without idling how do you achieve performance differentiation?
>> We don't idle. When it comes to .elevator_dispatch_fn, we just compute a
>> quota for every group:
>>
>>     quota = nr_requests - rq_in_driver;
>>     group_quota = quota * group_weight / total_weight;
>>
>> and dispatch 'group_quota' requests for the corresponding group.
>> Therefore a high-weight group will dispatch more requests than a
>> low-weight group.
> Ok, this works only if all the groups are full all the time, otherwise
> groups will lose their fair share. This simplifies things a lot.
> That is, fairness is provided only if a group is always backlogged. In
> practice, this happens only if a group is doing IO at a very high rate
> (like your fio scripts). Have you tried running any real-life workload
> in these cgroups (apache, databases etc.) and seen how good the service
> differentiation is?
>
> Anyway, it sounds like this can be done at the generic block layer like
> blk-throtl, and it can sit on top so that it can work with all schedulers
> and can also work with bio-based block drivers.

That's a new idea, I will give it a try later.
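As an illustration of the dispatch-quota idea quoted above, here is a minimal user-space sketch. Only the quota and group_quota formulas come from the discussion; the struct and function names, the group weights and the queue depths are invented for the example, and the real tpps elevator code will differ.

/* Minimal sketch of the per-group dispatch quota described above. */
#include <stdio.h>

struct tpps_group {			/* illustrative, not the real tpps struct */
	const char *name;
	unsigned int weight;		/* blkio weight of the cgroup */
	unsigned int queued;		/* requests currently queued in the group */
};

static void dispatch_quota(struct tpps_group *grps, int nr_grps,
			   unsigned int nr_requests, unsigned int rq_in_driver)
{
	unsigned int quota, total_weight = 0;
	int i;

	if (rq_in_driver >= nr_requests)
		return;				/* device queue already full */

	/* quota = nr_requests - rq_in_driver; split it by weight */
	quota = nr_requests - rq_in_driver;

	for (i = 0; i < nr_grps; i++)
		total_weight += grps[i].weight;

	for (i = 0; i < nr_grps; i++) {
		unsigned int group_quota = quota * grps[i].weight / total_weight;
		unsigned int dispatch = group_quota < grps[i].queued ?
					group_quota : grps[i].queued;

		printf("%s: weight=%u group_quota=%u dispatched=%u\n",
		       grps[i].name, grps[i].weight, group_quota, dispatch);
	}
}

int main(void)
{
	/* weights and per-group queue depths are invented for the example */
	struct tpps_group grps[] = {
		{ "test1", 800, 512 },
		{ "test2", 400, 512 },
		{ "test3", 200, 512 },
		{ "test4", 100, 512 },
	};

	dispatch_quota(grps, 4, 512, 128);	/* nr_requests=512, 128 in flight */
	return 0;
}

With all four groups backlogged this yields roughly an 8:4:2:1 dispatch ratio; as noted above, the differentiation disappears as soon as a group stops being backlogged.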
> [..]
>> I do the test again for cfq (slice_idle=0, quantum=128) and tpps:
>>
>> cfq (slice_idle=0, quantum=128)
>> groupname  iops   avg-rt(ms)  max-rt(ms)
>> test1      16148  15          188
>> test2      12756  20          117
>> test3      9778   26          268
>> test4      6198   41          209
>>
>> tpps
>> groupname  iops   avg-rt(ms)  max-rt(ms)
>> test1      17292  14          65
>> test2      15221  16          80
>> test3      12080  21          66
>> test4      7995   32          90
>>
>> Looks like cfq is much better than before.
> Yep, I am sure there are more simple opportunities for optimization
> where it can help. Can you try a couple more things?
>
> - Drive an even deeper queue depth. Set quantum=512.
> - Set group_idle=0.

I changed the iodepth to 512 in the fio script and the new result is:
cfq (group_idle=0, quantum=512)
groupname iops avg-rt(ms) max-rt(ms)
test1 15259 33 305
test2 11858 42 345
test3 8885 57 335
test4 5738 89 355
cfq (group_idle=0, quantum=512, slice_sync=10)
groupname iops avg-rt(ms) max-rt(ms)
test1 16507 31 177
test2 12896 39 366
test3 9301 55 188
test4 6023 84 545
tpps
groupname iops avg-rt(ms) max-rt(ms)
test1 16316 31 99
test2 15066 33 106
test3 12182 42 101
test4 8350 61 180
Looks like cfq works much better now.
> Ideally this should effectively emulate what you are doing. That is, try
> to provide fairness without idling on the group.
>
> In practice I could not keep the group queue full: before the group exhausted
> its slice, it went empty, got deleted from the service tree and lost its
> fair share. So if group_idle=0 leads to no service differentiation,
> try slice_sync=10 and see what happens.
>
> Thanks
> Vivek
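For reference, the CFQ tunables discussed in this thread (quantum, slice_idle, group_idle, slice_sync) are runtime knobs under /sys/block/<dev>/queue/iosched/. Below is a small sketch of applying the suggested settings from user space; the device name "sda" is an assumption, and the program must run as root on a device that is actually using cfq.

/* Apply the CFQ settings suggested above via sysfs (device name is an example). */
#include <stdio.h>

static int set_cfq_tunable(const char *dev, const char *name, int val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/iosched/%s", dev, name);
	f = fopen(path, "w");
	if (!f) {
		perror(path);		/* e.g. this device is not using cfq */
		return -1;
	}
	fprintf(f, "%d\n", val);
	fclose(f);
	return 0;
}

int main(void)
{
	const char *dev = "sda";	/* assumed device; adjust as needed */

	set_cfq_tunable(dev, "slice_idle", 0);
	set_cfq_tunable(dev, "group_idle", 0);
	set_cfq_tunable(dev, "quantum", 512);
	set_cfq_tunable(dev, "slice_sync", 10);
	return 0;
}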