Re: [PATCH] kvm: irqchip: Break up high order allocations of kvm_irq_routing_table

From: Christian Borntraeger
Date: Mon May 11 2015 - 08:50:53 EST

Am 11.05.2015 um 13:45 schrieb Paolo Bonzini:
> On 11/05/2015 13:25, Joerg Roedel wrote:
>>>> It probably doesn't matter much indeed, but can you time the difference?
>>>> kvm_set_irq_routing is not too frequent, but happens enough often that
>>>> we had to use a separate SRCU instance just to speed it up (see commit
>>>> 719d93cd5f5, kvm/irqchip: Speed up KVM_SET_GSI_ROUTING, 2014-01-16).
>> The results vary a lot, but what I can say for sure is that the
>> kvm_set_irq_routing function takes at least twice as long (~10,000 vs
>> ~22,000 cycles) as before on my AMD Kaveri machine (the maximum was
>> between 3-4 times as long).
>> On the other hand, this function is only called 2 times at boot in my
>> test, so I couldn't detect a noticeable effect on the overall boot time
>> of the guest (37 disks were attached).

x86 probably has only a few irq lines for this (or Joerg is using virtio-scsi).

s390 has a route per device, but with 100 virtio-blk devices the difference seems
pretty much on the "don't care" side. The qemu aio-poll/drain code seems to cause
much more delay, since we eliminated the kernel delays by using

> Christian, can you test this?

The guest comes up and performance is ok.
I did not do any additional testing (lockdep, kmemleak), but I think the
generic approach is good:
in case the host is overcommitted and paging, order-0 allocations might
be much faster and much more reliable than one big order-2, 3 or 4 allocation.

Bonus points for the future: we might be able to rework this to re-use
the old allocations for struct kvm_kernel_irq_routing_entry (basically
replacing only chip, nr_rt_entries and the hlist).
