Re: [PATCH v2 4/4] KVM: x86: Fix stack-out-of-bounds memory access from ioapic_write_indirect()
From: Eduardo Habkost
Date: Thu Aug 26 2021 - 15:27:31 EST
I'm re-reading this, and:
On Tue, Aug 24, 2021 at 07:07:58PM +0300, Maxim Levitsky wrote:
[...]
> Hi,
>
> Not a classical review, but I did some digital archaeology on this one,
> trying to understand what is going on:
>
>
> I think the 16-bit vcpu bitmap is due to the fact that the IOAPIC spec states that
> it can address up to 16 CPUs in physical destination mode.
>
> In logical destination mode, assuming flat addressing and logical id = 1 << physical id
> (which KVM hardcodes), it is only possible to address 8 CPUs.
>
> However(!) in cluster mode, the logical APIC ID is split in two:
> we get 16 clusters of 4 CPUs each, so it is possible to address 64 CPUs,
> and unlike the logical ID, KVM does honour the cluster ID,
> so one can assign, say, cluster ID 0 to any vCPU.
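
To make the encodings concrete, here is an illustrative decode of the 8-bit
destination field (my reading of the xAPIC/IOAPIC specs, not actual KVM code;
the variable names are made up):

    u8 phys_id      = dest & 0xf;        /* physical: 4-bit APIC ID -> 16 CPUs  */
    u8 flat_mask    = dest;              /* flat: bitmask of 8 logical IDs      */
    u8 cluster_id   = (dest >> 4) & 0xf; /* cluster: 16 clusters ...            */
    u8 cluster_mask = dest & 0xf;        /* ... of up to 4 CPUs each -> 64 CPUs */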
>
>
> Let's look at ioapic_write_indirect().
> It does:
>
> -> bitmap_zero(&vcpu_bitmap, 16);
> -> kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq, &vcpu_bitmap);
> -> kvm_make_scan_ioapic_request_mask(ioapic->kvm, &vcpu_bitmap); // use of the above bitmap
>
>
> When we call kvm_bitmap_or_dest_vcpus(), we can already overflow the bitmap,
> since we pass all 8 bits of the destination even when it is physical.
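
For context, the caller side is just this (a sketch; assuming vcpu_bitmap is a
single unsigned long on the stack, which the &vcpu_bitmap in the calls above
suggests, and with the irq setup omitted):

    unsigned long vcpu_bitmap;    /* one 64-bit word on x86_64 */

    bitmap_zero(&vcpu_bitmap, 16);
    kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq, &vcpu_bitmap);

so the out-of-bounds write only materializes once something inside
kvm_bitmap_or_dest_vcpus() does __set_bit(idx, &vcpu_bitmap) with idx >= 64.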
>
>
> Let's examine kvm_bitmap_or_dest_vcpus():
>
> -> It calls kvm_apic_map_get_dest_lapic(), which:
>
> -> for physical destinations, it just sets the bitmap, which can overflow
> if we pass it an 8-bit destination (which is basically reserved bits plus a 4-bit destination).
How exactly do you think kvm_apic_map_get_dest_lapic() can
overflow? It never writes beyond `bitmap[0]`, as far as I can
see.
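
For reference, the physical-destination path is roughly this (paraphrased
from kvm_apic_map_get_dest_lapic() in arch/x86/kvm/lapic.c, trimmed; the
exact code may differ):

    if (irq->dest_mode == APIC_DEST_PHYSICAL) {
        if (irq->dest_id > map->max_apic_id) {
            *bitmap = 0;
        } else {
            u32 dest_id = array_index_nospec(irq->dest_id,
                                             map->max_apic_id + 1);
            *dst = &map->phys_map[dest_id];
            *bitmap = 1;
        }
        return true;
    }

An out-of-range dest_id is rejected before phys_map is indexed, and only
*bitmap itself is written.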
>
>
> -> For a logical APIC ID, it seems to truncate the result to 16 bits, which isn't correct as I explained
> above, but it should not overflow the result.
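
The 16-bit truncation is, as far as I can tell, the u16 *mask out-parameter
of the logical-destination helper. Roughly (paraphrased from
kvm_apic_map_get_logical_dest(), with the x2APIC case trimmed):

    switch (map->mode) {
    case KVM_APIC_MODE_XAPIC_FLAT:
        *cluster = map->xapic_flat_map;
        *mask = dest_id & 0xff;
        return true;
    case KVM_APIC_MODE_XAPIC_CLUSTER:
        *cluster = map->xapic_cluster_map[(dest_id >> 4) & 0xf];
        *mask = dest_id & 0xf;
        return true;
    default:
        return false;
    }

Since the mask covers at most 16 bits within one cluster, I agree it cannot
overflow the caller's bitmap by itself.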
>
>
> -> If the call to kvm_apic_map_get_dest_lapic() fails, it iterates over all vCPUs and tries to match the destination.
> This can overflow as well.
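
That fallback is roughly this (paraphrased from kvm_bitmap_or_dest_vcpus()):

    kvm_for_each_vcpu(i, vcpu, kvm) {
        if (!kvm_apic_present(vcpu))
            continue;
        if (!kvm_apic_match_dest(vcpu, NULL, irq->shorthand,
                                 irq->dest_id, irq->dest_mode))
            continue;
        __set_bit(i, vcpu_bitmap);    /* i is a vCPU index */
    }

and since i is a vCPU index rather than a destination ID, it can exceed 63 on
large VMs, at which point the caller's single-word stack bitmap is overrun.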
>
> [...]
--
Eduardo