Re: [PATCH v2 2/2] KVM: arm/arm64: harden unmap_stage2_ptes in case end is not PAGE_SIZE aligned

From: Jia He
Date: Fri May 18 2018 - 08:12:32 EST

On 5/18/2018 5:48 PM, Marc Zyngier wrote:
> On 18/05/18 10:27, Jia He wrote:
>> If addr=0x202920000, size=0xfe00 is passed to unmap_stage2_range->
>> ...->unmap_stage2_ptes, unmap_stage2_ptes gets addr=0x202920000,
>> end=0x20292fe00. After the first loop iteration addr=0x202930000 while
>> end=0x20292fe00, so addr != end still holds and the loop keeps going,
>> touching further pages via put_page().
>>
>> This patch fixes it by hardening the break condition of the while loop.
>>
>> Signed-off-by: jia.he@xxxxxxxxxxxxxxxx
>> ---
>> v2: newly added
>>
>> virt/kvm/arm/mmu.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 8dac311..45cd040 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -217,7 +217,7 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
>>
>> put_page(virt_to_page(pte));
>> }
>> - } while (pte++, addr += PAGE_SIZE, addr != end);
>> + } while (pte++, addr += PAGE_SIZE, addr < end);
>>
>> if (stage2_pte_table_empty(start_pte))
>> clear_stage2_pmd_entry(kvm, pmd, start_addr);
>>
>
> I don't think this change is the right thing to do. You get that failure
> because you're being passed a size that is not a multiple of PAGE_SIZE.
> That's the mistake.
>
> You should ensure that this never happens, rather than changing the page
> table walkers (which are consistent with the way this kind of code is
> written in other places of the kernel). As you mentioned in your first
> patch, the real issue is that KSM is broken, and this is what should be
> fixed.
>
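For the record, here is a minimal userspace sketch (not the kernel code
itself; it just replays the addresses from my report, assuming 64K pages)
showing how the original "addr != end" condition steps past an end that is
not PAGE_SIZE aligned:

/*
 * Standalone illustration only -- the values and the iteration cap are
 * hypothetical, picked to mirror the report above.
 */
#include <stdio.h>

#define PAGE_SIZE 0x10000UL	/* 64K pages, as on the reporting system */

int main(void)
{
	unsigned long addr = 0x202920000UL;
	unsigned long end  = addr + 0xfe00UL;	/* 0x20292fe00, unaligned */
	int steps = 0;

	/* same walker pattern as unmap_stage2_ptes(), with the original
	 * "addr != end" condition, plus a cap so the demo terminates */
	do {
		steps++;
	} while (addr += PAGE_SIZE, addr != end && steps < 4);

	printf("addr=%#lx end=%#lx after %d step(s)\n", addr, end, steps);
	/* addr overshoots end after the first step and never equals it,
	 * so only the cap stops the loop */
	return 0;
}
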
Got it, thanks.
Should I resend patch 1/2 unchanged after dropping patch 2/2?
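
In the meantime, just to illustrate what "ensure that this never happens"
would mean at the call site, here is a tiny userspace sketch of an
alignment check on the arguments (hypothetical, not a proposed kernel
change -- as you said, the real fix belongs in KSM):

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 0x10000UL	/* 64K pages, as on the reporting system */

/* hypothetical helper: callers of the stage-2 walkers are expected to
 * hand down page-aligned ranges, so an unaligned size is a caller bug */
static void check_unmap_args(unsigned long addr, unsigned long size)
{
	assert((addr & (PAGE_SIZE - 1)) == 0);
	assert((size & (PAGE_SIZE - 1)) == 0);
}

int main(void)
{
	/* the reported call: size 0xfe00 trips the second assertion */
	check_unmap_args(0x202920000UL, 0xfe00UL);
	printf("arguments are page aligned\n");
	return 0;
}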

--
Cheers,
Jia