Re: [PATCH v3] kvm: mmu: lazy collapse small sptes into large sptes
From: Wanpeng Li
Date: Tue Apr 14 2015 - 02:56:59 EST
On Mon, Apr 13, 2015 at 11:06:25PM -0700, Andres Lagar-Cavilla wrote:
>On Mon, Apr 13, 2015 at 10:25 PM, Wanpeng Li <wanpeng.li@xxxxxxxxxxxxxxx> wrote:
>> Hi Andres,
>> On Fri, Apr 10, 2015 at 11:05:26AM -0700, Andres Lagar-Cavilla wrote:
>> [...]
>>>> +                if (sp->role.direct &&
>>>> +                        !kvm_is_reserved_pfn(pfn) &&
>>>> +                        PageTransCompound(pfn_to_page(pfn))) {
>>>
>>>Not your fault, but PageTransCompound is a very unhappy name, as it
>>>also yields true for PageHuge pages. Suggestion: document that this
>>>check covers static hugetlbfs, or switch to a PageCompound() check.
>>>
>>>A slightly bolder approach would be to refactor and reuse the nearly
>>>identical check done in transparent_hugepage_adjust, instead of
>>>open-coding it here. In essence this code is asking for the same check,
>>>plus the out-of-band check for static hugepages.
>>
>> The PageCompound() check still returns true for both transparent huge
>> pages and hugetlbfs pages; a !PageHuge(page) && PageTransHuge(page)
>> check guarantees we catch only transparent huge pages, just as in my
>> old commit e76d30e20be5fc ("mm/hwpoison: fix test for a transparent
>> huge page"). I will send a patch to fix this.
>>
>Why would you want to "fix" it that way? Aren't static hugepages supported?
>
>(PageAnon is an inline check and much cheaper than !PageHuge(), which
>is an actual function call)
>
>Please consider my suggestion about refactoring the similar checks in
>transparent_hugepage_adjust.
Ok, will do. :)
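
Roughly what I have in mind is to pull the shared pfn check out into a
small helper that both transparent_hugepage_adjust() and the new collapse
path can use (just a rough sketch, untested; the helper name is made up):

static bool pfn_can_map_large(pfn_t pfn)
{
        /*
         * PageTransCompound() is true for both THP and static
         * hugetlbfs pages, so this also covers the static hugepage
         * case the collapse path cares about.
         */
        return !kvm_is_reserved_pfn(pfn) &&
                PageTransCompound(pfn_to_page(pfn));
}

Then kvm_mmu_zap_collapsible_spte() can check sp->role.direct &&
pfn_can_map_large(pfn), and the helper's comment documents why static
hugetlbfs pages are covered as well.
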
Regards,
Wanpeng Li
>
>Thanks a ton
>Andres
>>>
>>>
>>>> +                        drop_spte(kvm, sptep);
>>>> +                        sptep = rmap_get_first(*rmapp, &iter);
>>>> +                        need_tlb_flush = 1;
>>>> +                } else
>>>> +                        sptep = rmap_get_next(&iter);
>>>> +        }
>>>> +
>>>> +        return need_tlb_flush;
>>>> +}
>>>> +
>>>> +void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>>>> +                        struct kvm_memory_slot *memslot)
>>>> +{
>>>> +        bool flush = false;
>>>> +        unsigned long *rmapp;
>>>> +        unsigned long last_index, index;
>>>> +        gfn_t gfn_start, gfn_end;
>>>> +
>>>> +        spin_lock(&kvm->mmu_lock);
>>>> +
>>>> +        gfn_start = memslot->base_gfn;
>>>> +        gfn_end = memslot->base_gfn + memslot->npages - 1;
>>>> +
>>>> +        if (gfn_start >= gfn_end)
>>>> +                goto out;
>>>
>>>I don't understand the value of this check here. Are we looking for a
>>>broken memslot? Shouldn't this be a BUG_ON? Is this the place to care
>>>about these things? npages is capped at KVM_MEM_MAX_NR_PAGES, i.e.
>>>2^31. A 64-bit overflow would be caused by a gigantic gfn_start, which
>>>would be trouble in many other ways.
>>>
>>>All this to say: please remove the above 5 lines and make the code simpler.
>>
>> I will send a patch to clean it up. Thanks for your review. :)
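
Concretely, I think the cleanup just drops the five lines above, i.e.
something like this (untested, the actual patch may differ slightly):

-        gfn_start = memslot->base_gfn;
-        gfn_end = memslot->base_gfn + memslot->npages - 1;
-
-        if (gfn_start >= gfn_end)
-                goto out;

plus removing the now-unused gfn_start from the declarations (and gfn_end
and the out: label too, if nothing in the snipped part of the function
still needs them).
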
>>
>> Regards,
>> Wanpeng Li
>>
>
>
>
>--
>Andres Lagar-Cavilla | Google Kernel Team | andreslc@xxxxxxxxxx