Re: [PATCH RFCv2 6/9] kvm/arm64: Export kvm_handle_user_mem_abort() with prefault mode
From: Mark Rutland
Date: Tue May 26 2020 - 06:58:18 EST
On Fri, May 08, 2020 at 01:29:16PM +1000, Gavin Shan wrote:
> This renames user_mem_abort() to kvm_handle_user_mem_abort(), and
> then exports it. The function will be used by the asynchronous page
> fault code to populate a page table entry once the corresponding page
> has been brought back in from the backing store (e.g. a swap
> partition):
>
> * Parameter @fault_status is replaced by @esr.
> * The parameters are reordered based on their importance.
It seems like multiple changes are going on here, and it would be
clearer with separate patches.
Passing the ESR rather than the extracted fault status seems fine, but
for clarity it'd be nicer to do that in its own patch.
Why is it necessary to re-order the function parameters? Does that align
with other function prototypes?
What exactly is the `prefault` parameter meant to do? It doesn't do
anything currently, so it'd be better to introduce it in the patch that
adds the logic using it, or at least where callers pass distinct values.
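For instance, if the intent is that the async PF completion path calls
the exported helper once the page has been brought back in, a rough
sketch might look like the below (the kvm_async_pf field names and the
function name are guesses on my part, not taken from this series):

	static void async_pf_populate(struct kvm_vcpu *vcpu,
				      struct kvm_async_pf *work)
	{
		struct kvm_memory_slot *memslot;

		/* look up the memslot for the faulting guest frame */
		memslot = gfn_to_memslot(vcpu->kvm, work->arch.gfn);

		/* re-run the abort handling, flagged as a prefault */
		kvm_handle_user_mem_abort(vcpu, work->arch.esr, memslot,
					  work->arch.gfn << PAGE_SHIFT,
					  work->arch.hva, true);
	}

If that's roughly the shape of things, introducing `prefault` alongside
that caller would make its purpose obvious.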
Thanks,
Mark.
>
> This shouldn't cause any functional changes.
>
> Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
> ---
> arch/arm64/include/asm/kvm_host.h | 4 ++++
> virt/kvm/arm/mmu.c | 14 ++++++++------
> 2 files changed, 12 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 32c8a675e5a4..f77c706777ec 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -437,6 +437,10 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
> struct kvm_vcpu_events *events);
>
> #define KVM_ARCH_WANT_MMU_NOTIFIER
> +int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr,
> + struct kvm_memory_slot *memslot,
> + phys_addr_t fault_ipa, unsigned long hva,
> + bool prefault);
> int kvm_unmap_hva_range(struct kvm *kvm,
> unsigned long start, unsigned long end);
> int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index e462e0368fd9..95aaabb2b1fc 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1656,12 +1656,12 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
> (hva & ~(map_size - 1)) + map_size <= uaddr_end;
> }
>
> -static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> - struct kvm_memory_slot *memslot, unsigned long hva,
> - unsigned long fault_status)
> +int kvm_handle_user_mem_abort(struct kvm_vcpu *vcpu, unsigned int esr,
> + struct kvm_memory_slot *memslot,
> + phys_addr_t fault_ipa, unsigned long hva,
> + bool prefault)
> {
> - int ret;
> - u32 esr = kvm_vcpu_get_esr(vcpu);
> + unsigned int fault_status = kvm_vcpu_trap_get_fault_type(esr);
> bool write_fault, writable, force_pte = false;
> bool exec_fault, needs_exec;
> unsigned long mmu_seq;
> @@ -1674,6 +1674,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> pgprot_t mem_type = PAGE_S2;
> bool logging_active = memslot_is_logging(memslot);
> unsigned long vma_pagesize, flags = 0;
> + int ret;
>
> write_fault = kvm_is_write_fault(esr);
> exec_fault = kvm_vcpu_trap_is_iabt(esr);
> @@ -1995,7 +1996,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
> goto out_unlock;
> }
>
> - ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
> + ret = kvm_handle_user_mem_abort(vcpu, esr, memslot,
> + fault_ipa, hva, false);
> if (ret == 0)
> ret = 1;
> out:
> --
> 2.23.0
>