Re: [PATCH v4 2/2] arm64: kvm: Introduce MTE VCPU feature
From: Catalin Marinas
Date: Tue Nov 17 2020 - 11:08:16 EST
Hi Steven,
On Mon, Oct 26, 2020 at 03:57:27PM +0000, Steven Price wrote:
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 19aacc7d64de..38fe25310ca1 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -862,6 +862,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (vma_pagesize == PAGE_SIZE && !force_pte)
> vma_pagesize = transparent_hugepage_adjust(memslot, hva,
> &pfn, &fault_ipa);
> +
> + /*
> + * The otherwise redundant test for system_supports_mte() allows the
> + * code to be compiled out when CONFIG_ARM64_MTE is not present.
> + */
> + if (system_supports_mte() && kvm->arch.mte_enabled && pfn_valid(pfn)) {
> + /*
> + * VM will be able to see the page's tags, so we must ensure
> + * they have been initialised.
> + */
> + struct page *page = pfn_to_page(pfn);
> + long i, nr_pages = compound_nr(page);
> +
> + /* if PG_mte_tagged is set, tags have already been initialised */
> + for (i = 0; i < nr_pages; i++, page++) {
> + if (!test_and_set_bit(PG_mte_tagged, &page->flags))
> + mte_clear_page_tags(page_address(page));
> + }
> + }
If this page was swapped out and mapped back in, where does the
restoring of its tags from swap happen?
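For reference, the user-space side hooks the restore into set_pte_at():
mte_sync_tags() walks the pages and only falls back to clearing the
tags when there is no swap entry to restore from. A simplified sketch
of that logic (based on arch/arm64/kernel/mte.c and the
mte_restore_tags() helper from the MTE swap support patches; not the
exact code):

    static void mte_sync_page_tags(struct page *page, pte_t *ptep,
                                   bool check_swap)
    {
            pte_t old_pte = READ_ONCE(*ptep);

            if (check_swap && is_swap_pte(old_pte)) {
                    swp_entry_t entry = pte_to_swp_entry(old_pte);

                    /* restore the tags saved at swap-out time */
                    if (!non_swap_entry(entry) &&
                        mte_restore_tags(entry, page))
                            return;
            }

            /* no saved tags, start from a clean slate */
            mte_clear_page_tags(page_address(page));
    }

    void mte_sync_tags(pte_t *ptep, pte_t pte)
    {
            struct page *page = pte_page(pte);
            long i, nr_pages = compound_nr(page);
            bool check_swap = nr_pages == 1;

            /* if PG_mte_tagged is set, tags have already been initialised */
            for (i = 0; i < nr_pages; i++, page++) {
                    if (!test_and_set_bit(PG_mte_tagged, &page->flags))
                            mte_sync_page_tags(page, ptep, check_swap);
            }
    }

The stage 2 loop in your hunk calls mte_clear_page_tags() directly,
with no equivalent of the is_swap_pte()/mte_restore_tags() step, hence
the question.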
I may have asked in the past: is user_mem_abort() the only path for
mapping Normal pages into stage 2? (kvm_set_spte_hva(), for example,
also appears to install pages in stage 2.)
--
Catalin