Re: [PATCH v4 2/2] arm64: kvm: Introduce MTE VCPU feature

From: Steven Price
Date: Thu Nov 19 2020 - 07:45:58 EST


On 18/11/2020 17:05, Andrew Jones wrote:
> On Wed, Nov 18, 2020 at 04:50:01PM +0000, Catalin Marinas wrote:
>> On Wed, Nov 18, 2020 at 04:01:20PM +0000, Steven Price wrote:
>>> On 17/11/2020 16:07, Catalin Marinas wrote:
>>>> On Mon, Oct 26, 2020 at 03:57:27PM +0000, Steven Price wrote:
>>>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>>>> index 19aacc7d64de..38fe25310ca1 100644
>>>>> --- a/arch/arm64/kvm/mmu.c
>>>>> +++ b/arch/arm64/kvm/mmu.c
>>>>> @@ -862,6 +862,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>>  	if (vma_pagesize == PAGE_SIZE && !force_pte)
>>>>>  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
>>>>>  							   &pfn, &fault_ipa);
>>>>> +
>>>>> +	/*
>>>>> +	 * The otherwise redundant test for system_supports_mte() allows the
>>>>> +	 * code to be compiled out when CONFIG_ARM64_MTE is not present.
>>>>> +	 */
>>>>> +	if (system_supports_mte() && kvm->arch.mte_enabled && pfn_valid(pfn)) {
>>>>> +		/*
>>>>> +		 * VM will be able to see the page's tags, so we must ensure
>>>>> +		 * they have been initialised.
>>>>> +		 */
>>>>> +		struct page *page = pfn_to_page(pfn);
>>>>> +		long i, nr_pages = compound_nr(page);
>>>>> +
>>>>> +		/* if PG_mte_tagged is set, tags have already been initialised */
>>>>> +		for (i = 0; i < nr_pages; i++, page++) {
>>>>> +			if (!test_and_set_bit(PG_mte_tagged, &page->flags))
>>>>> +				mte_clear_page_tags(page_address(page));
>>>>> +		}
>>>>> +	}

>>>> If this page was swapped out and mapped back in, where does the
>>>> restoring from swap happen?
>>>
>>> Restoring from swap happens above this in the call to gfn_to_pfn_prot()

>> Looking at the call chain, gfn_to_pfn_prot() ends up with
>> get_user_pages() using the current->mm (the VMM) and that does a
>> set_pte_at(), presumably restoring the tags. Does this mean that all
>> memory mapped by the VMM in user space should have PROT_MTE set?
>> Otherwise we don't take the mte_sync_tags() path in set_pte_at() and
>> no tags are restored from swap (we do save them, since when they were
>> mapped PG_mte_tagged was set).
>>
>> So I think the code above should be similar to mte_sync_tags(), even
>> calling a common function, but I'm not sure where to get the swap pte
>> from.

You're right - the code is broken as it stands. I've just been able to
reproduce the loss of tags due to swap.

The problem is that we also don't have a suitable pte here from which
to restore the tags from swap. So either set_pte_at() would have to
unconditionally check for saved MTE tags on all previous swap entries,
as you suggest below. I had a quick go at testing this and hit issues
with the idle task getting killed during boot - I fear there are some
fun initialisation-order issues here.

Or we enforce PROT_MTE...
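
For reference, the hook being discussed looks roughly like this
(paraphrased from memory of the current arm64 code, so treat it as a
sketch rather than a verbatim quote):

	/* arch/arm64/include/asm/pgtable.h, roughly: */
	static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep, pte_t pte)
	{
		/* only tagged (PROT_MTE) mappings reach mte_sync_tags() */
		if (system_supports_mte() && pte_present(pte) && pte_tagged(pte))
			mte_sync_tags(ptep, pte);

		__check_racy_pte_update(mm, ptep, pte);
		set_pte(ptep, pte);
	}

	/* arch/arm64/kernel/mte.c, roughly: */
	void mte_sync_tags(pte_t *ptep, pte_t pte)
	{
		struct page *page = pte_page(pte);
		long i, nr_pages = compound_nr(page);
		bool check_swap = nr_pages == 1;

		/* if PG_mte_tagged is set, tags have already been initialised */
		for (i = 0; i < nr_pages; i++, page++) {
			if (!test_and_set_bit(PG_mte_tagged, &page->flags))
				mte_sync_page_tags(page, ptep, check_swap);
		}
	}

The "unconditional" option amounts to dropping the pte_tagged(pte) test
above, so that mte_sync_page_tags() gets the chance to find tags saved
at swap-out time even when the new mapping isn't PROT_MTE.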

>> An alternative is to only enable HCR_EL2.ATA and MTE in the guest if
>> the VMM mapped the memory with PROT_MTE.
>
> This is a very reasonable alternative. The VMM must be aware of whether
> the guest may use MTE anyway. Asking it to map the memory with PROT_MTE
> when it wants to offer the guest that option is a reasonable requirement.
> If the memory is not mapped as such, then the host kernel shouldn't assume
> MTE may be used by the guest, and it should even enforce that it is not
> (by not enabling the feature).

The main issue with this is that the VMM can change the mappings while
the guest is running, so the only place we can reliably check this is
during user_mem_abort(). That means we can't simply downgrade
HCR_EL2.ATA up front, and it makes the error reporting not so great, as
the offending memory access simply faults. However, I do have this
working and it's actually (slightly) less code.
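
Something along these lines in user_mem_abort() (untested sketch;
VM_MTE is the vma flag that a PROT_MTE mapping sets, and mte_enabled is
the flag this series adds):

	/* after looking up the vma, while mmap_read_lock() is held */
	if (system_supports_mte() && kvm->arch.mte_enabled &&
	    !(vma->vm_flags & VM_MTE)) {
		/*
		 * The guest can use MTE but the VMM didn't map this
		 * memory with PROT_MTE, so refuse the mapping - the
		 * guest's access simply faults.
		 */
		return -EFAULT;
	}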

Another drawback is that the VMM needs to be more careful with the
tags - e.g. for virtualised devices the VMM can't simply have a
non-PROT_MTE mapping and ignore what the guest is doing with tags.
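
For completeness, opting in on the VMM side is then just the mapping
flags, e.g. (userspace sketch; guest_mem_size stands in for however the
VMM sizes the memslot, and error handling is omitted):

	#include <sys/mman.h>

	#ifndef PROT_MTE
	#define PROT_MTE	0x20	/* arm64 Memory Tagging Extension */
	#endif

	/* guest RAM that the guest may use with MTE enabled */
	void *guest_mem = mmap(NULL, guest_mem_size,
			       PROT_READ | PROT_WRITE | PROT_MTE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

The flip side, as above, is that the VMM then owns the tags in that
mapping and has to preserve them whenever it touches guest memory, e.g.
for migration or virtualised devices, rather than ignoring them.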

Steve