Re: [PATCH v4 2/2] arm64: kvm: Introduce MTE VCPU feature

From: Catalin Marinas
Date: Thu Nov 19 2020 - 11:24:38 EST


On Thu, Nov 19, 2020 at 12:45:52PM +0000, Steven Price wrote:
> On 18/11/2020 17:05, Andrew Jones wrote:
> > On Wed, Nov 18, 2020 at 04:50:01PM +0000, Catalin Marinas wrote:
> > > On Wed, Nov 18, 2020 at 04:01:20PM +0000, Steven Price wrote:
> > > > On 17/11/2020 16:07, Catalin Marinas wrote:
> > > > > On Mon, Oct 26, 2020 at 03:57:27PM +0000, Steven Price wrote:
> > > > > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > > > > index 19aacc7d64de..38fe25310ca1 100644
> > > > > > --- a/arch/arm64/kvm/mmu.c
> > > > > > +++ b/arch/arm64/kvm/mmu.c
> > > > > > @@ -862,6 +862,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > > > >  	if (vma_pagesize == PAGE_SIZE && !force_pte)
> > > > > >  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
> > > > > >  							   &pfn, &fault_ipa);
> > > > > > +
> > > > > > +	/*
> > > > > > +	 * The otherwise redundant test for system_supports_mte() allows the
> > > > > > +	 * code to be compiled out when CONFIG_ARM64_MTE is not present.
> > > > > > +	 */
> > > > > > +	if (system_supports_mte() && kvm->arch.mte_enabled && pfn_valid(pfn)) {
> > > > > > +		/*
> > > > > > +		 * VM will be able to see the page's tags, so we must ensure
> > > > > > +		 * they have been initialised.
> > > > > > +		 */
> > > > > > +		struct page *page = pfn_to_page(pfn);
> > > > > > +		long i, nr_pages = compound_nr(page);
> > > > > > +
> > > > > > +		/* if PG_mte_tagged is set, tags have already been initialised */
> > > > > > +		for (i = 0; i < nr_pages; i++, page++) {
> > > > > > +			if (!test_and_set_bit(PG_mte_tagged, &page->flags))
> > > > > > +				mte_clear_page_tags(page_address(page));
> > > > > > +		}
> > > > > > +	}
> > > > >
> > > > > If this page was swapped out and mapped back in, where does the
> > > > > restoring from swap happen?
> > > >
> > > > Restoring from swap happens above this in the call to gfn_to_pfn_prot()
> > >
> > > Looking at the call chain, gfn_to_pfn_prot() ends up with
> > > get_user_pages() using the current->mm (the VMM) and that does a
> > > set_pte_at(), presumably restoring the tags. Does this mean that all
> > > memory mapped by the VMM in user space should have PROT_MTE set?
> > > Otherwise we don't take the mte_sync_tags() path in set_pte_at() and no
> > > tags are restored from swap (we do save them, since PG_mte_tagged was
> > > set when they were mapped).
> > >
> > > So I think the code above should be similar to mte_sync_tags(), even
> > > calling a common function, but I'm not sure where to get the swap pte
> > > from.
>
> You're right - the code is broken as it stands. I've just been able to
> reproduce the loss of tags due to swap.
>
> The problem is that we also don't have a suitable pte to drive the restore
> from swap. So set_pte_at() would have to unconditionally check for MTE tags
> for all previous swap entries, as you suggest below. I had a quick go at
> testing this and hit issues with the idle task getting killed during boot -
> I fear there are some fun initialisation-order issues here.
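
As an aside, requiring PROT_MTE on the VMM side would mean every guest
memory mapping looks roughly like the sketch below (illustrative userspace
code, not from any patch; PROT_MTE is the arm64 mmap/mprotect flag and the
call only succeeds on an MTE-capable kernel):

#include <err.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE	0x20	/* arm64 Memory Tagging Extension */
#endif

int main(void)
{
	size_t guest_size = 1UL << 30;	/* hypothetical guest RAM size */

	/*
	 * PROT_MTE gives the mapping the Normal Tagged memory type, so
	 * pte_tagged() is true and set_pte_at() takes the mte_sync_tags()
	 * path, initialising or restoring the tags on fault-in.
	 */
	void *guest_mem = mmap(NULL, guest_size,
			       PROT_READ | PROT_WRITE | PROT_MTE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (guest_mem == MAP_FAILED)
		err(1, "mmap");

	return 0;
}

The diff below avoids making that a requirement: the swap restore is keyed
on pte_valid_user() rather than pte_tagged(), so saved tags come back even
when the VMM's mapping doesn't use PROT_MTE.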

My attempt is below, though not fully tested (it boots, but I haven't exercised swap):

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b35833259f08..27d7fd336a16 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -304,7 +304,7 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 		__sync_icache_dcache(pte);
 
 	if (system_supports_mte() &&
-	    pte_present(pte) && pte_tagged(pte) && !pte_special(pte))
+	    pte_present(pte) && pte_valid_user(pte) && !pte_special(pte))
 		mte_sync_tags(ptep, pte);
 
 	__check_racy_pte_update(mm, ptep, pte);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 52a0638ed967..bbd6c56d33d9 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -20,18 +20,24 @@
 #include <asm/ptrace.h>
 #include <asm/sysreg.h>
 
-static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
+static void mte_sync_page_tags(struct page *page, pte_t *ptep, pte_t pte,
+			       bool check_swap)
 {
 	pte_t old_pte = READ_ONCE(*ptep);
 
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+			set_bit(PG_mte_tagged, &page->flags);
 			return;
+		}
 	}
 
-	mte_clear_page_tags(page_address(page));
+	if (pte_tagged(pte)) {
+		mte_clear_page_tags(page_address(page));
+		set_bit(PG_mte_tagged, &page->flags);
+	}
 }
 
 void mte_sync_tags(pte_t *ptep, pte_t pte)
@@ -42,8 +48,8 @@ void mte_sync_tags(pte_t *ptep, pte_t pte)

 	/* if PG_mte_tagged is set, tags have already been initialised */
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
-			mte_sync_page_tags(page, ptep, check_swap);
+		if (!test_bit(PG_mte_tagged, &page->flags))
+			mte_sync_page_tags(page, ptep, pte, check_swap);
 	}
 }
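
With the set_bit() calls moved into mte_sync_page_tags(), PG_mte_tagged is
only set once the tags really have been initialised (restored from swap or
cleared), and the restore now happens for any valid user mapping, not just
PROT_MTE ones. That leaves only the clearing to KVM, which could become the
common helper suggested above instead of the open-coded loop in
user_mem_abort(). A rough sketch (the helper name and exact placement are
illustrative only):

/*
 * Illustrative only: initialise tags for the pages backing a stage 2
 * mapping. There is no user pte here, so tags can only be cleared, not
 * restored from swap - the restore is assumed to have already happened
 * via set_pte_at() when the VMM's mapping was faulted in.
 */
static void kvm_sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn)
{
	struct page *page = pfn_to_page(pfn);
	long i, nr_pages = compound_nr(page);

	if (!system_supports_mte() || !kvm->arch.mte_enabled)
		return;

	for (i = 0; i < nr_pages; i++, page++) {
		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
			mte_clear_page_tags(page_address(page));
	}
}

user_mem_abort() would call this after gfn_to_pfn_prot(), still guarded by
pfn_valid(pfn) as in the original hunk.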

--
Catalin