[PATCH 5.19 181/192] Revert "arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags""
From: Greg Kroah-Hartman
Date: Tue Sep 13 2022 - 10:31:04 EST

This reverts commit add4bc9281e8704e5ab15616b429576c84f453a2.

On Mon, Sep 12, 2022 at 10:52:45AM +0100, Catalin Marinas wrote:
> I missed this (holidays) and it looks like it's in stable already. On
> its own it will likely break kasan_hw if used together with user-space
> MTE, as this change relies on two previous commits:
>
> 70c248aca9e7 ("mm: kasan: Skip unpoisoning of user pages")
> 6d05141a3930 ("mm: kasan: Skip page unpoisoning only if __GFP_SKIP_KASAN_UNPOISON")
>
> The reason I did not cc stable is that there are other dependencies in
> this area. The potential issues without the above commits were rather
> theoretical, so take these patches as clean-ups/refactoring rather
> than fixes.
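
For context, the pattern these hunks restore is: update the KASAN tag in
page->flags, issue smp_wmb(), then write the MTE tags in memory. A reader
that builds a tagged address out of page->flags has an address dependency
ordering its two accesses, so only the writer needs an explicit barrier.
Below is a minimal sketch of that ordering requirement, using C11 atomics
as stand-ins for the kernel primitives; the variables and the 0xff tag
value are illustrative, not actual kernel fields:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Stand-ins: 'flags' holds the KASAN tag (as page->flags does),
     * 'tags' stands in for the page's MTE allocation tags. */
    static _Atomic uint64_t flags;
    static _Atomic uint8_t tags;

    /* Writer: the pattern re-added by this revert. */
    void reset_tag_then_update(void)
    {
        /* page_kasan_tag_reset(): publish the reset tag in flags */
        atomic_store_explicit(&flags, 0xffull << 56, memory_order_relaxed);
        /* smp_wmb(): flags must be visible before the new tags */
        atomic_thread_fence(memory_order_release);
        /* mte_clear_page_tags()/mte_restore_page_tags(): update tags */
        atomic_store_explicit(&tags, 0, memory_order_relaxed);
    }

    /* Reader: derives a tagged address from flags; the address
     * dependency from 'f' to the later memory access provides the
     * read-side ordering on arm64. */
    uint64_t tagged_addr(uint64_t untagged)
    {
        uint64_t f = atomic_load_explicit(&flags, memory_order_relaxed);
        return untagged | (f & (0xffull << 56));
    }

Without the barrier, the tag write could become visible before the flags
write, and a concurrent reader could pair stale flags with the new tags.
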
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 arch/arm64/kernel/hibernate.c | 5 +++++
 arch/arm64/kernel/mte.c       | 9 +++++++++
 arch/arm64/mm/copypage.c      | 9 +++++++++
 arch/arm64/mm/mteswap.c       | 9 +++++++++
 4 files changed, 32 insertions(+)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index af5df48ba915b..2e248342476ea 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -300,6 +300,11 @@ static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
+		/*
+		 * It is not required to invoke page_kasan_tag_reset(page)
+		 * at this point since the tags stored in page->flags are
+		 * already restored.
+		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index b2b730233274b..f6b00743c3994 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -48,6 +48,15 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (!pte_is_tagged)
 		return;
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 24913271e898c..0dea80bf6de46 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,6 +23,15 @@ void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
+		page_kasan_tag_reset(to);
+		/*
+		 * We need smp_wmb() in between setting the flags and clearing the
+		 * tags because if another thread reads page->flags and builds a
+		 * tagged address out of it, there is an actual dependency to the
+		 * memory access, but on the current thread we do not guarantee that
+		 * the new page->flags are visible before the tags were updated.
+		 */
+		smp_wmb();
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index 4334dec93bd44..a9e50e930484a 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;
--
2.35.1