Re: [PATCH 1/1] mm/thp: fix MTE tag mismatch when replacing zero-filled subpages

From: Lance Yang

Date: Tue Sep 23 2025 - 22:49:49 EST

On 2025/9/24 00:14, Catalin Marinas wrote:
On Tue, Sep 23, 2025 at 12:52:06PM +0100, Catalin Marinas wrote:
I just realised that on arm64 with MTE we won't get any merging with the
zero page even if the user page isn't mapped with PROT_MTE. In
cpu_enable_mte() we zero the tags in the zero page and set
PG_mte_tagged. The reason is that we want to use the zero page with
PROT_MTE mappings (until tag setting causes CoW). Hmm, the arm64
memcmp_pages() messed up KSM merging with the zero page even before this
patch.
[...]
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index e5e773844889..72a1dfc54659 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -73,6 +73,8 @@ int memcmp_pages(struct page *page1, struct page *page2)
{
char *addr1, *addr2;
int ret;
+ bool page1_tagged = page_mte_tagged(page1) && !is_zero_page(page1);
+ bool page2_tagged = page_mte_tagged(page2) && !is_zero_page(page2);
addr1 = page_address(page1);
addr2 = page_address(page2);
@@ -83,11 +85,10 @@ int memcmp_pages(struct page *page1, struct page *page2)
/*
* If the page content is identical but at least one of the pages is
- * tagged, return non-zero to avoid KSM merging. If only one of the
- * pages is tagged, __set_ptes() may zero or change the tags of the
- * other page via mte_sync_tags().
+ * tagged, return non-zero to avoid KSM merging. Ignore the zero page
+ * since it is always tagged with the tags cleared.
*/
- if (page_mte_tagged(page1) || page_mte_tagged(page2))
+ if (page1_tagged || page2_tagged)
return addr1 != addr2;
return ret;
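The effect of the hunk above can be modeled in a small userspace sketch. This is not the kernel API; `struct fake_page` and its two flags are hypothetical stand-ins for `page_mte_tagged()` and `is_zero_page()`, used only to show why identical zero-filled content now compares equal against the zero page but not against a genuinely tagged page:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for struct page: only the fields the sketch needs. */
struct fake_page {
	bool mte_tagged;   /* models page_mte_tagged() */
	bool is_zero_page; /* models is_zero_page() */
	char data[16];     /* models the page contents */
};

/*
 * Models the patched memcmp_pages(): identical content still compares
 * non-equal if either page carries MTE tags, unless the tagged page is
 * the zero page, whose tags are always clear.
 */
static int model_memcmp_pages(struct fake_page *p1, struct fake_page *p2)
{
	bool p1_tagged = p1->mte_tagged && !p1->is_zero_page;
	bool p2_tagged = p2->mte_tagged && !p2->is_zero_page;
	int ret = memcmp(p1->data, p2->data, sizeof(p1->data));

	if (p1_tagged || p2_tagged)
		return 1; /* kernel returns addr1 != addr2; non-zero blocks merging */
	return ret;
}
```

With the old check, the first comparison below would have returned non-zero (the zero page has PG_mte_tagged set), defeating merging of zero-filled pages.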

Unrelated to this discussion, I got an internal report that Linux hangs
during boot with CONFIG_DEFERRED_STRUCT_PAGE_INIT because
try_page_mte_tagging() locks up on uninitialised page flags.

Since we (always?) map the zero page as pte_special(), set_pte_at()
won't check if the tags have to be initialised, so we can skip the
PG_mte_tagged altogether. We actually had this code for some time until
we introduced the pte_special() check in set_pte_at().

So here's an alternative patch that also fixes the deferred struct page
init (on the assumption that the zero page is always mapped as
pte_special()):

I can confirm that this alternative patch also works correctly; my tests
for MTE all pass ;)

This looks like a better fix since it solves the boot hang issue too.


diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7b78c95a9017..e325ba34f45c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2419,17 +2419,21 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
#ifdef CONFIG_ARM64_MTE
static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
{
+ static bool cleared_zero_page = false;
+
sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ATA | SCTLR_EL1_ATA0);
mte_cpu_setup();
/*
* Clear the tags in the zero page. This needs to be done via the
- * linear map which has the Tagged attribute.
+ * linear map which has the Tagged attribute. Since this page is
+ * always mapped as pte_special(), set_pte_at() will not attempt to
+ * clear the tags or set PG_mte_tagged.
*/
- if (try_page_mte_tagging(ZERO_PAGE(0))) {
+ if (!cleared_zero_page) {
+ cleared_zero_page = true;
mte_clear_page_tags(lm_alias(empty_zero_page));
- set_page_mte_tagged(ZERO_PAGE(0));
}
kasan_init_hw_tags_cpu();
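The control flow of the replacement hunk is a run-once guard: cpu_enable_mte() is invoked on each CPU as it comes up, but the zero page's tags only need clearing once, and the guard avoids touching PG_mte_tagged (whose page flags may not be initialised yet under CONFIG_DEFERRED_STRUCT_PAGE_INIT). A minimal userspace sketch of the pattern, with a hypothetical counter standing in for mte_clear_page_tags() (note the real code additionally relies on the enable callbacks being serialized, which this sketch does not demonstrate):

```c
#include <stdbool.h>

/* Counts invocations of the stand-in for mte_clear_page_tags(). */
static int clear_calls;

/* Models the patched cpu_enable_mte(): per-CPU entry, one-time work. */
static void model_cpu_enable_mte(void)
{
	static bool cleared_zero_page = false;

	if (!cleared_zero_page) {
		cleared_zero_page = true;
		clear_calls++; /* stands in for mte_clear_page_tags(lm_alias(...)) */
	}
}
```

However many CPUs run the enable path, the zero-page tags are cleared exactly once, and no page flag is read or written in the process.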