On 21.08.24 20:47, Yang Shi wrote:
Enable MTE support for hugetlb.
The MTE page flags are set on the head page only. When copying a
hugetlb folio, the tags for all tail pages are copied along with the
head page.
When freeing a hugetlb folio, the MTE flags are cleared.
Signed-off-by: Yang Shi <yang@xxxxxxxxxxxxxxxxxxxxxx>
---
arch/arm64/include/asm/hugetlb.h | 11 ++++++++++-
arch/arm64/include/asm/mman.h | 3 ++-
arch/arm64/kernel/hibernate.c | 7 +++++++
arch/arm64/kernel/mte.c | 25 +++++++++++++++++++++++--
arch/arm64/kvm/guest.c | 16 +++++++++++++---
arch/arm64/kvm/mmu.c | 11 +++++++++++
arch/arm64/mm/copypage.c | 25 +++++++++++++++++++++++--
fs/hugetlbfs/inode.c | 2 +-
8 files changed, 90 insertions(+), 10 deletions(-)
v2: * Reimplemented the patch to address Catalin's review comments.
    * Added test cases (patch #2) per Catalin.
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 293f880865e8..00a1f75d40ee 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -11,6 +11,7 @@
#define __ASM_HUGETLB_H
#include <asm/cacheflush.h>
+#include <asm/mte.h>
#include <asm/page.h>
#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
@@ -20,7 +21,15 @@ extern bool arch_hugetlb_migration_supported(struct hstate *h);
static inline void arch_clear_hugetlb_flags(struct folio *folio)
{
- clear_bit(PG_dcache_clean, &folio->flags);
+ const unsigned long clear_flags = BIT(PG_dcache_clean) |
+ BIT(PG_mte_tagged) | BIT(PG_mte_lock);
+
+ if (!system_supports_mte()) {
+ clear_bit(PG_dcache_clean, &folio->flags);
+ return;
+ }
+
+ folio->flags &= ~clear_flags;
}
#define arch_clear_hugetlb_flags arch_clear_hugetlb_flags
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index 5966ee4a6154..304dfc499e68 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -28,7 +28,8 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
* backed by tags-capable memory. The vm_flags may be overridden by a
* filesystem supporting MTE (RAM-based).
*/
- if (system_supports_mte() && (flags & MAP_ANONYMOUS))
+ if (system_supports_mte() &&
+ (flags & (MAP_ANONYMOUS | MAP_HUGETLB)))
return VM_MTE_ALLOWED;
return 0;
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 02870beb271e..722e76f29141 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -266,10 +266,17 @@ static int swsusp_mte_save_tags(void)
max_zone_pfn = zone_end_pfn(zone);
for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
struct page *page = pfn_to_online_page(pfn);
+ struct folio *folio;
if (!page)
continue;
+ folio = page_folio(page);
+
+ if (folio_test_hugetlb(folio) &&
+ !page_mte_tagged(&folio->page))
+ continue;
Can we have a folio_test_mte_tagged() and, for now, make sure that it is only used on hugetlb folios (VM_WARN_ON_ONCE), and conversely that nobody uses page_mte_tagged() on hugetlb folios (VM_WARN_ON_ONCE)?
Same for folio_set_mte_tagged() and other functions. We could slap a "hugetlb" into the function names, but maybe in the future we'll be able to use a single flag per folio (I know, it's complicated ...).