Re: [v4 PATCH 1/2] hugetlb: arm64: add mte support

From: Yang Shi
Date: Fri Sep 13 2024 - 13:49:50 EST




On 9/13/24 10:13 AM, Catalin Marinas wrote:
On Thu, Sep 12, 2024 at 01:41:28PM -0700, Yang Shi wrote:
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..c8687ccc2633 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -18,17 +18,41 @@ void copy_highpage(struct page *to, struct page *from)
 {
         void *kto = page_address(to);
         void *kfrom = page_address(from);
+        struct folio *src = page_folio(from);
+        struct folio *dst = page_folio(to);
+        unsigned int i, nr_pages;
 
         copy_page(kto, kfrom);
 
         if (kasan_hw_tags_enabled())
                 page_kasan_tag_reset(to);
 
-        if (system_supports_mte() && page_mte_tagged(from)) {
-                /* It's a new page, shouldn't have been tagged yet */
-                WARN_ON_ONCE(!try_page_mte_tagging(to));
-                mte_copy_page_tags(kto, kfrom);
-                set_page_mte_tagged(to);
+        if (system_supports_mte()) {
+                if (folio_test_hugetlb(src) &&
+                    folio_test_hugetlb_mte_tagged(src)) {
+                        if (!try_folio_hugetlb_mte_tagging(dst))
+                                return;
+
+                        /*
+                         * Populate tags for all subpages.
+                         *
+                         * Don't assume the first page is head page since
+                         * huge page copy may start from any subpage.
+                         */
+                        nr_pages = folio_nr_pages(src);
+                        for (i = 0; i < nr_pages; i++) {
+                                kfrom = page_address(folio_page(src, i));
+                                kto = page_address(folio_page(dst, i));
+                                mte_copy_page_tags(kto, kfrom);
+                        }
+                        folio_set_hugetlb_mte_tagged(dst);
+                } else if (page_mte_tagged(from)) {
+                        /* It's a new page, shouldn't have been tagged yet */
+                        WARN_ON_ONCE(!try_page_mte_tagging(to));
+
+                        mte_copy_page_tags(kto, kfrom);
+                        set_page_mte_tagged(to);
+                }
         }
 }
A nitpick here: I don't like that much indentation, so just do an early
return if !system_supports_mte() in this function.

Sure.
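
For reference, a rough sketch of how copy_highpage() could end up looking with that early return folded in. This is only an illustration of the restructuring, and it uses the agreed folio_try_hugetlb_mte_tagging() name rather than the v4 spelling, so the actual v5 code may differ:

void copy_highpage(struct page *to, struct page *from)
{
        void *kto = page_address(to);
        void *kfrom = page_address(from);
        struct folio *src = page_folio(from);
        struct folio *dst = page_folio(to);
        unsigned int i, nr_pages;

        copy_page(kto, kfrom);

        if (kasan_hw_tags_enabled())
                page_kasan_tag_reset(to);

        /* Early return drops one level of indentation below. */
        if (!system_supports_mte())
                return;

        if (folio_test_hugetlb(src) && folio_test_hugetlb_mte_tagged(src)) {
                if (!folio_try_hugetlb_mte_tagging(dst))
                        return;

                /*
                 * Populate tags for all subpages. Don't assume the first
                 * page is the head page since huge page copy may start
                 * from any subpage.
                 */
                nr_pages = folio_nr_pages(src);
                for (i = 0; i < nr_pages; i++) {
                        kfrom = page_address(folio_page(src, i));
                        kto = page_address(folio_page(dst, i));
                        mte_copy_page_tags(kto, kfrom);
                }
                folio_set_hugetlb_mte_tagged(dst);
        } else if (page_mte_tagged(from)) {
                /* It's a new page, shouldn't have been tagged yet */
                WARN_ON_ONCE(!try_page_mte_tagging(to));

                mte_copy_page_tags(kto, kfrom);
                set_page_mte_tagged(to);
        }
}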


Otherwise the patch looks fine to me. I agree with David's point on an earlier version of this patch that the naming of these functions isn't great. So, as per David's suggestion (at least for the first two):

folio_test_hugetlb_mte_tagged()
folio_set_hugetlb_mte_tagged()
folio_try_hugetlb_mte_tagging()

I already incorporated the first two in this version, but kept try_folio_hugetlb_mte_tagging(). I will change it to folio_try_hugetlb_mte_tagging().
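
For context, the folio-level test/set helpers would presumably mirror the existing per-page helpers, roughly along these lines. This is a sketch only: the barrier placement copies page_mte_tagged()/set_page_mte_tagged(), and the exact code in the patch may differ:

static inline bool folio_test_hugetlb_mte_tagged(struct folio *folio)
{
        bool ret = test_bit(PG_mte_tagged, &folio->flags);

        VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));

        /*
         * If the folio is tagged, ensure ordering with a likely
         * subsequent read of the tags.
         */
        if (ret)
                smp_rmb();
        return ret;
}

static inline void folio_set_hugetlb_mte_tagged(struct folio *folio)
{
        VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));

        /*
         * Ensure that the tags written prior to this function are
         * visible before the folio flag update.
         */
        smp_wmb();
        set_bit(PG_mte_tagged, &folio->flags);
}

Since the flag lives on the head page of the hugetlb folio and covers the whole folio, copy_highpage() has to copy the tags for every subpage once the flag is seen, which is what the loop in the patch does.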

I will spin a new version and send it out soon, since the change is trivial and I'm traveling to LPC on Monday.


As for "try" vs "test_and_set_.*_lock", the original name was picked to
mimic spin_trylock() since this function is waiting/spinning. It's not
great but the alternative naming is closer to test_and_set_bit_lock().
This has different behaviour, it only sets a bit with acquire semantics,
no waiting/spinning.
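
To make the distinction concrete, the existing per-page helper behaves roughly like the following (paraphrased from arch/arm64/include/asm/mte.h, so treat it as a sketch rather than verbatim source):

static inline bool try_page_mte_tagging(struct page *page)
{
        /* The winner of the race gets to initialise the tags. */
        if (!test_and_set_bit(PG_mte_lock, &page->flags))
                return true;

        /*
         * Someone else is initialising (or has initialised) the tags:
         * wait until PG_mte_tagged is observed, with acquire semantics
         * so that subsequent tag reads are ordered after the flag read.
         */
        smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));

        return false;
}

A bare test_and_set_bit_lock(), by contrast, would only cover the first step: atomically set the bit with acquire ordering and report whether it was already set, without the smp_cond_load_acquire() wait.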