[PATCH V9 35/45] memremap_pages: Introduce pgmap_protection_available()

From: Ira Weiny
Date: Thu Mar 10 2022 - 12:23:03 EST


From: Ira Weiny <ira.weiny@xxxxxxxxx>

PMEM will flag a request for additional dev_pagemap protection through
(struct dev_pagemap)->flags. However, it is more efficient for callers
to know up front whether that protection is available than to request
it and have the mapping fail.

Define pgmap_protection_available() to check whether the protection is
available before it is requested. The name pgmap_protection_available()
was chosen specifically to isolate the implementation of the protection
from higher-level users.
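
For illustration only (the real consumer is wired up later in the
series), a driver could gate its request on this check roughly as in
the sketch below. The PGMAP_PROTECTION flag name and the surrounding
helper are assumptions for the sketch, not part of this patch:

#include <linux/mm.h>
#include <linux/memremap.h>

/* Hypothetical driver helper; only pgmap_protection_available() comes
 * from this patch.
 */
static void pmem_setup_pgmap_protection(struct dev_pagemap *pgmap)
{
	/*
	 * Request the extra protection only when the platform supports
	 * it; requesting it unconditionally would fail the mapping on
	 * systems without support.
	 */
	if (pgmap_protection_available())
		pgmap->flags |= PGMAP_PROTECTION;
}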

Signed-off-by: Ira Weiny <ira.weiny@xxxxxxxxx>

---
Changes for V9
Clean up commit message
From Dan Williams
Make the call chain static inline throughout (this helper and
pks_available()) such that callers call cpu_feature_enabled()
directly (a sketch follows this changelog)

Changes for V8
Split this out to its own patch.
s/pgmap_protection_enabled/pgmap_protection_available
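
To illustrate that inlining note: assuming pks_available() lives in
<linux/pks.h> (which this patch includes from mm.h) and tests the PKS
CPU feature bit added earlier in the series, it would reduce to
something like the sketch below; the exact header and feature-bit name
are assumptions and may differ:

#include <asm/cpufeature.h>

static inline bool pks_available(void)
{
	/* cpu_feature_enabled() is resolved via boot-time patching. */
	return cpu_feature_enabled(X86_FEATURE_PKS);
}

With both helpers static inline, pgmap_protection_available() compiles
down to a single cpu_feature_enabled() test with no out-of-line calls.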
---
include/linux/mm.h | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5744a3fc4716..9ab799403004 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -31,6 +31,7 @@
#include <linux/sizes.h>
#include <linux/sched.h>
#include <linux/pgtable.h>
+#include <linux/pks.h>
#include <linux/kasan.h>

struct mempolicy;
@@ -1143,6 +1144,22 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
}

+#ifdef CONFIG_DEVMAP_ACCESS_PROTECTION
+
+static inline bool pgmap_protection_available(void)
+{
+	return pks_available();
+}
+
+#else
+
+static inline bool pgmap_protection_available(void)
+{
+	return false;
+}
+
+#endif /* CONFIG_DEVMAP_ACCESS_PROTECTION */
+
/* 127: arbitrary random number, small enough to assemble well */
#define folio_ref_zero_or_close_to_overflow(folio) \
((unsigned int) folio_ref_count(folio) + 127u <= 127u)
--
2.35.1