Re: [PATCH v2 3/3] arm64: realm: Use aliased addresses for device DMA to shared buffers

From: Suzuki K Poulose
Date: Tue Feb 25 2025 - 11:18:52 EST


On 25/02/2025 13:04, Robin Murphy wrote:
On 2025-02-19 10:07 pm, Suzuki K Poulose wrote:
When a device performs DMA to a shared buffer using physical addresses
(i.e. without Stage1 translation), the device must use the "{I}PA address"
with the top bit set in a Realm. This is to make sure that a trusted device
is able to write to shared buffers as well as to protected buffers. Thus, a
Realm must always program the full address, including the "protection" bit,
similar to the AMD SME encryption bits.

Enable this by providing arm64-specific dma_{encrypted,decrypted,clear_encryption}
helpers for Realms. Please note that the VMM needs to similarly make sure
that the SMMU Stage2 in the Non-secure world is set up to map the IPA at
the unprotected alias.
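
For illustration only (not part of the patch): assuming prot_ns_shared is
the top bit of a 48-bit IPA space, the protected and unprotected aliases
of the same buffer relate roughly as below (the IPA width and the address
used here are made up for the example).

#include <stdio.h>

#define IPA_WIDTH       48
#define PROT_NS_SHARED  (1ULL << (IPA_WIDTH - 1))  /* assumed top IPA bit */

int main(void)
{
        unsigned long long ipa    = 0x80000000ULL;         /* protected alias */
        unsigned long long shared = ipa | PROT_NS_SHARED;  /* alias a device must DMA to */

        printf("protected: %#llx, shared: %#llx\n", ipa, shared);
        return 0;
}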

Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Robin Murphy <robin.murphy@xxxxxxx>
Cc: Steven Price <steven.price@xxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxx>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@xxxxxxx>
---
  arch/arm64/include/asm/mem_encrypt.h | 22 ++++++++++++++++++++++
  1 file changed, 22 insertions(+)

diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
index f8f78f622dd2..aeda3bba255e 100644
--- a/arch/arm64/include/asm/mem_encrypt.h
+++ b/arch/arm64/include/asm/mem_encrypt.h
@@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
      return is_realm_world();
  }
+static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
+{
+    if (is_realm_world())
+        daddr |= prot_ns_shared;
+    return daddr;
+}
+#define dma_decrypted dma_decrypted
+
+static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
+{
+    if (is_realm_world())
+        daddr &= prot_ns_shared - 1;

Nit: is there a reason this isn't the direct inverse of the other
operation, i.e. "daddr &= ~prot_ns_shared"? If so, it might be worth
dropping a comment why we're doing slightly unintuitive arithmetic on a
pagetable attribute (and if not then maybe just do the more obvious
thing). I doubt anyone's in a rush to support TBI for DMA, and this
would be far from the only potential hiccup for that, but still... :)

It could be. The IPA space for the realm is split in half, with the
lower half protected/encrypted and anything above that unprotected.
Technically any addr >= prot_ns_shared is "unencrypted" (even though it
may be invalid, if >= BIT(IPA_Size)), so to cover that, I masked
everything from the MSB (prot_ns_shared) upwards. But now that I think
of it, it is much better to trigger a Stage2 fault if the address is
illegal (i.e., >= BIT(IPA_Size)) than to corrupt some valid memory by
masking off the top bits (beyond prot_ns_shared).
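
A rough sketch of the difference, assuming (purely for illustration) a
48-bit IPA space with prot_ns_shared at bit 47:

#include <stdio.h>

#define PROT_NS_SHARED  (1ULL << 47)  /* assumed top bit of a 48-bit IPA space */

int main(void)
{
        /* A bogus DMA address with a stray bit above the IPA range */
        unsigned long long bogus = (1ULL << 50) | PROT_NS_SHARED | 0x1000ULL;

        /* Clears only the share bit: the illegal bit 50 survives, so the
         * access takes a Stage2 fault instead of touching valid memory. */
        unsigned long long a = bogus & ~PROT_NS_SHARED;

        /* Masks off everything from bit 47 upwards: the illegal bit is
         * silently dropped and the result aliases a valid protected IPA. */
        unsigned long long b = bogus & (PROT_NS_SHARED - 1);

        printf("~prot_ns_shared: %#llx, prot_ns_shared - 1: %#llx\n", a, b);
        return 0;
}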

Cheers
Suzuki





Thanks,
Robin.

+    return daddr;
+}
+#define dma_encrypted dma_encrypted
+
+static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
+{
+    return dma_encrypted(daddr);
+}
+#define dma_clear_encryption dma_clear_encryption
+
  #endif    /* __ASM_MEM_ENCRYPT_H */