Re: [RFC PATCH v3 3/5] dma-mapping: Decrypt memory on remap
From: Aneesh Kumar K.V
Date: Tue Apr 14 2026 - 05:36:56 EST
Mostafa Saleh <smostafa@xxxxxxxxxx> writes:
> When memory needs to be remapped on systems with
> force_dma_unencrypted(), and that memory is not allocated from a
> restricted-dma pool, the decryption was skipped entirely: only the
> decrypted pgprot was set on the remapped alias.
>
> The memory still needs to be decrypted in that case.
>
For ARM CCA, we cannot mark a vmap address as decrypted. I don’t expect
non-coherent DMA devices to be used in an ARM CCA configuration, but we
may need a way to document this in the code.
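Something like the below is roughly what I have in mind, as a sketch
inside dma_direct_alloc() once this patch is applied. To be clear,
arch_vmap_can_set_decrypted() is a made-up hook here, just to
illustrate the shape, not an existing API:

	/*
	 * Sketch only: on ARM CCA a vmap'ed alias cannot be marked
	 * decrypted, so an allocation that needs both a remap and
	 * decryption cannot be satisfied.
	 * arch_vmap_can_set_decrypted() is hypothetical.
	 */
	if (remap && decrypt && !arch_vmap_can_set_decrypted()) {
		dev_warn_once(dev, "can't remap decrypted memory on this platform\n");
		return NULL;
	}

Whether an arch hook like this or just a comment is the right vehicle
is an open question.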
>
> With memory decryption, disallow highmem allocations; that shouldn't
> be a problem on such modern systems.
>
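For reference, my understanding of why highmem has to be excluded is
that set_memory_decrypted() needs a kernel virtual address, which
highmem pages may not have. Illustrative fragment only, not part of
the patch:

	/*
	 * set_memory_decrypted() takes a kernel virtual address, and
	 * page_address() is NULL for highmem pages that aren't kmapped,
	 * so only lowmem pages can be decrypted in place.
	 */
	void *vaddr = page_address(page);	/* NULL for unmapped highmem */

	if (!vaddr)
		return -EINVAL;	/* can't decrypt a highmem page in place */
	return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));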
> Also, move the force_dma_unencrypted() check out of dma_set_* to make
> it possible to use more generic logic to decide the memory state.
>
> Reported-by: Catalin Marinas <catalin.marinas@xxxxxxx>
> Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
> Signed-off-by: Mostafa Saleh <smostafa@xxxxxxxxxx>
> ---
> kernel/dma/direct.c | 31 ++++++++++++++-----------------
> 1 file changed, 14 insertions(+), 17 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index ce74f213ec40..de63e0449700 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -79,8 +79,6 @@ bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
>
> static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
> {
> - if (!force_dma_unencrypted(dev))
> - return 0;
> return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
> }
>
> @@ -88,8 +86,6 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
> {
> int ret;
>
> - if (!force_dma_unencrypted(dev))
> - return 0;
> ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
> if (ret)
> pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
> @@ -206,7 +202,7 @@ static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
> void *dma_direct_alloc(struct device *dev, size_t size,
> dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
> {
> - bool remap = false, set_uncached = false, encrypt = false;
> + bool remap = false, set_uncached = false, decrypt = force_dma_unencrypted(dev);
> struct page *page;
> void *ret;
>
> @@ -215,7 +211,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> gfp |= __GFP_NOWARN;
>
> if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> - !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev))
> + !decrypt && !is_swiotlb_for_alloc(dev))
> return dma_direct_alloc_no_mapping(dev, size, dma_handle, gfp);
>
> if (!dev_is_dma_coherent(dev)) {
> @@ -249,12 +245,15 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> * Remapping or decrypting memory may block, allocate the memory from
> * the atomic pools instead if we aren't allowed block.
> */
> - if ((remap || force_dma_unencrypted(dev)) &&
> + if ((remap || decrypt) &&
> dma_direct_use_pool(dev, gfp))
> return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
> - /* we always manually zero the memory once we are done */
> - page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
> + /*
> + * we always manually zero the memory once we are done, and only allow
> + * highmem if the pages don't need decryption.
> + */
> + page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, !decrypt);
> if (!page)
> return NULL;
>
> @@ -268,10 +267,12 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> set_uncached = false;
> }
>
> + if (decrypt && dma_set_decrypted(dev, page_address(page), size))
> + goto out_leak_pages;
> if (remap) {
> pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
>
> - if (force_dma_unencrypted(dev))
> + if (decrypt)
> prot = pgprot_decrypted(prot);
>
> /* remove any dirty cache lines on the kernel alias */
> @@ -281,11 +282,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> ret = dma_common_contiguous_remap(page, size, prot,
> __builtin_return_address(0));
> if (!ret)
> - goto out_free_pages;
> + goto out_encrypt_pages;
> } else {
> ret = page_address(page);
> - if (dma_set_decrypted(dev, ret, size))
> - goto out_leak_pages;
> }
>
> memset(ret, 0, size);
> @@ -301,9 +300,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> return ret;
>
> out_encrypt_pages:
> - encrypt = true;
> -out_free_pages:
> - __dma_direct_free_pages(dev, page, size, encrypt);
> + __dma_direct_free_pages(dev, page, size, decrypt);
> return NULL;
> out_leak_pages:
> return NULL;
> @@ -366,7 +363,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
> return NULL;
>
> ret = page_address(page);
> - if (dma_set_decrypted(dev, ret, size))
> + if (force_dma_unencrypted(dev) && dma_set_decrypted(dev, ret, size))
> goto out_leak_pages;
> memset(ret, 0, size);
> *dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
> --
> 2.53.0.1213.gd9a14994de-goog