Re: [RESEND PATCH 1/2] dma/pool: Use vmap() address for memory encryption helpers on ARM64

From: Catalin Marinas
Date: Mon Aug 11 2025 - 13:26:20 EST


On Sun, Aug 10, 2025 at 07:50:34PM -0500, Shanker Donthineni wrote:
> In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
> are currently called with page_to_virt(page). On ARM64 with
> CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
> page_to_virt(page) does not reference the actual mapped region.
>
> Using this incorrect address can cause encryption attribute updates to
> be applied to the wrong memory region. On ARM64 systems with memory
> encryption enabled (e.g. CCA), this can lead to data corruption or
> crashes.
>
> Fix this by using the vmap() address ('addr') on ARM64 when invoking
> the memory encryption helpers, while retaining the existing
> page_to_virt(page) usage for other architectures.
>
> Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
> Signed-off-by: Shanker Donthineni <sdonthineni@xxxxxxxxxx>
> ---
> kernel/dma/pool.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 7b04f7575796b..ba08a301590fd 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  {
>  	unsigned int order;
>  	struct page *page = NULL;
> +	void *vaddr;
>  	void *addr;
>  	int ret = -ENOMEM;
>
> @@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
>  	 * shrink so no re-encryption occurs in dma_direct_free().
>  	 */
> -	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
> -				   1 << order);
> +	vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);
> +	ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);

At least with arm CCA, there are two aspects to setting memory
encrypted/decrypted: an RMM (Realm Management Monitor) call and setting
the attributes of the stage 1 mapping. The RMM call doesn't care about
the virtual address, only the (intermediate) physical address, so
having page_to_virt(page) here is fine.
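
To make that split concrete, the arm64 side would look roughly like
this (an illustrative sketch only, not the actual arch/arm64 code;
rsi_set_range_shared() and change_stage1_prot() are stand-in names for
the real helpers):

	/* Rough sketch of set_memory_decrypted() in a CCA realm guest. */
	int set_memory_decrypted(unsigned long addr, int numpages)
	{
		/* assumes a linear map address */
		phys_addr_t pa = __virt_to_phys(addr);

		/*
		 * First aspect: the RMM call. Only the (intermediate)
		 * physical address matters, so page_to_virt(page) is fine.
		 */
		rsi_set_range_shared(pa, numpages * PAGE_SIZE);	/* stand-in name */

		/*
		 * Second aspect: flip the "decrypted" attribute (top bit
		 * of the IPA space) in the stage 1 mapping of this alias.
		 */
		return change_stage1_prot(addr, numpages);	/* stand-in name */
	}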

The second part is setting the (fake) attribute for this mapping (top
bit of the IPA space). Can we not instead just call:

	addr = dma_common_contiguous_remap(page, pool_size,
			pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)),
			__builtin_return_address(0));

in the atomic pool code? The advantage is that we keep the
set_memory_decrypted() call on the linear map so that we change its
attributes as well.
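
Roughly (untested, just to show where that would slot into
atomic_pool_expand(), with the set_memory_decrypted() argument left
untouched):

#ifdef CONFIG_DMA_DIRECT_REMAP
	/* Create the vmap alias with the decrypted attribute from the start. */
	addr = dma_common_contiguous_remap(page, pool_size,
			pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)),
			__builtin_return_address(0));
	if (!addr)
		goto free_page;
#else
	addr = page_to_virt(page);
#endif
	/*
	 * Keep operating on the linear map address so that its attributes
	 * are updated as well.
	 */
	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
				   1 << order);
	if (ret)
		goto remove_mapping;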

I want to avoid walking the page tables for vmap regions in the arm64
set_memory_* implementation if possible. At some point I was proposing
a GFP_DECRYPTED flag for allocations but never got around to posting a
patch (and implementing vmalloc() support):

https://lore.kernel.org/linux-arm-kernel/ZmNJdSxSz-sYpVgI@xxxxxxx/
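
With such a flag the pool allocation itself could come back already
decrypted and the explicit set_memory_decrypted() call would go away,
i.e. something like (purely hypothetical, the flag only exists in that
RFC):

	page = alloc_pages(gfp | GFP_DECRYPTED, order);	/* hypothetical flag */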

--
Catalin