Re: [PATCH v3 3/3] dma-direct: use RAM start to offset zone_dma_limit

From: Catalin Marinas
Date: Wed Jul 31 2024 - 13:34:11 EST


On Mon, Jul 29, 2024 at 01:51:26PM +0300, Baruch Siach wrote:
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 410a7b40e496..ded3d841c88c 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -12,6 +12,7 @@
>  #include <linux/set_memory.h>
>  #include <linux/slab.h>
>  #include <linux/workqueue.h>
> +#include <linux/memblock.h>
>
>  static struct gen_pool *atomic_pool_dma __ro_after_init;
>  static unsigned long pool_size_dma;
> @@ -70,7 +71,7 @@ static bool cma_in_zone(gfp_t gfp)
>  	/* CMA can't cross zone boundaries, see cma_activate_area() */
>  	end = cma_get_base(cma) + size - 1;
>  	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
> -		return end <= zone_dma_limit;
> +		return end <= memblock_start_of_DRAM() + zone_dma_limit;

I think this patch is entirely wrong. After the previous patch,
zone_dma_limit is already a physical/CPU address, not some offset or
range: of_dma_get_max_cpu_address() returns the absolute physical
address. Adding memblock_start_of_DRAM() to it makes no sense, since
that counts the start of DRAM twice. Adding the offset made sense when
we had zone_dma_bits, but since we are moving away from bitmasks to
absolute CPU addresses, zone_dma_limit already includes the start of
DRAM.

What problems do you see without this patch? Is it because
DMA_BIT_MASK(32) can be lower than zone_dma_limit, as I mentioned on
the previous patch?
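
(Concretely, and with made-up numbers: if RAM starts at 0x1_0000_0000
and the device covers the first GiB of it, zone_dma_limit would be
0x1_3fff_ffff, while DMA_BIT_MASK(32) is only 0xffff_ffff, i.e. below
the start of RAM altogether.)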

--
Catalin