RE: [EXT] Re: [PATCH] MA-21654 Use dma_alloc_pages in vb2_dma_sg_alloc_compacted
From: Hui Fang
Date: Wed Sep 20 2023 - 06:02:44 EST
On Wed, Sep 20, 2023 at 15:41 Tomasz Figa <tfiga@xxxxxxxxxxxx> wrote:
> Is CONFIG_ZONE_DMA32 really the factor that triggers the problem? My
> understanding was that the problem was that the hardware has 32-bit DMA,
> but the system has physical memory at addresses beyond the first 4G.
Yes, you are right. But CONFIG_ZONE_DMA32 still affects whether swiotlb_init_remap() actually sets up the bounce buffer, because it determines arm64_dma_phys_limit.
In arch/arm64/mm/init.c
static void __init zone_sizes_init(void)
{
	......
#ifdef CONFIG_ZONE_DMA32
	max_zone_pfns[ZONE_DMA32] = disable_dma32 ? 0 : PFN_DOWN(dma32_phys_limit);
	if (!arm64_dma_phys_limit)
		arm64_dma_phys_limit = dma32_phys_limit;
#endif
	......
}
void __init mem_init(void)
{
	......
	swiotlb_init(max_pfn > PFN_DOWN(arm64_dma_phys_limit), SWIOTLB_VERBOSE);
	......
}
In kernel/dma/swiotlb.c
void __init swiotlb_init(bool addressing_limit, unsigned int flags)
{
	swiotlb_init_remap(addressing_limit, flags, NULL);
}

void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
		int (*remap)(void *tlb, unsigned long nslabs))
{
	struct io_tlb_mem *mem = &io_tlb_default_mem;
	unsigned long nslabs;
	size_t alloc_size;
	size_t bytes;
	void *tlb;

	if (!addressing_limit && !swiotlb_force_bounce)
		return;
	......
}
Also, thanks for your suggestion; I will refine my patch accordingly.
BRs,
Fang Hui