Re: [PATCH] habanalabs: Elide a warning on 32-bit targets
From: Oded Gabbay
Date: Fri Apr 01 2022 - 14:14:23 EST
On Fri, Apr 1, 2022 at 7:41 PM Palmer Dabbelt <palmer@xxxxxxxxxxxx> wrote:
>
> From: Palmer Dabbelt <palmer@xxxxxxxxxxxx>
>
> This double-cast pattern looks a bit awkward, but it already exists
> elsewhere in the driver. Without this patch I get
>
> drivers/misc/habanalabs/common/memory.c: In function ‘alloc_device_memory’:
> drivers/misc/habanalabs/common/memory.c:153:49: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
>   153 |                                 (u64) gen_pool_dma_alloc_align(vm->dram_pg_pool,
>       |                                 ^
>
> which ends up promoted to a build error in my test setup.
>
> Signed-off-by: Palmer Dabbelt <palmer@xxxxxxxxxxxx>
>
> ---
>
> I don't know anything about this driver; I'm just pattern-matching the
> warning away.
> ---
> drivers/misc/habanalabs/common/memory.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
> index e008d82e4ba3..f1fc79c1fc10 100644
> --- a/drivers/misc/habanalabs/common/memory.c
> +++ b/drivers/misc/habanalabs/common/memory.c
> @@ -150,12 +150,12 @@ static int alloc_device_memory(struct hl_ctx *ctx, struct hl_mem_in *args,
>          for (i = 0 ; i < num_pgs ; i++) {
>                  if (is_power_of_2(page_size))
>                          phys_pg_pack->pages[i] =
> -                                (u64) gen_pool_dma_alloc_align(vm->dram_pg_pool,
> -                                                page_size, NULL,
> -                                                page_size);
> +                                (u64) (uintptr_t) gen_pool_dma_alloc_align(vm->dram_pg_pool,
> +                                                page_size, NULL,
> +                                                page_size);
>                  else
> -                        phys_pg_pack->pages[i] = (u64) gen_pool_alloc(vm->dram_pg_pool,
> -                                                        page_size);
> +                        phys_pg_pack->pages[i] = (u64) (uintptr_t) gen_pool_alloc(vm->dram_pg_pool,
> +                                                        page_size);
>                  if (!phys_pg_pack->pages[i]) {
>                          dev_err(hdev->dev,
>                                  "Failed to allocate device memory (out of memory)\n");
> --
> 2.34.1
>
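For anyone hitting the same warning, here is a minimal standalone
sketch of why the intermediate (uintptr_t) cast is quiet where the
direct (u64) cast is not. The allocator below is hypothetical and
exists only to make the sketch self-contained; it is not part of
the driver.

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for an allocator that returns a pointer,
 * the way gen_pool_dma_alloc_align() does in the driver. */
static void *fake_pool_alloc(size_t size)
{
        return malloc(size);
}

uint64_t page_addr(void)
{
        void *p = fake_pool_alloc(4096);

        /* On a 32-bit target a pointer is 4 bytes wide, so a direct
         * (uint64_t) cast converts between a pointer and an integer
         * of a different size, and GCC warns (-Wpointer-to-int-cast).
         * Casting to uintptr_t first is a same-size pointer-to-integer
         * conversion, and widening uintptr_t to uint64_t is a plain
         * integer conversion, so neither step draws the warning. */
        return (uint64_t)(uintptr_t)p;
}

Note that gen_pool_alloc() returns unsigned long rather than a
pointer, so only the gen_pool_dma_alloc_align() call actually
triggers the warning; the second double cast in the patch is there
for consistency.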
This patch is:
Reviewed-by: Oded Gabbay <ogabbay@xxxxxxxxxx>
Greg,
Could you please apply this directly to your misc tree and send it to
Linus in your next pull request?
I don't have any other fixes pending for 5.18.
For 5.19 we will implement a more elegant solution that Arnd has recommended.
Thanks,
Oded