Re: [PATCH v15 18/19] drm/etnaviv: Allow userspace specify the domain of etnaviv GEM buffer object

From: Lucas Stach
Date: Tue Oct 01 2024 - 10:51:13 EST


Am Sonntag, dem 08.09.2024 um 17:43 +0800 schrieb Sui Jingfeng:
> Otherwise we don't know where an etnaviv GEM buffer object should be
> placed when we create it from userspace.
>
> Signed-off-by: Sui Jingfeng <sui.jingfeng@xxxxxxxxx>
> ---
> drivers/gpu/drm/etnaviv/etnaviv_drv.c | 9 +++++++++
> include/uapi/drm/etnaviv_drm.h | 12 ++++++++++++
> 2 files changed, 21 insertions(+)
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
> index f10661fe079f..cdc62f64b200 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
> @@ -331,11 +331,20 @@ static int etnaviv_ioctl_gem_new(struct drm_device *dev, void *data,
> struct drm_file *file)
> {
> struct drm_etnaviv_gem_new *args = data;
> + u32 domain;
> +
> + domain = args->flags & ETNA_BO_DOMAIN_MASK;
> +
> + args->flags &= ~ETNA_BO_DOMAIN_MASK;

This is not proper input validation, as it would accept data in the
domain mask range that doesn't correspond to valid flags. You need to
add your new valid flag bits to the check below.

>
> if (args->flags & ~(ETNA_BO_CACHED | ETNA_BO_WC | ETNA_BO_UNCACHED |
> ETNA_BO_FORCE_MMU))
> return -EINVAL;
>
> + if (domain == ETNA_BO_PL_VRAM)
> + return etnaviv_gem_new_vram(dev, file, args->size,
> + args->flags, &args->handle);
> +
> return etnaviv_gem_new_handle(dev, file, args->size,
> args->flags, &args->handle);
> }
> diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h
> index 61eaa8cd0f5e..00e778c9d312 100644
> --- a/include/uapi/drm/etnaviv_drm.h
> +++ b/include/uapi/drm/etnaviv_drm.h
> @@ -99,6 +99,18 @@ struct drm_etnaviv_param {
> /* map flags */
> #define ETNA_BO_FORCE_MMU 0x00100000
>
> +/* domain (placement) flags */
> +#define ETNA_BO_DOMAIN_MASK 0x00f00000

How does this work? Has this been tested? This define masks different
bits (20-23) than the placement flags defined below (bits 24-27), so
flags & ETNA_BO_DOMAIN_MASK can never match any ETNA_BO_PL_* value.
>
> +
> +/* CPU accessible, GPU accessible pages in dedicated VRAM */
> +#define ETNA_BO_PL_VRAM 0x01000000

Other drivers call this the visible VRAM range.

> +/* CPU accessible, GPU accessible pages in SHMEM */
> +#define ETNA_BO_PL_GTT 0x02000000
> +/* Userspace allocated memory, at least CPU accessible */
> +#define ETNA_BO_PL_USERPTR 0x08000000

How is this a valid placement? If it's userspace allocated memory, the
driver has no influence on placement. All it can do is to pin the pages
and set up a GART mapping.

> +/* GPU accessible but CPU not accessible private VRAM pages */
> +#define ETNA_BO_PL_PRIV 0x04000000
> +

VRAM_INVISIBLE would be a more descriptive name for the flag than PRIV.

However I'm not sure if we can make the distinction between visible and
invisible VRAM at allocation time. What needs to be CPU visible may
change over the runtime of the workload, which is why real TTM drivers
can migrate BOs in and out of the visible region.

Regards,
Lucas

> struct drm_etnaviv_gem_new {
> __u64 size; /* in */
> __u32 flags; /* in, mask of ETNA_BO_x */