Re: [PATCH RESEND] drm/virtio: Align host mapping request to maximum platform page size
From: Dmitry Osipenko
Date: Fri Jan 24 2025 - 17:52:12 EST
On 1/25/25 01:01, Sasha Finkelstein via B4 Relay wrote:
> From: Sasha Finkelstein <fnkl.kernel@xxxxxxxxx>
>
> This allows running different page sizes between host and guest on
> platforms that support mixed page sizes.
>
> Signed-off-by: Sasha Finkelstein <fnkl.kernel@xxxxxxxxx>
> ---
> drivers/gpu/drm/virtio/virtgpu_vram.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
> index 25df81c027837c248a746e41856b5aa7e216b8d5..8a0577c2170ec9c12cad12be57f9a41c14f04660 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vram.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
> @@ -138,6 +138,12 @@ bool virtio_gpu_is_vram(struct virtio_gpu_object *bo)
> return bo->base.base.funcs == &virtio_gpu_vram_funcs;
> }
>
> +#if defined(__powerpc64__) || defined(__aarch64__) || defined(__mips__) || defined(__loongarch__)
> +#define MAX_PAGE_SIZE 65536
#define MAX_PAGE_SIZE SZ_64K
would read better here.
> +#else
> +#define MAX_PAGE_SIZE PAGE_SIZE
> +#endif
> +
> static int virtio_gpu_vram_map(struct virtio_gpu_object *bo)
> {
> int ret;
> @@ -150,8 +156,8 @@ static int virtio_gpu_vram_map(struct virtio_gpu_object *bo)
> return -EINVAL;
>
> spin_lock(&vgdev->host_visible_lock);
> - ret = drm_mm_insert_node(&vgdev->host_visible_mm, &vram->vram_node,
> - bo->base.base.size);
> + ret = drm_mm_insert_node_generic(&vgdev->host_visible_mm, &vram->vram_node,
> + bo->base.base.size, MAX_PAGE_SIZE, 0, 0);
This change only reserves extra space in the memory allocator, but
doesn't change the actual size of the allocated BO. Instead, you likely
need to replace all the ALIGN(size, PAGE_SIZE) occurrences in the driver
code with ALIGN(args->size, MAX_PAGE_SIZE).
> spin_unlock(&vgdev->host_visible_lock);
>
> if (ret)
Note: in case you haven't seen it, a new virtio-gpu parameter was
previously proposed to expose the host's page size to the guest [1].
[1] https://lore.kernel.org/dri-devel/20240723114914.53677-1-slp@xxxxxxxxxx/
Aligning the GEM's size to 64K could indeed be a good enough immediate
solution. I don't see any obvious problems with it, other than the
potential size overhead for small BOs.
We have been running into cases where a DXVK game allocates enormous
numbers of small BOs, to the point that it hits the maximum number of
mappings on QEMU with the amdgpu native context. On the other hand, this
showed that adding an internal sub-allocator to RADV might be a
worthwhile effort. This patch doesn't change the alignment on x86, and
on non-x86 the increased size likely won't be noticeable, hence the
proposed change might be acceptable.
Curious what Rob Clark thinks about it. Rob, WDYT?
--
Best regards,
Dmitry