Re: [RFC PATCH] Limit reclaim to avoid TTM desktop stutter under mem pressure
From: Thomas Hellström
Date: Wed Apr 01 2026 - 03:42:42 EST
On Tue, 2026-03-31 at 22:08 -0400, Daniel Colascione wrote:
> TTM seems to be too eager to kick off reclaim while kwin is drawing
>
> I've noticed that in 7.0-rc6, and since at least 6.17, kwin_wayland
> stalls in DRM ioctls to xe when the system is under memory pressure,
> causing missed frames, cursor-movement stutter, and general
> sluggishness. The root cause seems to be synchronous and asynchronous
> reclaim in ttm_pool_alloc_page as TTM tries, and fails, to allocate
> progressively lower-order pages in response to pool-cache misses when
> allocating graphics buffers.
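The progressive fallback described above can be modeled as a simple loop (a toy sketch, not TTM code; `attempts_until_success` and `alloc_ok` are made-up names standing in for the pool's retry loop and for `alloc_pages_node()` succeeding at a given order):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the fallback: try the highest useful order first and
 * halve the order after each failed allocation until order 0. Every
 * failed attempt at order > 0 may kick off compaction and reclaim,
 * which is where the latency accumulates. */
typedef bool (*alloc_ok_fn)(unsigned int order);

unsigned int attempts_until_success(unsigned int max_order,
				    alloc_ok_fn alloc_ok)
{
	unsigned int attempts = 0;
	unsigned int order = max_order;

	for (;;) {
		attempts++;
		if (alloc_ok(order) || order == 0)
			break;
		order--;
	}
	return attempts;
}

/* Badly fragmented memory: only order-0 pages can be allocated. */
bool only_order0(unsigned int order)
{
	return order == 0;
}
```

With memory fragmented enough that only order-0 succeeds, a single buffer fill attempt walks every order down from the maximum, paying the reclaim/compaction cost at each step.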
>
> Memory is fragmented enough that compaction fails (as I can see in
> compact_fail and compact_stall in /proc/vmstat; extfrag says the
> normal pool is unusable for large allocations too). Additionally,
> compaction seems to be emptying the TTM pool, since page_pool in TTM
> debugfs reports that all the buckets are empty while I'm seeing the
> kwin_wayland sluggishness.
>
> In profiles, I see time dominated by copy_pages and clear_pages in
> the TTM paging code. kswapd runs constantly despite the system as a
> whole having plenty of free memory.
>
> I can reproduce the problem on my 32GB-RAM X1C Gen 13 by booting with
> kernelcore=8G (not needed, but it makes the repro happen sooner),
> running a find / >/dev/null (to fragment memory), and doing general
> web browsing. The stalls seem self-perpetuating once they start; they
> persist even after killing the find. I've noticed this stall in
> ordinary use too, even without the kernelcore= zone tweak, but
> without kernelcore it usually takes a while (hours?) after boot for
> memory to become fragmented enough that higher-order allocations
> fail.
>
> The patch below fixes the issue for me. TBC, I'm not sure it's the
> _right_ fix, but it works for me. I'm guessing that even if the
> approach is right, a new module parameter isn't warranted.
>
> With the patch below, when I set my new max_reclaim_order ttm module
> parameter to zero, the kwin_wayland stalls under memory pressure
> stop. (TBC, this setting inhibits sync or async reclaim except for
> order-zero pages.) TTM allocation occurs in latency-critical paths
> (e.g. Wayland frame commit): do you think we _should_ reclaim here?
Could you elaborate on what exactly fixes this? You say that setting
max_reclaim_order to 0 stops the kwin_wayland stalls, but OTOH 0 is
already the parameter's default, so is the fix simply applying the
patch with its default behavior?
>
> BTW, I also tried having xe pass a beneficial order of 9, but it
> didn't help: we end up doing a lot of compaction work below this
> order anyway.
>
> Signed-off-by: Daniel Colascione <dancol@xxxxxxxxxx>
Interesting. The xe bo shrinker is actually splitting pages to avoid
dipping too far into the kernel reserves when swapping stuff out,
perhaps contributing to the fragmentation. Could you check what
happens if you turn that shrinker off by disabling swap? Does that
improve the situation?
sudo /sbin/swapoff -a
Another thing that looks bad: if compaction fails and reclaim starts
shrinking the lower-order pools, we might end up in a pathological
situation where lower-order WC allocations split higher-order pages
that are then immediately reclaimed.
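That cycle can be made concrete with a toy count (made-up helper, not kernel code): if every order-0 WC request has to split a fresh order-k page because the shrinker keeps emptying the pool, the split remainder is never reused.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the pathological loop: serving order-0 requests from
 * splits of order-k pages. If the pool keeps the split remainder, one
 * high-order allocation serves 2^k requests; if the shrinker reclaims
 * the pool after every request, each request costs a fresh high-order
 * allocation (and thus fresh compaction/reclaim work). */
unsigned long highorder_allocs(unsigned long requests, unsigned int order,
			       bool shrinker_empties_pool)
{
	unsigned long allocs = 0, pooled = 0;

	for (unsigned long i = 0; i < requests; i++) {
		if (pooled == 0) {
			allocs++;		/* split a fresh order-k page */
			pooled = 1UL << order;	/* 2^k order-0 pages */
		}
		pooled--;			/* serve one request */
		if (shrinker_empties_pool)
			pooled = 0;		/* remainder reclaimed at once */
	}
	return allocs;
}
```

With order 2, for example, 16 requests cost 4 high-order allocations when the pool is left alone, but 16 when it is emptied after every request.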
It sounds like we also need to investigate why buffer object
allocations are made in latency-critical paths.
Thanks,
Thomas
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index c0d95559197c..fd255914c0d3 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -115,9 +115,13 @@ struct ttm_pool_tt_restore {
> };
>
> static unsigned long page_pool_size;
> +static unsigned int max_reclaim_order;
>
> MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool");
> module_param(page_pool_size, ulong, 0644);
> +MODULE_PARM_DESC(max_reclaim_order,
> +		 "Maximum order that keeps upstream reclaim behavior");
> +module_param(max_reclaim_order, uint, 0644);
>
> static atomic_long_t allocated_pages;
>
> @@ -146,16 +150,14 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
>  	 * Mapping pages directly into an userspace process and calling
>  	 * put_page() on a TTM allocated page is illegal.
>  	 */
> -	if (order)
> +	if (order) {
>  		gfp_flags |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN |
>  			     __GFP_THISNODE;
> -
> -	/*
> -	 * Do not add latency to the allocation path for allocations orders
> -	 * device tolds us do not bring them additional performance gains.
> -	 */
> -	if (beneficial_order && order > beneficial_order)
> -		gfp_flags &= ~__GFP_DIRECT_RECLAIM;
> +		if (beneficial_order && order > beneficial_order)
> +			gfp_flags &= ~__GFP_DIRECT_RECLAIM;
> +		if (order > max_reclaim_order)
> +			gfp_flags &= ~__GFP_RECLAIM;
> +	}
>
> if (!ttm_pool_uses_dma_alloc(pool)) {
> p = alloc_pages_node(pool->nid, gfp_flags, order);
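For readers following along, the difference between the two masks in the final hunk is worth spelling out (illustrative bit values below; the real definitions live in include/linux/gfp_types.h, where __GFP_RECLAIM is the union of the kswapd and direct-reclaim bits):

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's gfp bits; the actual values
 * differ, but the relationship is the same: the combined reclaim mask
 * covers both background (kswapd) and synchronous (direct) reclaim. */
#define GFP_KSWAPD_RECLAIM	(1u << 0)	/* may wake kswapd */
#define GFP_DIRECT_RECLAIM	(1u << 1)	/* caller may stall in reclaim */
#define GFP_RECLAIM		(GFP_KSWAPD_RECLAIM | GFP_DIRECT_RECLAIM)

/* Pre-existing beneficial_order path: strip only synchronous reclaim,
 * so the allocation never stalls the caller but kswapd still runs. */
unsigned int strip_direct(unsigned int flags)
{
	return flags & ~GFP_DIRECT_RECLAIM;
}

/* Proposed max_reclaim_order path: strip both bits, so a high-order
 * miss neither stalls the caller nor wakes kswapd. */
unsigned int strip_all_reclaim(unsigned int flags)
{
	return flags & ~GFP_RECLAIM;
}
```

That difference would explain why the new parameter helps where beneficial_order alone did not: the latter still lets kswapd churn in the background on every high-order miss.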