Re: [PATCH] mm, slub: Use prefetchw instead of prefetch

From: David Rientjes
Date: Sun Oct 10 2021 - 18:49:18 EST


On Fri, 8 Oct 2021, Hyeonggon Yoo wrote:

> It's certain that an object will be not only read, but also
> written after allocation.
>

Why is it certain? I think perhaps what you meant to say is that if we
are doing any prefetching here, then access will benefit from prefetchw
instead of prefetch. But it's not "certain" that allocated memory will be
accessed at all.

> Use prefetchw instead of prefetchw. On supported architecture

If we're using prefetchw instead of prefetchw, I think the diff would be
0 lines changed :)

> like x86, it helps to invalidate cache line when the object exists
> in other processors' cache.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
> ---
> mm/slub.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3d2025f7163b..2aca7523165e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -352,9 +352,9 @@ static inline void *get_freepointer(struct kmem_cache *s, void *object)
>  	return freelist_dereference(s, object + s->offset);
>  }
> 
> -static void prefetch_freepointer(const struct kmem_cache *s, void *object)
> +static void prefetchw_freepointer(const struct kmem_cache *s, void *object)
>  {
> -	prefetch(object + s->offset);
> +	prefetchw(object + s->offset);
>  }
> 
>  static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
> @@ -3195,10 +3195,9 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>  			note_cmpxchg_failure("slab_alloc", s, tid);
>  			goto redo;
>  		}
> -		prefetch_freepointer(s, next_object);
> +		prefetchw_freepointer(s, next_object);
>  		stat(s, ALLOC_FASTPATH);
>  	}
> -
>  	maybe_wipe_obj_freeptr(s, object);
>  	init = slab_want_init_on_alloc(gfpflags, s);
>
> --
> 2.27.0
>
>