Re: [PATCH] mm, slab: Fix sign conversion problem in memcg_uncharge_slab()
From: Roman Gushchin
Date: Sat Jun 20 2020 - 15:59:50 EST
On Sat, Jun 20, 2020 at 02:47:19PM -0400, Waiman Long wrote:
> It was found that running the LTP test on a PowerPC system could produce
> erroneous values in /proc/meminfo, like:
>
> MemTotal: 531915072 kB
> MemFree: 507962176 kB
> MemAvailable: 1100020596352 kB
>
> Using bisection, the problem was tracked down to commit 9c315e4d7d8c
> ("mm: memcg/slab: cache page number in memcg_(un)charge_slab()").
>
> In memcg_uncharge_slab(), which has an "int order" argument:
>
> unsigned int nr_pages = 1 << order;
> :
> mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);
>
> The mod_lruvec_state() function eventually calls
> __mod_zone_page_state(), which accepts a long argument. Since
> nr_pages is an unsigned int, the expression "-nr_pages" is itself
> evaluated as an unsigned value, and depending on the compiler and how
> inlining is done it may reach __mod_zone_page_state() either as the
> intended negative number or as a very large positive one. On that
> PowerPC system it was apparently treated as a large positive number,
> leading to incorrect stat counts. The problem hasn't been seen on
> x86-64 yet; perhaps gcc there behaves slightly differently.
>
> It is fixed by making nr_pages a signed value. For consistency, a
> similar change is applied to memcg_charge_slab() as well.
>
> Fixes: 9c315e4d7d8c ("mm: memcg/slab: cache page number in memcg_(un)charge_slab()")
> Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
Good catch!
Interesting that I haven't seen it on x86-64, but it's reproducible on Power.
Acked-by: Roman Gushchin <guro@xxxxxx>
Thanks!
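
For readers who want to see the conversion in isolation, here is a minimal
userspace sketch (illustrative only, not part of the patch or of the quoted
message) of the sign issue described in the commit message above. The helper
mod_state() is a hypothetical stand-in for __mod_zone_page_state(), which
takes a long delta:

	#include <stdio.h>

	/* Hypothetical stand-in for __mod_zone_page_state(): takes a long delta. */
	static void mod_state(long delta)
	{
		printf("delta = %ld\n", delta);
	}

	int main(void)
	{
		unsigned int unsigned_nr = 1U << 2;	/* like the pre-patch "unsigned int nr_pages" */
		int signed_nr = 1 << 2;			/* like the patched "int nr_pages" */

		mod_state(-unsigned_nr);	/* negation wraps to a huge unsigned value;
						   prints 4294967292 on a 64-bit system */
		mod_state(-signed_nr);		/* prints -4, the intended decrement */
		return 0;
	}

In the kernel the eventual effect depends on how the stat-update helpers are
inlined, but a huge positive delta of this kind is consistent with the bogus
MemAvailable value shown above.
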
> ---
> mm/slab.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 207c83ef6e06..74f7e09a7cfd 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -348,7 +348,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
>  					     gfp_t gfp, int order,
>  					     struct kmem_cache *s)
>  {
> -	unsigned int nr_pages = 1 << order;
> +	int nr_pages = 1 << order;
>  	struct mem_cgroup *memcg;
>  	struct lruvec *lruvec;
>  	int ret;
> @@ -388,7 +388,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
>  static __always_inline void memcg_uncharge_slab(struct page *page, int order,
>  						 struct kmem_cache *s)
>  {
> -	unsigned int nr_pages = 1 << order;
> +	int nr_pages = 1 << order;
>  	struct mem_cgroup *memcg;
>  	struct lruvec *lruvec;
> 
> --
> 2.18.1
>