Re: + zram-introduce-per-device-debug_stat-sysfs-node-update-2.patch added to -mm tree

From: Sergey Senozhatsky
Date: Wed May 18 2016 - 20:57:11 EST


On (05/18/16 14:54), akpm@xxxxxxxxxxxxxxxxxxxx wrote:
> The patch titled
> Subject: zram-introduce-per-device-debug_stat-sysfs-node-update-2
> has been added to the -mm tree. Its filename is
> zram-introduce-per-device-debug_stat-sysfs-node-update-2.patch
>
> This patch should soon appear at
> http://ozlabs.org/~akpm/mmots/broken-out/zram-introduce-per-device-debug_stat-sysfs-node-update-2.patch
> and later at
> http://ozlabs.org/~akpm/mmotm/broken-out/zram-introduce-per-device-debug_stat-sysfs-node-update-2.patch

Hello Andrew,

please drop this one. Sorry for the inconvenience; it was confusing,
but the final patch is:

message-id: 20160513230834.GB26763@bbox
lkml.kernel.org/r/20160513230834.GB26763@bbox

(link http://marc.info/?l=linux-kernel&m=146318088829399)


-ss


> Before you just go and hit "reply", please:
> a) Consider who else should be cc'ed
> b) Prefer to cc a suitable mailing list as well
> c) Ideally: find the original patch on the mailing list and do a
> reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/SubmitChecklist when testing your code ***
>
> The -mm tree is included in linux-next and is updated
> there every 3-4 working days
>
> ------------------------------------------------------
> From: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>
> Subject: zram-introduce-per-device-debug_stat-sysfs-node-update-2
>
> Link: http://lkml.kernel.org/r/20160513130358.631-1-sergey.senozhatsky@xxxxxxxxx
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>
> Acked-by: Minchan Kim <minchan@xxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  drivers/block/zram/zram_drv.c |    8 ++++----
>  drivers/block/zram/zram_drv.h |    2 +-
>  2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff -puN drivers/block/zram/zram_drv.c~zram-introduce-per-device-debug_stat-sysfs-node-update-2 drivers/block/zram/zram_drv.c
> --- a/drivers/block/zram/zram_drv.c~zram-introduce-per-device-debug_stat-sysfs-node-update-2
> +++ a/drivers/block/zram/zram_drv.c
> @@ -446,7 +446,7 @@ static ssize_t debug_stat_show(struct de
>  	ret = scnprintf(buf, PAGE_SIZE,
>  			"version: %d\n%8llu\n",
>  			version,
> -			(u64)atomic64_read(&zram->stats.writestall));
> +			(u64)atomic64_read(&zram->stats.num_recompress));
>  	up_read(&zram->init_lock);
>
>  	return ret;
> @@ -737,12 +737,12 @@ compress_again:
>  		zcomp_strm_release(zram->comp, zstrm);
>  		zstrm = NULL;
>
> -		atomic64_inc(&zram->stats.writestall);
> -
>  		handle = zs_malloc(meta->mem_pool, clen,
>  				GFP_NOIO | __GFP_HIGHMEM);
> -		if (handle)
> +		if (handle) {
> +			atomic64_inc(&zram->stats.num_recompress);
>  			goto compress_again;
> +		}
>
>  		pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
>  			index, clen);
> diff -puN drivers/block/zram/zram_drv.h~zram-introduce-per-device-debug_stat-sysfs-node-update-2 drivers/block/zram/zram_drv.h
> --- a/drivers/block/zram/zram_drv.h~zram-introduce-per-device-debug_stat-sysfs-node-update-2
> +++ a/drivers/block/zram/zram_drv.h
> @@ -85,7 +85,7 @@ struct zram_stats {
>  	atomic64_t zero_pages;		/* no. of zero filled pages */
>  	atomic64_t pages_stored;	/* no. of pages currently stored */
>  	atomic_long_t max_used_pages;	/* no. of maximum pages stored */
> -	atomic64_t writestall;		/* no. of write slow paths */
> +	atomic64_t num_recompress;	/* no. of compression slow paths */
>  };
>
> struct zram_meta {
> _
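
[ A note for anyone comparing this against the first update: the hunks
  above do two things. They rename the writestall counter to
  num_recompress, and they move the increment inside the "if (handle)"
  branch, so it is bumped only when the sleepable GFP_NOIO
  re-allocation succeeds and the page actually goes through another
  compression pass. A condensed sketch of the resulting slow path
  (not the literal kernel code; context elided, names as in the diff):

	compress_again:
		...
		/* the per-cpu stream was released above; retry the
		 * allocation with a sleepable GFP mask */
		handle = zs_malloc(meta->mem_pool, clen,
				GFP_NOIO | __GFP_HIGHMEM);
		if (handle) {
			/* counted only on success: one increment per
			 * extra compression pass */
			atomic64_inc(&zram->stats.num_recompress);
			goto compress_again;
		}
		/* even the sleepable allocation failed: report and
		 * fail the write */
		pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
			index, clen);

  With that, the second line of debug_stat counts re-compression
  events rather than every entry into the slow path. ]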
>
> Patches currently in -mm which might be from sergey.senozhatsky@xxxxxxxxx are
>
> zsmalloc-require-gfp-in-zs_malloc.patch
> zsmalloc-require-gfp-in-zs_malloc-v2.patch
> zram-user-per-cpu-compression-streams.patch
> zram-user-per-cpu-compression-streams-fix.patch
> zram-remove-max_comp_streams-internals.patch
> zram-introduce-per-device-debug_stat-sysfs-node.patch
> zram-introduce-per-device-debug_stat-sysfs-node-update-2.patch
>