Re: [PATCH v2 1/2] mm/zswap: global lru and shrinker shared by all zswap_pools
From: Chengming Zhou
Date: Fri Feb 16 2024 - 03:51:35 EST
On 2024/2/15 03:20, Yosry Ahmed wrote:
> On Wed, Feb 14, 2024 at 08:54:37AM +0000, Chengming Zhou wrote:
>> Dynamic zswap_pool creation may create or reuse pools, leaving
>> multiple zswap_pools on a list, of which only the first is
>> currently used.
>>
>> Each zswap_pool has its own lru and shrinker, which is not
>> necessary and has its problem:
>>
>> 1. Under memory pressure, every zswap_pool's shrinker tries to
>> shrink its own lru, with no ordering between them.
>>
>> 2. When the zswap limit is hit, only the last zswap_pool's
>> shrink_work tries to shrink its lru, which is inefficient.
>
> I think the rationale here was to try and empty the old pool first so
> that we can completely drop it. However, since we only support exclusive
> loads now, the LRU ordering should be entirely decided by the order of
> stores, so I think the oldest entries on the LRU will naturally be
> from the oldest pool, right?
>
> Probably worth stating this.
Right, I will add this part to the commit message.
>
>>
>> So having a single global lru and shrinker shared by all
>> zswap_pools is both simpler and more efficient.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>
> LGTM with a few comments, with those:
> Acked-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
>
>> ---
>> mm/zswap.c | 170 +++++++++++++++++++++++--------------------------------------
>> 1 file changed, 65 insertions(+), 105 deletions(-)
>>
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> index 62fe307521c9..dbff67d7e1c7 100644
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -176,14 +176,18 @@ struct zswap_pool {
>> struct kref kref;
>> struct list_head list;
>> struct work_struct release_work;
>> - struct work_struct shrink_work;
>> struct hlist_node node;
>> char tfm_name[CRYPTO_MAX_ALG_NAME];
>> +};
>> +
>> +static struct {
>> struct list_lru list_lru;
>> - struct mem_cgroup *next_shrink;
>> - struct shrinker *shrinker;
>> atomic_t nr_stored;
>> -};
>> + struct shrinker *shrinker;
>> + struct work_struct shrink_work;
>> + struct mem_cgroup *next_shrink;
>> + spinlock_t shrink_lock;
>
> The lock is exclusively protecting next_shrink, right? Perhaps we should
> rename it to next_shrink_lock or at least document this.
Ok, I will add a comment to it.
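Something like this, perhaps (just a sketch of the comment; the exact wording
of what the lock protects is up for discussion):

```c
static struct {
	struct list_lru list_lru;
	atomic_t nr_stored;
	struct shrinker *shrinker;
	struct work_struct shrink_work;
	struct mem_cgroup *next_shrink;
	/* Protects next_shrink: the memcg iteration cursor of shrink_work. */
	spinlock_t shrink_lock;
} zswap;
```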
>
>> +} zswap;
>>
>> /*
>> * struct zswap_entry
>> @@ -301,9 +305,6 @@ static void zswap_update_total_size(void)
>> * pool functions
>> **********************************/
>>
>> -static void zswap_alloc_shrinker(struct zswap_pool *pool);
>> -static void shrink_worker(struct work_struct *w);
>> -
>> static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>> {
>> int i;
>> @@ -353,30 +354,16 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>> if (ret)
>> goto error;
>>
>> - zswap_alloc_shrinker(pool);
>> - if (!pool->shrinker)
>> - goto error;
>> -
>> - pr_debug("using %s compressor\n", pool->tfm_name);
>
> nit: the next patch introduces a new failure case between this debug
> print and zswap_pool_debug() below, so it will become possible again
> that we get one and not the other. Not a big deal though.
Yes, not a big deal.
Thanks!