Re: [PATCH] zcache: fix deadlock condition

From: Konrad Rzeszutek Wilk
Date: Fri Jan 27 2012 - 15:42:02 EST


On Wed, Jan 25, 2012 at 02:32:51PM -0800, Dan Magenheimer wrote:
> I discovered this deadlock condition a while ago while working on
> RAMster, but it affects zcache as well. The list spinlock must be
> locked prior to the page spinlock and released after it. As a
> result, the page copy must also be done while both locks are held.
>
> Applies to 3.2. Konrad, please push (via GregKH?)...
> this is definitely a bug fix, so it need not wait for
> a -rc0 window.

Looks good to me. Greg, could you apply the patch to your tree,
please? [Or would you rather I sent a git pull request?]

Thanks!
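
For reference, a minimal sketch of the ordering the patch establishes
(simplified and hypothetical; the names below are illustrative only and
not the actual zcache code): take the list lock first, then the per-page
lock, do the copy while both are held, and release in reverse order.

/*
 * Hypothetical, simplified illustration of the lock ordering;
 * not the actual zbud_create() code.
 */
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/string.h>

static DEFINE_SPINLOCK(list_lock);	/* outer lock: protects the list */
static LIST_HEAD(bud_list);

struct page_desc {
	spinlock_t lock;		/* inner lock: protects page data */
	struct list_head node;
	char data[64];
};

static void insert_and_copy(struct page_desc *pd, const void *src, size_t len)
{
	spin_lock(&list_lock);		/* list lock first ...            */
	spin_lock(&pd->lock);		/* ... then the page lock         */
	list_add_tail(&pd->node, &bud_list);
	memcpy(pd->data, src, len);	/* copy while both locks are held */
	spin_unlock(&pd->lock);		/* release in reverse order       */
	spin_unlock(&list_lock);
}

The actual patch below makes zbud_create() follow this same order.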
>
> Signed-off-by: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
>
> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
> index 56c1f9c..5b9f74e 100644
> --- a/drivers/staging/zcache/zcache-main.c
> +++ b/drivers/staging/zcache/zcache-main.c
> @@ -358,8 +358,8 @@ static struct zbud_hdr *zbud_create(uint16_t client_id, uint16_t pool_id,
>  	if (unlikely(zbpg == NULL))
>  		goto out;
>  	/* ok, have a page, now compress the data before taking locks */
> -	spin_lock(&zbpg->lock);
>  	spin_lock(&zbud_budlists_spinlock);
> +	spin_lock(&zbpg->lock);
>  	list_add_tail(&zbpg->bud_list, &zbud_unbuddied[nchunks].list);
>  	zbud_unbuddied[nchunks].count++;
>  	zh = &zbpg->buddy[0];
> @@ -389,12 +389,11 @@ init_zh:
>  	zh->oid = *oid;
>  	zh->pool_id = pool_id;
>  	zh->client_id = client_id;
> -	/* can wait to copy the data until the list locks are dropped */
> -	spin_unlock(&zbud_budlists_spinlock);
> -
>  	to = zbud_data(zh, size);
>  	memcpy(to, cdata, size);
>  	spin_unlock(&zbpg->lock);
> +	spin_unlock(&zbud_budlists_spinlock);
> +
>  	zbud_cumul_chunk_counts[nchunks]++;
>  	atomic_inc(&zcache_zbud_curr_zpages);
>  	zcache_zbud_cumul_zpages++;