Re: [PATCH 1/1] vmalloc: purge_fragmented_blocks: Acquire spinlock before reading vmap_block
From: David Rientjes
Date: Thu Dec 08 2011 - 02:07:19 EST
On Thu, 8 Dec 2011, Kautuk Consul wrote:
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 3231bf3..2228971 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -855,11 +855,14 @@ static void purge_fragmented_blocks(int cpu)
>
> rcu_read_lock();
> list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> + spin_lock(&vb->lock);
>
> - if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
> + if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS &&
> + vb->dirty != VMAP_BBMAP_BITS)) {
> + spin_unlock(&vb->lock);
> continue;
> + }
>
> - spin_lock(&vb->lock);
> if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
> vb->free = 0; /* prevent further allocs after releasing lock */
> vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
Nack, this is wrong: the if-clause you're modifying isn't the criterion
used to decide whether the purge happens. It's merely an optimization to
avoid doing exactly what your patch now does unconditionally: taking
vb->lock when it isn't needed.
In the original code, the lock is taken only when that unlocked check
indicates the block is a purge candidate, and the exact same test is then
repeated under the lock. If the test fails at that point, the lock is
dropped immediately. A branch here is cheaper than a contended spinlock.
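To spell out the pattern the original code uses (paraphrasing the context
around the hunk above, with the actual purge bookkeeping elided), it is the
usual unlocked-check / locked-recheck idiom:

        rcu_read_lock();
        list_for_each_entry_rcu(vb, &vbq->free, free_list) {
                /* Unlocked peek: skip blocks that clearly cannot be purged. */
                if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS &&
                      vb->dirty != VMAP_BBMAP_BITS))
                        continue;

                spin_lock(&vb->lock);
                /* Recheck the same condition under the lock before purging. */
                if (vb->free + vb->dirty == VMAP_BBMAP_BITS &&
                    vb->dirty != VMAP_BBMAP_BITS) {
                        vb->free = 0;                /* no allocs after we drop the lock */
                        vb->dirty = VMAP_BBMAP_BITS; /* don't purge it twice */
                        ...                          /* queue the block for purging */
                }
                spin_unlock(&vb->lock);
        }
        rcu_read_unlock();

Yes, the unlocked test can race with concurrent updates; that's exactly why
the same condition is re-evaluated once vb->lock is held. All the unlocked
test buys is skipping the lock for blocks that obviously aren't purge
candidates, which is the common case.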