Re: [PATCH 2/2] tmpfs: Make tmpfs scalable with caches for freeblocks

From: Andrew Morton
Date: Thu May 20 2010 - 19:13:34 EST


On Tue, 18 May 2010 16:34:32 -0700
tim <tim.c.chen@xxxxxxxxxxxxxxx> wrote:

> The current implementation of tmpfs is not scalable.
> The stat_lock is contended whenever we need to get a
> new page, leading to heavy lock contention. This patch
> uses the qtoken library to maintain local caches of
> free pages, so that pages can be obtained and returned
> without taking stat_lock. It improved tmpfs performance
> by 270% for the Aim7 fserver workload.
>
> ...
>
> -	spin_lock(&sbinfo->stat_lock);
> -	sbinfo->free_blocks += pages;
> +	spin_lock(&inode->i_lock);
> +	qtoken_return(&sbinfo->token_jar, pages);
>  	inode->i_blocks -= pages*BLOCKS_PER_PAGE;
> -	spin_unlock(&sbinfo->stat_lock);
> +	spin_unlock(&inode->i_lock);

Well, most of the calls into the qtoken layer occur under inode->i_lock.
So did we really need that spinlock inside the qtoken library code?
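
For reference, here is a rough sketch of the scheme the changelog
describes: a shared pool guarded by the jar's own spinlock, fronted by
per-cpu caches so that the common path never touches the shared lock.
Only qtoken_return(), with the signature visible in the hunk above, is
from the patch; the struct layout, QTOKEN_BATCH and qtoken_get() below
are illustrative guesses, not the actual qtoken code.

#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

#define QTOKEN_BATCH	64UL	/* illustrative refill batch size */

struct qtoken {
	spinlock_t lock;		/* internal lock guarding @pool */
	unsigned long pool;		/* tokens not held in any cache */
	unsigned long __percpu *cache;	/* per-cpu stashes of tokens */
};

/*
 * Take @nr tokens.  The fast path touches only this cpu's cache; the
 * internal lock is taken only when the cache must be refilled from
 * the shared pool.
 */
static bool qtoken_get(struct qtoken *jar, unsigned long nr)
{
	unsigned long *cache = get_cpu_ptr(jar->cache);
	bool ok = true;

	if (*cache >= nr) {
		*cache -= nr;
	} else {
		spin_lock(&jar->lock);
		if (jar->pool + *cache >= nr) {
			/* refill in a batch while we hold the lock */
			unsigned long take;

			take = min(nr - *cache + QTOKEN_BATCH, jar->pool);
			jar->pool -= take;
			*cache += take;
			*cache -= nr;
		} else {
			ok = false;	/* not enough free tokens */
		}
		spin_unlock(&jar->lock);
	}
	put_cpu_ptr(jar->cache);
	return ok;
}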

It is a problem when library code such as qtoken performs its own
internal locking. We have learned that such code is much more useful
and flexible if it performs no locking at all and instead requires
that callers provide the locking (lib/rbtree.c, lib/radix-tree.c,
lib/prio_heap.c, lib/flex_array.c, etcetera). Can we follow this
approach with qtoken?
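
Concretely, following that convention would mean deleting the internal
spinlock and documenting the locking that callers must supply. A
minimal sketch, again assuming an interface beyond the qtoken_return()
call visible in the quoted hunk:

/* No internal lock: callers serialize access to the jar themselves. */
struct qtoken {
	unsigned long pool;	/* free tokens, caller-serialized */
};

/* Caller must hold the lock protecting @jar. */
static inline bool qtoken_get(struct qtoken *jar, unsigned long nr)
{
	if (jar->pool < nr)
		return false;
	jar->pool -= nr;
	return true;
}

/* Caller must hold the lock protecting @jar. */
static inline void qtoken_return(struct qtoken *jar, unsigned long nr)
{
	jar->pool += nr;
}

The shmem path could then keep exactly the shape of the quoted hunk,
with inode->i_lock (or whatever lock the caller picks for the jar)
providing the serialization, and any per-cpu caching built on top by
the caller rather than hidden inside the library.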
