On Fri, Jan 29, 2016 at 02:30:46PM -0500, Waiman Long wrote:
> The inode_sb_list_add() and inode_sb_list_del() functions in the vfs
> layer just perform list addition and deletion under lock. So they can
> use the new list batching facility to speed up the list operations
> when many CPUs are trying to do it simultaneously.
>
> In particular, the inode_sb_list_del() function can be a performance
> bottleneck when large applications with many threads and associated
> inodes exit. With an exit microbenchmark that creates a large number
> of threads, attaches many inodes to them and then exits, the runtimes
> of that microbenchmark with 1000 threads before and after the patch
> on a 4-socket Intel E7-4820 v3 system (48 cores, 96 threads) were
> as follows:
>
>   Kernel        Elapsed Time    System Time
>   ------        ------------    -----------
>   Vanilla 4.4      65.29s         82m14s
>   Patched 4.4      45.69s         49m44s

I've never seen sb inode list contention in typical workloads in exit
processing. Can you post the test script you are using?

The inode sb list contention I usually see is, more often than not, in
workloads that turn over the inode cache quickly (i.e. instantiating
lots of inodes through concurrent directory traversal or create
workloads). These are often latency sensitive, so I'm wondering what
the effect of spinning waiting for batch processing on every contended
add is going to do to lookup performance...

I wonder if you'd get the same results on such a benchmark simply by
making the spin lock a mutex, thereby reducing the number of CPUs
spinning on a single lock cacheline at any one point in time.
Certainly the system time will plummet...
> The elapsed time and the reported system time were reduced by 30%
> and 40% respectively.
>
> Signed-off-by: Waiman Long <Waiman.Long@xxxxxxx>
> ---
>  fs/inode.c         | 13 +++++--------
>  fs/super.c         |  1 +
>  include/linux/fs.h |  2 ++
>  3 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/fs/inode.c b/fs/inode.c
> index 9f62db3..870de8c 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -424,19 +424,16 @@ static void inode_lru_list_del(struct inode *inode)
>   */
>  void inode_sb_list_add(struct inode *inode)
>  {
> -	spin_lock(&inode->i_sb->s_inode_list_lock);
> -	list_add(&inode->i_sb_list, &inode->i_sb->s_inodes);
> -	spin_unlock(&inode->i_sb->s_inode_list_lock);
> +	do_list_batch(&inode->i_sb->s_inode_list_lock, lb_cmd_add,
> +		      &inode->i_sb->s_list_batch, &inode->i_sb_list);

I don't like the API. This should simply be:
void inode_sb_list_add(struct inode *inode)
{
	list_batch_add(&inode->i_sb_list, &inode->i_sb->s_inodes);
}

void inode_sb_list_del(struct inode *inode)
{
	list_batch_del(&inode->i_sb_list, &inode->i_sb->s_inodes);
}
And all the locks, lists and batch commands are internal to the
struct list_batch and the API implementation.