Re: [PATCH 18/18] fs: Reduce inode I_FREEING and factor inode disposal

From: Dave Chinner
Date: Fri Oct 08 2010 - 06:53:04 EST


On Fri, Oct 08, 2010 at 11:18:19AM +0100, Al Viro wrote:
> On Fri, Oct 08, 2010 at 04:21:32PM +1100, Dave Chinner wrote:
>
> > +	spin_unlock(&sb->s_inodes_lock);
> >
> > -	spin_lock(&inode_lru_lock);
> > -	list_move(&inode->i_lru, dispose);
> > -	spin_unlock(&inode_lru_lock);
> > +	dispose_one_inode(inode);
> >
> > -	percpu_counter_dec(&nr_inodes_unused);
> > +	spin_lock(&sb->s_inodes_lock);
>
> And now you've unlocked the list and even blocked. What's going to
> keep next valid through that fun?

See the comment at the start of the loop in invalidate_list():

	/*
	 * We can reschedule here without worrying about the list's
	 * consistency because the per-sb list of inodes must not
	 * change during umount anymore, and because iprune_sem keeps
	 * shrink_icache_memory() away.
	 */
	cond_resched_lock(&sb->s_inodes_lock);

Hence I've assumed it's OK to add another point inside the loop that drops
the lock and blocks, and that next will still be valid afterwards.
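
To put that in context, the shape of the walk after this patch is roughly
the following - a sketch rather than the literal hunk (the lock, list and
dispose_one_inode() names are the ones from the patch; the control flow
here is just illustrative, with the busy/I_FREEING handling elided):

	/*
	 * Sketch only, not the exact code.  'next' stays valid across the
	 * unlock because the per-sb inode list cannot change during umount
	 * and iprune_sem keeps shrink_icache_memory() away.
	 */
	struct list_head *next;

	spin_lock(&sb->s_inodes_lock);
	next = sb->s_inodes.next;
	while (next != &sb->s_inodes) {
		struct list_head *tmp = next;
		struct inode *inode;

		cond_resched_lock(&sb->s_inodes_lock);

		next = tmp->next;
		inode = list_entry(tmp, struct inode, i_sb_list);

		/* busy/I_FREEING checks elided */

		spin_unlock(&sb->s_inodes_lock);
		dispose_one_inode(inode);	/* may block */
		spin_lock(&sb->s_inodes_lock);
	}
	spin_unlock(&sb->s_inodes_lock);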

>
> > +	spin_unlock(&inode_lru_lock);
> > +
> > +	dispose_one_inode(inode);
> > +	cond_resched();
> > +
> > +	spin_lock(&inode_lru_lock);
>
> Same, only worse - in the previous you might hope for lack of activity
> on fs, in this one you really can't.

That one in prune_icache() is safe because each pass of the loop takes the
inode straight off the tail of the list:

	for (nr_scanned = 0; nr_scanned < nr_to_scan; nr_scanned++) {
		struct inode *inode;

		if (list_empty(&inode_lru))
			break;

		inode = list_entry(inode_lru.prev, struct inode, i_lru);
		.....

because there is a pre-existing branch in the loop that drops all the
locks.
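
In other words, the pattern there ends up being roughly this - a sketch,
not the exact code, using the names from the quoted hunk, with the
referenced-bit handling and counter updates elided:

	spin_lock(&inode_lru_lock);
	for (nr_scanned = 0; nr_scanned < nr_to_scan; nr_scanned++) {
		struct inode *inode;

		if (list_empty(&inode_lru))
			break;

		/* always restart from the tail - no cursor is held */
		inode = list_entry(inode_lru.prev, struct inode, i_lru);

		/* referenced-bit handling and counter updates elided */

		list_del_init(&inode->i_lru);
		spin_unlock(&inode_lru_lock);

		dispose_one_inode(inode);	/* may block */
		cond_resched();

		spin_lock(&inode_lru_lock);
	}
	spin_unlock(&inode_lru_lock);

So dropping inode_lru_lock around dispose_one_inode() can't leave a stale
list cursor behind.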

Cheers,

Dave.




--
Dave Chinner
david@xxxxxxxxxxxxx