Re: [PATCH RFC] fsnotify: destroy marks with call_srcu instead of dedicated thread
From: Jan Kara
Date: Sat Oct 24 2015 - 11:06:18 EST
On Fri 23-10-15 15:06:59, Jeff Layton wrote:
> At the time that this code was originally written, call_srcu didn't
> exist, so this thread was required to ensure that we waited for the
> SRCU grace period to settle before finally freeing the object.
>
> It does exist now, however, and we can use call_srcu to handle this
> much more efficiently. That also potentially allows us to use
> srcu_barrier to ensure that all of the callbacks have run before
> proceeding. In order to conserve space, we union the rcu_head with
> the g_list.
>
> This will be necessary for nfsd which will allocate marks from a
> dedicated slabcache. We have to be able to ensure that all of the
> objects are destroyed before destroying the cache. That's fairly
> difficult to ensure with a dedicated thread doing the destruction.
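
(As an aside, the teardown ordering this enables might look roughly like
the sketch below. The slab cache name is made up for illustration since
the nfsd side isn't part of this patch; srcu_barrier(), kmem_cache_destroy()
and the fsnotify_mark_srcu domain are the existing kernel pieces.)

	/*
	 * Sketch: wait for every call_srcu() callback queued against
	 * fsnotify_mark_srcu to run before the cache can go away.
	 * "nfsd_mark_cache" is an illustrative name, not from this patch.
	 */
	srcu_barrier(&fsnotify_mark_srcu);	/* all pending callbacks have run */
	kmem_cache_destroy(nfsd_mark_cache);	/* now safe: no marks outstanding */
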
The patch looks good. Just one nit below:
> diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
> index 533c4408529a..6b7e89f45aa4 100644
> --- a/include/linux/fsnotify_backend.h
> +++ b/include/linux/fsnotify_backend.h
> @@ -220,7 +220,10 @@ struct fsnotify_mark {
> /* List of marks by group->i_fsnotify_marks. Also reused for queueing
> * mark into destroy_list when it's waiting for the end of SRCU period
> * before it can be freed. [group->mark_mutex] */
Please update this comment so it no longer talks about destroy_list. After
that, feel free to add:
Reviewed-by: Jan Kara <jack@xxxxxxxx>
Honza
> - struct list_head g_list;
> + union {
> + struct list_head g_list;
> + struct rcu_head g_rcu;
> + };
> /* Protects inode / mnt pointers, flags, masks */
> spinlock_t lock;
> /* List of marks for inode / vfsmount [obj_lock] */
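
(For what it's worth, with the union in place the queueing side could look
roughly like the sketch below; the callback name is illustrative, since the
rest of the patch body isn't quoted here.)

	/* SRCU callback: runs once the grace period has elapsed */
	static void fsnotify_mark_free_rcu(struct rcu_head *rcu)
	{
		struct fsnotify_mark *mark;

		mark = container_of(rcu, struct fsnotify_mark, g_rcu);
		fsnotify_put_mark(mark);	/* drop the final reference */
	}

	/* at the destroy site, instead of queueing onto destroy_list
	 * for the dedicated thread: */
	call_srcu(&fsnotify_mark_srcu, &mark->g_rcu, fsnotify_mark_free_rcu);
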
Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR