Re: [PATCH 04/16] sched: convert user_struct.__count to refcount_t
From: Kees Cook
Date: Thu Nov 16 2017 - 19:17:15 EST
On Wed, Nov 15, 2017 at 6:03 AM, Elena Reshetova
<elena.reshetova@xxxxxxxxx> wrote:
> atomic_t variables are currently used to implement reference
> counters with the following properties:
> - counter is initialized to 1 using atomic_set()
> - a resource is freed upon counter reaching zero
> - once the counter reaches zero, further
> increments aren't allowed
> - counter schema uses basic atomic operations
> (set, inc, inc_not_zero, dec_and_test, etc.)
>
> Such atomic variables should be converted to a newly provided
> refcount_t type and API that prevents accidental counter overflows
> and underflows. This is important since overflows and underflows
> can lead to use-after-free situations and be exploitable.
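
For anyone unfamiliar with the refcount_t API, the conversion is
essentially a one-to-one substitution of the atomic calls. A minimal
sketch of the pattern described above, using a made-up "struct foo"
rather than the actual user_struct code:

#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
	refcount_t count;			/* was: atomic_t count */
};

static struct foo *foo_alloc(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (f)
		refcount_set(&f->count, 1);	/* was: atomic_set() */
	return f;
}

static struct foo *foo_get(struct foo *f)
{
	refcount_inc(&f->count);		/* was: atomic_inc() */
	return f;
}

static void foo_put(struct foo *f)
{
	if (refcount_dec_and_test(&f->count))	/* was: atomic_dec_and_test() */
		kfree(f);
}

The functional difference is that refcount_inc() saturates instead of
wrapping on overflow and warns on an increment from zero, which is what
closes the use-after-free window described above.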
>
> The variable user_struct.__count is used as a pure reference counter.
> Convert it to refcount_t and fix up the operations.
>
> **Important note for maintainers:
>
> Some functions from refcount_t API defined in lib/refcount.c
> have different memory ordering guarantees than their atomic
> counterparts.
> The full comparison can be seen in
> https://lkml.org/lkml/2017/11/15/57 and it will hopefully soon
> be in shape to be merged into the documentation tree.
> Normally the differences should not matter, since refcount_t provides
> enough guarantees to satisfy the refcounting use cases, but in
> some rare cases they might.
> Please double-check that you don't rely on any undocumented
> memory-ordering guarantees for this variable.
>
> For user_struct.__count it might make a difference
> in the following place:
> - free_uid(): the decrement in refcount_dec_and_lock() only
> provides RELEASE ordering and a control dependency on success,
> and will hold a spin lock on success, vs. the fully ordered
> atomic counterpart. Note that there are no changes to the
> spin lock handling here.
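
In other words, the dec-and-lock pattern being described is roughly the
following sketch (a hypothetical "struct foo" protected by a "foo_lock",
not the real free_uid()/uidhash_lock code, which also deals with IRQ
flags):

#include <linux/refcount.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/slab.h>

static DEFINE_SPINLOCK(foo_lock);

struct foo {
	refcount_t count;
	struct hlist_node node;
};

static void foo_put(struct foo *f)
{
	/*
	 * refcount_dec_and_lock() decrements the count; only when it
	 * drops to zero does it take foo_lock and return true, with
	 * the lock held. The decrement provides RELEASE ordering plus
	 * a control dependency on success, rather than the full
	 * ordering of atomic_dec_and_lock().
	 */
	if (refcount_dec_and_lock(&f->count, &foo_lock)) {
		hlist_del(&f->node);
		spin_unlock(&foo_lock);
		kfree(f);
	}
}

The lock acquisition itself is unchanged by the conversion; only the
ordering guarantees of the decrement differ.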
As with the others, this looks correct to me. Andrew, this one is
for you again. :)
Acked-by: Kees Cook <keescook@xxxxxxxxxxxx>
-Kees
>
> Suggested-by: Kees Cook <keescook@xxxxxxxxxxxx>
> Reviewed-by: David Windsor <dwindsor@xxxxxxxxx>
> Reviewed-by: Hans Liljestrand <ishkamiel@xxxxxxxxx>
> Signed-off-by: Elena Reshetova <elena.reshetova@xxxxxxxxx>
> ---
> include/linux/sched/user.h | 5 +++--
> kernel/user.c | 8 ++++----
> 2 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/sched/user.h b/include/linux/sched/user.h
> index 0dcf4e4..2ca7cf4 100644
> --- a/include/linux/sched/user.h
> +++ b/include/linux/sched/user.h
> @@ -4,6 +4,7 @@
>
> #include <linux/uidgid.h>
> #include <linux/atomic.h>
> +#include <linux/refcount.h>
>
> struct key;
>
> @@ -11,7 +12,7 @@ struct key;
> * Some day this will be a full-fledged user tracking system..
> */
> struct user_struct {
> - atomic_t __count; /* reference count */
> + refcount_t __count; /* reference count */
> atomic_t processes; /* How many processes does this user have? */
> atomic_t sigpending; /* How many pending signals does this user have? */
> #ifdef CONFIG_FANOTIFY
> @@ -55,7 +56,7 @@ extern struct user_struct root_user;
> extern struct user_struct * alloc_uid(kuid_t);
> static inline struct user_struct *get_uid(struct user_struct *u)
> {
> - atomic_inc(&u->__count);
> + refcount_inc(&u->__count);
> return u;
> }
> extern void free_uid(struct user_struct *);
> diff --git a/kernel/user.c b/kernel/user.c
> index 9a20acc..f104474 100644
> --- a/kernel/user.c
> +++ b/kernel/user.c
> @@ -96,7 +96,7 @@ static DEFINE_SPINLOCK(uidhash_lock);
>
> /* root_user.__count is 1, for init task cred */
> struct user_struct root_user = {
> - .__count = ATOMIC_INIT(1),
> + .__count = REFCOUNT_INIT(1),
> .processes = ATOMIC_INIT(1),
> .sigpending = ATOMIC_INIT(0),
> .locked_shm = 0,
> @@ -122,7 +122,7 @@ static struct user_struct *uid_hash_find(kuid_t uid, struct hlist_head *hashent)
>
> hlist_for_each_entry(user, hashent, uidhash_node) {
> if (uid_eq(user->uid, uid)) {
> - atomic_inc(&user->__count);
> + refcount_inc(&user->__count);
> return user;
> }
> }
> @@ -169,7 +169,7 @@ void free_uid(struct user_struct *up)
> return;
>
> local_irq_save(flags);
> - if (atomic_dec_and_lock(&up->__count, &uidhash_lock))
> + if (refcount_dec_and_lock(&up->__count, &uidhash_lock))
> free_user(up, flags);
> else
> local_irq_restore(flags);
> @@ -190,7 +190,7 @@ struct user_struct *alloc_uid(kuid_t uid)
> goto out_unlock;
>
> new->uid = uid;
> - atomic_set(&new->__count, 1);
> + refcount_set(&new->__count, 1);
>
> /*
> * Before adding this, check whether we raced
> --
> 2.7.4
>
--
Kees Cook
Pixel Security