Re: [PATCH v2] creds: Convert cred.usage to refcount_t
From: Andrew Morton
Date: Fri Aug 18 2023 - 15:32:42 EST
On Fri, 18 Aug 2023 11:48:16 -0700 Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> On Fri, Aug 18, 2023 at 08:17:55PM +0200, Jann Horn wrote:
> > On Fri, Aug 18, 2023 at 7:56 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> > > On Thu, 17 Aug 2023 21:17:41 -0700 Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> > >
> > > > From: Elena Reshetova <elena.reshetova@xxxxxxxxx>
> > > >
> > > > atomic_t variables are currently used to implement reference counters
> > > > with the following properties:
> > > > - counter is initialized to 1 using atomic_set()
> > > > - a resource is freed upon counter reaching zero
> > > > - once counter reaches zero, its further increments aren't allowed
> > > > - counter schema uses basic atomic operations (set, inc, inc_not_zero, dec_and_test, etc.)
> > > >
> > > > Such atomic variables should be converted to a newly provided
> > > > refcount_t type and API that prevents accidental counter overflows and
> > > > underflows. This is important since overflows and underflows can lead
> > > > to use-after-free situations and be exploitable.
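[For illustration, the counter scheme described in the commit message maps onto the refcount_t API roughly as follows. This is only a sketch: the struct and the foo_init/foo_get/foo_put helpers are hypothetical, not the actual cred code.]

#include <linux/refcount.h>
#include <linux/slab.h>

/* Hypothetical object following the counter scheme above. */
struct foo {
	refcount_t usage;			/* was: atomic_t usage; */
};

static void foo_init(struct foo *f)
{
	refcount_set(&f->usage, 1);		/* counter starts at 1 */
}

static struct foo *foo_get(struct foo *f)
{
	/* increments after the counter has hit zero are refused */
	if (!refcount_inc_not_zero(&f->usage))
		return NULL;
	return f;
}

static void foo_put(struct foo *f)
{
	/* free the resource when the counter drops to zero */
	if (refcount_dec_and_test(&f->usage))
		kfree(f);
}

[Unlike the plain atomic ops, the refcount_* helpers saturate and WARN on overflow or underflow instead of wrapping, which is the protection being argued over in this thread.]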
> > >
> > > ie, if we have bugs which we have no reason to believe presently exist,
> > > let's bloat and slow down the kernel just in case we add some in the
> > > future?
> >
> > Yeah. Or in case we currently have some that we missed.
>
> Right, or to protect us against the _introduction_ of flaws.
We could cheerfully add vast amounts of code to the kernel to check for
the future addition of bugs. But we don't do that, because it would be
insane.
> > Though really we don't *just* need refcount_t to catch bugs; on a
> > system with enough RAM you can also overflow many 32-bit refcounts by
> > simply creating 2^32 actual references to an object. Depending on the
> > structure of objects that hold such refcounts, that can start
> > happening at around 2^32 * 8 bytes = 32 GiB memory usage, and it
> > becomes increasingly practical to do this with more objects if you
> > have significantly more RAM. I suppose you could avoid such issues by
> > putting a hard limit of 32 GiB on the amount of slab memory and
> > requiring that kernel object references are stored as pointers in slab
> > memory, or by making all the refcounts 64-bit.
>
> These problems are a different issue, and yes, the path out of it would
> be to crank the size of refcount_t, etc.
Is it possible for such overflows to occur in the cred code? If so,
that's a bug. Can we fix that cred bug without all this overhead,
with a cc:stable backport? If not then, again, what is the
non-handwavy, non-cargo-culty justification for adding this overhead
to the kernel?