Re: [PATCH v1 1/2] binderfs: implement "max" mount option

From: Greg KH
Date: Mon Dec 24 2018 - 06:46:04 EST


On Mon, Dec 24, 2018 at 12:09:37PM +0100, Christian Brauner wrote:
> On Sun, Dec 23, 2018 at 03:35:49PM +0100, Christian Brauner wrote:
> > Since binderfs can be mounted by userns root in non-initial user
> > namespaces, some precautions are in order. First, there needs to be a way
> > to set a maximum on the number of binder devices that can be allocated
> > per binderfs instance and, second, a way to reserve a reasonable chunk of
> > binderfs devices for the initial ipc namespace.
> > A first approach, as seen in [1], used sysctls similar to devpts but was
> > shown to be flawed (cf. [2] and [3]) since some aspects were unneeded.
> > This is an alternative approach which avoids sysctls completely and
> > instead switches to a single mount option.
> >
> > Starting with this commit, binderfs instances can be mounted with a limit
> > on the number of binder devices that can be allocated. The max=<count>
> > mount option serves as a per-instance limit. If max=<count> is set, then
> > at most <count> binder devices can be allocated in this binderfs
> > instance.
> >
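For illustration, here is a minimal userspace sketch of mounting a binderfs
instance with such a cap. The mount point path and the count are assumptions
for the example; the filesystem type name "binder" is taken from the binderfs
patches.

	/* Sketch: mount a binderfs instance limited to 5 binder devices. */
	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		/*
		 * "max=5" caps the number of binder devices that can be
		 * allocated in this binderfs instance.
		 */
		if (mount("binder", "/dev/binderfs", "binder", 0, "max=5") < 0) {
			perror("mount binderfs");
			return 1;
		}
		return 0;
	}
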
> > This makes it safe to bind-mount binderfs instances into unprivileged
> > user namespaces: userns root in a non-initial user namespace cannot
> > change the mount option as long as it does not own the mount namespace
> > the binderfs mount was created in, and hence cannot drain the host of
> > minor device numbers.
> >
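As a hedged sketch of the bind-mount scenario (the paths are illustrative
assumptions): the bind mount shares the original superblock, so the max=
limit set when the instance was first mounted continues to apply and cannot
be changed from the bind-mounted location alone.

	/*
	 * Sketch: expose an existing binderfs instance at a second location
	 * via a bind mount; the per-instance max= limit still applies.
	 */
	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		if (mount("/dev/binderfs", "/mnt/unpriv-binderfs", NULL,
			  MS_BIND, NULL) < 0) {
			perror("bind mount binderfs");
			return 1;
		}
		return 0;
	}
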
> > [1]: https://lore.kernel.org/lkml/20181221133909.18794-1-christian@xxxxxxxxxx/
> > [2]: https://lore.kernel.org/lkml/20181221163316.GA8517@xxxxxxxxx/
> > [3]: https://lore.kernel.org/lkml/CAHRSSEx+gDVW4fKKK8oZNAir9G5icJLyodO8hykv3O0O1jt2FQ@xxxxxxxxxxxxxx/
> > [4]: https://lore.kernel.org/lkml/20181221192044.5yvfnuri7gdop4rs@xxxxxxxxxx/
> >
> > Cc: Todd Kjos <tkjos@xxxxxxxxxx>
> > Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Christian Brauner <christian.brauner@xxxxxxxxxx>
>
> Right, I forgot to ask. Do we still have time to land this alongside the
> other patches in 4.21? :)

It's too late for 4.21-rc1, but let's see what happens after that :)

greg k-h