Re: [RFC][PATCH] exec: Use init rlimits for setuid exec
From: Linus Torvalds
Date: Fri Jul 07 2017 - 02:02:32 EST
On Thu, Jul 6, 2017 at 10:45 PM, Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> On Thu, Jul 6, 2017 at 10:36 PM, Andy Lutomirski <luto@xxxxxxxxxx> wrote:
>> Aren't there real use cases that use many megs of arguments?
> They'd be relatively new since the args were pretty limited before.
> I'd be curious to see them.
"megs" yes. "many megs" no.
The traditional kernel limit was 32 pages (so 128kB on x86, explaining
our MAX_ARG_STRLEN value).
We moved to the much nicer "two active VMs at the same time" model a
fairly long time ago, though; that was back in v2.6.23 or so, about
ten years ago.
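For reference, the effective limit since that change can be checked at
runtime; a quick sketch (assuming a glibc-ish userland with getconf):

```shell
# Query the per-exec argument/environment size limit in bytes. Since
# v2.6.23 this is derived from the stack rlimit (roughly a quarter of
# the soft limit), not from the old fixed 32-page constant.
getconf ARG_MAX
# The stack soft limit it is derived from (reported in kB):
ulimit -s
```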
I would have expected lots of scripts to have been written since then
that just end up going *far* over the old 128kB limit, because it's
really easy to do.
Things like big directories and the shell expanding "*" can easily be
a megabyte of arguments. I know I used to have scripts where I had to
use "xargs" in the past, and with the > 128kB change I just stopped,
because "a couple of megabytes" is enough for a lot of things where
128kB wasn't necessarily enough.
Oh, one example is actually the kernel source tree. I don't do it any
more (because "git grep" is much better), but I used to do things like
grep something $(find . -name '*.[ch]')
all the time.
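A rough way to see how big such an expansion gets is to count the bytes
in the expanded file list; a sketch (one byte per character plus one
separator per path, which is close to the NUL-terminated cost, ignoring
per-argument pointer overhead):

```shell
# Approximate bytes the expanded file list would occupy in the new
# process's argument area: wc -c counts each path's characters plus
# one newline, close to what the NUL-terminated strings cost.
find . -name '*.[ch]' | wc -c
```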
And that actually currently *just* overflows the 2MB argument size,
but used to work (easily) ten years ago. Oh, how the kernel has grown.
Yes, yes, *portably* you should always have done
   find . -name '*.[ch]' -print0 | xargs -0 grep something
but be honest now: that first thing is what you actually write when
you do some throw-away one-liner.
So 2+MB is still definitely something people can do (and probably *do* do).
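The failure mode when you do go over is an E2BIG from execve(); a
throw-away way to provoke it (assuming an 8MB stack soft limit, which
maps to a ~2MB argument limit, and that seq is available):

```shell
# Pin the stack rlimit in a subshell, then exec an external binary
# with half a million numeric arguments (several MB including the
# per-argument pointers); execve() fails with E2BIG and the shell
# reports "Argument list too long".
( ulimit -s 8192; /bin/true $(seq 1 500000) ) 2>&1 || true
```

This is exactly the case xargs avoids, by batching its input into
multiple exec calls that each stay under the limit.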