Re: [patch] remove MAX_ARG_PAGES

From: Ingo Molnar
Date: Fri Dec 29 2006 - 17:24:27 EST



[Cc:-ed Ulrich too]

* Linus Torvalds <torvalds@xxxxxxxx> wrote:

> On Fri, 29 Dec 2006, Russell King wrote:
> >
> > Suggest you test (eg) a rebuild of libX11 to see how it reacts to
> > this patch.
>
> Also: please rebuild "xargs" and install first. Otherwise, a lot of
> build script etc that use "xargs" won't ever trigger the new limits
> (or lack thereof), because xargs will have been installed with some
> old limits.

yeah, and I think the default chunking of xargs should still remain at
128K.

If it's fine for a script to get chunked input, and if the script has no
security relevance (xargs is fundamentally unsafe if any portion of the
VFS namespace it is run over is untrusted), then there's no problem with
the xargs limit staying at 128K.

> Perhaps more worrying is if compiling xargs under a new kernel then
> means that it won't work correctly under an old one.

xargs has its limit hardcoded AFAICS; it's based on:

#define ARG_MAX 131072 /* # bytes of args + environ for exec() */

I'd not change that just yet. The sysconf(3) manpage says ARG_MAX is
generally unreliable anyway:

BUGS
It is difficult to use ARG_MAX because it is not specified how much of
the argument space for exec() is consumed by the user's environment
variables.
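
To make that concrete, here's a quick user-space sketch (illustration
only, not part of the patch) of what the manpage is warning about: the
limit covers the argument strings and the environment together, so the
space actually available for argv depends on how large the caller's
environment happens to be.

/*
 * Illustration only: ARG_MAX (whether taken from the header or from
 * sysconf()) counts argv[] and envp[] together, so the bytes actually
 * left for arguments depend on the size of the current environment.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

extern char **environ;

int main(void)
{
	long arg_max = sysconf(_SC_ARG_MAX);	/* run-time value; may differ from the ARG_MAX #define */
	size_t env_bytes = 0;

	for (char **e = environ; *e; e++)
		env_bytes += strlen(*e) + 1;	/* each "NAME=value" string plus its terminating NUL */

	printf("sysconf(_SC_ARG_MAX):  %ld bytes\n", arg_max);
	printf("current environment:   %zu bytes\n", env_bytes);
	printf("roughly left for argv: %ld bytes\n", arg_max - (long)env_bytes);
	return 0;
}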

but ... as with every limit, it is always possible to write an
application that hardcodes a larger limit and then doesn't work when
running with the lower one. Would that have been a valid argument
against, say, raising the user stack limit from the historic 1MB?

Right now some of my (more stupid) scripts occasionally break if any
random portion of my VFS namespace grows past the silly 128K limit (and
it rarely has a tendency to shrink, sadly). I think that is just as much
of a legitimate problem as any naively written new script not working on
an older kernel with a huge VFS namespace. (In fact I could argue it is
a more legitimate problem than other stupid scripts not being backwards
compatible, not least because it is a problem with /my/ scripts ;-)
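
As a rough illustration of that failure mode (again, not part of the
patch): when the combined argument list handed to exec() is bigger than
the kernel allows, execve() fails with E2BIG - which is what the shell
reports as "Argument list too long" - while with MAX_ARG_PAGES removed
the same call would simply succeed.

/* Illustration only: build an argument list well past 128K and exec it. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	enum { NARGS = 4096 };
	static char big[128];
	char *argv[NARGS + 2];

	memset(big, 'x', sizeof(big) - 1);	/* 127 'x' characters + NUL */

	argv[0] = "true";
	for (int i = 1; i <= NARGS; i++)	/* ~4096 * 128 bytes = 512K of arguments */
		argv[i] = big;
	argv[NARGS + 1] = NULL;

	execv("/bin/true", argv);		/* returns only on failure */
	if (errno == E2BIG)
		fprintf(stderr, "execv: Argument list too long (E2BIG)\n");
	else
		perror("execv");
	return 1;
}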

We could try something like adding an ARG_MAX rlimit, but I think that
would be overdoing it ... we could also add a sysctl as a global limit -
equally pointless, because distros would likely tweak it up anyway, and
in any case neither measure really prevents the writing of stupid scripts.

Ingo