On Fri, 14 Jan 2000, Martin Buchholz wrote:
> AV> Oh, yes. And both assume that graph doesn't change under them. Which is
> AV> patently false in case of filesystem. You have a single-threaded code -
> AV> good for you, but VFS is _not_ single-threaded. And locking everybody out
> AV> of VFS for the time of symlink lookup is not an option. One of the reasons
> AV> being that symlink lookup may trigger automounter. Which will need to
> AV> access VFS.
>
> Again, the graph might change under you even in the case of a fixed 5
> symlink limit, so you already have to have code that deals with this.
> Given that today you have to have a secure, thread-aware, etc... file
> system graph walker, I still don't see why this can't be modified to a
> secure, thread-aware, tortoise-hare graph walker.
Aaaargh. A fixed limit means guaranteed termination, for one thing. And there are
other issues. Really.
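
For the record, the termination argument needs nothing but a counter. A
throwaway userspace sketch (readlink()-based, _not_ the kernel walker;
MAXHOPS and the other names are made up for this example, relative targets
are ignored for brevity):

/*
 * Throwaway userspace sketch: follow a chain of symlinks with a fixed
 * hop limit.  Termination is guaranteed by the counter alone, no matter
 * how the tree changes under us.  This is NOT the kernel's walker;
 * MAXHOPS is an arbitrary name and any readlink() failure is treated
 * as "not a symlink".
 */
#include <stdio.h>
#include <string.h>
#include <limits.h>
#include <unistd.h>

#define MAXHOPS 32

static int follow_links(const char *start, char *out, size_t outlen)
{
        char cur[PATH_MAX], next[PATH_MAX];
        ssize_t n;
        int hops;

        strncpy(cur, start, sizeof(cur) - 1);
        cur[sizeof(cur) - 1] = '\0';

        for (hops = 0; hops < MAXHOPS; hops++) {
                n = readlink(cur, next, sizeof(next) - 1);
                if (n < 0) {                    /* not a symlink: done */
                        strncpy(out, cur, outlen - 1);
                        out[outlen - 1] = '\0';
                        return hops;
                }
                next[n] = '\0';
                strncpy(cur, next, sizeof(cur) - 1);
                cur[sizeof(cur) - 1] = '\0';
        }
        return -1;                              /* limit hit: give up, ELOOP-style */
}

int main(int argc, char **argv)
{
        char buf[PATH_MAX];

        if (argc < 2)
                return 1;
        if (follow_links(argv[1], buf, sizeof(buf)) < 0)
                printf("gave up after %d hops\n", MAXHOPS);
        else
                printf("%s ends up at %s\n", argv[1], buf);
        return 0;
}

The loop bails out after MAXHOPS iterations no matter what happens to the
tree in the meantime - which is the whole point.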
> The only reason to have a recursive algorithm that needs to keep
> around a stack frame for each symlink hop instead of an iterative
> algorithm, would be that you need to look at every stack frame at each
> hop. Is that true?
No. There are rather nasty issues with the VFS (currently _not_ in a good
state), and fixing them will make those animals very different. Passing
through a symlink may have (and in many cases does have) side effects. Sorry.
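
For whatever it's worth, here is what the proposed tortoise-hare walker
amounts to, on a toy in-memory chain standing in for a symlink chain. It's
an illustrative sketch only - "node" and has_cycle() are invented names,
and the whole thing relies on the chain not changing while we walk it,
which is exactly the property the VFS doesn't give you:

/*
 * Toy model of a "tortoise-hare graph walker": Floyd's cycle detection
 * on an in-memory chain standing in for a symlink chain.  Detects a
 * loop with no hop limit - but only because this chain cannot change
 * while we walk it.  Illustrative only; not VFS code.
 */
#include <stdio.h>

struct node {                           /* stand-in for "symlink pointing at..." */
        struct node *next;              /* NULL means a regular file: lookup ends */
};

static int has_cycle(struct node *head)
{
        struct node *tortoise = head, *hare = head;

        while (hare && hare->next) {
                tortoise = tortoise->next;      /* one hop  */
                hare = hare->next->next;        /* two hops */
                if (tortoise == hare)
                        return 1;               /* loop: would be ELOOP */
        }
        return 0;                               /* chain terminates */
}

int main(void)
{
        struct node a = { 0 }, b = { 0 }, c = { 0 };

        a.next = &b; b.next = &c; c.next = &a;  /* a -> b -> c -> a */
        printf("looped chain:   %s\n", has_cycle(&a) ? "cycle" : "ok");

        c.next = NULL;                          /* a -> b -> c -> file */
        printf("straight chain: %s\n", has_cycle(&a) ? "cycle" : "ok");
        return 0;
}

No hop limit needed - but only because nothing can rewrite ->next between
the tortoise's read and the hare's.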
> >> I'm not going to try to figure out what races are possible, but it
> >> would be quite sad if userland couldn't do secure symlink-chasing at
> >> all. Having it done by the kernel should simply be an optimization.
>
> AV> Sorry, but realpath() _is_ dangerous unless you control every step of the
> AV> chain and are sure that nothing will change. That is, if you base
> AV> behaviour on its results. I'm less than happy with the concept of this
> AV> beast, BTW - WTF _is_ path to file, in the first place? If it's just a
> AV> normalization - sorry, it doesn't work (obviously - there are files with
> AV> i_nlinks > 1).
>
> Hard links are becoming more and more rare. I think they were a
> design mistake. Symbolic links are almost always better (except for
> performance).
No, they are not and they were not. Sorry, it _is_ UNIX. If you don't know
where the links are used - too bad.
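
Since the i_nlink > 1 point keeps getting lost: after link() there are two
equally good names for one inode, so "the" path of a file is simply not a
well-defined notion. Quick throwaway demo (file names made up, error
handling minimal):

/*
 * Throwaway demo of the i_nlink > 1 point: after link(), two equally
 * valid names refer to one inode, so "the" path of the file is not a
 * well-defined notion.  File names here are made up for the example.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
        struct stat st;
        int fd = open("orig", O_CREAT | O_WRONLY, 0644);

        if (fd < 0)
                return 1;
        close(fd);

        unlink("other");                        /* ignore failure if absent */
        if (link("orig", "other") < 0)          /* second name, same inode */
                return 1;

        if (stat("other", &st) == 0)
                printf("inode %lu has %lu links: which name is 'the' path?\n",
                       (unsigned long)st.st_ino, (unsigned long)st.st_nlink);

        unlink("orig");
        unlink("other");
        return 0;
}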
> Normalization always works for directories, because there are no hard links.
... unless we are doing Plan 9-ish namespaces, which _will_ happen. Unless
we are mounting the same thing twice. Unless we have a loopback mount.
Enough?
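
And back to realpath() being dangerous if you act on its result: the answer
is only valid for the instant it was computed. A sketch of the check-to-use
race (all names made up; in real life the repointing is done by somebody
racing you, not by the same process):

/*
 * Sketch of the check-to-use race with realpath(): the answer is only
 * valid for the instant it was computed.  All names here are made up;
 * in real life the "swap" is another process racing you.
 */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
        char resolved[PATH_MAX];
        int fd;

        /* set up: a symlink that currently points somewhere "approved" */
        close(open("approved", O_CREAT | O_WRONLY, 0644));
        close(open("forbidden", O_CREAT | O_WRONLY, 0644));
        unlink("entry");
        symlink("approved", "entry");

        /* step 1: "check" - resolve and decide the target is OK */
        if (!realpath("entry", resolved))
                return 1;
        printf("checked: entry -> %s\n", resolved);

        /* ...meanwhile, somebody repoints the link (here: ourselves) */
        unlink("entry");
        symlink("forbidden", "entry");

        /* step 2: "use" - open the original name, not the checked result */
        fd = open("entry", O_RDONLY);
        printf("opened entry, which now resolves elsewhere (fd=%d)\n", fd);
        if (fd >= 0)
                close(fd);

        unlink("entry"); unlink("approved"); unlink("forbidden");
        return 0;
}

Between the check and the open the name "entry" has quietly changed meaning,
and no amount of symlink-chasing in userspace closes that window.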
> AV> Why?
>
> AV> Wait a minute. Do you mean that you want _more_ than 32 nested
> AV> symlinks?
>
> Yes.
>
> Wait a minute. Do you mean you want to be able to edit files with
> lines longer than 1024 characters?
Oh, please! I'm quite fine with the 64K-characters-per-line limit in nvi,
thank you very much.
> Wait a minute. Do you mean you want to have more than 64k userids on
> your system?
Yes, it happens. And?
> Wait a minute. Do you mean you want to run command lines longer than
> 20k?
Frankly, I don't. xargs is there for a reason.
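
For the curious: the symlink nesting limit discussed above is visible from
userspace without reading a line of kernel source. Throwaway demo - the
names are made up, and the exact cutoff depends on the kernel you run it on:

/*
 * Build a straight chain of symlinks and open the head.  Past the
 * kernel's limit on following symlinks, open() fails with ELOOP
 * ("Too many levels of symbolic links").  The exact cutoff depends
 * on the kernel version; names here are made up.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>

#define CHAIN 64        /* deeper than any limit discussed in this thread */

int main(void)
{
        char name[32], target[32];
        int i, fd;

        close(open("link0", O_CREAT | O_WRONLY, 0644));         /* the real file */
        for (i = 1; i <= CHAIN; i++) {
                snprintf(name, sizeof(name), "link%d", i);
                snprintf(target, sizeof(target), "link%d", i - 1);
                unlink(name);
                symlink(target, name);                          /* linkN -> linkN-1 */
        }

        snprintf(name, sizeof(name), "link%d", CHAIN);
        fd = open(name, O_RDONLY);
        if (fd < 0)
                printf("open(%s): %s\n", name, strerror(errno)); /* expect ELOOP */
        else {
                printf("open(%s) succeeded\n", name);
                close(fd);
        }

        for (i = 0; i <= CHAIN; i++) {
                snprintf(name, sizeof(name), "link%d", i);
                unlink(name);
        }
        return 0;
}

Whatever the number turns out to be, the failure mode is a clean ELOOP, not
a hung lookup - which is rather the point of having a limit at all.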
> AV> <gaaack> Let me guess: you had the structure changing. And you were not
> AV> responsible for backups, right? Anyway, you have 32. And if somebody wants
> AV> more I'm more than comfortable with telling lusers where they can shove
> AV> their requests. Done that. More than once.
>
> Your job is to let the lusers run their system their way, even if you
> think it's stupid.
Like hell it is. If you are talking about admin stuff, the job
is to keep the system running, in the first place (and it's not mine these
days - recovery is sweet and all that). If you are talking about the VFS -
you've got the source, you want changes, you send patches.
> I don't see why backups are an issue, except that it's easier to back
> up 10 GB than 100 GB.
Umhm... And after the structure change you have all sorts of fun
chasing the new locations. And when space runs low you have all sorts
of fun moving the stuff around. And when a duhveloper (who has root, doncha
know?) fscks the system up (along with quite a few of those symlinks) you
have tons of fun restoring them. And then you have duhveloper foo, who has
set up his own quagmire of symlinks to stuff from duhvelopers bar, baz and
quux (mostly their pr0n archives) and did it in all sorts of interesting
ways. Guess who gets paged at 3am with a request to restore them to working
state, due yesterday? And then you get all sorts of funny stuff
when some lackwit (PHB's FOAF, ergo a holy cow you can't LART out
of existence) decides to move the stuff around and leaves $BIGNUM
copies all over the place. And then there is a bright duhveloper (who
can't be bothered to clean his fscking .rhosts, BTW), and said luser brings
in his own box with a k3wl OS. He "knows computer stuff", so he admins the
thing himself. Unfortunately he calls you up whenever things fall
apart. And they do, since his NFS client barfs on the hell you have on the
server. Not that you could honestly call that its fault. Aaaarrrgh...
Been there, done that, still have the scars.
> The normal use for deeply nested symlinks is sparse trees, mostly
> populated by symlinks, and some real files. These trees are linked
> together in a tree of trees.
Right. And if you get to 32 layers of indirection (we are talking about
the limit on recursion, don't forget it) you are using the wrong tools.
Not to mention the fact that keeping that structure in _your_ memory
(the wetware kind) is a nightmare.
> You know, Emacs picks up a lot of junk, and it's not the fault of
> `Emacs proper'.
>
> If you look at the size of the Lisp interpreter at the core of Emacs,
> it's probably stayed the same size for years...
Let's stop it, OK? I'm less than happy about digging out the EMACS source
and picking out a collection of sucking code. I can do that, but let's do it
outside of l-k. I suspect that bugs-glibc is also not the proper place for
that. Besides, I've got a rather impressive collection of lossage on my
hands right now ;-/