Also, being self-taught means I might make some technical mistakes in my
assertions. Remember, flames go to /dev/null, and reasonable, informative
corrections will be welcomed.
On with it then.... :)
First I ask, what is the point? I can see two potential applications:
1) You can use regular file tools to manipulate directories. Is this really
important, or just a parlor trick to amuse one's friends? What is so
evil (or hard) about:
(cd /srcdir && tar cplf - .) | (cd /destdir && tar xpvf -)
or cp -a (for Linux) or cp -Rf (for all other *nixes)
2) You can use it to break down complex files into a series of less complex,
but related files, for applications such as word processors, desktops,
spreadsheets, bla bla bla. This would be for the purposes of:
- splitting text, image, other-data(tm) streams
- embedding icons into files, and other GUI related metadata
storage.
- keeping related metadata under one 'umbrella'
- the example cbbrowne@godel.brownes.org proposed about
keeping RCS information under one file with multiple
forks, I admit, looks cool.
Next, what are some of the issues?
1) As has been stated by several people before, anything that comes out of
this in the form of a Linux-specific kernel change will go the way of the
dinosaur, for the following reasons:
- Many won't consider adopting it due to the non-portability
- and if it goes into Linux, other vendors aren't likely to
implement it in their OSes due to the inevitable GPL nature
of the code. (Remember, not everyone has warmed up to open
source, and of those that have, many are still hostile to
the GPL.)
- Users will reject it, as it will break their existing programs.
- I don't think examples are needed here; they are self-evident.
But I shall provide one anyway: I actually rely on 'mv * /newdir'
NOT moving directories. The 'cat' vs. 'cp' issue raised by
cbbrowne@godel.brownes.org also shows such things to be unworkable.
2) This can all be done in user space (save the parlor trick of using cp to
copy a directory). Theodore Y. Ts'o's document on this subject shows it to
be entirely true. The difference is I don't even think a libc hack is
appropriate (save the case of the POSIX people creating some kind of
resource-fork standard).
Why should the kernel and/or libc be responsible for being able to
deconstruct a file into its base components? That's what an
application-level API is for! I don't really care that I won't be able to
use 'xv' to view the images in my word-processing files. If I want to
look at them, I'll fire up the word processor.
Should I actually have a need to extract said files, then (assuming all
open source software is involved) I have the APIs for the word-processor
file format, and I can use those to perform this (you have to admit)
special-purpose task.
I guess the conclusion to this is that applications should determine how
they store their data, not the kernel and/or libc. If the GNOME/KDE
people want to write a library that provides a globbed format (like a
mini-virtual filesystem), and standardize on that, then great. They could
have tools to extract/stuff files into such containers, and they could even
integrate support for looking at these files as directories (which I know
for a fact GNOME's file manager does NOW for tar files).
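To make that a bit more concrete, here is a toy sketch of such a globbed
format, done entirely in user space: several named members packed into one
ordinary file. The record layout and the glob_* names are invented for this
illustration only; they are not any real GNOME/KDE library, just the shape
of the idea.

/*
 * Toy "globbed format": several named members in one ordinary file,
 * managed entirely in user space.  The record layout (name, length,
 * payload) is made up for this sketch.
 */
#include <stdio.h>

/* Append one named member to the container. */
static int glob_add(FILE *gf, const char *name, const void *data, long len)
{
    fprintf(gf, "%s\n%ld\n", name, len);
    return fwrite(data, 1, len, gf) == (size_t)len ? 0 : -1;
}

/* Walk the container, printing each member's name and size. */
static void glob_list(FILE *gf)
{
    char name[256];
    long len;

    rewind(gf);
    while (fscanf(gf, "%255[^\n]\n%ld\n", name, &len) == 2) {
        printf("  member %-12s %ld bytes\n", name, len);
        fseek(gf, len, SEEK_CUR);   /* skip over the member's payload */
    }
}

int main(void)
{
    FILE *gf = fopen("document.glob", "w+b");
    if (!gf)
        return 1;

    glob_add(gf, "text", "Hello, world.\n", 14);
    glob_add(gf, "icon", "fake icon bytes", 15);
    glob_list(gf);

    fclose(gf);
    return 0;
}

The point being that the kernel only ever sees one file; everything else is
the application's (or the desktop library's) business.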
But when I use cp/mv/rm/tar/cat/less/whatever, that file should be
treated as just that, a file, nothing less, nothing more. If I want to
peek further into it, then I can do so with the tools and APIs provided
by whatever created it.
Another thing that might be worth looking at (I've not seen any discussion
WRT this yet) is how existing resource-forked filesystems work under Linux.
I've just looked at the documentation for HFS (Apple) for the Linux kernel.
My first impression: it's ugly. All kinds of dot files for this, that, and
the other thing. I know I don't want my Linux to work that way. And I know
that much of what is being discussed is an implementation that 'hides' such
things, but how can the kernel be expected to know the proper way to provide
a 'default' single-stream view for all possible fork formats, now and in the
future?
My second impression is that there are only three forks:
- data (code segment?)
- resources (text segment?)
- metadata (Finder stuff)
The above could be wrong; I don't use Macs much. But if this is the case,
then data and resources are already combined for the case of executable
files, and for the case of the Finder stuff, whatever GUI is used could
implement such things in its own way. (I.e., a 'registry'-like database
could take care of launchers, etc., and something like an icon could be
stored in a dot file of some sort.)
For those who absolutely believe they cannot live without resource forks,
which I've also seen termed streams (ugh, jargon namespace conflict ;),
would it not be possible to use traditional streams on the back end, with
some kind of stream (de)multiplexing library? (Similar to the idea of the
mini-virtual filesystem, but more stringent about looking like real file
streams, i.e. stream_open, stream_read, stream_write, all looking like
their POSIX counterparts, but taking an additional argument on open about
WHICH stream....?)
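Purely as an illustration, something along these lines might do. The
stream_open/stream_read/stream_write names come from the paragraph above;
the sidecar-file back end (a named fork of 'file' stored as 'file..fork')
is just one arbitrary choice made for this sketch, not a proposal for how
it should really be done.

/*
 * Minimal sketch of a user-space stream (de)multiplexing layer.
 * A NULL or "data" fork is the file itself; any other fork name is
 * mapped onto an ordinary sidecar file next to it.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Open a named fork of 'path': open(2) plus a fork-name argument. */
int stream_open(const char *path, const char *fork, int flags, mode_t mode)
{
    char buf[4096];

    if (fork == NULL || strcmp(fork, "data") == 0)
        return open(path, flags, mode);

    if (snprintf(buf, sizeof(buf), "%s..%s", path, fork) >= (int)sizeof(buf))
        return -1;
    return open(buf, flags, mode);
}

/* The rest of the calls are simply their POSIX counterparts. */
ssize_t stream_read(int fd, void *buf, size_t len)        { return read(fd, buf, len); }
ssize_t stream_write(int fd, const void *buf, size_t len) { return write(fd, buf, len); }
int     stream_close(int fd)                              { return close(fd); }

int main(void)
{
    /* Write an "icon" fork next to report.doc, then the data fork itself. */
    int fd = stream_open("report.doc", "icon", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd >= 0) {
        stream_write(fd, "fake icon bytes", 15);
        stream_close(fd);
    }

    fd = stream_open("report.doc", NULL, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd >= 0) {
        stream_write(fd, "the actual document\n", 20);
        stream_close(fd);
    }
    return 0;
}

The back end could just as easily pack everything into one real file (like
the globbed format above); the application-visible API wouldn't change.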
Anyhow, I'll leave it there for now. But hopefully I've made some people
think about this differently, which was my intent.
From a proud user of Linux for 4+ years.... Thanks. :)
PS: Regarding the semantics of filesystems needing innovation, there's an
old saying that goes, 'If it ain't broke, don't fix it.'
-----------------------------------------------------------------------------
/\ Joe Hohertz - Senior Systems Administrator
/ \ jhohertz@golden.net http://home.golden.net/~jhohertz
/____\ Golden Triangle Online
-----[The opinions expressed by me do not necessarily match those of GT]-----