Re: Implementing Meta File information in Linux

Albert D. Cahalan (acahalan@cs.uml.edu)
Tue, 1 Sep 1998 23:59:44 -0400 (EDT)


Chris Wedgwood writes:
> On Tue, Sep 01, 1998 at 10:41:26PM -0400, Albert D. Cahalan wrote:

>> Long filenames (over 14 characters) broke stuff everywhere.
>
> True. But code expecting 14 or fewer character filenames is/was
> broken anyhow. Also, way back then, linux was a mere shadow of its
...
> Oh, and at the time many other OSs never had the 14 character
> restriction.

I don't really mean Linux on the Minix filesystem. The 14-character
limit was once common to all unix systems, so the apps were not
broken in their assumption. We'd have the same severe trouble on
Linux today if the limit grew again, yet we do not consider our
apps broken.

>> What about tar though? If you must restore from a backup, wouldn't
>> you want to keep the immutable bits? How would you copy a directory
>> tree with ext2 attributes intact?
>
> The whole point of ext2fs attributes is that things like tar, cp, etc.
> don't know about them!

That is crazy. The same could be said of standard attributes.

> If I want to backup (or more to the point restore) files where the
> immutable bit has been set, then some manual intervention is going
> to be required, which is precisely why I set the immutable bit in
> the first place!

You set the immutable bit because you want more work?
I'd say you are paid by the hour.

> That's not to say dedicated backup programs shouldn't know about the
> immutable bit, because in this case it could be smarter than tar and
> utilize other precautions, but I'd be positively slutted if I did a
> 'tar zvx' and it clobbered a file I'd marked immutable for some
> special reason.

If tar fails to mark files immutable, your restored filesystem
is corrupt. If tar can clobber an immutable file, I'd say it is
your own damn fault for giving tar permission to do so.

>> Now, you want to haul several gigabytes of data over 10baseT
>> and lose all the information that Linux can't understand???
>
> Not at all... but that doesn't make it a kernel space solution

So, explain how you could avoid the network traffic.

> and either way, cp is a `dumb' program that doesn't know
> about all these features.

It is dumb because there is no way for it to support the features.
If there were system calls to handle the job, cp would use them.

> The linux-kernel doesn't need, nor should it have, every imaginable
> feature or misfeature that another OS has got.

Compatibility is good. Without it, it is hard to replace every
other OS with Linux.

>> You'd have the server uncompress and recompress all the data too.
>
> Compression != forks. It's an entirely different matter.

The system calls needed for fork support overlap with those
needed for efficient compressed file support. In this case,
both would be greatly helped by a file copy system call.

>> That looks like a 30-minute operation, plus data loss
>> which causes a security hole. No, that is not OK.
>>
>> Perhaps Bob should bypass the kernel filesystem. He could do raw
>> network I/O to reach the server using a setuid /bin/cp.
>>
>>>> Windows has a backup API that handles all the details. Microsoft
>>>> could add weird new features to the filesystem without breaking
>>>> backup tools. The backup API requires special privilege and lets
>>>> the backup admin avoid disturbing _any_ of the time stamps.
>>>
>>> This can be done in userspace.
>>
>> Sure: unmount the filesystem and hit the raw device. That kind of
>> downtime is simply not acceptable for many business uses.
>
> wtf are you talking about?

ACLs exist. Forked files exist. Unusual file attributes exist.
How do you propose to back up and restore everything? Don't forget
to cover all 3 time stamps without interrupting atime updates
during the backup. Filesystem-specific answers are generally bad.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html