Oh, heh, that's one of the features I actually liked about Sun's filesystem
that was never implemented on the SGI systems I adminned. I've still got
that.
> If the undelete stuff was fixed, I'd want to have a cron job that scanned
> the undeletable deleted files, and really got rid of them after a set
> time (depending on your available disk space, anything from 1 hour to weeks).
>
> Also you'd have to make it a real pain for users to undelete stuff if the
> files aren't part of their quota (i.e. just keep the file's deletion date
> under 5 days, and it's never purged: instant free disk space).
Well, is it really necessary to set the timeout at longer than a day
on deleted files? When I've wanted to recover files it's usually been
"oh, crap, I just typed rm -rf on the wrong directory, DOH!" followed by
frantically hitting the break key to stop it before it purges everything;
usually my recovery time would be less than 15 minutes. I can see that if
you wrote a cron job that ran at night you might want up to a day to
undelete files, but I can't imagine a case for a longer recovery period.
Of course, I haven't thought about this too hard either :)
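
Just to make that concrete, here's roughly the kind of cron-driven purge
I'm picturing. This is only a sketch: the /home/.deleted layout and the use
of mtime as the deletion time are assumptions of mine, not how any real
undelete implementation necessarily works.

    #!/usr/bin/env python
    # Hypothetical purge pass: really remove files that were "deleted"
    # more than MAX_AGE seconds ago.  Assumes deleted files are parked
    # under /home/.deleted with mtime set to the deletion time.
    import os, time

    DELETED_DIR = "/home/.deleted"   # assumed location
    MAX_AGE = 24 * 60 * 60           # keep deleted files for one day

    now = time.time()
    for root, dirs, files in os.walk(DELETED_DIR, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if now - os.path.getmtime(path) > MAX_AGE:
                os.remove(path)
        for name in dirs:
            path = os.path.join(root, name)
            if not os.listdir(path):     # prune directories that emptied out
                os.rmdir(path)

Run that nightly from root's crontab and the up-to-a-day window falls out
naturally; crank MAX_AGE down if disk space is tighter.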
> You could make sure it's part of their quota, and it's not a problem,
> except when you have users mailing that they can't save even though
> they've got no files in their home directory (though you could allow them
> access to the purge command in the same way as passwd works (i.e. 'passwd' is
> me, 'passwd user' is user, and only root can specify a user name)).
Hmm... that's an interesting idea. A quota with soft and hard limits, along
with an 'undelete' space limit? That would make the undelete feature easier
to accept, I think. That way the people who didn't want the feature could
set the 'undelete' space limit to 0.
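
Just to sketch what I'm imagining (purely hypothetical field names; nothing
like this exists in any quota code I know of):

    # Hypothetical per-user quota record with a third, separate limit
    # for deleted-but-recoverable files.  Setting undelete to 0 turns
    # the feature off for that user.
    class UserQuota:
        def __init__(self, soft, hard, undelete):
            self.soft = soft            # soft limit on live data (blocks)
            self.hard = hard            # hard limit on live data (blocks)
            self.undelete = undelete    # space reserved for deleted files
            self.live_used = 0
            self.deleted_used = 0

        def on_delete(self, blocks):
            """Deleted blocks leave the live count and go into the
            undelete pool; whatever doesn't fit gets purged for real."""
            self.live_used -= blocks
            keep = min(blocks, self.undelete - self.deleted_used)
            self.deleted_used += keep
            return blocks - keep        # blocks to free immediately

That way a deleted file never costs the user anything beyond the undelete
pool, and the admins who hate the whole idea just hand out undelete = 0.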
-- Bob Glamm | "You can't do a `goto' to a block Email: glamm@mountains.ee.umn.edu | that has been optimized away. URL: http://www.cs.umn.edu/~glamm | Darn." Home: (612)623-9437 Work: (612)625-7876 | - from the perltrap(1) manpage