Unless I've accidentally run my portable (written in straight ANSI C...works on
Unix, Windows, and Mac!) file system fragmenter on it.
Basic algorithm to create a highly fragmented file on pretty much any
file system:
while file system not full
    create random small files
delete one of them
open target file for writing
while target file not fully written
    write until error
    delete one of the small files at random
close target file
delete all of the small random files that remain
Are there any file systems around that will manage to resist fragmentation
if subjected to that?
(No, I'm not insane. I wrote the fragmenter so I could test a Mac background
defragmenter I also wrote.)
--Tim Smith
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/