> .. This gives me an idea: Is it possible to speed up things even more, by
> making a "tmpfs" that wouldn't care a bit if the data is lost at boot, it
> would only care about maximal performance?
Yeah, with a tmpfs you will lose everything at boot. But with Linux
you're not necessarily going to get big speedups by redesigning things
with that assumption, because most of the slowness of real file
systems is masked by the buffer and page cache. But the temporary
nature should simplify a lot of the design.
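As a point of reference, this is essentially what a RAM-backed filesystem looks like in use today. A minimal sketch, assuming a modern Linux kernel with tmpfs support and root privileges (the mount point and size are arbitrary choices for illustration, not part of the proposal above):

```shell
# Mount a 64 MB RAM-backed filesystem; its contents vanish at
# unmount or reboot, exactly the "don't care at boot" semantics.
# Requires root; /mnt/fast is an arbitrary mount point.
mkdir -p /mnt/fast
mount -t tmpfs -o size=64m tmpfs /mnt/fast

# Files here live in the page cache (and swap, if needed),
# never on disk, so there is no on-disk layout to slow things down.
dd if=/dev/zero of=/mnt/fast/scratch bs=1M count=16

# Unmounting discards everything.
umount /mnt/fast
```

Note that because the data only ever sits in memory, there is no allocation-on-disk step at all, which is why the cache-masking argument above matters: an on-disk filesystem already gets most of this benefit for hot data.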
Where are the bottlenecks in Linux file systems anyway?
> And maybe kernel should do some
> defragmenting of this if the disk isn't busy.
I thought about this, but it might be better to do later on, after
it's shown that the whole idea is doable and worthwhile (i.e., in a
second rev). Since most files should be short-lived, defragmenting
may matter less than keeping the files contiguous in the first place.
I don't know how far I'll get. Basically, one of the important
reasons to write it is so I'll get more familiar with Linux
internals (is there any kernel-projects list?).