Re: LogFS merge

From: Linus Torvalds
Date: Fri May 02 2008 - 16:34:24 EST




On Fri, 2 May 2008, Jörn Engel wrote:
>
> Currently performance sucks badly on block device flashes (usb stick,
> etc.) when creating/removing/renaming files. The combination of logfs
> and the built-in logic can result in 1-2MB of data written to create a
> single empty file. Yuck!

Can you talk about why, and describe these kinds of things? Is it just
because of deep directory trees and having to rebuild the tree from the
root up, or is it something else going on?
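(For readers following along: the "rebuild from the root up" scenario refers to
copy-on-write wandering trees, where each parent stores the on-flash address of
its child, so dirtying a leaf ripples a rewrite all the way to the root. A toy
model, with a made-up node size and depth, not LogFS's actual on-disk layout:)

```python
# Toy model of write amplification in a wandering-tree filesystem
# (a hedged sketch -- NODE_SIZE and the depths below are assumptions,
# not LogFS's real format).

NODE_SIZE = 4096  # hypothetical size of one tree node on flash


def bytes_written_for_create(tree_depth: int) -> int:
    """Creating one empty file dirties the leaf plus every ancestor
    up to the root: tree_depth node writes in total, since each
    parent must be rewritten to point at its relocated child."""
    return tree_depth * NODE_SIZE


# A path nested 5 levels deep, with both an inode tree and a dentry
# tree needing a root-to-leaf rewrite:
print(bytes_written_for_create(5) * 2)  # 40960 bytes for one empty file
```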

> Fragmentation is neither actively avoided nor actively enforced.

I was more thinking about the fragmentation in terms of how much free
space you need for reasonable performance behavior - these kinds of things
tend to easily start behaving really badly when the disk fills up and you
need to GC all the time just to make room for new erase blocks for the
trivial inode mtime/atime updates etc.

Maybe logfs doesn't have that problem for some reason, but many log-structured
filesystems have rules like "we consider the filesystem full when it goes over
90% theoretical fill", and it's interesting to know whether logfs does too.
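(The reason such a 90% rule shows up so often: in a log-structured store, the
garbage collector must copy live data out of a segment before erasing it, and
that copy cost diverges as fill approaches 100%. A back-of-the-envelope model,
not LogFS's actual GC policy:)

```python
# Toy model of GC copy cost in a log-structured store (a sketch under
# simplifying assumptions: uniform segment utilization u, no hot/cold
# separation). Reclaiming a segment whose live fraction is u copies
# u*S bytes and frees only (1-u)*S, so the cost per byte freed is
# u / (1 - u) -- which blows up near full.

def gc_copy_cost(utilization: float) -> float:
    """Bytes copied by the GC per byte of free space reclaimed."""
    assert 0.0 <= utilization < 1.0
    return utilization / (1.0 - utilization)


for u in (0.5, 0.8, 0.9, 0.95):
    print(f"fill {u:.0%}: copy {gc_copy_cost(u):.1f} bytes per byte freed")
```

At 90% fill the GC already copies 9 bytes for every byte it frees, which is why
trivial mtime/atime updates get so expensive on a nearly-full device.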

> I guess the above could go into Documentation/filesystems/logfs.txt.
> And some more.

I did try looking at gitweb to see if I could find a documentation
file, but I didn't find anything.

Linus