Re: [PATCH 00/16] f2fs: introduce flash-friendly file system
From: Jaegeuk Kim
Date: Sat Oct 13 2012 - 08:37:23 EST
2012-10-13 (Sat), 13:26 +0900, Namjae Jeon:
> Is there a high possibility that the storage device can be rapidly
> worn out by the cleaning process? e.g., severe fragmentation caused
> by creating and removing small files.
>
Yes, the cleaning process in F2FS induces additional writes, so it can
wear out flash storage more quickly.
However, how does this compare with traditional file systems?
As we all know, the FTL has a wear-leveling issue too, caused by its
garbage collection overhead, which is fundamentally similar to the
cleaning overhead in LFS or F2FS.
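To put a rough number on that shared overhead, here is a
back-of-the-envelope model (my own illustrative sketch, not measured
data or F2FS code): under uniform random overwrites, a greedy cleaner
that picks segments with valid-block utilization u must copy u valid
blocks to reclaim (1 - u) free ones, so write amplification is about
1 / (1 - u), in either layer.

/*
 * Toy model: write amplification of log cleaning vs. segment
 * utilization.  WA ~= 1 / (1 - u) under greedy cleaning with
 * uniform random overwrites.  Illustrative only.
 */
#include <stdio.h>

int main(void)
{
	double u;

	for (u = 0.5; u <= 0.9; u += 0.1)
		printf("utilization %.1f -> write amplification ~%.1f\n",
		       u, 1.0 / (1.0 - u));
	return 0;
}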
So, what's the difference between them?
IMHO, the major factor in reducing the cleaning or garbage collection
overhead is how efficiently hot and cold data are separated.
So, which is the better layer to achieve that, the FTL or the file
system?
I think the answer is the file system, since it has much more
information about the hotness of all the data, while the FTL doesn't
know, and can hardly figure out, that kind of information.
Therefore, I think the LFS approach is more beneficial for extending
the lifetime of the storage than the traditional one.
And, in order to do this perfectly, one criterion is key: the alignment
between the FTL and F2FS.
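As a concrete illustration, a file system can route blocks by expected
update frequency using hints the FTL never sees.  The sketch below is
hypothetical C of my own (the names block_hint and classify_block and
the threshold are made up for illustration, not F2FS code):

/*
 * Hypothetical sketch of hint-based hot/cold separation at the file
 * system level.  Blocks expected to die together go to the same log,
 * so the cleaner later finds segments that are mostly invalid.
 */
#include <stdio.h>
#include <stdbool.h>

enum temp { HOT, WARM, COLD };

/* Hints a file system already has; an FTL only sees raw LBAs. */
struct block_hint {
	bool is_dir_entry;	/* directory data changes often */
	bool is_node;		/* inode/index blocks change often */
	bool is_cold_file;	/* e.g. multimedia, written once */
	unsigned long rewrites;	/* observed rewrite count */
};

static enum temp classify_block(const struct block_hint *h)
{
	if (h->is_dir_entry || h->is_node || h->rewrites > 8)
		return HOT;	/* frequently updated log */
	if (h->is_cold_file)
		return COLD;	/* write-once log */
	return WARM;		/* everything else */
}

int main(void)
{
	struct block_hint dentry = { .is_dir_entry = true };
	struct block_hint video = { .is_cold_file = true };

	printf("dentry -> %d, video -> %d\n",
	       classify_block(&dentry), classify_block(&video));
	return 0;
}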
> And you told us only the advantages of f2fs. Would you tell us the
> disadvantages?
I think there is a scenario like this:
1) One big file is created and its data is written sequentially.
2) Many random writes are done across the whole file range.
3) The user discards the cached data via "drop_caches" or a reboot.
At this point, I worry about the sequential read performance due to the
fragmentation: the blocks of the file are now scattered across the log.
I don't know how frequently this use case happens, but it is one of the
cons of the LFS approach (the toy model below illustrates the effect).
Nevertheless, I think the performance could be enhanced by cooperating
with the readahead mechanism in the VFS.
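Here is that toy model (my own sketch, not F2FS code): logical blocks
are mapped into an append-only log, written sequentially, overwritten
randomly, and then we count the physical extents a sequential reader
must visit afterwards.

/*
 * Toy model of scenario 1)-3): sequential write, random overwrites
 * into an append-only log, then count the physical extents left for
 * a sequential reader after caches are dropped.
 */
#include <stdio.h>
#include <stdlib.h>

#define NBLKS 1024		/* logical blocks in the big file */

int main(void)
{
	static long phys[NBLKS];	/* logical -> physical mapping */
	long next_free = 0;		/* tail of the append-only log */
	long i, extents = 1;

	/* 1) sequential write: logical block i lands at physical i. */
	for (i = 0; i < NBLKS; i++)
		phys[i] = next_free++;

	/* 2) random overwrites each relocate a block to the log tail. */
	srand(42);
	for (i = 0; i < NBLKS / 2; i++)
		phys[rand() % NBLKS] = next_free++;

	/* 3) count breaks in physical contiguity seen by a reader. */
	for (i = 1; i < NBLKS; i++)
		if (phys[i] != phys[i - 1] + 1)
			extents++;

	printf("%d logical blocks span %ld extents\n", NBLKS, extents);
	return 0;
}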
Thanks,
>
> Thanks.
--
Jaegeuk Kim
Samsung