On Wed, 8 October 2008 16:51:46 -0400, Chris Snook wrote:
> Stefan Monnier wrote:
>
> Writes to magnetic disks are functionally atomic at the sector level.
> With SSDs, writing requires an erase followed by rewriting the sectors
> that aren't changing.  This means that an ill-timed power loss can
> corrupt an entire erase block, which could be up to 256k on some MLC
> flash.  Unless [...]
What makes you think that? The standard mode of operation in El Cheapo
devices is to write to a new eraseblock first, then delete the old one.
An ill-timed power loss results in either the old or the new block being
valid as a whole. This has been the standard ever since you could buy
4MB compactflash cards.
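
To make that ordering concrete, here is a toy in-memory sketch in C of
the out-of-place update scheme described above.  All names and sizes in
it are invented for illustration; this is not the MTD API or any real
FTL, just the write-new-then-retire-old sequence:

/*
 * Toy simulation of out-of-place eraseblock updates.  All names and
 * sizes here are assumptions for the example; this is not the MTD API.
 */
#include <stdint.h>
#include <string.h>

#define ERASEBLOCK_SIZE (128 * 1024)   /* example eraseblock size */
#define NUM_BLOCKS      16

struct flash_dev {
	uint8_t  blocks[NUM_BLOCKS][ERASEBLOCK_SIZE];
	uint32_t map[NUM_BLOCKS - 1];  /* logical -> physical eraseblock */
	uint32_t spare;                /* one physical block kept free   */
};

/* Simulated erase: flash erase sets all bits to 1. */
static void flash_erase(struct flash_dev *dev, uint32_t pblock)
{
	memset(dev->blocks[pblock], 0xff, ERASEBLOCK_SIZE);
}

/* Simulated program: write a full eraseblock's worth of data. */
static void flash_program(struct flash_dev *dev, uint32_t pblock,
			  const uint8_t *data)
{
	memcpy(dev->blocks[pblock], data, ERASEBLOCK_SIZE);
}

/* Rewrite one logical eraseblock without overwriting it in place. */
static void update_eraseblock(struct flash_dev *dev, uint32_t lblock,
			      const uint8_t *new_data)
{
	uint32_t old = dev->map[lblock];
	uint32_t new = dev->spare;

	flash_erase(dev, new);
	flash_program(dev, new, new_data);

	/*
	 * The mapping is switched only after the new copy is complete.
	 * Power loss before this point: the old block is still valid.
	 * Power loss after it: the new block is valid.  Either way a
	 * whole, consistent eraseblock survives.
	 */
	dev->map[lblock] = new;
	dev->spare = old;   /* the old block becomes the next spare */
}

int main(void)
{
	static struct flash_dev dev;            /* 2 MiB, so keep it static */
	static uint8_t data[ERASEBLOCK_SIZE];
	uint32_t i;

	for (i = 0; i < NUM_BLOCKS - 1; i++)
		dev.map[i] = i;
	dev.spare = NUM_BLOCKS - 1;

	memset(data, 0xab, sizeof(data));
	update_eraseblock(&dev, 3, data);       /* rewrite logical block 3 */
	return 0;
}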
> logfs tries to solve the write amplification problem by forcing all
> write activity to be sequential.  I'm not sure how mature it is.
Still under development. What exactly do you mean by the write
amplification problem?
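
For context only (this may or may not be what was meant above): "write
amplification" is usually the ratio of bytes physically written to
flash versus bytes the host asked to write.  A toy calculation, using
the 256k figure quoted above and an assumed 4 KiB host write:

/*
 * Toy illustration of one common meaning of "write amplification".
 * The sizes are assumptions for the example; only the 256 KiB figure
 * comes from the quoted mail.
 */
#include <stdio.h>

int main(void)
{
	const double host_write  = 4.0 * 1024;    /* one 4 KiB update          */
	const double erase_block = 256.0 * 1024;  /* whole eraseblock rewritten */

	/* Naive in-place update: read-modify-write of the full eraseblock. */
	printf("in-place update: amplification = %.0fx\n",
	       erase_block / host_write);

	/* Purely sequential (log-structured) writes: ~1x, ignoring GC. */
	printf("sequential log:  amplification = ~1x (plus GC overhead)\n");
	return 0;
}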
> > Or is there some hope for SSDs to provide access to the MTD layer in
> > the not too distant future?
>
> I hope not.  The proper fix is to have the devices report their
> physical topology via SCSI/ATA commands.  This allows dumb software to
> function correctly, albeit inefficiently, and allows smart software to
> optimize itself.  This technique also helps with RAID arrays,
> large-sector disks, etc.
Having access to the actual flash would provide a large number of
benefits. It just isn't a safe default choice at the moment.
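
As an aside on the topology-reporting idea: on kernels that export I/O
topology, the reported values show up under /sys/block/<dev>/queue/ and
"smart software" can read them from there.  A sketch (the attribute
names exist on recent kernels, but whether a given device fills them in
depends on the drive and driver; "sda" is just an example):

/*
 * Read a few I/O topology attributes from sysfs.  Attribute names are
 * real on recent kernels; the device name is an example.
 */
#include <stdio.h>

static long read_queue_attr(const char *dev, const char *attr)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const char *dev = "sda";   /* example device name */

	printf("logical_block_size:  %ld\n", read_queue_attr(dev, "logical_block_size"));
	printf("physical_block_size: %ld\n", read_queue_attr(dev, "physical_block_size"));
	printf("minimum_io_size:     %ld\n", read_queue_attr(dev, "minimum_io_size"));
	printf("optimal_io_size:     %ld\n", read_queue_attr(dev, "optimal_io_size"));
	return 0;
}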
> I suspect that in the long run, the problem will go away.  Erase
> blocks are a relic of the days when flash was used primarily for
> low-power, read-mostly applications.  As the SSD market heats up, the
> flash vendors will move to smaller erase blocks, possibly as small as
> the sector size.
Do you have any information to back this claim? AFAICT smaller erase
blocks would require more chip area per bit, making devices more
expensive. If anything, I can see a trend towards bigger erase blocks.