Re: Serious problems with HFS+
From: Roman Zippel
Date: Mon Mar 14 2005 - 05:21:01 EST
On Sun, 13 Mar 2005, Matt Mackall wrote:
> I've noticed a few problems with HFS+ support in recent kernels on
> another user's machine running Ubuntu (Warty) with
> 22.214.171.124-3-powerpc. I'm not in a position to extensively test or fix
> either of these problems because of the fs tools situation, so I'm just
> passing this on.
> First, it reports inappropriate block counts to stat(2). It uses 4096-byte
> blocks rather than the 512-byte blocks that stat callers expect.
> This seriously confuses du(1) (and me, for a bit). Looks like it may
> be forgetting to set s_blocksize_bits.
This should be fixed since 2.6.10.
> Second, if an HFS+ filesystem mounted via Firewire or USB becomes
> detached, the filesystem appears to continue working just fine. I can
> run find on the entire tree, despite memory pressure. I can even create
> new files that continue to appear in directory listings! Writes to
> such files succeed (they're async, of course) and the typical app is
> none the wiser. It's only when apps attempt to read later that they
> encounter problems. It turns out that various apps including scp
> ignore IO errors on read and silently copy zero-filled files to the
> destination. So I got this report as "why aren't the pictures I took
> off my camera visible on my website?"
HFS+ metadata is also in the page cache, so as long as everything is
cached, HFS+ won't notice a problem.
> This is obviously a really nasty failure mode. At the very least, open
> of new files should fail with -EIO. Preferably the fs should force a
> read-only remount on IO errors. Given that the vast majority of HFS+
> filesystems Linux is likely to be used with are on hotpluggable media,
> I think this FS should be marked EXPERIMENTAL until such integrity
> problems are addressed.
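For reference, marking a filesystem EXPERIMENTAL in a 2.6 kernel is a one-line Kconfig change; a sketch of what that could look like for HFS+ (the exact entry lives in fs/Kconfig and its real dependencies may differ):

```
config HFSPLUS_FS
	tristate "Apple Extended HFS file system support (EXPERIMENTAL)"
	depends on EXPERIMENTAL
	select NLS
	help
	  Say Y here to mount HFS+ (Mac OS Extended) volumes.  This
	  support is still considered experimental; see the integrity
	  caveats discussed in this thread.
```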
Currently nobody tells the fs about such events, so even if I checked for
write errors, it could still take a while until the error is detected.
I acknowledge that this is a problem, but I don't think it's an HFS+-specific
problem; there is currently no standard procedure for what a fs should do
in such situations, so I didn't implement anything.
It would be nice if the fs were told about plug/unplug events; e.g.
HFS+ could check the mount count to see if the volume was connected to a
different host in the meantime and react appropriately.
> Having the whole directory tree seemingly pinned in memory is probably
> something that wants addressing as well.
This actually might be a real HFS+ problem :), I'll have to look into it.
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/