Re: [PATCH] [13/16] HWPOISON: The high level memory error handler in the VM v5
From: Nick Piggin
Date: Tue Jun 09 2009 - 05:52:11 EST
On Wed, Jun 03, 2009 at 08:46:47PM +0200, Andi Kleen wrote:
> +static int me_pagecache_clean(struct page *p, unsigned long pfn)
> +{
> + struct address_space *mapping;
> +
> + if (!isolate_lru_page(p))
> + page_cache_release(p);
> +
> + /*
> + * Now truncate the page in the page cache. This is really
> + * more like a "temporary hole punch"
> + * Don't do this for block devices when someone else
> + * has a reference, because it could be file system metadata
> + * and that's not safe to truncate.
> + */
> + mapping = page_mapping(p);
> + if (mapping && S_ISBLK(mapping->host->i_mode) && page_count(p) > 1) {
> + printk(KERN_ERR
> + "MCE %#lx: page looks like a unsupported file system metadata page\n",
> + pfn);
> + return FAILED;
> + }
The page_count check is racy. Hmm, S_ISBLK should handle xfs's private
mapping. AFAIK btrfs has a similar private mapping, but a quick grep does
not turn up S_IFBLK anywhere, so I don't know what the situation is there.
Unfortunately though, the linear mapping is not the only metadata mapping
a filesystem might have. Many work on directories in separate mappings
(ext2, for example, which is where I first looked and which will still oops
with your check).
Also, others may have other interesting inodes they use for metadata. Do
any of them go through the pagecache? I don't know. The ext3 journal,
for example? How does that work?
Unfortunately I don't know a good way to detect regular data mappings
easily. Cc'ing linux-fsdevel. Until that is worked out, you'd need to
use the safe pagecache invalidate rather than the unsafe truncate.
> + if (mapping) {
> + truncate_inode_page(mapping, p);
> + if (page_has_private(p) && !try_to_release_page(p, GFP_NOIO)) {
> + pr_debug(KERN_ERR "MCE %#lx: failed to release buffers\n",
> + pfn);
> + return FAILED;
> + }
> + }
> + return RECOVERED;
> +}