Re: [PATCH] [13/16] HWPOISON: The high level memory error handler in the VM v3
From: Wu Fengguang
Date: Thu May 28 2009 - 06:33:29 EST
On Thu, May 28, 2009 at 06:11:11PM +0800, Andi Kleen wrote:
> On Thu, May 28, 2009 at 05:59:34PM +0800, Wu Fengguang wrote:
> > A dirty swap cache page is tricky to handle. The page could live both in the
> > page cache and the swap cache (i.e. the page was freshly swapped in), so it
> > could be referenced concurrently by two types of PTEs: a normal PTE and a
> > swap PTE. We try to handle them consistently by calling
> > try_to_unmap(TTU_IGNORE_HWPOISON) to convert the normal PTEs to swap PTEs,
> > and then
> > - clear the dirty bit to prevent IO
> > - remove the page from the LRU
> > - but keep it in the swap cache, so that when we return to it on
> >   a later page fault, we know the application is accessing
> >   corrupted data and shall be killed (we installed simple
> >   interception code in do_swap_page to catch it).
> That's a good description. I'll add it as a comment to the code.
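
For reference, here is roughly how the pieces fit together (a minimal
sketch of the scheme described above, not the exact patch text;
me_swapcache_dirty() and the DELAYED result code follow the handler
convention used in this patchset, and the exact label names inside
do_swap_page() may differ):

	/* mm/memory-failure.c: error handler for dirty swap cache pages */
	static int me_swapcache_dirty(struct page *p, unsigned long pfn)
	{
		/* Prevent further IO on the corrupted data. */
		ClearPageDirty(p);
		/*
		 * The page has been removed from the LRU but stays in
		 * the swap cache, so a later page fault can still find
		 * it and learn that it is poisoned.
		 */
		return DELAYED;
	}

	/* mm/memory.c: interception in do_swap_page(), after the page
	 * has been looked up in (or read into) the swap cache
	 */
	if (unlikely(PageHWPoison(page))) {
		/*
		 * The process is accessing data corrupted by a
		 * hardware memory error: fail the fault so the
		 * caller can kill it.
		 */
		ret = VM_FAULT_HWPOISON;
		goto out;
	}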
> > > You haven't waited on writeback here AFAIKS, and have you
> > > *really* verified it is safe to call delete_from_swap_cache?
> > Good catch. I'll soon submit patches for handling pages that are under
> > read/write IO. In this patchset they are simply ignored.
> Yes, we assume the IO device does something sensible with the poisoned
> cache lines and aborts. Later we can likely abort IO requests at an early
> stage in Linux, but that's more advanced.
> The question is if we need to wait on writeback for correctness?
Not necessary, because I'm going to add a me_writeback() handler: writeback
pages then simply won't reach this code. Nor can a page magically go into
writeback state afterwards, since the page has been locked.
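
For illustration, one plausible shape for it (me_writeback() is the
proposed handler and does not exist in the current patchset;
wait_on_page_writeback() is the stock kernel primitive):

	/*
	 * Sketch of the proposed handler for pages found under
	 * writeback. It would be matched from the same state table as
	 * the other me_* handlers, before the plain dirty/clean page
	 * cache entries, so those handlers never see a writeback page.
	 */
	static int me_writeback(struct page *p, unsigned long pfn)
	{
		/*
		 * memory_failure() holds the page lock, so the page
		 * cannot newly enter writeback behind our back; if IO
		 * is already in flight, just wait for it to finish.
		 */
		wait_on_page_writeback(p);
		return DELAYED;
	}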
> We still don't want to crash if we take a page away that is currently
> under IO. My original assumption was that taking the page lock would
> take care of that. Is that not true?
> ak@xxxxxxxxxxxxxxx -- Speaking for myself only.