Re: [oom]: [0/4] fix OOM deadlock running OAST

From: Andrew Morton
Date: Wed Jun 23 2004 - 20:10:02 EST


William Lee Irwin III <wli@xxxxxxxxxxxxxx> wrote:
>
> On Wed, Jun 23, 2004 at 05:26:51PM -0700, William Lee Irwin III wrote:
> > Then it sounds like the smaller fix below may be better for you.
>
> Also, as we're fixing this a different way, could you clarify for me
> which of the pieces of the original fix or related things (e.g. the
> zone->all_unreclaimable stuff, yanking PG_wired stuff off the LRU,
> maybe more) you wanted me to rework and send back in later?
>

Well none, really. This problem is now fixed, is it not?

It would be nice to fix up the unrelated issue of putting unbacked pages
onto the LRU. Could do that by adding backing_dev_info.not_on_lru and
checking it in the various places where we add pages to the LRU.
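
A minimal sketch of that eager check, assuming the flag lives in struct
backing_dev_info (not_on_lru is the hypothetical new field; page_mapping()
and lru_cache_add() are the existing APIs):

	/*
	 * Wrapper for the LRU-insertion paths: refuse pages whose
	 * backing device has opted out of the LRU.
	 */
	void lru_cache_add_checked(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		if (mapping && mapping->backing_dev_info->not_on_lru)
			return;		/* keep these pages off the LRU */
		lru_cache_add(page);
	}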


Or, conceivably, do it lazily: take these pages off the LRU if we encounter
them in page reclaim. This might be a net win - do extra work for the rare
case, less work for the common case.

Something like this:

--- 25/mm/vmscan.c~a	2004-06-23 18:05:19.738648736 -0700
+++ 25-akpm/mm/vmscan.c	2004-06-23 18:06:00.386469328 -0700
@@ -367,6 +367,18 @@ static int shrink_list(struct list_head
 		if (page_mapped(page) || PageSwapCache(page))
 			sc->nr_scanned++;
 
+		mapping = page_mapping(page);
+
+		if (mapping && mapping->backing_dev_info->not_on_lru) {
+			/*
+			 * Lazily take the page off the LRU: the caller
+			 * isolated it; drop the ref, don't put it back.
+			 */
+			unlock_page(page);
+			page_cache_release(page);
+			continue;
+		}
+
 		page_map_lock(page);
 		referenced = page_referenced(page);
 		if (referenced && page_mapping_inuse(page)) {
@@ -386,11 +397,11 @@ static int shrink_list(struct list_head
 			page_map_unlock(page);
 			if (!add_to_swap(page))
 				goto activate_locked;
+			mapping = page_mapping(page);
 			page_map_lock(page);
 		}
 #endif /* CONFIG_SWAP */
 
-		mapping = page_mapping(page);
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
 			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));

_
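
A filesystem whose pages are memory-backed and must stay off the LRU would
then set the flag in its backing_dev_info at init time, along these lines
(again, not_on_lru is the hypothetical field; .ra_pages is the existing one):

	static struct backing_dev_info ramfs_backing_dev_info = {
		.ra_pages	= 0,	/* no readahead */
		.not_on_lru	= 1,	/* never put these pages on the LRU */
	};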
