Re: upcoming kerneloops.org item: get_page_from_freelist
From: Linus Torvalds
Date: Thu Jun 25 2009 - 16:12:37 EST
On Thu, 25 Jun 2009, Theodore Tso wrote:
>
> Never mind, stupid question; I hit the send button before thinking
> about this enough. Obviously we should try without GFP_ATOMIC so the
> allocator can try to release some memory. So maybe the answer for
> filesystem code where the alternative to allocator failure is
> remounting the root filesystem read-only or panic(), should be:
>
> 1) Try to do the allocation GFP_NOFS.
Well, even with NOFS, the kernel will still do data writeout that can be
done purely by swapping. NOFS is really about avoiding recursion from
filesystems.
So you might want to try GFP_NOIO, which will mean that the kernel will
try to free memory that needs no IO at all. This also protects from
recursion in the IO path (ie block layer request allocation etc).
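For reference, the masks in question compose roughly like this in
include/linux/gfp.h (paraphrased, so check the tree, but it makes the
"each level adds one capability" progression explicit):

	/* include/linux/gfp.h (abridged) */
	#define GFP_ATOMIC	(__GFP_HIGH)				/* reserves, no reclaim at all */
	#define GFP_NOIO	(__GFP_WAIT)				/* reclaim, but no IO */
	#define GFP_NOFS	(__GFP_WAIT | __GFP_IO)			/* IO ok (eg swap), no fs recursion */
	#define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)	/* the normal case */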
That said, GFP_ATOMIC may be better than both in practice, if only because
it might be better at balancing memory usage (ie too much "NOIO" might
result in the kernel aggressively dropping clean page-cache pages, since
it cannot drop dirty ones).
Note the "might". It probably doesn't matter in practice, since the bulk
of all allocations should always hopefully be GFP_KERNEL or GFP_USER.
> 2) Then try GFP_ATOMIC
The main differences between NOIO and ATOMIC are
 - ATOMIC never tries to free _any_ kind of memory, since it doesn't want
   to take the locks, and cannot enable interrupts.
 - ATOMIC has the magic "high priority" bit set, which means you get to
   dip into critical memory resources in order to satisfy the memory
   request.
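So Ted's sequence would end up looking something like this (purely a
sketch, the helper name is made up; use GFP_NOIO in step 1 instead if you
can also recurse into the block layer):

	#include <linux/slab.h>

	/* Hypothetical helper, not anything in the tree: try a reclaiming
	 * allocation first, then dip into the emergency reserves. */
	static void *fs_alloc_desperate(size_t size)
	{
		/* 1) may sleep and reclaim (swap, clean cache), no fs recursion */
		void *p = kmalloc(size, GFP_NOFS);

		/* 2) last resort: no reclaim at all, but high priority */
		if (!p)
			p = kmalloc(size, GFP_ATOMIC);
		return p;
	}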
Whether these differences are important to you or not, I dunno. I actually suspect
that we might want a combination of "high priority + allow memory
freeing", which would be
#define GFP_CRITICAL (__GFP_HIGH | __GFP_WAIT)
and might be useful outside of interrupt context for things that _really_
want memory at all costs.
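Purely as illustration (the flag only exists in this mail, and "struct foo"
is just a stand-in), usage from process context would then be:

	/* __GFP_HIGH: may dip into the reserves; __GFP_WAIT: may sleep
	 * and reclaim - so never from interrupt context. */
	#define GFP_CRITICAL	(__GFP_HIGH | __GFP_WAIT)

	struct foo *f = kmalloc(sizeof(*f), GFP_CRITICAL);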
Linus