Re: [PATCH 0/3] btrfs: ENOMEM bugfixes

From: Josef Bacik
Date: Fri Feb 20 2015 - 16:22:22 EST

On 02/20/2015 04:20 PM, Omar Sandoval wrote:
On Tue, Feb 17, 2015 at 02:51:06AM -0800, Omar Sandoval wrote:

As it turns out, running with low memory is a really easy way to shake
out undesirable behavior in Btrfs. This can be especially bad when
considering that a memory limit is really easy to hit in a container
(e.g., by using cgroup memory.limit_in_bytes). Here's a simple script
that can hit several problems:


cgcreate -g memory:enomem
MEM=$((64 * 1024 * 1024))
echo $MEM > /sys/fs/cgroup/memory/enomem/memory.limit_in_bytes

cgexec -g memory:enomem ~/xfstests/ltp/fsstress -p128 -n999999999 -d /mnt/test &
trap "killall fsstress; exit 0" SIGINT SIGTERM

while true; do
    cgexec -g memory:enomem python -c '
l = []
while True:
    l.append(" " * 4096)' > /dev/null 2>&1
done

Ignoring for now the cases that drop the filesystem into read-only mode
with relatively little fuss, here are a few patches that fix some of the
low-hanging fruit. They apply to Linus' tree as of today.

So I didn't realize this until I saw Tetsuo Handa's email to the ext4
list, but it looks like this behavior was exposed by a change to the
kernel memory allocator related to the too-small-to-fail allocation
fiasco. To summarize, commit 9879de7373fc ("mm: page_alloc: embed OOM
killing naturally into allocation slowpath"), merged for v3.19-rc7,
changed the behavior of GFP_NOFS allocations in a way that makes it
much easier to trigger allocation failures in filesystems.

This means that Btrfs falls over pretty easily under memory pressure
now, so it might be a good idea to follow the conversation over at
linux-mm.

These are bugs regardless of the outcome there, however, so I'd like to
see this patch series merged.

Yeah, I'm fine with this: your stuff fixes actual problems, and the patches look sane, so I'm cool with taking them. Regardless of what the mm guys do, we shouldn't fall over horribly when allocations fail. Thanks,
