[PATCH v8 0/3] Charge loop device i/o to issuing cgroup
From: Dan Schatzberg
Date: Mon Aug 31 2020 - 11:37:25 EST
Much of the discussion about this has died down. A concern was raised
that we could generalize this infrastructure across loop, md, etc. That
may be possible in the future, but it isn't clear to me what it would
look like. I'm inclined to fix the existing issue with loop devices now
(this is a problem we hit at FB) and address consolidation with other
cases if and when those need to be addressed.
Note that this series needs to be based on Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) to compile.
Changes since V8:
* Rebased on top of Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) which provides the nesting
support for setting active memcg. Dropped the patch from this series
that did the same thing.
Changes since V7:
* Rebased against Linus's branch
Changes since V6:
* Added separate spinlock for worker synchronization
* Minor style changes
Changes since V5:
* Fixed a missing css_put when failing to allocate a worker
* Minor style changes
Changes since V4:
Only patches 1 and 2 have changed.
* Fixed irq lock ordering bug
* Simplified loop detach
* Added support for nesting memalloc_use_memcg
Changes since V3:
* Fix race on loop device destruction and deferred worker cleanup
* Ensure charge on shmem_swapin_page works just like getpage
* Minor style changes
Changes since V2:
* Deferred destruction of workqueue items so in the common case there
is no allocation needed
Changes since V1:
* Split out and reordered patches so cgroup charging changes are
separate from kworker -> workqueue change
* Add mem_css to struct loop_cmd to simplify logic
The loop device runs all i/o to the backing file on a separate kworker
thread, which results in all i/o being charged to the root cgroup. This
allows a loop device to be used to trivially bypass resource limits
and other policy. This patch series fixes this gap in accounting.
A simple script to demonstrate this behavior on a cgroupv2 machine:
'''
#!/bin/bash
set -e
CGROUP=/sys/fs/cgroup/test.slice
LOOP_DEV=/dev/loop0
if [[ ! -d $CGROUP ]]
then
sudo mkdir $CGROUP
fi
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit to tmpfs -> OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
dd if=/dev/zero of=/tmp/file bs=1M count=256" || true
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit through loopback
# device -> no OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
truncate -s 512m /tmp/backing_file;
losetup $LOOP_DEV /tmp/backing_file;
dd if=/dev/zero of=$LOOP_DEV bs=1M count=256;
losetup -d $LOOP_DEV" || true
grep oom_kill $CGROUP/memory.events
'''
Naively charging cgroups could result in priority inversions through
the single kworker thread in the case where multiple cgroups are
reading/writing to the same loop device. This patch series makes some
minor modifications to the loop driver so that each cgroup can make
forward progress independently, avoiding this inversion.
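Conceptually, this works by giving each issuing cgroup its own worker
and command queue instead of funneling everything through one shared
kworker, so a throttled cgroup only stalls its own i/o. A rough sketch
of the idea (the struct and field names here are illustrative, not the
exact ones in patch 1):
'''
/*
 * Illustrative only: one worker per issuing cgroup, each with its own
 * list of pending commands, so i/o from a slow or throttled cgroup
 * does not block i/o issued from other cgroups.
 */
struct loop_worker {
	struct rb_node rb_node;			/* looked up by blkcg css */
	struct work_struct work;		/* services this cgroup's i/o */
	struct list_head cmd_list;		/* pending loop commands */
	struct cgroup_subsys_state *blkcg_css;	/* issuing cgroup */
};
'''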
With this patch series applied, the above script triggers OOM kills
when writing through the loop device as expected.
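For reference, the charging itself amounts to running the backing-file
i/o with the issuing cgroup made the active charge target for both
memory and block accounting. Below is a minimal sketch, assuming the
in-tree kthread_associate_blkcg() helper and the nesting-capable
active-memcg setter provided by Roman's patch above (shown here with
the set_active_memcg() name it has in current kernels); the helper and
field names are illustrative, and the real patches also handle css
refcounting and error paths:
'''
/* Simplified sketch of servicing one command on a worker thread. */
static void loop_service_cmd(struct loop_device *lo, struct loop_cmd *cmd)
{
	struct request *rq = blk_mq_rq_from_pdu(cmd);
	struct mem_cgroup *old_memcg;

	/* charge block i/o and memory to the issuing cgroup */
	kthread_associate_blkcg(cmd->blkcg_css);
	old_memcg = set_active_memcg(mem_cgroup_from_css(cmd->memcg_css));

	cmd->ret = do_req_filebacked(lo, rq);

	/* restore the worker thread's previous (root) context */
	set_active_memcg(old_memcg);
	kthread_associate_blkcg(NULL);
}
'''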
Dan Schatzberg (3):
loop: Use worker per cgroup instead of kworker
mm: Charge active memcg when no mm is set
loop: Charge i/o to mem and blk cg
drivers/block/loop.c | 248 ++++++++++++++++++++++++++++++-------
drivers/block/loop.h | 15 ++-
include/linux/memcontrol.h | 6 +
kernel/cgroup/cgroup.c | 1 +
mm/memcontrol.c | 11 +-
mm/shmem.c | 4 +-
6 files changed, 232 insertions(+), 53 deletions(-)
--
2.24.1