[PATCH] page_cgroup: fix horrid swap accounting regression

From: Hugh Dickins
Date: Mon Mar 05 2012 - 23:53:43 EST


Why is memcg's swap accounting so broken? Insane counts, wrong ownership,
unfreeable structures, which later get freed and then accessed after free.

Turns out to be a tiny 3.3-rc1 regression in 9fb4b7cc0724
"page_cgroup: add helper function to get swap_cgroup": the helper
function (actually named lookup_swap_cgroup()) returns an address
using void * arithmetic, but the structure in question is a short:
the entry offset gets added in bytes rather than in entries, so the
wrong swap_cgroup is read and written.
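
For illustration only (not part of the patch), here is a minimal
userspace sketch of the mis-scaling. The struct layout and the names
map/bad/good are stand-ins, not the kernel's definitions; like the
kernel code, it relies on gcc's extension that void * arithmetic
steps in single bytes.

/*
 * Illustration only: with void * arithmetic the offset is added in
 * bytes, so any entry index > 0 lands on the wrong swap_cgroup.
 * The struct below is a stand-in for the real (short-sized) one.
 */
#include <stdio.h>

struct swap_cgroup {
	unsigned short id;		/* two bytes: "a short" */
};

int main(void)
{
	struct swap_cgroup map[4] = { {10}, {11}, {12}, {13} };
	unsigned long offset = 3;

	/* buggy: void * steps in bytes (gcc extension), not entries */
	struct swap_cgroup *bad = (void *)map + offset;

	/* fixed: typed pointer steps in sizeof(struct swap_cgroup) */
	struct swap_cgroup *sc = map;
	struct swap_cgroup *good = sc + offset;

	printf("bad  = %p (%td bytes into map)\n",
	       (void *)bad, (char *)bad - (char *)map);
	printf("good = %p (%td bytes into map)\n",
	       (void *)good, (char *)good - (char *)map);
	return 0;
}

With offset 3, the void * form lands 3 bytes into the array, while
the typed form lands 6 bytes in (3 * sizeof(struct swap_cgroup)),
which is what the lookup intended.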

Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
---

mm/page_cgroup.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- 3.3-rc6/mm/page_cgroup.c 2012-01-20 08:42:35.320020840 -0800
+++ linux/mm/page_cgroup.c 2012-03-05 19:51:13.535372098 -0800
@@ -379,13 +379,15 @@ static struct swap_cgroup *lookup_swap_c
 	pgoff_t offset = swp_offset(ent);
 	struct swap_cgroup_ctrl *ctrl;
 	struct page *mappage;
+	struct swap_cgroup *sc;
 
 	ctrl = &swap_cgroup_ctrl[swp_type(ent)];
 	if (ctrlp)
 		*ctrlp = ctrl;
 
 	mappage = ctrl->map[offset / SC_PER_PAGE];
-	return page_address(mappage) + offset % SC_PER_PAGE;
+	sc = page_address(mappage);
+	return sc + offset % SC_PER_PAGE;
 }
 
 /**
--