Re: [PATCH] Make /proc/slabinfo 0400

From: Pekka Enberg
Date: Fri Mar 04 2011 - 15:02:57 EST


On Fri, Mar 4, 2011 at 8:14 PM, Matt Mackall <mpm@xxxxxxxxxxx> wrote:
>> Of course, as you say, '/proc/meminfo' still does give you the trigger
>> for "oh, now somebody actually allocated a new page". That's totally
>> independent of slabinfo, though (and knowing the number of active
>> slabs would neither help nor hurt somebody who uses meminfo - you
>> might as well allocate new sockets in a loop, and use _only_ meminfo
>> to see when that allocated a new page).
>
> I think lying to the user is much worse than changing the permissions.
> The cost of the resulting confusion is WAY higher.

Yeah, maybe. I've attached a proof-of-concept patch that attempts to
randomize object layout in individual slabs. I don't completely
understand the attack vector, so I make no claims about whether the
patch helps or not.
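
My rough reading of the meminfo probing described above is a loop along
these lines (a userspace sketch; which /proc/meminfo field an attacker
would actually watch is a guess on my part, "Slab:" is just for
illustration):

#include <stdio.h>
#include <sys/socket.h>

/* Parse one field out of /proc/meminfo. */
static long slab_kb(void)
{
	char line[128];
	long kb = -1;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Slab: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

int main(void)
{
	long before, now;
	int i;

	before = slab_kb();
	for (i = 0; i < 512; i++) {
		/* Each new socket consumes slab objects... */
		if (socket(AF_INET, SOCK_DGRAM, 0) < 0)
			break;	/* out of file descriptors */
		/* ...and a jump here means a fresh slab page was allocated. */
		now = slab_kb();
		if (now != before) {
			printf("slab usage moved after %d sockets\n", i + 1);
			before = now;
		}
	}
	return 0;
}

If that is the signal, randomizing the layout inside a slab does not
hide the page-level allocation itself; at most it makes the attacker's
view of where objects sit within the new page less predictable.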

Pekka
From cd1e20fb8eb44627fa5ccebc8a2803c1fd7bf7ba Mon Sep 17 00:00:00 2001
From: Pekka Enberg <penberg@xxxxxxxxxx>
Date: Fri, 4 Mar 2011 21:28:56 +0200
Subject: [PATCH] SLUB: Randomize object layout in slabs

Signed-off-by: Pekka Enberg <penberg@xxxxxxxxxx>
---
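Two design notes: the on-stack bitmap bounds the randomized path to
ARRAY_SIZE(bitmap) * BITS_PER_LONG = 8 * 64 = 512 objects per slab on
64-bit (8 * 32 = 256 on 32-bit); larger slabs fall back to the old
linear layout. Also, get_random_int() % page->objects combined with the
find_next_bit() scan does not yield a uniform permutation; I have not
tried to quantify the bias.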
 mm/slub.c |   60 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 60 insertions(+), 0 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e15aa7f..1837fe3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -27,6 +27,7 @@
 #include <linux/memory.h>
 #include <linux/math64.h>
 #include <linux/fault-inject.h>
+#include <linux/random.h>
 
 #include <trace/events/kmem.h>
 
@@ -1183,6 +1184,61 @@ static void setup_object(struct kmem_cache *s, struct page *page,
 		s->ctor(object);
 }
 
+/*
+ * Link the objects of a freshly allocated slab into the freelist in
+ * random order instead of address order. Returns false if the slab
+ * holds more objects than the on-stack bitmap can track; the caller
+ * then falls back to the usual linear layout.
+ */
+static bool setup_slab_randomized(struct kmem_cache *s, struct page *page, gfp_t flags)
+{
+	unsigned long bitmap[8];
+	size_t bitmap_size;
+	void *last, *start;
+
+	bitmap_size = BITS_TO_LONGS(page->objects) * sizeof(unsigned long);
+
+	if (ARRAY_SIZE(bitmap) * sizeof(unsigned long) < bitmap_size)
+		return false;
+
+	bitmap_fill(bitmap, page->objects);
+
+	start = page_address(page);
+
+	/*
+	 * Object 0 becomes the freelist head (the caller sets
+	 * page->freelist = start), so take it off the bitmap up front.
+	 * Otherwise it would be linked into the chain a second time and
+	 * corrupt the freelist.
+	 */
+	clear_bit(0, bitmap);
+
+	last = start;
+	while (!bitmap_empty(bitmap, page->objects)) {
+		unsigned long idx;
+		void *p;
+
+		/* Pick a random index, then the next still-free one. */
+		idx = get_random_int() % page->objects;
+
+		idx = find_next_bit(bitmap, page->objects, idx);
+
+		if (idx >= page->objects)
+			continue;
+
+		clear_bit(idx, bitmap);
+
+		p = start + idx * s->size;
+		setup_object(s, page, last);
+		set_freepointer(s, last, p);
+		last = p;
+	}
+	setup_object(s, page, last);
+	set_freepointer(s, last, NULL);
+
+	return true;
+}
+
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
 	struct page *page;
@@ -1206,6 +1262,9 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << compound_order(page));
 
+	if (setup_slab_randomized(s, page, flags))
+		goto done;
+
 	last = start;
 	for_each_object(p, s, start, page->objects) {
 		setup_object(s, page, last);
@@ -1215,6 +1274,7 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	setup_object(s, page, last);
 	set_freepointer(s, last, NULL);
 
+done:
 	page->freelist = start;
 	page->inuse = 0;
 out:
--
1.7.0.4