I will send the next version today. Note that get_random_bytes_arch
is used because at that stage we have 0 bits of entropy. It seemed
like a better idea to use the arch version, which will fall back on
the get_random_bytes sub API in the worst case.
On Fri, Apr 15, 2016 at 3:47 PM, Thomas Garnier <thgarnie@xxxxxxxxxx> wrote:
Thanks for the comments. I will address them in a v2 early next week.
If anyone has other comments, please let me know.
Thomas
On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches <joe@xxxxxxxxxxx> wrote:
On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier <thgarnie@xxxxxxxxxx> wrote:
Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
SLAB freelist. The list is randomized during initialization of a new set
of pages. The order on different freelist sizes is pre-computed at boot
for performance. This security feature reduces the predictability of the
kernel SLAB allocator against heap overflows, rendering attacks much less
stable.
trivia:
[]
@@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache *cachep, int index)
+ */
+static freelist_idx_t master_list_2[2];
+static freelist_idx_t master_list_4[4];
+static freelist_idx_t master_list_8[8];
+static freelist_idx_t master_list_16[16];
+static freelist_idx_t master_list_32[32];
+static freelist_idx_t master_list_64[64];
+static freelist_idx_t master_list_128[128];
+static freelist_idx_t master_list_256[256];
+static struct m_list {
+ size_t count;
+ freelist_idx_t *list;
+} master_lists[] = {
+ { ARRAY_SIZE(master_list_2), master_list_2 },
+ { ARRAY_SIZE(master_list_4), master_list_4 },
+ { ARRAY_SIZE(master_list_8), master_list_8 },
+ { ARRAY_SIZE(master_list_16), master_list_16 },
+ { ARRAY_SIZE(master_list_32), master_list_32 },
+ { ARRAY_SIZE(master_list_64), master_list_64 },
+ { ARRAY_SIZE(master_list_128), master_list_128 },
+ { ARRAY_SIZE(master_list_256), master_list_256 },
+};
static const struct m_list?