Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support

From: Pekka Enberg
Date: Thu Nov 20 2008 - 14:35:46 EST


Hi Catalin,

On Thu, Nov 20, 2008 at 1:30 PM, Catalin Marinas
<catalin.marinas@xxxxxxx> wrote:
> +#ifdef CONFIG_SMP
> +#define cache_line_align(x) L1_CACHE_ALIGN(x)
> +#else
> +#define cache_line_align(x) (x)
> +#endif

Maybe we should put this in <linux/cache.h> and call it cache_line_align_in_smp()?
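
Something like this (untested sketch, name is just a suggestion; the body
is the same as your macro above):

#ifdef CONFIG_SMP
/* pad to a cache line on SMP to avoid false sharing between CPUs */
#define cache_line_align_in_smp(x)	L1_CACHE_ALIGN(x)
#else
/* no other CPUs, so no false sharing to worry about */
#define cache_line_align_in_smp(x)	(x)
#endif

That way other per-cpu users could pick it up as well instead of
open-coding the #ifdef.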

> +/*
> + * Object allocation
> + */
> +static void *fast_cache_alloc(struct fast_cache *cache)
> +{
> + unsigned int cpu = get_cpu();
> + unsigned long flags;
> + struct list_head *entry;
> + struct fast_cache_page *page;
> +
> + local_irq_save(flags);
> +
> + if (list_empty(&cache->free_list[cpu]))
> + __fast_cache_grow(cache, cpu);
> +
> + entry = cache->free_list[cpu].next;
> + page = entry_to_page(entry);
> + list_del(entry);
> + page->free_nr[cpu]--;
> + BUG_ON(page->free_nr[cpu] < 0);
> + fast_cache_dec_free(cache, cpu);
> +
> + local_irq_restore(flags);
> + put_cpu_no_resched();
> +
> + return (void *)(entry + 1);
> +}

The slab allocators are pretty fast as well. Is there a reason you
can't use kmalloc() or kmem_cache_alloc() for this? You can fix the
recursion problem by adding a new GFP_NOLEAKTRACK flag that makes sure
memleak hooks are not invoked if it's set.
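
Rough idea of what I mean (untested sketch; the flag name and bit value
are made up and would need to be fitted into the existing __GFP_* bits,
and memleak_alloc() stands for whatever hook your patch installs in the
allocator):

/* allocation is not tracked by memleak */
#define __GFP_NOLEAKTRACK	((__force gfp_t)0x80000u)

	/* in the slab post-alloc hook, something like: */
	if (!(flags & __GFP_NOLEAKTRACK))
		memleak_alloc(ptr, size, gfp_flags);

Then kmemleak's own metadata allocations pass __GFP_NOLEAKTRACK and the
recursion goes away, without needing a private allocator.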
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/