Re: [PATCH] fs: fcntl_setlease defies lease_init assumptions
From: Pekka Enberg
Date: Mon May 08 2006 - 15:36:04 EST
On Mon, 2006-05-08 at 09:28 -0700, Linus Torvalds wrote:
> So from a performance standpoint, maybe my previous trivial patch is the
> right thing to do, along with an even _stronger_ test for
> kmem_cache_free(), where we could do
>
> BUG_ON(virt_to_cache(objp) != cachep);
>
> which you can then remove from the slab debug case.
>
> So for a lot of the normal paths, you'd basically have no extra cost (two
> instructions, no data cache pressure), but for kmem_cache_free() we'd have
> a slight overhead (but a lot lower than SLAB_DEBUG, and at least for NUMA
> it's reading a cacheline that we'd be using regardless).
>
> I think it sounds like it's worth it, but I'm not going to really push it.
Sounds good to me. Andrew?
Pekka
[PATCH] slab: verify pointers before free
From: Pekka Enberg <penberg@xxxxxxxxxxxxxx>
Passing an invalid pointer to kfree() or kmem_cache_free() is likely
to cause bad memory corruption or even take down the whole system,
because the bad pointer is likely to be reused immediately due to the
per-CPU caches. Until now, we have not done any verification of this
when CONFIG_DEBUG_SLAB is disabled.
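For illustration only (not part of the patch), a minimal sketch of the
kind of misuse the new checks are meant to catch; foo_cachep and
bar_cachep are hypothetical caches:

	#include <linux/slab.h>

	/* Assume both caches were set up earlier with kmem_cache_create(). */
	static struct kmem_cache *foo_cachep, *bar_cachep;

	static void bad_free_examples(void)
	{
		char on_stack[16];
		void *obj = kmem_cache_alloc(foo_cachep, GFP_KERNEL);

		/* Freed into the wrong cache: caught by the new
		 * BUG_ON(virt_to_cache(objp) != cachep) in kmem_cache_free(). */
		kmem_cache_free(bar_cachep, obj);

		/* Not a slab page at all: caught by the new
		 * BUG_ON(!PageSlab(page)) on the kfree() path. */
		kfree(on_stack);
	}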
As suggested by Linus, add a PageSlab check to page_get_cache() and
page_get_slab() to verify pointers passed to kfree(). Also, move the
stronger check from cache_free_debugcheck() to kmem_cache_free() to
ensure that the passed pointer actually belongs to the cache we are
about to free the object from.
For page_get_cache() and page_get_slab(), the assertions should have
virtually no extra cost (two instructions, no data cache pressure); for
kmem_cache_free(), the overhead should be minimal.
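The kmem_cache_free() cost comes from translating the object address to
its struct page and reading the cache pointer stored there; roughly (a
sketch, assuming the page_get_cache() helper shown in the diff below):

	static inline struct kmem_cache *virt_to_cache(const void *obj)
	{
		struct page *page = virt_to_page(obj);

		/* page->lru.next holds the owning kmem_cache; on NUMA this
		 * cacheline is touched on the free path anyway. */
		return page_get_cache(page);
	}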
Signed-off-by: Pekka Enberg <penberg@xxxxxxxxxxxxxx>
---
mm/slab.c | 13 ++++---------
1 files changed, 4 insertions(+), 9 deletions(-)
8e4b800f3fb45bbffcc7b365115a63b2a4c911cb
diff --git a/mm/slab.c b/mm/slab.c
index c32af7e..bc9805a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -597,6 +597,7 @@ static inline struct kmem_cache *page_ge
 {
 	if (unlikely(PageCompound(page)))
 		page = (struct page *)page_private(page);
+	BUG_ON(!PageSlab(page));
 	return (struct kmem_cache *)page->lru.next;
 }
 
@@ -609,6 +610,7 @@ static inline struct slab *page_get_slab
 {
 	if (unlikely(PageCompound(page)))
 		page = (struct page *)page_private(page);
+	BUG_ON(!PageSlab(page));
 	return (struct slab *)page->lru.prev;
 }
 
@@ -2597,15 +2599,6 @@ static void *cache_free_debugcheck(struc
 	kfree_debugcheck(objp);
 	page = virt_to_page(objp);
 
-	if (page_get_cache(page) != cachep) {
-		printk(KERN_ERR "mismatch in kmem_cache_free: expected "
-				"cache %p, got %p\n",
-		       page_get_cache(page), cachep);
-		printk(KERN_ERR "%p is %s.\n", cachep, cachep->name);
-		printk(KERN_ERR "%p is %s.\n", page_get_cache(page),
-		       page_get_cache(page)->name);
-		WARN_ON(1);
-	}
 	slabp = page_get_slab(page);
 
 	if (cachep->flags & SLAB_RED_ZONE) {
@@ -3361,6 +3354,8 @@ void kmem_cache_free(struct kmem_cache *
 {
 	unsigned long flags;
 
+	BUG_ON(virt_to_cache(objp) != cachep);
+
 	local_irq_save(flags);
 	__cache_free(cachep, objp);
 	local_irq_restore(flags);
--
1.3.0