Re: [PATCH] degrade severity of lockdep printk

From: Peter Zijlstra
Date: Mon Mar 23 2009 - 17:13:21 EST


On Mon, 2009-03-23 at 16:39 -0400, Kyle McMartin wrote:
> On Mon, Mar 23, 2009 at 09:32:35PM +0100, Peter Zijlstra wrote:
> > > Although I don't really see how this can be particularly fixed, since
> > > any workload that allocates a struct with a lock, initializes, and then
> > > eventually frees it a whole bunch of times will run out of lock_lists
> > > won't it?
> >
> > Not if the lock init site doesn't change. Modules are the big exception,
> > cycling modules will run you out of lockdep space..
> >
>
> Somewhat confused by this... possibly I'm just being thick and missing
> some subtlety though, but surely the context is equally important?
> I mean, the locks held on entry to, say, a fs_operations struct member
> function could be different on every different possible callpath...

I'm thinking we're not quite understanding each other here, so let me
try and explain things (hopefully more clearly).

Each lock gets a key object, and this key object must be in static storage.
Static lock objects, e.g.:

static spinlock_t my_lock;

can use the spinlock_t itself as the key. Dynamic objects:

struct my_struct {
	spinlock_t lock;
	...
};

struct my_struct *my_obj = kmalloc(sizeof(*my_obj), GFP_KERNEL);
spin_lock_init(&my_obj->lock);

will use static storage associated with the init site:

# define spin_lock_init(lock)				\
do {							\
	static struct lock_class_key __key;		\
							\
	__spin_lock_init((lock), #lock, &__key);	\
} while (0)

So no matter how many my_struct instances you allocate and free, lockdep
does not see more or fewer objects. The only things that matter are the
static key objects.
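
To make that concrete, here's a rough sketch using the my_struct from
above (make_lots() is a made-up name, illustration only): allocate and
free as many instances as you like, they all go through the same init
site and thus the same __key, and none of it consumes or releases
lockdep storage.

#include <linux/slab.h>
#include <linux/spinlock.h>

/*
 * Illustration only: every iteration goes through the same init site,
 * hence the same static __key, so lockdep sees exactly one class here
 * no matter how many objects come and go.
 */
static void make_lots(void)
{
	int i;

	for (i = 0; i < 1000; i++) {
		struct my_struct *obj = kmalloc(sizeof(*obj), GFP_KERNEL);

		if (!obj)
			break;
		spin_lock_init(&obj->lock);	/* same init site => same class */
		kfree(obj);			/* releases no lockdep storage; none was taken */
	}
}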

Each key object creates a lock-class. And these lock_list thingies which
we're running low on are used to encode the relationships between such
classes.
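
For reference, those records look roughly like this -- I'm quoting from
memory, so treat the exact fields as approximate, they have shifted
between kernel versions:

struct lock_list {
	struct list_head	entry;		/* linked into a class's dependency list */
	struct lock_class	*class;		/* the class at the other end of the edge */
	struct stack_trace	trace;		/* where we first observed the dependency */
	int			distance;
};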

This relation is, again, purely static; it's a property of the code. And
unless we have an in-kernel JIT, that doesn't change (the only wrinkle is
that we detect the relations dynamically -- but their structure is already
encoded in the code).

Now the second thing you mention, callpaths -- distinctly different from
structure instances -- those too are static and predetermined at compile
time.
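
Concretely (illustrative fragment only, nested() and the two locks are
made up): however many times the path below runs, and from however many
tasks, it contributes exactly one lock_a -> lock_b dependency, recorded
the first time lockdep sees it.

static DEFINE_SPINLOCK(lock_a);
static DEFINE_SPINLOCK(lock_b);

static void nested(void)
{
	spin_lock(&lock_a);
	spin_lock(&lock_b);	/* the a -> b edge is recorded once, ever */
	spin_unlock(&lock_b);
	spin_unlock(&lock_a);
}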


Now, since we detect these things at run-time, since we have to deduce
them from the code structure, we need this allocation stuff, and it might
turn out our estimates were too low, or alternatively, our lock
dependencies have bloated over time (certainly possible).
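
Those static pools sit in kernel/lockdep.c and look roughly like this
(names from memory, the array size has been bumped over the years); once
the counter hits the limit you get the "MAX_LOCKDEP_ENTRIES too low!"
message this thread started from and lockdep turns itself off:

unsigned long nr_list_entries;
static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];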


All this gets messed up by module unloading: we find all keys in the
module space we're about to unload and destroy all related and derived
information we had about them -- _NOT_ releasing anything back to the
static pools like the list_entries[] array.

So a module load/unload cycle will eat lockdep storage like crazy.
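
To see why, take a trivial made-up module ("leaky_demo", hypothetical):
both keys below live in module memory (static locks use the lock itself
as the key), so every insmod creates brand new classes, and the a -> b
dependency recorded in init consumes list_entries[] slots that rmmod
never hands back.

#include <linux/module.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock_a);
static DEFINE_SPINLOCK(demo_lock_b);

static int __init leaky_demo_init(void)
{
	/* records a fresh dependency for this load's brand new classes */
	spin_lock(&demo_lock_a);
	spin_lock(&demo_lock_b);
	spin_unlock(&demo_lock_b);
	spin_unlock(&demo_lock_a);
	return 0;
}

static void __exit leaky_demo_exit(void)
{
	/* unload zaps the classes, but the consumed entries stay consumed */
}

module_init(leaky_demo_init);
module_exit(leaky_demo_exit);
MODULE_LICENSE("GPL");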

We could possibly fix that, but it will add complexity, which imho isn't
worth it -- just say no to modules :-)
