Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
From: Peter Zijlstra
Date: Wed Jul 23 2008 - 05:50:31 EST
On Wed, 2008-07-23 at 09:35 +0000, Jarek Poplawski wrote:
> On Wed, Jul 23, 2008 at 11:03:06AM +0200, Peter Zijlstra wrote:
> > On Wed, 2008-07-23 at 08:54 +0000, Jarek Poplawski wrote:
> > > On Wed, Jul 23, 2008 at 12:59:21AM -0700, David Miller wrote:
> > > > From: Jarek Poplawski <jarkao2@xxxxxxxxx>
> > > > Date: Wed, 23 Jul 2008 06:20:36 +0000
> > > >
> > > > > PS: if there is nothing new in lockdep the classical method would
> > > > > be to change this static array:
> > > > >
> > > > > static struct lock_class_key
> > > > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
> > > > >
> > > > > to
> > > > >
> > > > > static struct lock_class_key
> > > > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][MAX_NUM_TX_QUEUES];
> > > > >
> > > > > and set lockdep classes per queue as well. (If we are sure we don't
> > > > > need lockdep subclasses anywhere this could be optimized by using
> > > > > one lock_class_key per 8 queues and spin_lock_nested()).
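[ Concretely, setting the classes per queue would look roughly like
  this sketch (illustrative only; netdev_get_tx_queue() and the
  _xmit_lock field are from the then-current multiqueue net tree):

	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++)
		lockdep_set_class(&netdev_get_tx_queue(dev, i)->_xmit_lock,
				  &netdev_xmit_lock_key[type][i]);
]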
> > > >
> > > > Unfortunately MAX_NUM_TX_QUEUES is USHORT_MAX, so this isn't really
> > > > a feasible approach.
> > >
> > > Is it used by real devices already? Maybe, to begin with, we
> > > could start with something smaller?
> > >
> > > > spin_lock_nested() isn't all that viable either, as the subclass
> > > > limit is something like 8.
> > >
> > > This method would need some additional counting: depending on the
> > > queue number, each group of 8 consecutive queues would share (be
> > > set to) the same class, and the queue number mod 8 would give the
> > > subclass number for spin_lock_nested() (see the sketch below).
> > >
> > > I'll try to find out whether there is anything new around this in
> > > lockdep. (lockdep people added to the CC.)
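[ A sketch of that counting scheme (illustrative only; names and
  fields as above, with the class table shrunk by a factor of 8):

	/* init: each group of 8 consecutive queues shares one class */
	for (i = 0; i < dev->num_tx_queues; i++)
		lockdep_set_class(&netdev_get_tx_queue(dev, i)->_xmit_lock,
				  &netdev_xmit_lock_key[type][i / 8]);

	/* lock time: queue number mod 8 picks the subclass, so nesting
	 * two queues from the same group of 8 doesn't look like
	 * recursive locking to lockdep */
	spin_lock_nested(&txq->_xmit_lock, queue_index % 8);
]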
> >
> > There isn't.
> >
> > Is there a static data structure that the driver needs to instantiate to
> > 'create' a queue? Something like:
> >
> > /* this imaginary e1000 hardware has 16 hardware queues */
> > static struct net_tx_queue e1000e_tx_queues[16];
>
> I guess not.
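[ The point of the question: lockdep requires lock_class_key objects
  to live in static storage, so static per-driver queue objects would
  allow one static key per queue, along the lines of (imaginary, to
  match the imaginary struct above, including its .lock member):

	static struct lock_class_key e1000e_tx_queue_keys[16];

	for (i = 0; i < 16; i++)
		lockdep_set_class(&e1000e_tx_queues[i].lock,
				  &e1000e_tx_queue_keys[i]);
]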
>
> Then, IMHO, we could be practical and simply skip lockdep validation
> for "some" range of locks: e.g. set up the key table for the first
> 256 queues only, and use e.g. __raw_spin_lock() for higher queue
> numbers. (If there are any bad locking patterns, checking the first
> 256 queues should be enough to catch them.)
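[ Sketched, the proposal amounts to something like this (illustrative
  only; the raw-lock fallback for higher queue numbers is what draws
  the NAK below):

	static struct lock_class_key
		netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][256];

	if (queue_index < 256)
		lockdep_set_class(&txq->_xmit_lock,
				  &netdev_xmit_lock_key[type][queue_index]);
	/* queue_index >= 256 would sidestep lockdep entirely,
	 * e.g. via raw lock ops */
]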
definite NAK on using raw spinlock ops...
I'll go look at this multiqueue stuff to see if there is anything to be
done. David, what would be a good commit to start reading?