Re: [PATCH 2/2] usbnet: Fix a race between usbnet_stop() and the BH

From: Eugene Shatokhin
Date: Mon Aug 31 2015 - 04:50:40 EST


31.08.2015 10:32, Bjørn Mork wrote:
> Eugene Shatokhin <eugene.shatokhin@xxxxxxxxxx> writes:
>> 28.08.2015 11:55, Bjørn Mork wrote:
>>
>>> I guess you are right. At least I cannot prove that you are not :)
>>>
>>> There is a bit too much complexity involved here for me...

>> :-)
>>
>> Yes, it is quite complex.
>>
>> I admit, it was easier for me to find the races in usbnet (the tools
>> like KernelStrider and RaceHound do the dirty work) than to analyze
>> their consequences. The latter often requires some time and effort,
>> and so it did this time.
>>
>> Well, any objections to this patch?

> No objections from me.
>
> But I would have liked it much better if the code became simpler instead
> of more complex.

Me too, but I see no other way here. The code is indeed simpler without the locking, but the locking is needed to prevent the problems described earlier.

One needs to make sure that the check whether txq or rxq is empty in usbnet_terminate_urbs() cannot slip in between the processing of these queues and the processing of dev->done in defer_bh(). So 'list' and 'dev->done' must be updated under a common lock in defer_bh(), and list->lock is the obvious candidate.
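
To make the window concrete, here is a simplified sketch (not a verbatim quote, with unrelated details dropped and the function name being mine) of what the unpatched defer_bh() does; 'list' is dev->rxq or dev->txq:

static void defer_bh_unpatched(struct usbnet *dev, struct sk_buff *skb,
			       struct sk_buff_head *list, enum skb_state state)
{
	unsigned long flags;
	struct skb_data *entry = (struct skb_data *)skb->cb;

	spin_lock_irqsave(&list->lock, flags);
	entry->state = state;
	__skb_unlink(skb, list);		/* the skb leaves rxq/txq here... */
	spin_unlock_irqrestore(&list->lock, flags);

	/* Window: the skb is on neither 'list' nor dev->done at this point,
	 * so usbnet_terminate_urbs() may find rxq, txq and done all empty
	 * and stop waiting while this skb is still being processed.
	 */

	spin_lock_irqsave(&dev->done.lock, flags);
	__skb_queue_tail(&dev->done, skb);	/* ...and only now enters dev->done */
	if (dev->done.qlen == 1)
		tasklet_schedule(&dev->bh);
	spin_unlock_irqrestore(&dev->done.lock, flags);
}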

For the same reason, skb_queue_empty(q) must be called with q->lock held. So the code takes the lock, calls skb_queue_empty(q), then releases the lock to wait a little. Rinse and repeat.
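
As an illustration, that waiting could be factored into a small helper along these lines (a sketch only: the helper name is mine, UNLINK_TIMEOUT_MS is the timeout usbnet already uses, and the caller is expected to have set TASK_UNINTERRUPTIBLE beforehand, as usbnet_terminate_urbs() does):

static void wait_skb_queue_empty(struct sk_buff_head *q)
{
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);
	while (!skb_queue_empty(q)) {
		spin_unlock_irqrestore(&q->lock, flags);
		/* sleep a little, then take the lock and check again */
		schedule_timeout(msecs_to_jiffies(UNLINK_TIMEOUT_MS));
		set_current_state(TASK_UNINTERRUPTIBLE);
		spin_lock_irqsave(&q->lock, flags);
	}
	spin_unlock_irqrestore(&q->lock, flags);
}

usbnet_terminate_urbs() would then call this for dev->txq, dev->rxq and dev->done in turn, instead of doing the unlocked skb_queue_empty() checks.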

The last complex piece is the spin_lock_nested() call in defer_bh(). It is safe to take both list->lock and dev->done.lock there, because defer_bh() is only called for list = dev->rxq or dev->txq, never for dev->done. To lockdep, however, this looks suspicious: '*list' and 'dev->done' have the same type, so their locks belong to the same lock class, and lockdep complained about apparent recursive locking.

To tell lockdep that this locking scheme is fine in this particular case, the recommended pattern is used: spin_lock_nested() with SINGLE_DEPTH_NESTING.
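
Putting the pieces together, the resulting defer_bh() would look roughly like this (again a simplified sketch with unrelated details omitted):

static void defer_bh(struct usbnet *dev, struct sk_buff *skb,
		     struct sk_buff_head *list, enum skb_state state)
{
	unsigned long flags;
	struct skb_data *entry = (struct skb_data *)skb->cb;

	spin_lock_irqsave(&list->lock, flags);
	entry->state = state;
	__skb_unlink(skb, list);

	/* 'list' is dev->rxq or dev->txq, never dev->done, so taking
	 * dev->done.lock while holding list->lock cannot deadlock.
	 * SINGLE_DEPTH_NESTING tells lockdep that this nesting of two
	 * locks of the same class is intentional.  Interrupts are already
	 * off here, so the plain spin_lock_nested/spin_unlock suffice.
	 */
	spin_lock_nested(&dev->done.lock, SINGLE_DEPTH_NESTING);
	__skb_queue_tail(&dev->done, skb);
	if (dev->done.qlen == 1)
		tasklet_schedule(&dev->bh);
	spin_unlock(&dev->done.lock);

	spin_unlock_irqrestore(&list->lock, flags);
}

With this, the skb moves from rxq/txq to dev->done while list->lock is held, so usbnet_terminate_urbs() can never observe the in-between state.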

Regards,
Eugene

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/