Re: use of preempt_count instead of in_atomic() at leds-gpio.c

From: Andrew Morton
Date: Thu Mar 20 2008 - 22:29:45 EST


On Fri, 21 Mar 2008 02:36:51 +0100 Michael Buesch <mb@xxxxxxxxx> wrote:

> On Friday 21 March 2008 02:31:44 Alan Stern wrote:
> > On Thu, 20 Mar 2008, Andrew Morton wrote:
> >
> > > On Thu, 20 Mar 2008 21:36:04 -0300 Henrique de Moraes Holschuh <hmh@xxxxxxxxxx> wrote:
> > >
> > > > Well, so far so good for LEDs, but what about the other users of in_atomic()
> > > > that apparently should not be using it either?
> > >
> > > Ho hum. Lots of cc's added.
> >
> > ...
> >
> > > The usual pattern for most of the above is
> > >
> > > 	if (!in_atomic())
> > > 		do_something_which_might_sleep();
> > >
> > > problem is, in_atomic() returns false inside a spinlock on non-preemptible
> > > kernels. So if anyone calls those functions inside a spinlock they will
> > > incorrectly schedule, and another task can then come in and try to take the
> > > already-held lock.
> > >
> > > Now, it happens that in_atomic() returns true on non-preemptible kernels
> > > when running in interrupt or softirq context. But if the above code really
> > > is using in_atomic() to detect am-i-called-from-interrupt and NOT
> > > am-i-called-from-inside-spinlock, it should be using in_irq(),
> > > in_softirq() or in_interrupt().
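
To make that concrete, here is a minimal sketch of the failure mode
(hypothetical foo_* names; a !CONFIG_PREEMPT kernel is assumed):

	#include <linux/spinlock.h>
	#include <linux/delay.h>
	#include <linux/hardirq.h>

	static DEFINE_SPINLOCK(foo_lock);

	static void foo_maybe_sleep(void)
	{
		/* On !CONFIG_PREEMPT, spin_lock() did not touch
		 * preempt_count, so in_atomic() is false here even
		 * though foo_lock is held... */
		if (!in_atomic())
			msleep(10);	/* ...and we schedule away with
					 * foo_lock still held */
	}

	static void foo_update(void)
	{
		spin_lock(&foo_lock);
		foo_maybe_sleep();	/* another task can now run and
					 * try to take the held lock */
		spin_unlock(&foo_lock);
	}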
> >
> > Presumably most of these places are actually trying to detect
> > am-i-allowed-to-sleep. Isn't that what in_atomic() is supposed to do?
>
> No, I think there is no such check in the kernel. Most likely for performance
> reasons, as it would require a global flag that is set on each spinlock.

Yup. Non-preemptible kernels avoid the inc/dec of
current_thread_info()->preempt_count on spin_lock()/spin_unlock().
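
Roughly, in much-simplified form (the real definitions live in
include/linux/spinlock.h and include/linux/hardirq.h; this is a sketch,
not the exact macros):

	#ifdef CONFIG_PREEMPT
	#define spin_lock(lock)						\
		do {							\
			preempt_disable();	/* inc preempt_count */	\
			_raw_spin_lock(lock);				\
		} while (0)
	#else
	#define spin_lock(lock)	_raw_spin_lock(lock)	/* count untouched */
	#endif

	/* Atomic == nonzero preempt_count (ignoring PREEMPT_ACTIVE).
	 * Interrupt entry always adds HARDIRQ_OFFSET/SOFTIRQ_OFFSET to
	 * preempt_count, so in_atomic() is true in irq/softirq context on
	 * any kernel -- but under spin_lock() only with CONFIG_PREEMPT. */
	#define in_atomic()	((preempt_count() & ~PREEMPT_ACTIVE) != 0)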

> You simply must always _know_ whether you are allowed to sleep or not. This is
> done by defining an API. The call context is part of any kernel API.

Yup. 99.99% of kernel code manages to do this...
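
The usual way that contract shows up in an interface is that the caller,
which knows its own context, tells the callee -- e.g. with a gfp_t
argument (a sketch; the foo names are made up):

	#include <linux/slab.h>

	struct foo {
		int val;
	};

	/* The caller passes GFP_KERNEL when it may sleep and GFP_ATOMIC
	 * when it may not -- no in_atomic() guessing in the callee. */
	static struct foo *foo_alloc(gfp_t gfp)
	{
		return kzalloc(sizeof(struct foo), gfp);
	}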