Re: [patch] Real-Time Preemption, -RT-2.6.12-rc6-V0.7.48-00
From: Esben Nielsen
Date: Wed Jun 15 2005 - 04:16:15 EST
I have made a patch renaming
local_irq_disable()
to
rt_preempt_irq_disable().
local_irq_disable() is a macro for rt_preempt_irq_disable() (see below).
I think the latter name tells you more when considering both PREEMPT_RT
and !PREEMPT_RT:
In PREEMPT_RT: it disables preemption and thereby the threaded irq handlers.
In !PREEMPT_RT: it disables irqs and preemption, and it has something to do
with PREEMPT_RT, so keep your hands off!
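Condensed (and ignoring the new CONFIG_NO_LOCAL_IRQ_DISABLE option described
below), the mapping in the include/linux/rt_irq.h hunk ends up roughly like
this for the disable/enable pair:

#ifdef CONFIG_PREEMPT_RT
/* only the soft irq state is touched; the hard irq line stays enabled */
# define local_irq_disable()            rt_preempt_irq_disable()
# define local_irq_enable()             rt_preempt_irq_enable()
#else
/* without PREEMPT_RT the rt_preempt_* names fall back to the real thing */
# define rt_preempt_irq_disable()       local_irq_disable()
# define rt_preempt_irq_enable()        local_irq_enable()
#endif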
Furthermore, I have added an option (CONFIG_NO_LOCAL_IRQ_DISABLE) which makes
all code using local_irq_disable() fail to compile. local_irq_disable()
disables preemption, and we don't want that in PREEMPT_RT. Every place where it
is used ought to be reviewed wrt. RT. When that is done we can replace it with
rt_preempt_irq_disable() or, better, a lock. When I get time I'll try to make
a lock type which is spinlock_t in PREEMPT_RT but a local cpu lock with no
spinning in !PREEMPT_RT.
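Something like this, perhaps - just a rough, untested sketch; none of these
names exist in the patch below:

#ifdef CONFIG_PREEMPT_RT
/* a real lock object: the preemptible, priority-inheriting mutex */
typedef spinlock_t local_lock_t;
# define local_lock_init(l)                     spin_lock_init(l)
# define local_lock_irqsave(l, flags)           spin_lock_irqsave(l, flags)
# define local_unlock_irqrestore(l, flags)      spin_unlock_irqrestore(l, flags)
#else
/* no real lock at all: just keep irqs off on this cpu, no spinning */
typedef struct { } local_lock_t;
# define local_lock_init(l)                     do { (void)(l); } while (0)
# define local_lock_irqsave(l, flags)           local_irq_save(flags)
# define local_unlock_irqrestore(l, flags)      local_irq_restore(flags)
#endif

Code which today does local_irq_save() around per-cpu data could then take a
named lock instead, and PREEMPT_RT would get a preemptible, priority-inheriting
critical section out of it.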
I also started a small document, Documentation/rt-locking.txt - you are
welcome to skip it. It is certainly not the best document ever written, but I
think someone ought to write some short documentation of the new locking
structure, and of how people wanting PREEMPT_RT and people only caring about
!PREEMPT_RT should behave so they can live in the same code-tree without being
too much of an annoyance to each other.
Esben
On Wed, 15 Jun 2005, Ingo Molnar wrote:
>
> * Daniel Walker <dwalker@xxxxxxxxxx> wrote:
>
> > > ah ... accidentally had debug_direct_keyboard = 1 in kernel/irq/handle.c.
> > > Change it to 0 & recompile, or pick up the -48-33 patch i just uploaded.
> >
> > I think you're putting too many raw_local_irq_disable calls back in ..
> > Are you planning to do an audit at some point ?
>
> correctness comes first. The softirq code needs to use the raw irq flag,
> because direct interrupts (the timer irq) may modify
> local_softirq_pending().
>
> Ingo
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/Documentation/rt-locking.txt linux-2.6.12-rc6-V0.7.48-25.new/Documentation/rt-locking.txt
--- linux-2.6.12-rc6-V0.7.48-25/Documentation/rt-locking.txt 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.12-rc6-V0.7.48-25.new/Documentation/rt-locking.txt 2005-06-15 00:34:24.000000000 +0200
@@ -0,0 +1,81 @@
+ About locks in PREEMPT_RT mode
+
+ Esben Nielsen <simlo@xxxxxxxxxx>
+
+                 14 June 2005
+
+The goal of the PREEMPT_RT option is to make Linux into a deterministic RT
+system. I.e., in the worst case the latency of any task only depends on the
+tasks running at higher priority and on hard irqs (see below). If a task
+waiting for some external event has the highest priority, it should be running
+within a deterministic amount of time after the external event has happened.
+That amount of time depends on the hardware and on the irq handler delivering
+the event, but nothing else.
+
+To obtain this the following has been done:
+
+1) All normal Linux interrupt handlers (hard and soft) run in threads. You can
+add non-threaded interrupt handlers but we then assume you are familiar with
+real-time programming and know the rules.
+
+2) A lot of locks and irq-macros have been redefined to make a lot of sections
+in the kernel preemptible. New ones have also been introduced. The new locks
+translate into old, well-known locks for !PREEMPT_RT. The following table
+shows the translations.
+
+
+!PREEMPT_RT | PREEMPT_RT
+------------------------------------------------------------------------------
+spinlock_t | semaphore
+ Disable interrupts locally and | Fully preemptible mutex with priority
+ spins. | inheritance.
+ |
+rwlock_t | rwlock_t
+ Like spinlock_t but allows many | Fully preemptible mutex. Can be used
+ readers. | recursively. Only 1 reader.
+ |
+local_irq_XX | rt_preempt_irq_XX
+ Disable interrupts locally. | Disables preemption locally. Thus
+ | also disables all normal Linux irq
+ | handlers, which are threaded.
+ |
+raw_spinlock | raw_spinlock
+ Same as spinlock. | Same as spinlock for !PREEMPT_RT
+ |
+raw_local_irq_XX | raw_local_irq_XX
+ Same as local_irq_XX. | Same as local_irq_XX for !PREEMPT_RT
+
+raw_X translates to X when !PREEMPT_RT.
+
+Rules for use:
+If you only write code for !PREEMPT_RT and do not care about real-time, don't use
+ raw_*
+ rt_preempt_*
+If you want to change code protected by these kinds of locks, change them to
+the corresponding !PREEMPT_RT locks before submitting a patch. People caring
+about real-time will prefer this breakage over getting a system which is not
+real-time.
+Use the usual locks as you have always done.
+
+If you want to write code for PREEMPT_RT:
+Use rt_preempt_irq_* and raw_* ONLY around code sections which are
+1) Deterministic in execution time, i.e. no algorithms being O(N) with N not
+a constant.
+2) Of an execution time in the order of schedule() or less.
+
+Otherwise use the non-raw versions. Or even better: use a mutex (which by
+tradition is called a semaphore, although I find that highly misleading!).
+Even under !PREEMPT_RT it won't be nice to use a spinlock, but you have to do
+it if you share data with an interrupt handler.
+
+If you want to write non-threaded irq-handlers you must use raw_local_irq_XX
+to protect data shared with threads. raw_spin_lock does _not_ protect that
+data, as it only turns off preemption, not the hard interrupts!!
+
+
+
+
+
+
+
+
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/arch/i386/kernel/entry.S linux-2.6.12-rc6-V0.7.48-25.new/arch/i386/kernel/entry.S
--- linux-2.6.12-rc6-V0.7.48-25/arch/i386/kernel/entry.S 2005-06-13 22:56:34.000000000 +0200
+++ linux-2.6.12-rc6-V0.7.48-25.new/arch/i386/kernel/entry.S 2005-06-14 23:53:31.000000000 +0200
@@ -332,7 +332,7 @@
cli
call __schedule
#ifdef CONFIG_PREEMPT_RT
- call local_irq_enable_noresched
+ call rt_preempt_irq_enable_noresched
#endif
# make sure we don't miss an interrupt
# setting need_resched or sigpending
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/arch/i386/kernel/process.c linux-2.6.12-rc6-V0.7.48-25.new/arch/i386/kernel/process.c
--- linux-2.6.12-rc6-V0.7.48-25/arch/i386/kernel/process.c 2005-06-13 22:56:34.000000000 +0200
+++ linux-2.6.12-rc6-V0.7.48-25.new/arch/i386/kernel/process.c 2005-06-15 00:42:34.000000000 +0200
@@ -211,7 +211,8 @@
*/
static void mwait_idle(void)
{
- local_irq_enable();
+ rt_preempt_irq_enable(); /* Safe wrt. RT because the need_resched() is
+ polled below */
if (!need_resched() && !need_resched_delayed()) {
set_thread_flag(TIF_POLLING_NRFLAG);
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/include/linux/interrupt.h linux-2.6.12-rc6-V0.7.48-25.new/include/linux/interrupt.h
--- linux-2.6.12-rc6-V0.7.48-25/include/linux/interrupt.h 2005-06-13 22:56:35.000000000 +0200
+++ linux-2.6.12-rc6-V0.7.48-25.new/include/linux/interrupt.h 2005-06-15 00:40:23.000000000 +0200
@@ -61,6 +61,7 @@
* Temporary defines for UP kernels, until all code gets fixed.
*/
#ifndef CONFIG_SMP
+#ifndef CONFIG_NO_LOCAL_IRQ_DISABLE
static inline void __deprecated cli(void)
{
local_irq_disable();
@@ -84,6 +85,7 @@
local_irq_save(*x);
}
#define save_and_cli(x) save_and_cli(&x)
+#endif /* CONFIG_NO_LOCAL_IRQ_DISABLE */
#endif /* CONFIG_SMP */
#ifdef CONFIG_PREEMPT_RT
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/include/linux/rt_irq.h linux-2.6.12-rc6-V0.7.48-25.new/include/linux/rt_irq.h
--- linux-2.6.12-rc6-V0.7.48-25/include/linux/rt_irq.h 2005-06-13 22:56:35.000000000 +0200
+++ linux-2.6.12-rc6-V0.7.48-25.new/include/linux/rt_irq.h 2005-06-15 09:46:01.000000000 +0200
@@ -6,17 +6,34 @@
*/
#ifdef CONFIG_PREEMPT_RT
-extern void local_irq_enable(void);
-extern void local_irq_enable_noresched(void);
-extern void local_irq_disable(void);
-extern void local_irq_restore(unsigned long flags);
-extern void __local_save_flags(unsigned long *flags);
-extern void __local_irq_save(unsigned long *flags);
+
+extern void rt_preempt_irq_enable(void);
+extern void rt_preempt_irq_enable_noresched(void);
+extern void rt_preempt_irq_disable(void);
+extern void rt_preempt_irq_restore(unsigned long flags);
+extern void __rt_preempt_save_flags(unsigned long *flags);
+extern void __rt_preempt_irq_save(unsigned long *flags);
extern int irqs_disabled(void);
extern int irqs_disabled_flags(unsigned long flags);
-# define local_save_flags(flags) __local_save_flags(&(flags))
-# define local_irq_save(flags) __local_irq_save(&(flags))
+# define rt_preempt_save_flags(flags) __rt_preempt_save_flags(&(flags))
+# define rt_preempt_irq_save(flags) __rt_preempt_irq_save(&(flags))
+
+#ifdef CONFIG_NO_LOCAL_IRQ_DISABLE
+#define local_irq_enable() local_irq_not_allowed()
+#define local_irq_disable() local_irq_not_allowed()
+#define local_irq_restore(flags) local_irq_not_allowed()
+#define local_irq_save(flags) local_irq_not_allowed()
+#define local_irq_enable_noresched() local_irq_not_allowed()
+#else /* CONFIG_NO_LOCAL_IRQ_DISABLE */
+#define local_irq_enable() rt_preempt_irq_enable()
+#define local_irq_enable_noresched() rt_preempt_irq_enable_noresched()
+#define local_irq_disable() rt_preempt_irq_disable()
+#define local_irq_restore(flags) rt_preempt_irq_restore(flags)
+#define local_irq_save(flags) rt_preempt_irq_save(flags)
+#define local_save_flags(flags) rt_preempt_save_flags(flags)
+#endif /* CONFIG_NO_LOCAL_IRQ_DISABLE */
+
/* soft state does not follow the hard state */
# define raw_local_save_flags(flags) __raw_local_save_flags(flags)
@@ -25,7 +42,7 @@
# define raw_local_irq_save(flags) __raw_local_irq_save(flags)
# define raw_local_irq_restore(flags) __raw_local_irq_restore(flags)
# define raw_safe_halt() __raw_safe_halt()
-#else
+#else /* CONFIG_PREEMPT_RT */
# define raw_local_save_flags __raw_local_save_flags
# define raw_local_irq_enable __raw_local_irq_enable
# define raw_local_irq_disable __raw_local_irq_disable
@@ -40,7 +57,14 @@
# define local_irq_restore __raw_local_irq_restore
# define irqs_disabled __raw_irqs_disabled
# define irqs_disabled_flags __raw_irqs_disabled_flags
-#endif
+
+# define rt_preempt_irq_enable() local_irq_enable()
+# define rt_preempt_irq_enable_noresched() local_irq_enable_noresched()
+# define rt_preempt_irq_disable() local_irq_disable()
+# define rt_preempt_irq_restore(flags) local_irq_restore(flags)
+# define rt_preempt_irq_save(flags) local_irq_save(flags)
+# define rt_preempt_save_flags(flags) local_save_flags(flags)
+#endif /* CONFIG_PREEMPT_RT */
#define raw_irqs_disabled __raw_irqs_disabled
#define raw_irqs_disabled_flags __raw_irqs_disabled_flags
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/init/main.c linux-2.6.12-rc6-V0.7.48-25.new/init/main.c
--- linux-2.6.12-rc6-V0.7.48-25/init/main.c 2005-06-13 22:56:33.000000000 +0200
+++ linux-2.6.12-rc6-V0.7.48-25.new/init/main.c 2005-06-15 00:40:45.000000000 +0200
@@ -433,7 +433,7 @@
* Force the soft IRQ state to mimic the hard state until
* we finish boot-up.
*/
- local_irq_disable();
+ rt_preempt_irq_disable();
#endif
/*
@@ -468,7 +468,7 @@
/*
* Reset the irqs off flag after sched_init resets the preempt_count.
*/
- local_irq_disable();
+ rt_preempt_irq_disable();
#endif
build_all_zonelists();
@@ -589,7 +589,7 @@
}
if (irqs_disabled()) {
msg = "disabled interrupts";
- local_irq_enable();
+ rt_preempt_irq_enable();
}
#ifdef CONFIG_PREEMPT_RT
if (raw_irqs_disabled()) {
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/kernel/rt.c linux-2.6.12-rc6-V0.7.48-25.new/kernel/rt.c
--- linux-2.6.12-rc6-V0.7.48-25/kernel/rt.c 2005-06-13 22:56:33.000000000 +0200
+++ linux-2.6.12-rc6-V0.7.48-25.new/kernel/rt.c 2005-06-15 00:23:29.000000000 +0200
@@ -2020,24 +2020,24 @@
}
#endif
-void local_irq_enable_noresched(void)
+void rt_preempt_irq_enable_noresched(void)
{
unmask_preempt_count(IRQSOFF_MASK);
}
-EXPORT_SYMBOL(local_irq_enable_noresched);
+EXPORT_SYMBOL(rt_preempt_irq_enable_noresched);
-void local_irq_enable(void)
+void rt_preempt_irq_enable(void)
{
unmask_preempt_count(IRQSOFF_MASK);
preempt_check_resched();
}
-EXPORT_SYMBOL(local_irq_enable);
+EXPORT_SYMBOL(rt_preempt_irq_enable);
-void local_irq_disable(void)
+void rt_preempt_irq_disable(void)
{
mask_preempt_count(IRQSOFF_MASK);
}
-EXPORT_SYMBOL(local_irq_disable);
+EXPORT_SYMBOL(rt_preempt_irq_disable);
int irqs_disabled_flags(unsigned long flags)
{
@@ -2047,20 +2047,20 @@
}
EXPORT_SYMBOL(irqs_disabled_flags);
-void __local_save_flags(unsigned long *flags)
+void __rt_preempt_save_flags(unsigned long *flags)
{
*flags = irqs_off();
}
-EXPORT_SYMBOL(__local_save_flags);
+EXPORT_SYMBOL(__rt_preempt_save_flags);
-void __local_irq_save(unsigned long *flags)
+void __rt_preempt_irq_save(unsigned long *flags)
{
*flags = irqs_off();
mask_preempt_count(IRQSOFF_MASK);
}
-EXPORT_SYMBOL(__local_irq_save);
+EXPORT_SYMBOL(__rt_preempt_irq_save);
-void local_irq_restore(unsigned long flags)
+void rt_preempt_irq_restore(unsigned long flags)
{
check_soft_flags(flags);
if (flags)
@@ -2070,7 +2070,7 @@
preempt_check_resched();
}
}
-EXPORT_SYMBOL(local_irq_restore);
+EXPORT_SYMBOL(rt_preempt_irq_restore);
int irqs_disabled(void)
{
diff -Naur -X diff_exclude linux-2.6.12-rc6-V0.7.48-25/lib/Kconfig.RT linux-2.6.12-rc6-V0.7.48-25.new/lib/Kconfig.RT
--- linux-2.6.12-rc6-V0.7.48-25/lib/Kconfig.RT 2005-06-13 22:56:35.000000000 +0200
+++ linux-2.6.12-rc6-V0.7.48-25.new/lib/Kconfig.RT 2005-06-15 00:35:35.000000000 +0200
@@ -81,6 +81,20 @@
endchoice
+config NO_LOCAL_IRQ_DISABLE
+ bool ' Compile no code with local_irq_disable()'
+ depends on PREEMPT_RT
+ default n
+ help
+ This will make any code containing local_irq_disable() and similar
+ stop compiling under PREEMPT_RT. These functions are dangerous to
+ the RT behaviour but otherwise ok. This option prevents code using
+ these functions from being compiled for your kernel.
+ Look at Documentation/rt-locking.txt.
+ Say N here.
+
+
+
config PREEMPT
bool
default y
@@ -150,3 +164,4 @@
depends on PREEMPT_RT || !SPINLOCK_BKL
default n if !PREEMPT
default y
+