Re: Is 386 processor still supported?

From: Jiri Kosina
Date: Thu Jan 08 2009 - 08:49:17 EST


On Thu, 8 Jan 2009, Cyrill Gorcunov wrote:

> Jiri, I could be wrong, but it seems __byte_spin_lock is implemented
> under CONFIG_PARAVIRT yet is now referenced under #else /* !CONFIG_PARAVIRT */.
> At least I didn't find an additional implementation in the tree.
> Did I miss anything?

You're right, my bad. Unfortunately this introduces some ifdef ugliness,
but on the other hand it's pretty straightforward.



From: Jiri Kosina <jkosina@xxxxxxx>
Subject: x86: make spinlocks available on machines without xadd insn

The current kernel does not compile on ancient x86 machines that lack the
xadd instruction, because the ticket spinlock implementation uses it
unconditionally.
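
To illustrate why xadd matters: taking a ticket is one atomic
fetch-and-add. Roughly (a minimal user-space sketch of the idea, not the
kernel code; gcc's __sync_fetch_and_add() compiles to "lock xadd" on x86):

	typedef struct {
		volatile unsigned char next;	/* next ticket to hand out */
		volatile unsigned char owner;	/* ticket being served */
	} ticket_lock_t;

	static void ticket_lock(ticket_lock_t *lock)
	{
		/* grab a ticket: atomic fetch-and-add, i.e. lock xadd */
		unsigned char me = __sync_fetch_and_add(&lock->next, 1);

		/* spin until our number comes up */
		while (lock->owner != me)
			asm volatile("rep; nop" ::: "memory");	/* cpu_relax() */
	}

	static void ticket_unlock(ticket_lock_t *lock)
	{
		asm volatile("" ::: "memory");	/* compiler barrier; x86 keeps stores ordered */
		lock->owner++;	/* only the lock holder writes this */
	}

The 386 has no xadd, so this fastpath cannot be built there.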

On kernels configured without CONFIG_X86_XADD, fall back to the old-style
byte spinlock implementation instead.
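
The byte lock only needs xchg, which every x86 since the 386 has (and
which is implicitly locked when the operand is in memory). Again a
simplified sketch of the idea, not the kernel's __byte_spin_lock (gcc's
__sync_lock_test_and_set() compiles to xchgb here):

	typedef struct {
		volatile unsigned char locked;
	} byte_lock_t;

	static void byte_lock(byte_lock_t *lock)
	{
		/* atomically set the byte and look at the old value */
		while (__sync_lock_test_and_set(&lock->locked, 1))
			while (lock->locked)	/* spin read-only until free */
				asm volatile("rep; nop" ::: "memory");
	}

	static void byte_unlock(byte_lock_t *lock)
	{
		__sync_lock_release(&lock->locked);	/* barrier + store of 0 */
	}

This is unfair under contention (no FIFO ordering like the ticket lock),
but on a machine old enough to lack xadd that is hardly a concern.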

Reported-by: Adam Osuchowski <adwol@xxxxxxx>
Signed-off-by: Jiri Kosina <jkosina@xxxxxxx>

---

arch/x86/include/asm/spinlock.h | 28 ++++++++++++++++++++++++----
1 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index d17c919..1331333 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -172,10 +172,10 @@ static inline int __ticket_spin_is_contended(raw_spinlock_t *lock)
return (((tmp >> TICKET_SHIFT) - tmp) & ((1 << TICKET_SHIFT) - 1)) > 1;
}

-#ifdef CONFIG_PARAVIRT
+#if defined(CONFIG_PARAVIRT) || !defined(CONFIG_X86_XADD)
/*
* Define virtualization-friendly old-style lock byte lock, for use in
- * pv_lock_ops if desired.
+ * pv_lock_ops or for machines not supporting xadd instruction.
*
* This differs from the pre-2.6.24 spinlock by always using xchgb
* rather than decb to take the lock; this allows it to use a
@@ -235,30 +235,50 @@ static inline void __byte_spin_unlock(raw_spinlock_t *lock)
smp_wmb();
bl->lock = 0;
}
-#else /* !CONFIG_PARAVIRT */
+#else
static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
{
+#ifdef CONFIG_X86_XADD
return __ticket_spin_is_locked(lock);
+#else
+ return __byte_spin_is_locked(lock);
+#endif
}

static inline int __raw_spin_is_contended(raw_spinlock_t *lock)
{
+#ifdef CONFIG_X86_XADD
return __ticket_spin_is_contended(lock);
+#else
+ return __byte_spin_is_contended(lock);
+#endif
}

static __always_inline void __raw_spin_lock(raw_spinlock_t *lock)
{
+#ifdef CONFIG_X86_XADD
__ticket_spin_lock(lock);
+#else
+ __byte_spin_lock(lock);
+#endif
}

static __always_inline int __raw_spin_trylock(raw_spinlock_t *lock)
{
+#ifdef CONFIG_X86_XADD
return __ticket_spin_trylock(lock);
+#else
+ return __byte_spin_trylock(lock);
+#endif
}

static __always_inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
+#ifdef CONFIG_X86_XADD
__ticket_spin_unlock(lock);
+#else
+ __byte_spin_unlock(lock);
+#endif
}

static __always_inline void __raw_spin_lock_flags(raw_spinlock_t *lock,
@@ -267,7 +287,7 @@ static __always_inline void __raw_spin_lock_flags(raw_spinlock_t *lock,
__raw_spin_lock(lock);
}

-#endif /* CONFIG_PARAVIRT */
+#endif

static inline void __raw_spin_unlock_wait(raw_spinlock_t *lock)
{