[patch 9/10] s390: spinlock corner case.
From: Martin Schwidefsky
Date: Mon Aug 29 2005 - 12:58:50 EST
From: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
On s390 the lock value used for spinlocks is the lower 32 bits of the PSW
address of the cpu that holds the lock. If this address happens to lie on a
four gigabyte boundary, its lower 32 bits are zero, so the lock word is left
looking unlocked. This allows other cpus to grab the same lock and enter a
lock-protected code path concurrently. In theory this can happen if the
vmalloc area holding the code of a module crosses a 4 GB boundary.
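
As a quick illustration of the arithmetic (a user-space sketch, not part of
the patch; the 4 GB-aligned address below is made up and a 64-bit environment
is assumed): truncating a return address that sits exactly on a four gigabyte
boundary to 32 bits yields 0, which is the "unlocked" value, while ORing in 1
first guarantees a nonzero lock value.

#include <stdio.h>

int main(void)
{
	/* hypothetical return address exactly on a 4 GB boundary */
	unsigned long pc = 0x200000000UL;

	/* old code: the lower 32 bits are 0, i.e. the "unlocked" value */
	unsigned int old_lock = (unsigned int) pc;

	/* patched code: OR in 1 so the stored value can never be 0 */
	unsigned int new_lock = (unsigned int) (1 | pc);

	printf("old: %#x  new: %#x\n", old_lock, new_lock);
	return 0;
}
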
Signed-off-by: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Signed-off-by: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
diffstat:
include/asm-s390/spinlock.h | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff -urpN linux-2.6/include/asm-s390/spinlock.h linux-2.6-patched/include/asm-s390/spinlock.h
--- linux-2.6/include/asm-s390/spinlock.h 2005-08-29 01:41:01.000000000 +0200
+++ linux-2.6-patched/include/asm-s390/spinlock.h 2005-08-29 19:18:12.000000000 +0200
@@ -47,7 +47,7 @@ extern int _raw_spin_trylock_retry(spinl
 
 static inline void _raw_spin_lock(spinlock_t *lp)
 {
-	unsigned long pc = (unsigned long) __builtin_return_address(0);
+	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
 	if (unlikely(_raw_compare_and_swap(&lp->lock, 0, pc) != 0))
 		_raw_spin_lock_wait(lp, pc);
@@ -55,7 +55,7 @@ static inline void _raw_spin_lock(spinlo
 
 static inline int _raw_spin_trylock(spinlock_t *lp)
 {
-	unsigned long pc = (unsigned long) __builtin_return_address(0);
+	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
 	if (likely(_raw_compare_and_swap(&lp->lock, 0, pc) == 0))
 		return 1;