[PATCH 3.16 49/63] locking,x86: Kill atomic_or_long()
From: Ben Hutchings
Date: Wed Jan 08 2020 - 14:47:26 EST

3.16.81-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

commit f6b4ecee0eb7bfa66ae8d5652105ed4da53209a3 upstream.

There are no users, kill it.

Signed-off-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Jesse Brandeburg <jesse.brandeburg@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/20140508135851.768177189@xxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
[bwh: Backported to 3.16 because this function is broken after
"x86/atomic: Fix smp_mb__{before,after}_atomic()"]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
arch/x86/include/asm/atomic.h | 15 ---------------
1 file changed, 15 deletions(-)
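
The hunk below deletes a LOCK-prefixed 64-bit OR helper that had no in-tree
callers. Purely as an illustration of what the removed asm did (this sketch is
not part of the patch, and example_or_long() is a made-up name), the same
operation can be written with the GCC/Clang __atomic builtins, which typically
compile down to the same single "lock orq" on x86-64 when the result is
discarded:

/* Illustration only: atomically OR v2 into *v1, as atomic_or_long() did. */
#include <stdio.h>

static inline void example_or_long(unsigned long *v1, unsigned long v2)
{
	/* Result is discarded, so the compiler can emit a plain "lock orq". */
	(void)__atomic_fetch_or(v1, v2, __ATOMIC_SEQ_CST);
}

int main(void)
{
	unsigned long flags = 0x1;

	example_or_long(&flags, 0x4);
	printf("flags = %#lx\n", flags);	/* prints 0x5 */
	return 0;
}

For kernel code, the 32-bit atomic_set_mask() macro kept further down in this
header already covers the common mask-OR case, which fits with the upstream
observation that the 64-bit helper never gained any users.
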
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -218,21 +218,6 @@ static inline short int atomic_inc_short
return *v;
}

-#ifdef CONFIG_X86_64
-/**
- * atomic_or_long - OR of two long integers
- * @v1: pointer to type unsigned long
- * @v2: pointer to type unsigned long
- *
- * Atomically ORs @v1 and @v2
- * Returns the result of the OR
- */
-static inline void atomic_or_long(unsigned long *v1, unsigned long v2)
-{
- asm(LOCK_PREFIX "orq %1, %0" : "+m" (*v1) : "r" (v2));
-}
-#endif
-
/* These are x86-specific, used by some header files */
#define atomic_clear_mask(mask, addr) \
asm volatile(LOCK_PREFIX "andl %0,%1" \