A preempt_disable()/preempt_enable() pair has been added to __up_write() by commit 48dfb5d2560 ("locking/rwsem: Disable preemption while trying for rwsem lock"), so that should not be a problem. However, it does mean that this change, if implemented, depends on the presence of the previously mentioned commit to be functionally complete.

@@ -1179,15 +1171,12 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	if (waiter.handoff_set) {
 		enum owner_state owner_state;

-		preempt_disable();
 		owner_state = rwsem_spin_on_owner(sem);
-		preempt_enable();
-
 		if (owner_state == OWNER_NULL)
 			goto trylock_again;
 	}

__up_write()
{
	rwsem_clear_owner(sem);
	/*
	 * If a lockup can happen when a bound kworker gets preempted here by
	 * a FIFO acquirer for write, this is a case of preemption deeper
	 * than thought, IMO.
	 */
	tmp = atomic_long_fetch_add_release(-RWSEM_WRITER_LOCKED, &sem->count);
	if (unlikely(tmp & RWSEM_FLAG_WAITERS))
		rwsem_wake(sem);