[1/2] locking/core: Fix deadlock during boot on systems with GENERIC_LOCKBREAK

Message ID 1511894539-7988-2-git-send-email-will.deacon@arm.com
State Accepted
Commit f87f3a328dbbb3e79dd53e7e889ced9222512649
Series Fix boot regression for s390 and remove break_lock

Commit Message

Will Deacon Nov. 28, 2017, 6:42 p.m. UTC
Commit a8a217c22116 ("locking/core: Remove {read,spin,write}_can_lock()")
removed the definition of raw_spin_can_lock(), causing the GENERIC_LOCKBREAK
spin_lock routines to poll the break_lock field when waiting on a lock.

This has been reported to cause a deadlock during boot on s390, because
the break_lock field is also set by the waiters, and can potentially
remain set indefinitely if no other CPUs come in to take the lock after
it has been released.

This patch removes the explicit spinning on break_lock from the waiters,
instead relying on the outer trylock operation to determine when the
lock is available.
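
For reference, here is roughly what BUILD_LOCK_OPS() in kernel/locking/spinlock.c
expands to for the spin case, before and after this change. This is a hand-expanded,
simplified sketch: the trylock/preempt scaffolding sits outside the hunk context
quoted below, so it is reproduced from memory rather than taken from this diff.

        /* Before: waiters also spin on ->break_lock. */
        void __lockfunc __raw_spin_lock(raw_spinlock_t *lock)
        {
                for (;;) {
                        preempt_disable();
                        if (likely(do_raw_spin_trylock(lock)))
                                break;                  /* lock acquired */
                        preempt_enable();

                        if (!(lock)->break_lock)
                                (lock)->break_lock = 1; /* tell the holder we're waiting */
                        /*
                         * Nothing clears the flag if no other CPU ever takes
                         * the lock again, so this inner loop can spin forever.
                         */
                        while ((lock)->break_lock)
                                arch_spin_relax(&lock->raw_lock);
                }
                (lock)->break_lock = 0;
        }

        /*
         * After: relax once per iteration; the outer do_raw_spin_trylock()
         * is what decides when the lock has become available.
         */
                        if (!(lock)->break_lock)
                                (lock)->break_lock = 1;

                        arch_spin_relax(&lock->raw_lock);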

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Fixes: a8a217c22116 ("locking/core: Remove {read,spin,write}_can_lock()")
Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Tested-by: Sebastian Ott <sebott@linux.vnet.ibm.com>

Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 kernel/locking/spinlock.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

-- 
2.1.4

Patch

diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 1fd1a7543cdd..0ebb253e2199 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -68,8 +68,8 @@  void __lockfunc __raw_##op##_lock(locktype##_t *lock)			\
 									\
 		if (!(lock)->break_lock)				\
 			(lock)->break_lock = 1;				\
-		while ((lock)->break_lock)				\
-			arch_##op##_relax(&lock->raw_lock);		\
+									\
+		arch_##op##_relax(&lock->raw_lock);			\
 	}								\
 	(lock)->break_lock = 0;						\
 }									\
@@ -88,8 +88,8 @@  unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)	\
 									\
 		if (!(lock)->break_lock)				\
 			(lock)->break_lock = 1;				\
-		while ((lock)->break_lock)				\
-			arch_##op##_relax(&lock->raw_lock);		\
+									\
+		arch_##op##_relax(&lock->raw_lock);			\
 	}								\
 	(lock)->break_lock = 0;						\
 	return flags;							\