
[02/10] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath

Message ID 1522947547-24081-3-git-send-email-will.deacon@arm.com
State New
Series kernel/locking: qspinlock improvements

Commit Message

Will Deacon April 5, 2018, 4:58 p.m. UTC
The qspinlock locking slowpath utilises a "pending" bit as a simple form
of an embedded test-and-set lock that can avoid the overhead of explicit
queuing in cases where the lock is held but uncontended. This bit is
managed using a cmpxchg loop which tries to transition the uncontended
lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

Unfortunately, the cmpxchg loop is unbounded and lockers can be starved
indefinitely if the lock word is seen to oscillate between unlocked
(0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
able to take the lock in the cmpxchg loop without queuing and pass it
around amongst themselves.

This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
using atomic_fetch_or, and then inspecting the old value to see whether
we need to spin on the current lock owner, or whether we now effectively
hold the lock. The tricky scenario is when concurrent lockers end up
queuing on the lock and the lock becomes available, causing us to see
a lockword of (n,0,0). With pending now set, simply queuing could lead
to deadlock as the head of the queue may not have observed the pending
flag being cleared. Conversely, if the head of the queue did observe
pending being cleared, then it could transition the lock from (n,0,0) ->
(0,0,1) meaning that any attempt to "undo" our setting of the pending
bit could race with a concurrent locker trying to set it.

We handle this race by preserving the pending bit when taking the lock
after reaching the head of the queue and leaving the tail entry intact
if we saw pending set, because we know that the tail is going to be
updated shortly.
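
In outline, the reworked pending path reduces to the following (a simplified
sketch extracted from the diff below, with the comments trimmed):

	if (val & ~_Q_LOCKED_MASK)	/* tail or pending already set */
		goto queue;

	/* Set pending in one shot and inspect the old lock word. */
	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
	if (!(val & ~_Q_LOCKED_MASK)) {
		/* We own pending: wait for the owner, then take the lock. */
		if (val & _Q_LOCKED_MASK)
			smp_cond_load_acquire(&lock->val.counter,
					      !(VAL & _Q_LOCKED_MASK));
		clear_pending_set_locked(lock);	/* *,1,0 -> *,0,1 */
		return;
	}

	/* There are queued waiters: undo pending (if we set it) and queue. */
	if (!(val & _Q_PENDING_MASK))
		atomic_andnot(_Q_PENDING_VAL, &lock->val);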

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 kernel/locking/qspinlock.c | 80 ++++++++++++++++++++--------------------------
 1 file changed, 35 insertions(+), 45 deletions(-)

-- 
2.1.4

Comments

Peter Zijlstra April 5, 2018, 5:07 p.m. UTC | #1
On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote:
> The qspinlock locking slowpath utilises a "pending" bit as a simple form

> of an embedded test-and-set lock that can avoid the overhead of explicit

> queuing in cases where the lock is held but uncontended. This bit is

> managed using a cmpxchg loop which tries to transition the uncontended

> lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

> 

> Unfortunately, the cmpxchg loop is unbounded and lockers can be starved

> indefinitely if the lock word is seen to oscillate between unlocked

> (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are

> able to take the lock in the cmpxchg loop without queuing and pass it

> around amongst themselves.

> 

> This patch fixes the problem by unconditionally setting _Q_PENDING_VAL

> using atomic_fetch_or, 


Of course, LL/SC or cmpxchg implementations of fetch_or do not in fact
get anything from this ;-)
Peter Zijlstra April 5, 2018, 5:13 p.m. UTC | #2
On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote:
> @@ -306,58 +306,48 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

>  		return;

>  

>  	/*

> +	 * If we observe any contention; queue.

> +	 */

> +	if (val & ~_Q_LOCKED_MASK)

> +		goto queue;

> +

> +	/*

>  	 * trylock || pending

>  	 *

>  	 * 0,0,0 -> 0,0,1 ; trylock

>  	 * 0,0,1 -> 0,1,1 ; pending

>  	 */

> +	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);

> +	if (!(val & ~_Q_LOCKED_MASK)) {

>  		/*

> +		 * we're pending, wait for the owner to go away.

> +		 *

> +		 * *,1,1 -> *,1,0

> +		 *

> +		 * this wait loop must be a load-acquire such that we match the

> +		 * store-release that clears the locked bit and create lock

> +		 * sequentiality; this is because not all

> +		 * clear_pending_set_locked() implementations imply full

> +		 * barriers.

>  		 */

> +		if (val & _Q_LOCKED_MASK)

> +			smp_cond_load_acquire(&lock->val.counter,

> +					      !(VAL & _Q_LOCKED_MASK));


I much prefer { } for multi-line statements like this.

>  		/*

> +		 * take ownership and clear the pending bit.

> +		 *

> +		 * *,1,0 -> *,0,1

>  		 */

> +		clear_pending_set_locked(lock);

>  		return;

> +	}
Waiman Long April 5, 2018, 9:16 p.m. UTC | #3
On 04/05/2018 12:58 PM, Will Deacon wrote:
> The qspinlock locking slowpath utilises a "pending" bit as a simple form

> of an embedded test-and-set lock that can avoid the overhead of explicit

> queuing in cases where the lock is held but uncontended. This bit is

> managed using a cmpxchg loop which tries to transition the uncontended

> lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

>

> Unfortunately, the cmpxchg loop is unbounded and lockers can be starved

> indefinitely if the lock word is seen to oscillate between unlocked

> (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are

> able to take the lock in the cmpxchg loop without queuing and pass it

> around amongst themselves.

>

> This patch fixes the problem by unconditionally setting _Q_PENDING_VAL

> using atomic_fetch_or, and then inspecting the old value to see whether

> we need to spin on the current lock owner, or whether we now effectively

> hold the lock. The tricky scenario is when concurrent lockers end up

> queuing on the lock and the lock becomes available, causing us to see

> a lockword of (n,0,0). With pending now set, simply queuing could lead

> to deadlock as the head of the queue may not have observed the pending

> flag being cleared. Conversely, if the head of the queue did observe

> pending being cleared, then it could transition the lock from (n,0,0) ->

> (0,0,1) meaning that any attempt to "undo" our setting of the pending

> bit could race with a concurrent locker trying to set it.

>

> We handle this race by preserving the pending bit when taking the lock

> after reaching the head of the queue and leaving the tail entry intact

> if we saw pending set, because we know that the tail is going to be

> updated shortly.

>

> Cc: Peter Zijlstra <peterz@infradead.org>

> Cc: Ingo Molnar <mingo@kernel.org>

> Signed-off-by: Will Deacon <will.deacon@arm.com>

> ---

>  kernel/locking/qspinlock.c | 80 ++++++++++++++++++++--------------------------

>  1 file changed, 35 insertions(+), 45 deletions(-)

>

> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c

> index a192af2fe378..b75361d23ea5 100644

> --- a/kernel/locking/qspinlock.c

> +++ b/kernel/locking/qspinlock.c

> @@ -294,7 +294,7 @@ static __always_inline u32  __pv_wait_head_or_lock(struct qspinlock *lock,

>  void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

>  {

>  	struct mcs_spinlock *prev, *next, *node;

> -	u32 new, old, tail;

> +	u32 old, tail;

>  	int idx;

>  

>  	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));

> @@ -306,58 +306,48 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

>  		return;

>  

>  	/*

> +	 * If we observe any contention; queue.

> +	 */

> +	if (val & ~_Q_LOCKED_MASK)

> +		goto queue;

> +

> +	/*

>  	 * trylock || pending

>  	 *

>  	 * 0,0,0 -> 0,0,1 ; trylock

>  	 * 0,0,1 -> 0,1,1 ; pending

>  	 */

> -	for (;;) {

> +	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);

> +	if (!(val & ~_Q_LOCKED_MASK)) {

>  		/*

> -		 * If we observe any contention; queue.

> +		 * we're pending, wait for the owner to go away.

> +		 *

> +		 * *,1,1 -> *,1,0

> +		 *

> +		 * this wait loop must be a load-acquire such that we match the

> +		 * store-release that clears the locked bit and create lock

> +		 * sequentiality; this is because not all

> +		 * clear_pending_set_locked() implementations imply full

> +		 * barriers.

>  		 */

> -		if (val & ~_Q_LOCKED_MASK)

> -			goto queue;

> -

> -		new = _Q_LOCKED_VAL;

> -		if (val == new)

> -			new |= _Q_PENDING_VAL;

> -

> +		if (val & _Q_LOCKED_MASK)

> +			smp_cond_load_acquire(&lock->val.counter,

> +					      !(VAL & _Q_LOCKED_MASK));

>  		/*

> -		 * Acquire semantic is required here as the function may

> -		 * return immediately if the lock was free.

> +		 * take ownership and clear the pending bit.

> +		 *

> +		 * *,1,0 -> *,0,1

>  		 */

> -		old = atomic_cmpxchg_acquire(&lock->val, val, new);

> -		if (old == val)

> -			break;

> -

> -		val = old;

> -	}

> -

> -	/*

> -	 * we won the trylock

> -	 */

> -	if (new == _Q_LOCKED_VAL)

> +		clear_pending_set_locked(lock);

>  		return;

> +	}

>  

>  	/*

> -	 * we're pending, wait for the owner to go away.

> -	 *

> -	 * *,1,1 -> *,1,0

> -	 *

> -	 * this wait loop must be a load-acquire such that we match the

> -	 * store-release that clears the locked bit and create lock

> -	 * sequentiality; this is because not all clear_pending_set_locked()

> -	 * implementations imply full barriers.

> -	 */

> -	smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));

> -

> -	/*

> -	 * take ownership and clear the pending bit.

> -	 *

> -	 * *,1,0 -> *,0,1

> +	 * If pending was clear but there are waiters in the queue, then

> +	 * we need to undo our setting of pending before we queue ourselves.

>  	 */

> -	clear_pending_set_locked(lock);

> -	return;

> +	if (!(val & _Q_PENDING_MASK))

> +		atomic_andnot(_Q_PENDING_VAL, &lock->val);

Can we add a clear_pending() helper that will just clear the byte if
_Q_PENDING_BITS == 8? That will eliminate one atomic instruction from
the failure path.
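Something along these lines, perhaps (a sketch, assuming the
_Q_PENDING_BITS == 8 layout where the pending bit has its own byte, as
already relied upon by clear_pending_set_locked()):

static __always_inline void clear_pending(struct qspinlock *lock)
{
	WRITE_ONCE(lock->pending, 0);
}

The _Q_PENDING_BITS == 1 variant would still need the atomic_andnot().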

-Longman
Will Deacon April 6, 2018, 3:08 p.m. UTC | #4
On Thu, Apr 05, 2018 at 07:07:06PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote:

> > The qspinlock locking slowpath utilises a "pending" bit as a simple form

> > of an embedded test-and-set lock that can avoid the overhead of explicit

> > queuing in cases where the lock is held but uncontended. This bit is

> > managed using a cmpxchg loop which tries to transition the uncontended

> > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

> > 

> > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved

> > indefinitely if the lock word is seen to oscillate between unlocked

> > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are

> > able to take the lock in the cmpxchg loop without queuing and pass it

> > around amongst themselves.

> > 

> > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL

> > using atomic_fetch_or, 

> 

> Of course, LL/SC or cmpxchg implementations of fetch_or do not in fact

> get anything from this ;-)


Whilst it's true that they would still be unfair, the window is at least
reduced, and a lot more of the fairness burden is moved onto the hardware
itself. ARMv8.1 has an instruction for atomic_fetch_or, so we can make good
use of it here.

Will
Will Deacon April 6, 2018, 3:08 p.m. UTC | #5
On Thu, Apr 05, 2018 at 05:16:16PM -0400, Waiman Long wrote:
> On 04/05/2018 12:58 PM, Will Deacon wrote:

> >  	/*

> > -	 * we're pending, wait for the owner to go away.

> > -	 *

> > -	 * *,1,1 -> *,1,0

> > -	 *

> > -	 * this wait loop must be a load-acquire such that we match the

> > -	 * store-release that clears the locked bit and create lock

> > -	 * sequentiality; this is because not all clear_pending_set_locked()

> > -	 * implementations imply full barriers.

> > -	 */

> > -	smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));

> > -

> > -	/*

> > -	 * take ownership and clear the pending bit.

> > -	 *

> > -	 * *,1,0 -> *,0,1

> > +	 * If pending was clear but there are waiters in the queue, then

> > +	 * we need to undo our setting of pending before we queue ourselves.

> >  	 */

> > -	clear_pending_set_locked(lock);

> > -	return;

> > +	if (!(val & _Q_PENDING_MASK))

> > +		atomic_andnot(_Q_PENDING_VAL, &lock->val);

> Can we add a clear_pending() helper that will just clear the byte if

> _Q_PENDING_BITS == 8? That will eliminate one atomic instruction from

> the failure path.


Good idea!

Will
Waiman Long April 6, 2018, 8:50 p.m. UTC | #6
On 04/05/2018 12:58 PM, Will Deacon wrote:
> The qspinlock locking slowpath utilises a "pending" bit as a simple form

> of an embedded test-and-set lock that can avoid the overhead of explicit

> queuing in cases where the lock is held but uncontended. This bit is

> managed using a cmpxchg loop which tries to transition the uncontended

> lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

>

> Unfortunately, the cmpxchg loop is unbounded and lockers can be starved

> indefinitely if the lock word is seen to oscillate between unlocked

> (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are

> able to take the lock in the cmpxchg loop without queuing and pass it

> around amongst themselves.

>

> This patch fixes the problem by unconditionally setting _Q_PENDING_VAL

> using atomic_fetch_or, and then inspecting the old value to see whether

> we need to spin on the current lock owner, or whether we now effectively

> hold the lock. The tricky scenario is when concurrent lockers end up

> queuing on the lock and the lock becomes available, causing us to see

> a lockword of (n,0,0). With pending now set, simply queuing could lead

> to deadlock as the head of the queue may not have observed the pending

> flag being cleared. Conversely, if the head of the queue did observe

> pending being cleared, then it could transition the lock from (n,0,0) ->

> (0,0,1) meaning that any attempt to "undo" our setting of the pending

> bit could race with a concurrent locker trying to set it.

>

> We handle this race by preserving the pending bit when taking the lock

> after reaching the head of the queue and leaving the tail entry intact

> if we saw pending set, because we know that the tail is going to be

> updated shortly.

>

> Cc: Peter Zijlstra <peterz@infradead.org>

> Cc: Ingo Molnar <mingo@kernel.org>

> Signed-off-by: Will Deacon <will.deacon@arm.com>

> ---


The pending bit was added to the qspinlock design to counter performance
degradation compared with the ticket lock for workloads with light
spinlock contention. I ran my spinlock stress test on an Intel Skylake
server running the vanilla 4.16 kernel vs a patched kernel with this
patchset. The locking rates with different numbers of locking threads
were as follows:

  # of threads  4.16 kernel     patched 4.16 kernel
  ------------  -----------     -------------------
        1       7,417 kop/s         7,408 kop/s
        2       5,755 kop/s         4,486 kop/s
        3       4,214 kop/s         4,169 kop/s
        4       4,396 kop/s         4,383 kop/s
       
The 2 contending threads case is the one that exercises the pending bit
code path the most, so it is no surprise that it is the one most impacted
by this patchset. The differences in the other cases are mostly noise,
with perhaps a small effect in the 3 contending threads case.

I am not against this patch, but we certainly need to find a way to
bring the performance numbers back up closer to where they were before
applying the patch.

Cheers,
Longman
Paul E. McKenney April 6, 2018, 9:09 p.m. UTC | #7
On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:
> On 04/05/2018 12:58 PM, Will Deacon wrote:

> > The qspinlock locking slowpath utilises a "pending" bit as a simple form

> > of an embedded test-and-set lock that can avoid the overhead of explicit

> > queuing in cases where the lock is held but uncontended. This bit is

> > managed using a cmpxchg loop which tries to transition the uncontended

> > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

> >

> > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved

> > indefinitely if the lock word is seen to oscillate between unlocked

> > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are

> > able to take the lock in the cmpxchg loop without queuing and pass it

> > around amongst themselves.

> >

> > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL

> > using atomic_fetch_or, and then inspecting the old value to see whether

> > we need to spin on the current lock owner, or whether we now effectively

> > hold the lock. The tricky scenario is when concurrent lockers end up

> > queuing on the lock and the lock becomes available, causing us to see

> > a lockword of (n,0,0). With pending now set, simply queuing could lead

> > to deadlock as the head of the queue may not have observed the pending

> > flag being cleared. Conversely, if the head of the queue did observe

> > pending being cleared, then it could transition the lock from (n,0,0) ->

> > (0,0,1) meaning that any attempt to "undo" our setting of the pending

> > bit could race with a concurrent locker trying to set it.

> >

> > We handle this race by preserving the pending bit when taking the lock

> > after reaching the head of the queue and leaving the tail entry intact

> > if we saw pending set, because we know that the tail is going to be

> > updated shortly.

> >

> > Cc: Peter Zijlstra <peterz@infradead.org>

> > Cc: Ingo Molnar <mingo@kernel.org>

> > Signed-off-by: Will Deacon <will.deacon@arm.com>

> > ---

> 

> The pending bit was added to the qspinlock design to counter performance

> degradation compared with ticket lock for workloads with light

> spinlock contention. I run my spinlock stress test on a Intel Skylake

> server running the vanilla 4.16 kernel vs a patched kernel with this

> patchset. The locking rates with different number of locking threads

> were as follows:

> 

>   # of threads  4.16 kernel     patched 4.16 kernel

>   ------------  -----------     -------------------

>         1       7,417 kop/s         7,408 kop/s

>         2       5,755 kop/s         4,486 kop/s

>         3       4,214 kop/s         4,169 kop/s

>         4       4,396 kop/s         4,383 kop/s

>        

> The 2 contending threads case is the one that exercise the pending bit

> code path the most. So it is obvious that this is the one that is most

> impacted by this patchset. The differences in the other cases are mostly

> noise or maybe just a little bit on the 3 contending threads case.

> 

> I am not against this patch, but we certainly need to find out a way to

> bring the performance number up closer to what it is before applying

> the patch.


It would indeed be good to not be in the position of having to trade off
forward-progress guarantees against performance, but that does appear to
be where we are at the moment.

							Thanx, Paul
Peter Zijlstra April 7, 2018, 8:47 a.m. UTC | #8
On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:
> It would indeed be good to not be in the position of having to trade off

> forward-progress guarantees against performance, but that does appear to

> be where we are at the moment.


Depends of course on how unfair cmpxchg is. On x86 we trade one cmpxchg
loop for another so the patch doesn't cure anything at all there. And
our cmpxchg has 'some' hardware fairness to it.

So while the patch is 'good' for platforms that have native fetch-or,
it doesn't help (or in our case even hurts) those that do not.
Peter Zijlstra April 7, 2018, 9:07 a.m. UTC | #9
On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:
>   # of threads  4.16 kernel     patched 4.16 kernel

>   ------------  -----------     -------------------

>         1       7,417 kop/s         7,408 kop/s

>         2       5,755 kop/s         4,486 kop/s

>         3       4,214 kop/s         4,169 kop/s

>         4       4,396 kop/s         4,383 kop/s

>        


Interesting, I didn't see that dip in my userspace tests... I'll have to
try again.
Paul E. McKenney April 7, 2018, 11:37 p.m. UTC | #10
On Sat, Apr 07, 2018 at 10:47:32AM +0200, Peter Zijlstra wrote:
> On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:

> > It would indeed be good to not be in the position of having to trade off

> > forward-progress guarantees against performance, but that does appear to

> > be where we are at the moment.

> 

> Depends of course on how unfair cmpxchg is. On x86 we trade one cmpxchg

> loop for another so the patch doesn't cure anything at all there. And

> our cmpxchg has 'some' hardware fairness to it.

> 

> So while the patch is 'good' for platforms that have native fetch-or,

> it doesn't help (or in our case even hurts) those that do not.


Might need different implementations for different architectures, then.
Or take advantage of the fact that x86 can do a native fetch-or to the
topmost bit, if that helps.

							Thanx, Paul
Will Deacon April 9, 2018, 10:58 a.m. UTC | #11
Hi Waiman,

Thanks for taking this lot for a spin. Comments and questions below.

On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:
> On 04/05/2018 12:58 PM, Will Deacon wrote:

> > The qspinlock locking slowpath utilises a "pending" bit as a simple form

> > of an embedded test-and-set lock that can avoid the overhead of explicit

> > queuing in cases where the lock is held but uncontended. This bit is

> > managed using a cmpxchg loop which tries to transition the uncontended

> > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

> >

> > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved

> > indefinitely if the lock word is seen to oscillate between unlocked

> > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are

> > able to take the lock in the cmpxchg loop without queuing and pass it

> > around amongst themselves.

> >

> > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL

> > using atomic_fetch_or, and then inspecting the old value to see whether

> > we need to spin on the current lock owner, or whether we now effectively

> > hold the lock. The tricky scenario is when concurrent lockers end up

> > queuing on the lock and the lock becomes available, causing us to see

> > a lockword of (n,0,0). With pending now set, simply queuing could lead

> > to deadlock as the head of the queue may not have observed the pending

> > flag being cleared. Conversely, if the head of the queue did observe

> > pending being cleared, then it could transition the lock from (n,0,0) ->

> > (0,0,1) meaning that any attempt to "undo" our setting of the pending

> > bit could race with a concurrent locker trying to set it.

> >

> > We handle this race by preserving the pending bit when taking the lock

> > after reaching the head of the queue and leaving the tail entry intact

> > if we saw pending set, because we know that the tail is going to be

> > updated shortly.

> >

> > Cc: Peter Zijlstra <peterz@infradead.org>

> > Cc: Ingo Molnar <mingo@kernel.org>

> > Signed-off-by: Will Deacon <will.deacon@arm.com>

> > ---

> 

> The pending bit was added to the qspinlock design to counter performance

> degradation compared with ticket lock for workloads with light

> spinlock contention. I run my spinlock stress test on a Intel Skylake

> server running the vanilla 4.16 kernel vs a patched kernel with this

> patchset. The locking rates with different number of locking threads

> were as follows:

> 

>   # of threads  4.16 kernel     patched 4.16 kernel

>   ------------  -----------     -------------------

>         1       7,417 kop/s         7,408 kop/s

>         2       5,755 kop/s         4,486 kop/s

>         3       4,214 kop/s         4,169 kop/s

>         4       4,396 kop/s         4,383 kop/s

>        

> The 2 contending threads case is the one that exercise the pending bit

> code path the most. So it is obvious that this is the one that is most

> impacted by this patchset. The differences in the other cases are mostly

> noise or maybe just a little bit on the 3 contending threads case.


That is bizarre. A few questions:

  1. Is this with my patches as posted, or also with your WRITE_ONCE change?
  2. Could you try to bisect my series to see which patch is responsible
     for this degradation, please?
  3. Could you point me at your stress test, so I can try to reproduce these
     numbers on arm64 systems, please?

> I am not against this patch, but we certainly need to find out a way to

> bring the performance number up closer to what it is before applying

> the patch.


We certainly need to *understand* where the drop is coming from, because
the two-threaded case is still just a CAS on x86 with and without this
patch series. Generally, there's a throughput cost when ensuring fairness
and forward-progress otherwise we'd all be using test-and-set.

Thanks,

Will
Will Deacon April 9, 2018, 10:58 a.m. UTC | #12
On Sat, Apr 07, 2018 at 10:47:32AM +0200, Peter Zijlstra wrote:
> On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:

> > It would indeed be good to not be in the position of having to trade off

> > forward-progress guarantees against performance, but that does appear to

> > be where we are at the moment.

> 

> Depends of course on how unfair cmpxchg is. On x86 we trade one cmpxchg

> loop for another so the patch doesn't cure anything at all there. And

> our cmpxchg has 'some' hardware fairness to it.

> 

> So while the patch is 'good' for platforms that have native fetch-or,

> it doesn't help (or in our case even hurts) those that do not.


We need to get to the bottom of this, otherwise we're just relying on
Waiman's testing to validate any changes to this code!

Will
Will Deacon April 9, 2018, 2:54 p.m. UTC | #13
On Mon, Apr 09, 2018 at 11:58:35AM +0100, Will Deacon wrote:
> On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:

> > The pending bit was added to the qspinlock design to counter performance

> > degradation compared with ticket lock for workloads with light

> > spinlock contention. I run my spinlock stress test on a Intel Skylake

> > server running the vanilla 4.16 kernel vs a patched kernel with this

> > patchset. The locking rates with different number of locking threads

> > were as follows:

> > 

> >   # of threads  4.16 kernel     patched 4.16 kernel

> >   ------------  -----------     -------------------

> >         1       7,417 kop/s         7,408 kop/s

> >         2       5,755 kop/s         4,486 kop/s

> >         3       4,214 kop/s         4,169 kop/s

> >         4       4,396 kop/s         4,383 kop/s

> >        

> > The 2 contending threads case is the one that exercise the pending bit

> > code path the most. So it is obvious that this is the one that is most

> > impacted by this patchset. The differences in the other cases are mostly

> > noise or maybe just a little bit on the 3 contending threads case.

> 

> That is bizarre. A few questions:

> 

>   1. Is this with my patches as posted, or also with your WRITE_ONCE change?

>   2. Could you try to bisect my series to see which patch is responsible

>      for this degradation, please?

>   3. Could you point me at your stress test, so I can try to reproduce these

>      numbers on arm64 systems, please?

> 

> > I am not against this patch, but we certainly need to find out a way to

> > bring the performance number up closer to what it is before applying

> > the patch.

> 

> We certainly need to *understand* where the drop is coming from, because

> the two-threaded case is still just a CAS on x86 with and without this

> patch series. Generally, there's a throughput cost when ensuring fairness

> and forward-progress otherwise we'd all be using test-and-set.


Whilst I think we still need to address my questions above, I've had a
crack at the diff below. Please can you give it a spin? It sticks a trylock
on the slowpath before setting pending and replaces the CAS-based set
with an xchg (which I *think* is safe, but will need to ponder it some
more).

Thanks,

Will

--->8

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 19261af9f61e..71eb5e3a3d91 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -139,6 +139,20 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
 	WRITE_ONCE(lock->locked_pending, _Q_LOCKED_VAL);
 }
 
+/**
+ * set_pending_fetch_acquire - set the pending bit and return the old lock
+ *                             value with acquire semantics.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,*,* -> *,1,*
+ */
+static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
+{
+	u32 val = xchg_relaxed(&lock->pending, 1) << _Q_PENDING_OFFSET;
+	val |= (atomic_read_acquire(&lock->val) & ~_Q_PENDING_MASK);
+	return val;
+}
+
 /*
  * xchg_tail - Put in the new queue tail code word & retrieve previous one
  * @lock : Pointer to queued spinlock structure
@@ -184,6 +198,18 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
 }
 
 /**
+ * set_pending_fetch_acquire - set the pending bit and return the old lock
+ *                             value with acquire semantics.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,*,* -> *,1,*
+ */
+static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
+{
+	return atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+}
+
+/**
  * xchg_tail - Put in the new queue tail code word & retrieve previous one
  * @lock : Pointer to queued spinlock structure
  * @tail : The new queue tail code word
@@ -289,18 +315,26 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
-	 * If we observe any contention; queue.
+	 * If we observe queueing, then queue ourselves.
 	 */
-	if (val & ~_Q_LOCKED_MASK)
+	if (val & _Q_TAIL_MASK)
 		goto queue;
 
 	/*
+	 * We didn't see any queueing, so have one more try at snatching
+	 * the lock in case it became available whilst we were taking the
+	 * slow path.
+	 */
+	if (queued_spin_trylock(lock))
+		return;
+
+	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
-	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+	val = set_pending_fetch_acquire(lock);
 	if (!(val & ~_Q_LOCKED_MASK)) {
 		/*
 		 * we're pending, wait for the owner to go away.
Peter Zijlstra April 9, 2018, 3:54 p.m. UTC | #14
On Mon, Apr 09, 2018 at 03:54:09PM +0100, Will Deacon wrote:

> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c

> index 19261af9f61e..71eb5e3a3d91 100644

> --- a/kernel/locking/qspinlock.c

> +++ b/kernel/locking/qspinlock.c

> @@ -139,6 +139,20 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)

>  	WRITE_ONCE(lock->locked_pending, _Q_LOCKED_VAL);

>  }

>  

> +/**

> + * set_pending_fetch_acquire - set the pending bit and return the old lock

> + *                             value with acquire semantics.

> + * @lock: Pointer to queued spinlock structure

> + *

> + * *,*,* -> *,1,*

> + */

> +static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)

> +{

> +	u32 val = xchg_relaxed(&lock->pending, 1) << _Q_PENDING_OFFSET;

> +	val |= (atomic_read_acquire(&lock->val) & ~_Q_PENDING_MASK);

> +	return val;

> +}


> @@ -289,18 +315,26 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

>  		return;

>  

>  	/*

> -	 * If we observe any contention; queue.

> +	 * If we observe queueing, then queue ourselves.

>  	 */

> -	if (val & ~_Q_LOCKED_MASK)

> +	if (val & _Q_TAIL_MASK)

>  		goto queue;

>  

>  	/*

> +	 * We didn't see any queueing, so have one more try at snatching

> +	 * the lock in case it became available whilst we were taking the

> +	 * slow path.

> +	 */

> +	if (queued_spin_trylock(lock))

> +		return;

> +

> +	/*

>  	 * trylock || pending

>  	 *

>  	 * 0,0,0 -> 0,0,1 ; trylock

>  	 * 0,0,1 -> 0,1,1 ; pending

>  	 */

> +	val = set_pending_fetch_acquire(lock);

>  	if (!(val & ~_Q_LOCKED_MASK)) {


So, if I remember that partial paper correctly, the atomic_read_acquire()
can see 'arbitrary' old values for everything except the pending byte,
which it just wrote and will forward into our load, right?

But I think coherence requires the read to not be older than the one
observed by the trylock before (since it uses c-cas its acquire can be
elided).

I think this means we can miss a concurrent unlock vs the fetch_or. And
I think that's fine: if we still see the lock set, we'll needlessly 'wait'
for it to become unlocked.
Will Deacon April 9, 2018, 5:19 p.m. UTC | #15
On Mon, Apr 09, 2018 at 05:54:20PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 09, 2018 at 03:54:09PM +0100, Will Deacon wrote:

> > @@ -289,18 +315,26 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

> >  		return;

> >  

> >  	/*

> > -	 * If we observe any contention; queue.

> > +	 * If we observe queueing, then queue ourselves.

> >  	 */

> > -	if (val & ~_Q_LOCKED_MASK)

> > +	if (val & _Q_TAIL_MASK)

> >  		goto queue;

> >  

> >  	/*

> > +	 * We didn't see any queueing, so have one more try at snatching

> > +	 * the lock in case it became available whilst we were taking the

> > +	 * slow path.

> > +	 */

> > +	if (queued_spin_trylock(lock))

> > +		return;

> > +

> > +	/*

> >  	 * trylock || pending

> >  	 *

> >  	 * 0,0,0 -> 0,0,1 ; trylock

> >  	 * 0,0,1 -> 0,1,1 ; pending

> >  	 */

> > +	val = set_pending_fetch_acquire(lock);

> >  	if (!(val & ~_Q_LOCKED_MASK)) {

> 

> So, if I remember that partial paper correctly, the atomc_read_acquire()

> can see 'arbitrary' old values for everything except the pending byte,

> which it just wrote and will fwd into our load, right?

> 

> But I think coherence requires the read to not be older than the one

> observed by the trylock before (since it uses c-cas its acquire can be

> elided).

> 

> I think this means we can miss a concurrent unlock vs the fetch_or. And

> I think that's fine, if we still see the lock set we'll needlessly 'wait'

> for it go become unlocked.


Ah, but there is a related case that doesn't work. If the lock becomes
free just before we set pending, then another CPU can succeed on the
fastpath. We'll then set pending, but the lockword we get back may still
have the locked byte of 0, so two people end up holding the lock.
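
Spelled out as a sequence (a sketch of the scenario just described, using
the split xchg-on-pending plus read of ->val):

  1. CPU0 unlocks: the lock word goes to (0,0,0).
  2. CPU2 takes the fastpath: cmpxchg (0,0,0) -> (0,0,1) succeeds.
  3. CPU1 does xchg(->pending, 1); the old pending byte is 0.
  4. CPU1's subsequent read of ->val is not guaranteed to observe CPU2's
     store, so the reconstructed old word can be (0,0,0).
  5. CPU1 sees neither lock nor tail set and does
     clear_pending_set_locked(), so CPU1 and CPU2 both hold the lock.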

I think it's worth giving this a go with the added trylock, but I can't
see a way to avoid the atomic_fetch_or at the moment.

Will
Waiman Long April 9, 2018, 5:55 p.m. UTC | #16
On 04/09/2018 06:58 AM, Will Deacon wrote:
> Hi Waiman,

>

> Thanks for taking this lot for a spin. Comments and questions below.

>

> On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:

>> On 04/05/2018 12:58 PM, Will Deacon wrote:

>>> The qspinlock locking slowpath utilises a "pending" bit as a simple form

>>> of an embedded test-and-set lock that can avoid the overhead of explicit

>>> queuing in cases where the lock is held but uncontended. This bit is

>>> managed using a cmpxchg loop which tries to transition the uncontended

>>> lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

>>>

>>> Unfortunately, the cmpxchg loop is unbounded and lockers can be starved

>>> indefinitely if the lock word is seen to oscillate between unlocked

>>> (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are

>>> able to take the lock in the cmpxchg loop without queuing and pass it

>>> around amongst themselves.

>>>

>>> This patch fixes the problem by unconditionally setting _Q_PENDING_VAL

>>> using atomic_fetch_or, and then inspecting the old value to see whether

>>> we need to spin on the current lock owner, or whether we now effectively

>>> hold the lock. The tricky scenario is when concurrent lockers end up

>>> queuing on the lock and the lock becomes available, causing us to see

>>> a lockword of (n,0,0). With pending now set, simply queuing could lead

>>> to deadlock as the head of the queue may not have observed the pending

>>> flag being cleared. Conversely, if the head of the queue did observe

>>> pending being cleared, then it could transition the lock from (n,0,0) ->

>>> (0,0,1) meaning that any attempt to "undo" our setting of the pending

>>> bit could race with a concurrent locker trying to set it.

>>>

>>> We handle this race by preserving the pending bit when taking the lock

>>> after reaching the head of the queue and leaving the tail entry intact

>>> if we saw pending set, because we know that the tail is going to be

>>> updated shortly.

>>>

>>> Cc: Peter Zijlstra <peterz@infradead.org>

>>> Cc: Ingo Molnar <mingo@kernel.org>

>>> Signed-off-by: Will Deacon <will.deacon@arm.com>

>>> ---

>> The pending bit was added to the qspinlock design to counter performance

>> degradation compared with ticket lock for workloads with light

>> spinlock contention. I run my spinlock stress test on a Intel Skylake

>> server running the vanilla 4.16 kernel vs a patched kernel with this

>> patchset. The locking rates with different number of locking threads

>> were as follows:

>>

>>   # of threads  4.16 kernel     patched 4.16 kernel

>>   ------------  -----------     -------------------

>>         1       7,417 kop/s         7,408 kop/s

>>         2       5,755 kop/s         4,486 kop/s

>>         3       4,214 kop/s         4,169 kop/s

>>         4       4,396 kop/s         4,383 kop/s

>>        

>> The 2 contending threads case is the one that exercise the pending bit

>> code path the most. So it is obvious that this is the one that is most

>> impacted by this patchset. The differences in the other cases are mostly

>> noise or maybe just a little bit on the 3 contending threads case.

> That is bizarre. A few questions:

>

>   1. Is this with my patches as posted, or also with your WRITE_ONCE change?


This is just with your patches as posted.
 
>   2. Could you try to bisect my series to see which patch is responsible

>      for this degradation, please?


I have done further analysis with the help of CONFIG_QUEUED_LOCK_STAT and
another patch that counts how often the pending and queuing code paths
are taken.

Running the 2-thread test with the original qspinlock code on a Haswell
server, the performance data were:

pending count = 3,265,220
queuing count = 22
locking rate = 11,648 kop/s

With your posted patches,

pending count = 330
queuing count = 9,965,127
locking rate = 4,178 kop/s

I believe that my test case has a heavy dependency on the _Q_PENDING_VAL
spinning loop. When I added back the loop, the performance data became:

pending count = 3,278,320
queuing count = 0
locking rate = 11,884 kop/s

Instead of an infinite loop, I also tried a limited spin with a loop count
of 0x200 and got performance data similar to the infinite loop case.

>   3. Could you point me at your stress test, so I can try to reproduce these

>      numbers on arm64 systems, please?


I will send you the test that I used in a separate email.

>> I am not against this patch, but we certainly need to find out a way to

>> bring the performance number up closer to what it is before applying

>> the patch.

> We certainly need to *understand* where the drop is coming from, because

> the two-threaded case is still just a CAS on x86 with and without this

> patch series. Generally, there's a throughput cost when ensuring fairness

> and forward-progress otherwise we'd all be using test-and-set.


As stated above, the drop comes mainly from skipping the _Q_PENDING_VAL
spinning loop. I suppose that if we just do a limited spin, we can
still ensure forward progress while preserving the performance profile
of the original qspinlock code.
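
Roughly, that would mean reinstating the old wait for an in-progress
pending->locked hand-over, but with a bound, along these lines (a sketch
only; the loop count is illustrative):

	/* 0,1,0 -> 0,0,1: bounded wait for the hand-over to complete */
	if (val == _Q_PENDING_VAL) {
		int cnt = 0x200;	/* illustrative bound from the test above */

		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL && cnt--)
			cpu_relax();
	}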

I don't think the other code in your patches causes any performance
regression, as far as my testing is concerned.

Cheers,
Longman
Waiman Long April 9, 2018, 7:33 p.m. UTC | #17
On 04/09/2018 10:54 AM, Will Deacon wrote:
>

>>> I am not against this patch, but we certainly need to find out a way to

>>> bring the performance number up closer to what it is before applying

>>> the patch.

>> We certainly need to *understand* where the drop is coming from, because

>> the two-threaded case is still just a CAS on x86 with and without this

>> patch series. Generally, there's a throughput cost when ensuring fairness

>> and forward-progress otherwise we'd all be using test-and-set.

> Whilst I think we still need to address my questions above, I've had a

> crack at the diff below. Please can you give it a spin? It sticks a trylock

> on the slowpath before setting pending and replaces the CAS-based set

> with an xchg (which I *think* is safe, but will need to ponder it some

> more).

>

> Thanks,

>

> Will

>


Unfortunately, this patch didn't help.

pending count = 777
queuing count = 9,991,272
locking rate = 4,087 kop/s

-Longman
Peter Zijlstra April 10, 2018, 9:35 a.m. UTC | #18
On Mon, Apr 09, 2018 at 06:19:59PM +0100, Will Deacon wrote:
> On Mon, Apr 09, 2018 at 05:54:20PM +0200, Peter Zijlstra wrote:

> > On Mon, Apr 09, 2018 at 03:54:09PM +0100, Will Deacon wrote:

> > > @@ -289,18 +315,26 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

> > >  		return;

> > >  

> > >  	/*

> > > -	 * If we observe any contention; queue.

> > > +	 * If we observe queueing, then queue ourselves.

> > >  	 */

> > > -	if (val & ~_Q_LOCKED_MASK)

> > > +	if (val & _Q_TAIL_MASK)

> > >  		goto queue;

> > >  

> > >  	/*

> > > +	 * We didn't see any queueing, so have one more try at snatching

> > > +	 * the lock in case it became available whilst we were taking the

> > > +	 * slow path.

> > > +	 */

> > > +	if (queued_spin_trylock(lock))

> > > +		return;

> > > +

> > > +	/*

> > >  	 * trylock || pending

> > >  	 *

> > >  	 * 0,0,0 -> 0,0,1 ; trylock

> > >  	 * 0,0,1 -> 0,1,1 ; pending

> > >  	 */

> > > +	val = set_pending_fetch_acquire(lock);

> > >  	if (!(val & ~_Q_LOCKED_MASK)) {

> > 

> > So, if I remember that partial paper correctly, the atomc_read_acquire()

> > can see 'arbitrary' old values for everything except the pending byte,

> > which it just wrote and will fwd into our load, right?

> > 

> > But I think coherence requires the read to not be older than the one

> > observed by the trylock before (since it uses c-cas its acquire can be

> > elided).

> > 

> > I think this means we can miss a concurrent unlock vs the fetch_or. And

> > I think that's fine, if we still see the lock set we'll needlessly 'wait'

> > for it go become unlocked.

> 

> Ah, but there is a related case that doesn't work. If the lock becomes

> free just before we set pending, then another CPU can succeed on the

> fastpath. We'll then set pending, but the lockword we get back may still

> have the locked byte of 0, so two people end up holding the lock.

> 

> I think it's worth giving this a go with the added trylock, but I can't

> see a way to avoid the atomic_fetch_or at the moment.


Oh yikes, indeed. Yeah, I don't see how we'd be able to fix that one.
Peter Zijlstra Sept. 20, 2018, 4:08 p.m. UTC | #19
On Mon, Apr 09, 2018 at 06:19:59PM +0100, Will Deacon wrote:
> On Mon, Apr 09, 2018 at 05:54:20PM +0200, Peter Zijlstra wrote:

> > On Mon, Apr 09, 2018 at 03:54:09PM +0100, Will Deacon wrote:


> > > +/**

> > > + * set_pending_fetch_acquire - set the pending bit and return the old lock

> > > + *                             value with acquire semantics.

> > > + * @lock: Pointer to queued spinlock structure

> > > + *

> > > + * *,*,* -> *,1,*

> > > + */

> > > +static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)

> > > +{

> > > +       u32 val = xchg_relaxed(&lock->pending, 1) << _Q_PENDING_OFFSET;


	smp_mb();

> > > +       val |= (atomic_read_acquire(&lock->val) & ~_Q_PENDING_MASK);

> > > +       return val;

> > > +}


> > > @@ -289,18 +315,26 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

> > >  		return;

> > >  

> > >  	/*

> > > -	 * If we observe any contention; queue.

> > > +	 * If we observe queueing, then queue ourselves.

> > >  	 */

> > > -	if (val & ~_Q_LOCKED_MASK)

> > > +	if (val & _Q_TAIL_MASK)

> > >  		goto queue;

> > >  

> > >  	/*

> > > +	 * We didn't see any queueing, so have one more try at snatching

> > > +	 * the lock in case it became available whilst we were taking the

> > > +	 * slow path.

> > > +	 */

> > > +	if (queued_spin_trylock(lock))

> > > +		return;

> > > +

> > > +	/*

> > >  	 * trylock || pending

> > >  	 *

> > >  	 * 0,0,0 -> 0,0,1 ; trylock

> > >  	 * 0,0,1 -> 0,1,1 ; pending

> > >  	 */

> > > +	val = set_pending_fetch_acquire(lock);

> > >  	if (!(val & ~_Q_LOCKED_MASK)) {

> > 

> > So, if I remember that partial paper correctly, the atomc_read_acquire()

> > can see 'arbitrary' old values for everything except the pending byte,

> > which it just wrote and will fwd into our load, right?

> > 

> > But I think coherence requires the read to not be older than the one

> > observed by the trylock before (since it uses c-cas its acquire can be

> > elided).

> > 

> > I think this means we can miss a concurrent unlock vs the fetch_or. And

> > I think that's fine, if we still see the lock set we'll needlessly 'wait'

> > for it go become unlocked.

> 

> Ah, but there is a related case that doesn't work. If the lock becomes

> free just before we set pending, then another CPU can succeed on the

> fastpath. We'll then set pending, but the lockword we get back may still

> have the locked byte of 0, so two people end up holding the lock.

> 

> I think it's worth giving this a go with the added trylock, but I can't

> see a way to avoid the atomic_fetch_or at the moment.


So IIRC the addition of the smp_mb() above should ensure the @val load
is later than the @pending store.

Which makes the thing work again, right?

Now, obviously you don't actually want that on ARM64, but I can do that
on x86 just fine (our xchg() implies smp_mb() after all).
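
i.e., roughly (a sketch of that x86 variant, relying on xchg() being fully
ordered rather than adding an explicit smp_mb()):

static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
{
	u32 val = xchg(&lock->pending, 1) << _Q_PENDING_OFFSET; /* fully ordered */

	val |= atomic_read_acquire(&lock->val) & ~_Q_PENDING_MASK;
	return val;
}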


Another approach might be to use something like:

	val = xchg_relaxed(&lock->locked_pending, _Q_PENDING_VAL | _Q_LOCKED_VAL);
	val |= atomic_read_acquire(&lock->val) & _Q_TAIL_MASK;

combined with something like:

	/* 0,0,0 -> 0,1,1 - we won trylock */
	if (!(val & _Q_LOCKED_MASK)) {
		clear_pending(lock);
		return;
	}

	/* 0,0,1 -> 0,1,1 - we won pending */
	if (!(val & ~_Q_LOCKED_MASK)) {
		...
	}

	/* *,0,1 -> *,1,1 - we won pending, but there's queueing */
	if (!(val & _Q_PENDING_VAL))
		clear_pending(lock);

	...


Hmmm?
Peter Zijlstra Sept. 20, 2018, 4:22 p.m. UTC | #20
On Thu, Sep 20, 2018 at 06:08:32PM +0200, Peter Zijlstra wrote:
> Another approach might be to use something like:

> 

> 	val = xchg_relaxed(&lock->locked_pending, _Q_PENDING_VAL | _Q_LOCKED_VAL);

> 	val |= atomic_read_acquire(&lock->val) & _Q_TAIL_MASK;

> 

> combined with something like:

> 

> 	/* 0,0,0 -> 0,1,1 - we won trylock */

> 	if (!(val & _Q_LOCKED_MASK)) {


That one doesn't actually work... let me think about this more.

> 		clear_pending(lock);

> 		return;

> 	}

> 

> 	/* 0,0,1 -> 0,1,1 - we won pending */

> 	if (!(val & ~_Q_LOCKED_MASK)) {

> 		...

> 	}

> 

> 	/* *,0,1 -> *,1,1 - we won pending, but there's queueing */

> 	if (!(val & _Q_PENDING_VAL))

> 		clear_pending(lock);

> 

> 	...

> 

> 

> Hmmm?

Patch

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a192af2fe378..b75361d23ea5 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -294,7 +294,7 @@  static __always_inline u32  __pv_wait_head_or_lock(struct qspinlock *lock,
 void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	struct mcs_spinlock *prev, *next, *node;
-	u32 new, old, tail;
+	u32 old, tail;
 	int idx;
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -306,58 +306,48 @@  void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
+	 * If we observe any contention; queue.
+	 */
+	if (val & ~_Q_LOCKED_MASK)
+		goto queue;
+
+	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
-	for (;;) {
+	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+	if (!(val & ~_Q_LOCKED_MASK)) {
 		/*
-		 * If we observe any contention; queue.
+		 * we're pending, wait for the owner to go away.
+		 *
+		 * *,1,1 -> *,1,0
+		 *
+		 * this wait loop must be a load-acquire such that we match the
+		 * store-release that clears the locked bit and create lock
+		 * sequentiality; this is because not all
+		 * clear_pending_set_locked() implementations imply full
+		 * barriers.
 		 */
-		if (val & ~_Q_LOCKED_MASK)
-			goto queue;
-
-		new = _Q_LOCKED_VAL;
-		if (val == new)
-			new |= _Q_PENDING_VAL;
-
+		if (val & _Q_LOCKED_MASK)
+			smp_cond_load_acquire(&lock->val.counter,
+					      !(VAL & _Q_LOCKED_MASK));
 		/*
-		 * Acquire semantic is required here as the function may
-		 * return immediately if the lock was free.
+		 * take ownership and clear the pending bit.
+		 *
+		 * *,1,0 -> *,0,1
 		 */
-		old = atomic_cmpxchg_acquire(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
-
-	/*
-	 * we won the trylock
-	 */
-	if (new == _Q_LOCKED_VAL)
+		clear_pending_set_locked(lock);
 		return;
+	}
 
 	/*
-	 * we're pending, wait for the owner to go away.
-	 *
-	 * *,1,1 -> *,1,0
-	 *
-	 * this wait loop must be a load-acquire such that we match the
-	 * store-release that clears the locked bit and create lock
-	 * sequentiality; this is because not all clear_pending_set_locked()
-	 * implementations imply full barriers.
-	 */
-	smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
-
-	/*
-	 * take ownership and clear the pending bit.
-	 *
-	 * *,1,0 -> *,0,1
+	 * If pending was clear but there are waiters in the queue, then
+	 * we need to undo our setting of pending before we queue ourselves.
 	 */
-	clear_pending_set_locked(lock);
-	return;
+	if (!(val & _Q_PENDING_MASK))
+		atomic_andnot(_Q_PENDING_VAL, &lock->val);
 
 	/*
 	 * End of pending bit optimistic spinning and beginning of MCS
@@ -461,15 +451,15 @@  void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * claim the lock:
 	 *
 	 * n,0,0 -> 0,0,1 : lock, uncontended
-	 * *,0,0 -> *,0,1 : lock, contended
+	 * *,*,0 -> *,*,1 : lock, contended
 	 *
-	 * If the queue head is the only one in the queue (lock value == tail),
-	 * clear the tail code and grab the lock. Otherwise, we only need
-	 * to grab the lock.
+	 * If the queue head is the only one in the queue (lock value == tail)
+	 * and nobody is pending, clear the tail code and grab the lock.
+	 * Otherwise, we only need to grab the lock.
 	 */
 	for (;;) {
 		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail) {
+		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
 			set_locked(lock);
 			break;
 		}