[v2,00/13] kernel/locking: qspinlock improvements

Message ID: 1523469680-17699-1-git-send-email-will.deacon@arm.com

Will Deacon April 11, 2018, 6:01 p.m. UTC
Hi all,

Here's v2 of the qspinlock patches I posted last week:

  https://lkml.org/lkml/2018/4/5/496

Changes since v1 include:
  * Use WRITE_ONCE to clear the pending bit if we set it erroneously
  * Report pending and slowpath acquisitions via the qspinlock stat
    mechanism [Waiman Long]
  * Spin for a bounded duration while lock is observed in the
    pending->locked transition (sketched after this list)
  * Use try_cmpxchg to get better codegen on x86 (also sketched below)
  * Reword comments
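
To make the two flagged changes concrete, here is a rough userspace
C11 sketch. This is not the kernel code: the _Q_* constants and the
spin budget are illustrative stand-ins for the series' definitions,
and the relaxed polling loop stands in for atomic_cond_read_relaxed().
C11's compare_exchange already has the boolean-result shape that
try_cmpxchg exposes.

#include <stdatomic.h>
#include <stdbool.h>

#define _Q_LOCKED_VAL    (1U << 0)  /* lock byte held               */
#define _Q_PENDING_VAL   (1U << 8)  /* pending set, lock byte clear */
#define _Q_PENDING_LOOPS 512        /* illustrative spin budget     */

/*
 * Wait for a lock observed in the pending->locked transition, but
 * only for a bounded number of iterations, so the slowpath cannot
 * spin forever behind a pending owner that is slow to take the lock.
 */
static unsigned int spin_on_pending(atomic_uint *lockval)
{
        unsigned int val = atomic_load_explicit(lockval,
                                                memory_order_relaxed);

        if (val == _Q_PENDING_VAL) {
                int cnt = _Q_PENDING_LOOPS;

                do {
                        val = atomic_load_explicit(lockval,
                                                   memory_order_relaxed);
                } while (val == _Q_PENDING_VAL && --cnt);
        }
        return val;
}

/*
 * try_cmpxchg-style acquire: compare_exchange returns a boolean and
 * rewrites 'old' on failure. On x86 this lets the compiler branch on
 * the flags set by the cmpxchg instruction itself rather than
 * re-comparing the returned value.
 */
static bool try_lock_sketch(atomic_uint *lockval)
{
        unsigned int old = 0;

        return atomic_compare_exchange_strong_explicit(lockval, &old,
                        _Q_LOCKED_VAL, memory_order_acquire,
                        memory_order_relaxed);
}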

All comments welcome,

Will

--->8

Jason Low (1):
  locking/mcs: Use smp_cond_load_acquire() in mcs spin loop

Waiman Long (1):
  locking/qspinlock: Add stat tracking for pending vs slowpath

Will Deacon (11):
  barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
  locking/qspinlock: Bound spinning on pending->locked transition in
    slowpath
  locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound
  locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
  locking/qspinlock: Kill cmpxchg loop when claiming lock from head of
    queue
  locking/qspinlock: Use atomic_cond_read_acquire
  locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
  locking/qspinlock: Merge struct __qspinlock into struct qspinlock
  locking/qspinlock: Make queued_spin_unlock use smp_store_release
  locking/qspinlock: Elide back-to-back RELEASE operations with
    smp_wmb()
  locking/qspinlock: Use try_cmpxchg instead of cmpxchg when locking

 arch/x86/include/asm/qspinlock.h          |  21 ++-
 arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
 include/asm-generic/atomic-long.h         |   2 +
 include/asm-generic/barrier.h             |  27 +++-
 include/asm-generic/qspinlock.h           |   2 +-
 include/asm-generic/qspinlock_types.h     |  32 +++-
 include/linux/atomic.h                    |   2 +
 kernel/locking/mcs_spinlock.h             |  10 +-
 kernel/locking/qspinlock.c                | 247 ++++++++++++++----------------
 kernel/locking/qspinlock_paravirt.h       |  41 ++---
 kernel/locking/qspinlock_stat.h           |   9 +-
 11 files changed, 209 insertions(+), 187 deletions(-)

-- 
2.1.4

Comments

Catalin Marinas April 13, 2018, 9:24 a.m. UTC
On Wed, Apr 11, 2018 at 07:01:07PM +0100, Will Deacon wrote:
>   * Spin for a bounded duration while lock is observed in the
>     pending->locked transition

FWIW, I updated my model [1] to include the bounded handover loop and,
as expected, it passes the liveness check (well, assuming fairness of
other operations like fetch_or, cmpxchg, etc.).

[1] https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/tree/qspinlock.tla
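
(Here "liveness" means, roughly, that every contender eventually takes
the lock. Stated generically in temporal logic, though the formula in
qspinlock.tla may phrase it differently:

  \forall p \in Procs : \Box( waiting(p) \Rightarrow \Diamond locked(p) )

that is, it is always the case that a waiting process eventually
becomes the lock holder; the fairness assumptions on fetch_or,
cmpxchg, etc. are what make this property checkable.)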

-- 
Catalin