[00/10] kernel/locking: qspinlock improvements

Message ID 1522947547-24081-1-git-send-email-will.deacon@arm.com

Will Deacon April 5, 2018, 4:58 p.m. UTC
Hi all,

I've been kicking the tyres further on qspinlock and with this set of patches
I'm happy with the performance and fairness properties. In particular, the
locking algorithm now guarantees forward progress whereas the implementation
in mainline can starve threads indefinitely in cmpxchg loops.
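
For readers unfamiliar with the failure mode: a lock whose slowpath is an unbounded compare-and-swap retry loop imposes no order between contenders, so an unlucky CPU can lose the race every time. A minimal user-space illustration of the pattern (C11 atomics, not the kernel code):

```c
#include <stdatomic.h>

/*
 * Illustrative only: a cmpxchg-based lock. Every contender races on the
 * same word, so nothing guarantees that a given thread ever wins; this is
 * the unfairness the series removes by queueing waiters.
 */
static atomic_int lock_word;

static void toy_lock(void)
{
	int expected;

	do {
		expected = 0;
		/* Can retry indefinitely if other threads keep winning. */
	} while (!atomic_compare_exchange_weak(&lock_word, &expected, 1));
}

static void toy_unlock(void)
{
	atomic_store_explicit(&lock_word, 0, memory_order_release);
}
```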

Catalin has also implemented a model of this using TLA to prove that the
lock is fair, although this doesn't take the memory model into account:

https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

I'd still like to get more benchmark numbers and wider exposure before
enabling this for arm64, but my current testing is looking very promising.
This series, along with the arm64-specific patches, is available at:

https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=qspinlock

Cheers,

Will

--->8

Jason Low (1):
  locking/mcs: Use smp_cond_load_acquire() in mcs spin loop

Will Deacon (9):
  locking/qspinlock: Don't spin on pending->locked transition in
    slowpath
  locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
  locking/qspinlock: Kill cmpxchg loop when claiming lock from head of
    queue
  locking/qspinlock: Use atomic_cond_read_acquire
  barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
  locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
  locking/qspinlock: Merge struct __qspinlock into struct qspinlock
  locking/qspinlock: Make queued_spin_unlock use smp_store_release
  locking/qspinlock: Elide back-to-back RELEASE operations with
    smp_wmb()
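
Several of the titles above revolve around replacing atomic read-modify-write operations with acquire/release primitives. As a rough user-space analogue (C11 atomics; not the kernel implementation), the unlock becomes a plain release store and a waiter spins with an acquire load instead of a cmpxchg loop:

```c
#include <stdatomic.h>

static atomic_int locked;

/*
 * Analogue of a queued_spin_unlock() built on smp_store_release(): a
 * single store-release, no atomic RMW needed on the unlock path.
 */
static void unlock_release(void)
{
	atomic_store_explicit(&locked, 0, memory_order_release);
}

/*
 * Analogue of an smp_cond_load_acquire()-style wait: spin with acquire
 * loads until the condition holds, rather than retrying a cmpxchg.
 */
static int wait_until_unlocked(void)
{
	int v;

	while ((v = atomic_load_explicit(&locked, memory_order_acquire)) != 0)
		;
	return v;
}
```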

 arch/x86/include/asm/qspinlock.h          |  19 ++-
 arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
 include/asm-generic/barrier.h             |  27 ++++-
 include/asm-generic/qspinlock.h           |   2 +-
 include/asm-generic/qspinlock_types.h     |  32 ++++-
 include/linux/atomic.h                    |   2 +
 kernel/locking/mcs_spinlock.h             |  10 +-
 kernel/locking/qspinlock.c                | 191 ++++++++++--------------------
 kernel/locking/qspinlock_paravirt.h       |  34 ++----
 9 files changed, 141 insertions(+), 179 deletions(-)

-- 
2.1.4

Comments

Andrea Parri April 6, 2018, 1:22 p.m. UTC | #1
On Thu, Apr 05, 2018 at 05:58:57PM +0100, Will Deacon wrote:
> Hi all,
>
> I've been kicking the tyres further on qspinlock and with this set of patches
> I'm happy with the performance and fairness properties. In particular, the
> locking algorithm now guarantees forward progress whereas the implementation
> in mainline can starve threads indefinitely in cmpxchg loops.
>
> Catalin has also implemented a model of this using TLA to prove that the
> lock is fair, although this doesn't take the memory model into account:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

Nice!  I'll dig into this formalization, but my guess is that our model
(and axiomatic models "a-la-herd" in general) is not well-suited to
studying properties such as fairness and liveness...

Did you already think about this?

  Andrea


Catalin Marinas April 11, 2018, 10:20 a.m. UTC | #2
On Fri, Apr 06, 2018 at 03:22:49PM +0200, Andrea Parri wrote:
> On Thu, Apr 05, 2018 at 05:58:57PM +0100, Will Deacon wrote:
> > I've been kicking the tyres further on qspinlock and with this set of patches
> > I'm happy with the performance and fairness properties. In particular, the
> > locking algorithm now guarantees forward progress whereas the implementation
> > in mainline can starve threads indefinitely in cmpxchg loops.
> >
> > Catalin has also implemented a model of this using TLA to prove that the
> > lock is fair, although this doesn't take the memory model into account:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/
>
> Nice!  I'll dig into this formalization, but my guess is that our model
> (and axiomatic models "a-la-herd", in general) are not well-suited when
> it comes to study properties such as fairness, liveness...


Maybe someone with a background in formal methods could give a better
answer. The way TLA+ works is closer to rmem [1] (an operational model
with exhaustive memoised state search) than to herd. Liveness verification
requires checking that, under certain fairness properties, some state is
eventually reached. IOW, it tries to show that either all state-change
graphs lead to (go through) such a state, or that there are cycles in the
graph and the state is never reached. I don't know whether herd could be
modified to check liveness. I'm not sure it can handle infinite loops
either (the above model checks an infinite lock/unlock loop on each
CPU, which is easier to implement in a tool with memoised states).
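
As a toy illustration of the "operational model, exhaustive memoised state search" approach (plain C; nothing to do with the real rmem or TLA+ tooling): two threads each execute "reg = x; x = reg + 1" under sequential consistency, and we explore every interleaving, memoising visited states so each is expanded only once, collecting the possible final values of x:

```c
/* Exhaustive memoised state search over a two-thread lost-update race. */
struct state { int pc[2], reg[2], x; };

/* Encode a state as a small integer for the memoisation table
 * (pc in 0..2, reg in 0..2, x in 0..2 => fewer than 3^5 = 243 codes). */
static int encode(const struct state *s)
{
	return (((s->pc[0] * 3 + s->pc[1]) * 3 + s->reg[0]) * 3
		+ s->reg[1]) * 3 + s->x;
}

/* Advance one thread by one step; returns 0 if that thread is finished. */
static int step(struct state *s, int tid)
{
	if (s->pc[tid] == 0) {		/* read x into the register */
		s->reg[tid] = s->x;
		s->pc[tid] = 1;
	} else if (s->pc[tid] == 1) {	/* write reg + 1 back */
		s->x = s->reg[tid] + 1;
		s->pc[tid] = 2;
	} else {
		return 0;
	}
	return 1;
}

/* Returns a bitmask of reachable final values of x (bit v set => x == v). */
static int reachable_finals(void)
{
	struct state stack[256] = { { { 0, 0 }, { 0, 0 }, 0 } };
	char seen[243] = { 0 };
	int top = 1, finals = 0;

	seen[encode(&stack[0])] = 1;
	while (top > 0) {
		struct state s = stack[--top];
		int progressed = 0;

		for (int tid = 0; tid < 2; tid++) {
			struct state t = s;

			if (!step(&t, tid))
				continue;
			progressed = 1;
			if (!seen[encode(&t)]) {
				seen[encode(&t)] = 1;
				stack[top++] = t;
			}
		}
		if (!progressed)	/* both threads done */
			finals |= 1 << s.x;
	}
	return finals;
}
```

The search finds both final values 1 (both threads read 0 before either wrote) and 2, i.e. it discovers the classic lost update, and the memoisation is what keeps the state space tractable.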

The TLA+ model above assumes sequential consistency, so no memory
ordering is taken into account. One could build an operational model in
TLA+ that is equivalent to the axiomatic one (e.g. following the Flat
model equivalence in [2]); however, liveness checking (at least with
TLA+) is orders of magnitude slower than safety checking. Any small
variation has an exponential impact on the state space, so it is likely
to be impractical. For specific parts of the algorithm, you may be able
to get a poor man's ordering by e.g. writing two accesses in the two
different orders so that the model checks both combinations.

There are papers (e.g. [3]) on how to convert liveness checking to
safety checking, but I haven't dug further. I think it's easier/faster to
do liveness checking with a simplified model and separately check
safety with respect to memory ordering in tools like herd.

[1] http://www.cl.cam.ac.uk/~sf502/regressions/rmem/
[2] http://www.cl.cam.ac.uk/~pes20/armv8-mca/armv8-mca-draft.pdf
[3] https://www.sciencedirect.com/science/article/pii/S1571066104804109

-- 
Catalin
Andrea Parri April 11, 2018, 3:39 p.m. UTC | #3
On Wed, Apr 11, 2018 at 11:20:04AM +0100, Catalin Marinas wrote:
> On Fri, Apr 06, 2018 at 03:22:49PM +0200, Andrea Parri wrote:
> > On Thu, Apr 05, 2018 at 05:58:57PM +0100, Will Deacon wrote:
> > > Catalin has also implemented a model of this using TLA to prove that the
> > > lock is fair, although this doesn't take the memory model into account:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/
> >
> > Nice!  I'll dig into this formalization, but my guess is that our model
> > (and axiomatic models "a-la-herd", in general) are not well-suited when
> > it comes to study properties such as fairness, liveness...
>
> Maybe someone with a background in formal methods could give a better
> answer. How TLA+ works is closer to rmem [1] (operational model,
> exhaustive memoised state search) than herd. Liveness verification
> requires checking that, under certain fairness properties, some state is
> eventually reached. IOW, it tries to show that either all state change
> graphs lead to (go through) such state or that there are cycles in the
> graph and the state is never reached. I don't know whether herd could be
> modified to check liveness. I'm not sure it can handle infinite loops
> either (the above model checks an infinite lock/unlock loop on each
> CPU and that's easier to implement in a tool with memoised states).
>
> The TLA+ model above assumes sequential consistency, so no memory
> ordering taken into account. One could build an operational model in
> TLA+ that's equivalent to the axiomatic one (e.g. following the Flat
> model equivalence as in [2]), however, liveness checking (at least with
> TLA+) is orders of magnitude slower than safety. Any small variation has
> an exponential impact on the state space, so likely to be impractical.
> For specific parts of the algorithm, you may be able to use a poor man's
> ordering by e.g. writing two accesses in two different orders so the
> model checks both combinations.
>
> There are papers (e.g. [3]) on how to convert liveness checking to
> safety checking but I haven't dug further. I think it's easier/faster if
> you do liveness checking with a simplified model and separately check
> the safety with respect to memory ordering on tools like herd.


Indeed.  A fundamental problem, AFAICT, is to formalize that concept of
'[it] will _eventually_ happen'.  Consider a simple example:

	{ x = 0 }

	P0	|   P1
		|
	x = 1	|   while (!x)
		|  	 ;

herd 'knows' that:

	- on the 1st iteration of the 'while' loop, the load from x
	  can return the value 0 or 1 (only);

	- on the 2nd iteration of the 'while' loop, the load from x
	  can return the value 0 or 1;

	- [ ... and 'so on'! ]

but this is pretty much all herd knows about this snippet by now ... ;)
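
One way to make the limitation concrete (a toy C enumeration under SC, not herd itself): unroll P1's loop to N loads and enumerate where P0's store can land. At every finite depth N there is still an execution in which all N loads returned 0, so no finite enumeration of executions can, by itself, rule out the non-terminating behaviour; excluding it needs a fairness assumption about the schedule:

```c
#include <stdbool.h>

/*
 * Under SC, an execution of { P0: x = 1;  P1: N loads of x } is fixed by
 * where the store lands among the loads: load i returns 1 iff the store
 * precedes it. Ask whether some execution exists in which every one of
 * P1's N loads saw 0.
 */
static bool all_zero_execution_exists(int n_loads)
{
	/* store_pos = k means the store happens after P1's k-th load;
	 * k = n_loads places it after all of them. */
	for (int store_pos = 0; store_pos <= n_loads; store_pos++) {
		bool all_zero = true;

		for (int i = 0; i < n_loads; i++)
			if (i >= store_pos)	/* store already happened */
				all_zero = false;
		if (all_zero)
			return true;		/* the "stuck" execution */
	}
	return false;
}
```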

Thanks,
  Andrea

