
[PATCHv2 00/16] atomics: API cleanups

Message ID 20180529154346.3168-1-mark.rutland@arm.com

Mark Rutland May 29, 2018, 3:43 p.m. UTC
This series contains a few cleanups of the atomic API, fixing
inconsistencies between atomic_* and atomic64_*, and minimizing
repetition in arch code. This is nicer for arch code, and the improved
regularity will help when generating the atomic headers in future.

Since v1 [1]:
* Add missing inc_not_zero #define for x86_32
* Remove newly redundant test op definitions for riscv
* Remove atomic_inc_not_zero_hint()
* Make inc/dec ops optional

The bulk of the patches reorganise things so architectures consistently
provide <atomic>_fetch_add_unless(), with atomic_fetch_add_unless()
provided as a wrapper by core code. A generic fallback is provided for
<atomic>_fetch_add_unless(), based on <atomic>_read() and
<atomic>_try_cmpxchg().

Other patches in the series add common fallbacks for:

* atomic64_inc_not_zero() 
* <atomic>_inc_and_test()
* <atomic>_dec_and_test()
* <atomic>_sub_and_test()
* <atomic>_add_negative()
* <atomic>_*inc*()
* <atomic>_*dec*()

... as almost all architectures provide (near-)identical implementations
of these today. Where an architecture provides a non-trivial definition,
it is updated to provide a matching preprocessor symbol, and the
instrumented atomics are updated correspondingly.

The end result is a strongly negative diffstat, though <linux/atomic.h>
grows by a reasonable amount. We could halve this in future by
templating the various fallbacks for atomic{,64}_t with code generation.

I've pushed this out to my atomics/api-cleanup branch [2] on kernel.org.

Thanks,
Mark.

[1] https://lkml.kernel.org/r/20180523133533.1076-1-mark.rutland@arm.com
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/api-cleanup

Mark Rutland (16):
  atomics/treewide: s/__atomic_add_unless/atomic_fetch_add_unless/
  atomics/treewide: remove redundant atomic_inc_not_zero() definitions
  atomics/treewide: make atomic64_inc_not_zero() optional
  atomics/treewide: make atomic_fetch_add_unless() optional
  atomics: prepare for atomic64_fetch_add_unless()
  atomics/generic: define atomic64_fetch_add_unless()
  atomics/alpha: define atomic64_fetch_add_unless()
  atomics/arc: define atomic64_fetch_add_unless()
  atomics/arm: define atomic64_fetch_add_unless()
  atomics/powerpc: define atomic64_fetch_add_unless()
  atomics/riscv: define atomic64_fetch_add_unless()
  atomics/treewide: make atomic64_fetch_add_unless() optional
  atomics/treewide: make test ops optional
  atomics/treewide: remove atomic_inc_not_zero_hint()
  atomics/treewide: make unconditional inc/dec ops optional
  atomics/treewide: make conditional inc/dec ops optional

 arch/alpha/include/asm/atomic.h           |  56 ++---
 arch/arc/include/asm/atomic.h             |  78 ++-----
 arch/arm/include/asm/atomic.h             |  55 ++---
 arch/arm64/include/asm/atomic.h           |  47 +---
 arch/h8300/include/asm/atomic.h           |  15 +-
 arch/hexagon/include/asm/atomic.h         |  18 +-
 arch/ia64/include/asm/atomic.h            |  81 -------
 arch/m68k/include/asm/atomic.h            |  24 +-
 arch/mips/include/asm/atomic.h            | 172 --------------
 arch/openrisc/include/asm/atomic.h        |   4 +-
 arch/parisc/include/asm/atomic.h          | 107 ---------
 arch/powerpc/include/asm/atomic.h         |  52 ++---
 arch/riscv/include/asm/atomic.h           | 149 +-----------
 arch/s390/include/asm/atomic.h            |  65 ------
 arch/sh/include/asm/atomic.h              |  35 ---
 arch/sparc/include/asm/atomic_32.h        |  24 +-
 arch/sparc/include/asm/atomic_64.h        |  65 +-----
 arch/sparc/lib/atomic32.c                 |   4 +-
 arch/x86/include/asm/atomic.h             |  30 +--
 arch/x86/include/asm/atomic64_32.h        |  61 +----
 arch/x86/include/asm/atomic64_64.h        |  48 +---
 arch/xtensa/include/asm/atomic.h          |  98 --------
 drivers/block/rbd.c                       |   2 +-
 drivers/infiniband/core/rdma_core.c       |   2 +-
 fs/afs/rxrpc.c                            |   2 +-
 include/asm-generic/atomic-instrumented.h |  69 +++++-
 include/asm-generic/atomic.h              |  33 ---
 include/asm-generic/atomic64.h            |  14 +-
 include/linux/atomic.h                    | 371 +++++++++++++++++++++++++-----
 kernel/bpf/syscall.c                      |   4 +-
 lib/atomic64.c                            |  13 +-
 net/atm/pppoatm.c                         |   2 +-
 net/rxrpc/call_object.c                   |   2 +-
 net/rxrpc/conn_object.c                   |   4 +-
 net/rxrpc/local_object.c                  |   2 +-
 net/rxrpc/peer_object.c                   |   2 +-
 36 files changed, 499 insertions(+), 1311 deletions(-)

-- 
2.11.0

Comments

Mark Rutland May 30, 2018, 11:23 a.m. UTC | #1
On Tue, May 29, 2018 at 04:43:30PM +0100, Mark Rutland wrote:
> This series contains a few cleanups of the atomic API, fixing
> inconsistencies between atomic_* and atomic64_*, and minimizing
> repetition in arch code. This is nicer for arch code, and the improved
> regularity will help when generating the atomic headers in future.
>
> Mark Rutland (16):
>   atomics/treewide: s/__atomic_add_unless/atomic_fetch_add_unless/
>   atomics/treewide: remove redundant atomic_inc_not_zero() definitions
>   atomics/treewide: make atomic64_inc_not_zero() optional
>   atomics/treewide: make atomic_fetch_add_unless() optional
>   atomics: prepare for atomic64_fetch_add_unless()
>   atomics/generic: define atomic64_fetch_add_unless()
>   atomics/alpha: define atomic64_fetch_add_unless()
>   atomics/arc: define atomic64_fetch_add_unless()
>   atomics/arm: define atomic64_fetch_add_unless()
>   atomics/powerpc: define atomic64_fetch_add_unless()
>   atomics/riscv: define atomic64_fetch_add_unless()
>   atomics/treewide: make atomic64_fetch_add_unless() optional
>   atomics/treewide: make test ops optional
>   atomics/treewide: remove atomic_inc_not_zero_hint()
>   atomics/treewide: make unconditional inc/dec ops optional
>   atomics/treewide: make conditional inc/dec ops optional

While testing the subsequent generated header patches, I discovered that
the andnot ifdeffery was inconsistent with that for all the other
atomics, so I'm going to append the below patch to this series, if it
needs a v3.

Otherwise, that's pushed out to my atomics/api-cleanup branch.

Thanks,
Mark.

---->8----
From da1cef7ec7c1ad143a06723164263ef308bfb569 Mon Sep 17 00:00:00 2001
From: Mark Rutland <mark.rutland@arm.com>
Date: Wed, 30 May 2018 11:55:28 +0100
Subject: [PATCH] atomics/treewide: clean up andnot ifdeffery

The ifdeffery for atomic*_{fetch_,}andnot() is unlike that for all the
other atomics. If atomic*_andnot() is not defined, the corresponding
atomic*_fetch_andnot() is assumed to not be defined.

Additionally, the fallbacks for the various ordering cases are written
much later in atomic.h as static inlines.

This isn't problematic today, but gets in the way of scripting the
generation of atomics. To prepare for scripting, this patch:

* Switches to separate ifdefs for atomic*_andnot() and
  atomic*_fetch_andnot(), updating implementations as appropriate.

* Moves the fallbacks into the standard ifdefs, as macro expansions
  rather than static inlines.

* Removes trivial andnot implementations from architectures, where these
  are superseded by core code.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arc/include/asm/atomic.h |  8 ++--
 arch/arm/include/asm/atomic.h |  2 +
 include/linux/atomic.h        | 96 ++++++++++++++-----------------------------
 3 files changed, 36 insertions(+), 70 deletions(-)

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 261c6a4d3f05..b706781825ac 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -187,7 +187,8 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
-#define atomic_andnot atomic_andnot
+#define atomic_andnot		atomic_andnot
+#define atomic_fetch_andnot	atomic_fetch_andnot
 
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
@@ -296,8 +297,6 @@ ATOMIC_OPS(add, +=, CTOP_INST_AADD_DI_R2_R2_R3)
 	ATOMIC_FETCH_OP(op, c_op, asm_op)
 
 ATOMIC_OPS(and, &=, CTOP_INST_AAND_DI_R2_R2_R3)
-#define atomic_andnot(mask, v) atomic_and(~(mask), (v))
-#define atomic_fetch_andnot(mask, v) atomic_fetch_and(~(mask), (v))
 ATOMIC_OPS(or, |=, CTOP_INST_AOR_DI_R2_R2_R3)
 ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
 
@@ -430,7 +429,8 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v)	\
 	ATOMIC64_OP_RETURN(op, op1, op2)				\
 	ATOMIC64_FETCH_OP(op, op1, op2)
 
-#define atomic64_andnot atomic64_andnot
+#define atomic64_andnot		atomic64_andnot
+#define atomic64_fetch_andnot	atomic64_fetch_andnot
 
 ATOMIC64_OPS(add, add.f, adc)
 ATOMIC64_OPS(sub, sub.f, sbc)
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 884c241424fe..f74756641410 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -216,6 +216,8 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 	return ret;
 }
 
+#define atomic_fetch_andnot		atomic_fetch_andnot
+
 #endif /* __LINUX_ARM_ARCH__ */
 
 #define ATOMIC_OPS(op, c_op, asm_op)					\
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 9c1b0769a846..99a2448252a1 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -352,12 +352,22 @@
 #endif
 #endif /* atomic_fetch_and_relaxed */
 
-#ifdef atomic_andnot
-/* atomic_fetch_andnot_relaxed */
+#ifndef atomic_andnot
+#define atomic_andnot(i, v)		atomic_and(~(int)(i), (v))
+#endif
+
 #ifndef atomic_fetch_andnot_relaxed
-#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot
-#define atomic_fetch_andnot_acquire	atomic_fetch_andnot
-#define atomic_fetch_andnot_release	atomic_fetch_andnot
+
+#ifndef atomic_fetch_andnot
+#define atomic_fetch_andnot(i, v)		atomic_fetch_and(~(int)(i), (v))
+#define atomic_fetch_andnot_relaxed(i, v)	atomic_fetch_and_relaxed(~(int)(i), (v))
+#define atomic_fetch_andnot_acquire(i, v)	atomic_fetch_and_acquire(~(int)(i), (v))
+#define atomic_fetch_andnot_release(i, v)	atomic_fetch_and_release(~(int)(i), (v))
+#else /* atomic_fetch_andnot */
+#define atomic_fetch_andnot_relaxed		atomic_fetch_andnot
+#define atomic_fetch_andnot_acquire		atomic_fetch_andnot
+#define atomic_fetch_andnot_release		atomic_fetch_andnot
+#endif /* atomic_fetch_andnot */
 
 #else /* atomic_fetch_andnot_relaxed */
 
@@ -376,7 +386,6 @@
 	__atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
 #endif
 #endif /* atomic_fetch_andnot_relaxed */
-#endif /* atomic_andnot */
 
 /* atomic_fetch_xor_relaxed */
 #ifndef atomic_fetch_xor_relaxed
@@ -653,33 +662,6 @@ static inline bool atomic_add_negative(int i, atomic_t *v)
 }
 #endif
 
-#ifndef atomic_andnot
-static inline void atomic_andnot(int i, atomic_t *v)
-{
-	atomic_and(~i, v);
-}
-
-static inline int atomic_fetch_andnot(int i, atomic_t *v)
-{
-	return atomic_fetch_and(~i, v);
-}
-
-static inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v)
-{
-	return atomic_fetch_and_relaxed(~i, v);
-}
-
-static inline int atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
-	return atomic_fetch_and_acquire(~i, v);
-}
-
-static inline int atomic_fetch_andnot_release(int i, atomic_t *v)
-{
-	return atomic_fetch_and_release(~i, v);
-}
-#endif
-
 #ifndef atomic_inc_unless_negative
 static inline bool atomic_inc_unless_negative(atomic_t *v)
 {
@@ -1026,12 +1008,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #endif
 #endif /* atomic64_fetch_and_relaxed */
 
-#ifdef atomic64_andnot
-/* atomic64_fetch_andnot_relaxed */
+#ifndef atomic_andnot
+#define atomic_andnot(i, v)		atomic_and(~(int)(i), (v))
+#endif
+
 #ifndef atomic64_fetch_andnot_relaxed
-#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot
-#define atomic64_fetch_andnot_acquire	atomic64_fetch_andnot
-#define atomic64_fetch_andnot_release	atomic64_fetch_andnot
+
+#ifndef atomic64_fetch_andnot
+#define atomic64_fetch_andnot(i, v)		atomic64_fetch_and(~(long long)(i), (v))
+#define atomic64_fetch_andnot_relaxed(i, v)	atomic64_fetch_and_relaxed(~(long long)(i), (v))
+#define atomic64_fetch_andnot_acquire(i, v)	atomic64_fetch_and_acquire(~(long long)(i), (v))
+#define atomic64_fetch_andnot_release(i, v)	atomic64_fetch_and_release(~(long long)(i), (v))
+#else /* atomic64_fetch_andnot */
+#define atomic64_fetch_andnot_relaxed		atomic64_fetch_andnot
+#define atomic64_fetch_andnot_acquire		atomic64_fetch_andnot
+#define atomic64_fetch_andnot_release		atomic64_fetch_andnot
+#endif /* atomic64_fetch_andnot */
 
 #else /* atomic64_fetch_andnot_relaxed */
 
@@ -1050,7 +1042,6 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 	__atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
 #endif
 #endif /* atomic64_fetch_andnot_relaxed */
-#endif /* atomic64_andnot */
 
 /* atomic64_fetch_xor_relaxed */
 #ifndef atomic64_fetch_xor_relaxed
@@ -1259,33 +1250,6 @@ static inline bool atomic64_add_negative(long long i, atomic64_t *v)
 }
 #endif
 
-#ifndef atomic64_andnot
-static inline void atomic64_andnot(long long i, atomic64_t *v)
-{
-	atomic64_and(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot_relaxed(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and_relaxed(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot_acquire(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and_acquire(~i, v);
-}
-
-static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v)
-{
-	return atomic64_fetch_and_release(~i, v);
-}
-#endif
-
 #ifndef atomic64_inc_unless_negative
 static inline bool atomic64_inc_unless_negative(atomic64_t *v)
 {
-- 
2.11.0
Mark Rutland May 30, 2018, 1:34 p.m. UTC | #2
On Wed, May 30, 2018 at 12:23:12PM +0100, Mark Rutland wrote:
> -#ifdef atomic64_andnot
> -/* atomic64_fetch_andnot_relaxed */
> +#ifndef atomic_andnot
> +#define atomic_andnot(i, v)		atomic_and(~(int)(i), (v))
> +#endif

Ugh; s/atomic/atomic64/ here.

I've pushed out a corrected branch.

Mark.
Peter Zijlstra June 5, 2018, 9:43 a.m. UTC | #3
Looking good, thanks!

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>