From patchwork Tue May 29 15:43:31 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137181
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Will Deacon
Subject: [PATCHv2 01/16] atomics/treewide: s/__atomic_add_unless/atomic_fetch_add_unless/
Date: Tue, 29 May 2018 16:43:31 +0100
Message-Id: <20180529154346.3168-2-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com>
References: <20180529154346.3168-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

While __atomic_add_unless was originally intended as a building-block
for atomic_add_unless, it's now used in a number of places around the
kernel. It's the only common atomic operation named __atomic*(), rather
than atomic_*(), and for consistency it would be better named
atomic_fetch_add_unless().

This lack of consistency is slightly confusing, and gets in the way of
scripting atomics. Given that, let's clean things up and promote it to
an official part of the atomics API, in the form of
atomic_fetch_add_unless().

This patch converts definitions and uses over to the new name, including
the instrumented version, using the following script:

----
git grep -w __atomic_add_unless | while read line; do
  sed -i '{s/\<__atomic_add_unless\>/atomic_fetch_add_unless/}' "${line%%:*}";
done
git grep -w __arch_atomic_add_unless | while read line; do
  sed -i '{s/\<__arch_atomic_add_unless\>/arch_atomic_fetch_add_unless/}' "${line%%:*}";
done
----

Note that we do not have atomic{64,_long}_fetch_add_unless(), which will
be introduced by later patches.

There should be no functional change as a result of this patch.
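For reference, the semantics of the operation being renamed can be sketched
as the generic cmpxchg-based form; this mirrors the include/asm-generic/atomic.h
implementation visible in the diff below and is illustrative only, not part of
the patch. Per-architecture implementations differ in detail.

----
/*
 * Sketch only: add @a to @v unless @v is @u, returning the old value
 * of @v.  Mirrors the generic cmpxchg-based fallback touched below.
 */
static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c, old;

	c = atomic_read(v);
	while (c != u && (old = atomic_cmpxchg(v, c, c + a)) != c)
		c = old;

	return c;
}
----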
Signed-off-by: Mark Rutland Acked-by: Geert Uytterhoeven Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon --- arch/alpha/include/asm/atomic.h | 4 ++-- arch/arc/include/asm/atomic.h | 4 ++-- arch/arm/include/asm/atomic.h | 4 ++-- arch/arm64/include/asm/atomic.h | 2 +- arch/h8300/include/asm/atomic.h | 2 +- arch/hexagon/include/asm/atomic.h | 4 ++-- arch/ia64/include/asm/atomic.h | 2 +- arch/m68k/include/asm/atomic.h | 2 +- arch/mips/include/asm/atomic.h | 4 ++-- arch/openrisc/include/asm/atomic.h | 4 ++-- arch/parisc/include/asm/atomic.h | 4 ++-- arch/powerpc/include/asm/atomic.h | 8 ++++---- arch/riscv/include/asm/atomic.h | 4 ++-- arch/s390/include/asm/atomic.h | 2 +- arch/sh/include/asm/atomic.h | 4 ++-- arch/sparc/include/asm/atomic_32.h | 2 +- arch/sparc/include/asm/atomic_64.h | 2 +- arch/sparc/lib/atomic32.c | 4 ++-- arch/x86/include/asm/atomic.h | 4 ++-- arch/xtensa/include/asm/atomic.h | 4 ++-- drivers/block/rbd.c | 2 +- drivers/infiniband/core/rdma_core.c | 2 +- fs/afs/rxrpc.c | 2 +- include/asm-generic/atomic-instrumented.h | 4 ++-- include/asm-generic/atomic.h | 4 ++-- include/linux/atomic.h | 2 +- kernel/bpf/syscall.c | 4 ++-- net/rxrpc/call_object.c | 2 +- net/rxrpc/conn_object.c | 4 ++-- net/rxrpc/local_object.c | 2 +- net/rxrpc/peer_object.c | 2 +- 31 files changed, 50 insertions(+), 50 deletions(-) -- 2.11.0 Acked-by: Palmer Dabbelt diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 767bfdd42992..392b15a4dd4f 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -206,7 +206,7 @@ ATOMIC_OPS(xor, xor) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -214,7 +214,7 @@ ATOMIC_OPS(xor, xor) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, new, old; smp_mb(); diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 11859287c52a..67121b5ff3a3 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -309,7 +309,7 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) #undef ATOMIC_OP /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -317,7 +317,7 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) * Atomically adds @a to @v, so long as it was not @u. 
* Returns the old value of @v */ -#define __atomic_add_unless(v, a, u) \ +#define atomic_fetch_add_unless(v, a, u) \ ({ \ int c, old; \ \ diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 66d0e215a773..9d56d0727c9b 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -130,7 +130,7 @@ static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new) } #define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int oldval, newval; unsigned long tmp; @@ -215,7 +215,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return ret; } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index c0235e0ff849..264d20339f74 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -125,7 +125,7 @@ #define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) #define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) #define atomic_add_negative(i, v) (atomic_add_return((i), (v)) < 0) -#define __atomic_add_unless(v, a, u) ___atomic_add_unless(v, a, u,) +#define atomic_fetch_add_unless(v, a, u) ___atomic_add_unless(v, a, u,) #define atomic_andnot atomic_andnot /* diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h index 941e7554e886..4465cfc30a3a 100644 --- a/arch/h8300/include/asm/atomic.h +++ b/arch/h8300/include/asm/atomic.h @@ -94,7 +94,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return ret; } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int ret; h8300flags flags; diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index fb3dfb2a667e..287aa9f394f3 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -164,7 +164,7 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer to value * @a: amount to add * @u: unless value is equal to u @@ -173,7 +173,7 @@ ATOMIC_OPS(xor) * */ -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int __oldval; register int tmp; diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 2524fb60fbc2..9d2ddde5f9d5 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -215,7 +215,7 @@ ATOMIC64_FETCH_OP(xor, ^) (cmpxchg(&((v)->counter), old, new)) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h index e993e2860ee1..8022d9ea1213 100644 --- a/arch/m68k/include/asm/atomic.h +++ b/arch/m68k/include/asm/atomic.h @@ -211,7 +211,7 @@ static inline int atomic_add_negative(int i, atomic_t *v) return c != 0; } -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t 
*v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 0ab176bdb8e8..02fc1553cf9b 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -275,7 +275,7 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) #define atomic_xchg(v, new) (xchg(&((v)->counter), (new))) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -283,7 +283,7 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h index 146e1660f00e..b589fac39b92 100644 --- a/arch/openrisc/include/asm/atomic.h +++ b/arch/openrisc/include/asm/atomic.h @@ -100,7 +100,7 @@ ATOMIC_OP(xor) * * This is often used through atomic_inc_not_zero() */ -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int old, tmp; @@ -119,7 +119,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) return old; } -#define __atomic_add_unless __atomic_add_unless +#define atomic_fetch_add_unless atomic_fetch_add_unless #include diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index 88bae6676c9b..7748abced766 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -78,7 +78,7 @@ static __inline__ int atomic_read(const atomic_t *v) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -86,7 +86,7 @@ static __inline__ int atomic_read(const atomic_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 682b3e6a1e21..1483261080a1 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -218,7 +218,7 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v) #define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new)) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -226,13 +226,13 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. 
*/ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int t; __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER -"1: lwarx %0,0,%1 # __atomic_add_unless\n\ +"1: lwarx %0,0,%1 # atomic_fetch_add_unless\n\ cmpw 0,%0,%3 \n\ beq 2f \n\ add %0,%2,%0 \n" @@ -538,7 +538,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER -"1: ldarx %0,0,%1 # __atomic_add_unless\n\ +"1: ldarx %0,0,%1 # atomic_fetch_add_unless\n\ cmpd 0,%0,%3 \n\ beq 2f \n\ add %0,%2,%0 \n" diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 855115ace98c..739e810c857e 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -332,7 +332,7 @@ ATOMIC_OP(dec_and_test, dec, ==, 0, 64) #undef ATOMIC_OP /* This is required to provide a full barrier on success. */ -static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u) +static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int prev, rc; @@ -381,7 +381,7 @@ static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u) */ static __always_inline int atomic_inc_not_zero(atomic_t *v) { - return __atomic_add_unless(v, 1, 0); + return atomic_fetch_add_unless(v, 1, 0); } #ifndef CONFIG_GENERIC_ATOMIC64 diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index 4b55532f15c4..c2858cdd8c29 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -90,7 +90,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return __atomic_cmpxchg(&v->counter, old, new); } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h index 0fd0099f43cc..ef45931ebac5 100644 --- a/arch/sh/include/asm/atomic.h +++ b/arch/sh/include/asm/atomic.h @@ -46,7 +46,7 @@ #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -54,7 +54,7 @@ * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. 
*/ -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h index d13ce517f4b9..a58f4b43bcc7 100644 --- a/arch/sparc/include/asm/atomic_32.h +++ b/arch/sparc/include/asm/atomic_32.h @@ -27,7 +27,7 @@ int atomic_fetch_or(int, atomic_t *); int atomic_fetch_xor(int, atomic_t *); int atomic_cmpxchg(atomic_t *, int, int); int atomic_xchg(atomic_t *, int); -int __atomic_add_unless(atomic_t *, int, int); +int atomic_fetch_add_unless(atomic_t *, int, int); void atomic_set(atomic_t *, int); #define atomic_set_release(v, i) atomic_set((v), (i)) diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 28db058d471b..f416fd3d2708 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -89,7 +89,7 @@ static inline int atomic_xchg(atomic_t *v, int new) return xchg(&v->counter, new); } -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/arch/sparc/lib/atomic32.c b/arch/sparc/lib/atomic32.c index 465a901a0ada..281fa634bb1a 100644 --- a/arch/sparc/lib/atomic32.c +++ b/arch/sparc/lib/atomic32.c @@ -95,7 +95,7 @@ int atomic_cmpxchg(atomic_t *v, int old, int new) } EXPORT_SYMBOL(atomic_cmpxchg); -int __atomic_add_unless(atomic_t *v, int a, int u) +int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int ret; unsigned long flags; @@ -107,7 +107,7 @@ int __atomic_add_unless(atomic_t *v, int a, int u) spin_unlock_irqrestore(ATOMIC_HASH(v), flags); return ret; } -EXPORT_SYMBOL(__atomic_add_unless); +EXPORT_SYMBOL(atomic_fetch_add_unless); /* Atomic operations are already serializing */ void atomic_set(atomic_t *v, int i) diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 0db6bec95489..84ed0bd76aef 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -254,7 +254,7 @@ static inline int arch_atomic_fetch_xor(int i, atomic_t *v) } /** - * __arch_atomic_add_unless - add unless the number is already a given value + * arch_atomic_fetch_add_unless - add unless the number is already a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -262,7 +262,7 @@ static inline int arch_atomic_fetch_xor(int i, atomic_t *v) * Atomically adds @a to @v, so long as @v was not already @u. * Returns the old value of @v. */ -static __always_inline int __arch_atomic_add_unless(atomic_t *v, int a, int u) +static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c = arch_atomic_read(v); diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h index e7a23f2a519a..4188e56c06c9 100644 --- a/arch/xtensa/include/asm/atomic.h +++ b/arch/xtensa/include/asm/atomic.h @@ -275,7 +275,7 @@ ATOMIC_OPS(xor) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) /** - * __atomic_add_unless - add unless the number is a given value + * atomic_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -283,7 +283,7 @@ ATOMIC_OPS(xor) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. 
*/ -static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u) +static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index 33b36fea1d73..b21cfacdbf03 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -61,7 +61,7 @@ static int atomic_inc_return_safe(atomic_t *v) { unsigned int counter; - counter = (unsigned int)__atomic_add_unless(v, 1, 0); + counter = (unsigned int)atomic_fetch_add_unless(v, 1, 0); if (counter <= (unsigned int)INT_MAX) return (int)counter; diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index a6e904973ba8..475910ffbcb6 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -121,7 +121,7 @@ static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive) * this lock. */ if (!exclusive) - return __atomic_add_unless(&uobj->usecnt, 1, -1) == -1 ? + return atomic_fetch_add_unless(&uobj->usecnt, 1, -1) == -1 ? -EBUSY : 0; /* lock is either WRITE or DESTROY - should be exclusive */ diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index 08735948f15d..b35115b71b34 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -648,7 +648,7 @@ static void afs_wake_up_async_call(struct sock *sk, struct rxrpc_call *rxcall, trace_afs_notify_call(rxcall, call); call->need_attention = true; - u = __atomic_add_unless(&call->usage, 1, 0); + u = atomic_fetch_add_unless(&call->usage, 1, 0); if (u != 0) { trace_afs_call(call, afs_call_trace_wake, u, atomic_read(&call->net->nr_outstanding_calls), diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index ec07f23678ea..b8b14cc2df6c 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -84,10 +84,10 @@ static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 ne } #endif -static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u) +static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { kasan_check_write(v, sizeof(*v)); - return __arch_atomic_add_unless(v, a, u); + return arch_atomic_fetch_add_unless(v, a, u); } diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h index abe6dd9ca2a8..10051ed6d088 100644 --- a/include/asm-generic/atomic.h +++ b/include/asm-generic/atomic.h @@ -221,8 +221,8 @@ static inline void atomic_dec(atomic_t *v) #define atomic_xchg(ptr, v) (xchg(&(ptr)->counter, (v))) #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), (old), (new))) -#ifndef __atomic_add_unless -static inline int __atomic_add_unless(atomic_t *v, int a, int u) +#ifndef atomic_fetch_add_unless +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { int c, old; c = atomic_read(v); diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 8b276fd9a127..ec36a1375c9d 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -530,7 +530,7 @@ */ static inline int atomic_add_unless(atomic_t *v, int a, int u) { - return __atomic_add_unless(v, a, u) != u; + return atomic_fetch_add_unless(v, a, u) != u; } /** diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 016ef9025827..dd04ecb6ccaf 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -538,7 +538,7 @@ static struct bpf_map *bpf_map_inc_not_zero(struct bpf_map *map, { int refold; - refold = __atomic_add_unless(&map->refcnt, 1, 0); + refold = 
atomic_fetch_add_unless(&map->refcnt, 1, 0); if (refold >= BPF_MAX_REFCNT) { __bpf_map_put(map, false); @@ -1115,7 +1115,7 @@ struct bpf_prog *bpf_prog_inc_not_zero(struct bpf_prog *prog) { int refold; - refold = __atomic_add_unless(&prog->aux->refcnt, 1, 0); + refold = atomic_fetch_add_unless(&prog->aux->refcnt, 1, 0); if (refold >= BPF_MAX_REFCNT) { __bpf_prog_put(prog, false); diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index f6734d8cb01a..9486293fef5c 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -415,7 +415,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, bool rxrpc_queue_call(struct rxrpc_call *call) { const void *here = __builtin_return_address(0); - int n = __atomic_add_unless(&call->usage, 1, 0); + int n = atomic_fetch_add_unless(&call->usage, 1, 0); if (n == 0) return false; if (rxrpc_queue_work(&call->processor)) diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 4c77a78a252a..77440a356b14 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -266,7 +266,7 @@ void rxrpc_kill_connection(struct rxrpc_connection *conn) bool rxrpc_queue_conn(struct rxrpc_connection *conn) { const void *here = __builtin_return_address(0); - int n = __atomic_add_unless(&conn->usage, 1, 0); + int n = atomic_fetch_add_unless(&conn->usage, 1, 0); if (n == 0) return false; if (rxrpc_queue_work(&conn->processor)) @@ -309,7 +309,7 @@ rxrpc_get_connection_maybe(struct rxrpc_connection *conn) const void *here = __builtin_return_address(0); if (conn) { - int n = __atomic_add_unless(&conn->usage, 1, 0); + int n = atomic_fetch_add_unless(&conn->usage, 1, 0); if (n > 0) trace_rxrpc_conn(conn, rxrpc_conn_got, n + 1, here); else diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index b493e6b62740..777c3ed4cfc0 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -305,7 +305,7 @@ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local) const void *here = __builtin_return_address(0); if (local) { - int n = __atomic_add_unless(&local->usage, 1, 0); + int n = atomic_fetch_add_unless(&local->usage, 1, 0); if (n > 0) trace_rxrpc_local(local, rxrpc_local_got, n + 1, here); else diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c index 1b7e8107b3ae..1cf3b408017a 100644 --- a/net/rxrpc/peer_object.c +++ b/net/rxrpc/peer_object.c @@ -406,7 +406,7 @@ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer) const void *here = __builtin_return_address(0); if (peer) { - int n = __atomic_add_unless(&peer->usage, 1, 0); + int n = atomic_fetch_add_unless(&peer->usage, 1, 0); if (n > 0) trace_rxrpc_peer(peer, rxrpc_peer_got, n + 1, here); else From patchwork Tue May 29 15:43:32 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137196 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4217208lji; Tue, 29 May 2018 08:48:45 -0700 (PDT) X-Google-Smtp-Source: AB8JxZrDWy+jzycMRfPjhFDLM8MytdEuZB2Our/7eiYef9w+Vk75+v/C6qC+HuYL36rcsl2OVcyo X-Received: by 2002:aa7:83c7:: with SMTP id j7-v6mr17861486pfn.50.1527608925341; Tue, 29 May 2018 08:48:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608925; cv=none; d=google.com; s=arc-20160816; b=qBLpakB3M0KdVP7ws61kspUSGPCM4yLnEWaLdK4xZPeC9Ccy5glvWeQbkF4yOaJ91+ w15E0OsUdpOetGrkISiTqumFdnMpKxOMQqK+YpVK4KlcaZDojaqiB1m+UcfV5Xh4ZG4p TA8bOqviDZ25Ks6kHbSMIUIpN1Di22K8VCAdK2jMSI70LZTG4nmU1Lrs9S2rmZ9Edxyq 
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Will Deacon
Subject: [PATCHv2 02/16] atomics/treewide: remove redundant atomic_inc_not_zero() definitions
Date: Tue, 29 May 2018 16:43:32 +0100
Message-Id: <20180529154346.3168-3-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com>
References: <20180529154346.3168-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

When atomic_inc_not_zero(v) isn't defined, <linux/atomic.h> will define
it as falling back to atomic_add_unless((v), 1, 0), so there's no need
for arch code to do so.

There should be no functional change as a result of this patch.
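As a sketch (the #ifndef guard is an assumption about how <linux/atomic.h>
conditionally provides the fallback), the generic definition that makes the
per-arch definitions removed below redundant is simply:

----
/* Sketch of the generic fallback described above. */
#ifndef atomic_inc_not_zero
#define atomic_inc_not_zero(v)	atomic_add_unless((v), 1, 0)
#endif
----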
Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon --- arch/arc/include/asm/atomic.h | 2 -- arch/hexagon/include/asm/atomic.h | 2 -- arch/riscv/include/asm/atomic.h | 9 --------- 3 files changed, 13 deletions(-) -- 2.11.0 Acked-by: Palmer Dabbelt diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 67121b5ff3a3..cecdf3403caf 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -336,8 +336,6 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) c; \ }) -#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0) - #define atomic_inc(v) atomic_add(1, v) #define atomic_dec(v) atomic_sub(1, v) diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index 287aa9f394f3..d2feeba93c44 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -197,8 +197,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) return __oldval; } -#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0) - #define atomic_inc(v) atomic_add(1, (v)) #define atomic_dec(v) atomic_sub(1, (v)) diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 739e810c857e..0e27e050ba14 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -375,15 +375,6 @@ static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u) } #endif -/* - * The extra atomic operations that are constructed from one of the core - * LR/SC-based operations above. - */ -static __always_inline int atomic_inc_not_zero(atomic_t *v) -{ - return atomic_fetch_add_unless(v, 1, 0); -} - #ifndef CONFIG_GENERIC_ATOMIC64 static __always_inline long atomic64_inc_not_zero(atomic64_t *v) { From patchwork Tue May 29 15:43:33 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137182 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4212289lji; Tue, 29 May 2018 08:44:14 -0700 (PDT) X-Google-Smtp-Source: AB8JxZp+/eRO940MXiEYKfRsH23c7K5MoaujDkiycBTo6m4coDWwp+WeFI7QXHA7kU6BkEKeKV2/ X-Received: by 2002:a17:902:5e3:: with SMTP id f90-v6mr18533836plf.175.1527608654797; Tue, 29 May 2018 08:44:14 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608654; cv=none; d=google.com; s=arc-20160816; b=cqNerpi31gzwC/pQbVwKUwmCMORWUBZq+ujd//EzIpxNPUGdbYrTTb60po2/W31i5o zgxcRDeXuFvoPctwwEWZXVFmu6BRWiKxqFW9zlKxMHrtEjCap81CT6+ahQw/6rqCgWH6 U7ZLE0Se2e/AuFLF74eyOh/dqxN87qej/b5N2GGOwDxkg5aseUCMxivWjgZBENu8lv5u 6vF0TFZEi/Ku4NVtCzBqBYwsI7Mx6L+XEjC0WNxFQfbsKx/ltZY+lpgmwx1b5Jqp0OJA rdveVtNGmHRHDxekRndnL7DjzzTCp9GwmQG9KZCdC+N6geou+Z3KhX8I27tUAbT4YXXf oL1A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=3Qv7bSGx+Meqsb8pOvUAbdBqa54NjbHcXgkSjmEhw2w=; b=WI3USbp8yA9kcGUfd8CHnjRSdZ/7JfjYnAeum+XVXqURxVGFQuxvALS8sSgW2YR1xq jTozHAY5q6oRKQX12hsQP7gqqDxVb9Bpnfm/EeU6VfGJdLpHQxSdP+KyuqVoDTXWI2sv NJ/tA87O2w2M7OC3aB3EgQCGwv+7hYfEmWPrEr2dXyuGvHlaeJS77GARinjdNaRIsCMK KG0Xu4tSOLgunvWJmd8r9L6ZaBwaRvFRsmgE+af6ISqC9Ye03u1Ow1cFebczBcHk6+rl w+u1vzBWlanBntfNzfwvplGfto1GyTgek0CxotHUnL/F73f1kWX09sbjG5Ul1PDSUu8k og4w== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted 
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Will Deacon
Subject: [PATCHv2 03/16] atomics/treewide: make atomic64_inc_not_zero() optional
Date: Tue, 29 May 2018 16:43:33 +0100
Message-Id: <20180529154346.3168-4-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com>
References: <20180529154346.3168-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

We define a trivial fallback for atomic_inc_not_zero(), but don't do
the same for atomic64_inc_not_zero(), leading most architectures to
define the same boilerplate.

Let's add a fallback in <linux/atomic.h>, and remove the redundant
implementations. Note that atomic64_add_unless() is always defined in
<linux/atomic.h>, and promotes its arguments to the requisite types, so
we need not do this explicitly.

There should be no functional change as a result of this patch.
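Sketched out, the convention is that <linux/atomic.h> supplies the fallback
only when the architecture has not already claimed the name, so architectures
keeping a native version (e.g. powerpc in the diff below) define the macro
themselves:

----
/* Generic fallback added by this patch (see the include/linux/atomic.h
 * hunk below); suppressed when an architecture defines the name itself. */
#ifndef atomic64_inc_not_zero
#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
#endif
----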
Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon --- arch/alpha/include/asm/atomic.h | 2 -- arch/arc/include/asm/atomic.h | 1 - arch/arm/include/asm/atomic.h | 1 - arch/arm64/include/asm/atomic.h | 2 -- arch/ia64/include/asm/atomic.h | 2 -- arch/mips/include/asm/atomic.h | 2 -- arch/parisc/include/asm/atomic.h | 2 -- arch/powerpc/include/asm/atomic.h | 1 + arch/riscv/include/asm/atomic.h | 7 ------- arch/s390/include/asm/atomic.h | 1 - arch/sparc/include/asm/atomic_64.h | 2 -- arch/x86/include/asm/atomic64_32.h | 2 +- arch/x86/include/asm/atomic64_64.h | 2 -- include/asm-generic/atomic-instrumented.h | 3 +++ include/asm-generic/atomic64.h | 1 - include/linux/atomic.h | 11 +++++++++++ 16 files changed, 16 insertions(+), 26 deletions(-) -- 2.11.0 Acked-by: Palmer Dabbelt diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 392b15a4dd4f..eb0f25e4c5dd 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -296,8 +296,6 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) return old - 1; } -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - #define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) #define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index cecdf3403caf..1406825b5e7d 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -603,7 +603,6 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) #define atomic64_dec(v) atomic64_sub(1LL, (v)) #define atomic64_dec_return(v) atomic64_sub_return(1LL, (v)) #define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL) #endif /* !CONFIG_GENERIC_ATOMIC64 */ diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 9d56d0727c9b..02f3894faa48 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -534,7 +534,6 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) #define atomic64_dec(v) atomic64_sub(1LL, (v)) #define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1LL, (v)) #define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL) #endif /* !CONFIG_GENERIC_ATOMIC64 */ #endif diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index 264d20339f74..ad50412889c5 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -204,7 +204,5 @@ #define atomic64_add_unless(v, a, u) (___atomic_add_unless(v, a, u, 64) != u) #define atomic64_andnot atomic64_andnot -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - #endif #endif diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 9d2ddde5f9d5..93d48b823220 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -246,8 +246,6 @@ static __inline__ long atomic64_add_unless(atomic64_t *v, long a, long u) return c != (u); } -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - static __inline__ long atomic64_dec_if_positive(atomic64_t *v) { long c, old, dec; diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 02fc1553cf9b..502e691c6393 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -644,8 +644,6 
@@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) return c != (u); } -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - #define atomic64_dec_return(v) atomic64_sub_return(1, (v)) #define atomic64_inc_return(v) atomic64_add_return(1, (v)) diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index 7748abced766..3fd0243bf405 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -305,8 +305,6 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) return c != (u); } -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - /* * atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 1483261080a1..e59620ee4f6b 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -582,6 +582,7 @@ static __inline__ int atomic64_inc_not_zero(atomic64_t *v) return t1 != 0; } +#define atomic64_inc_not_zero(v) atomic64_inc_not_zero((v)) #endif /* __powerpc64__ */ diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 0e27e050ba14..18259e90f57e 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -375,13 +375,6 @@ static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u) } #endif -#ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_inc_not_zero(atomic64_t *v) -{ - return atomic64_add_unless(v, 1, 0); -} -#endif - /* * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as * {cmp,}xchg and the operations that return, so they need a full barrier. diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index c2858cdd8c29..66dac30a4fe1 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -212,6 +212,5 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) #define atomic64_dec(_v) atomic64_sub(1, _v) #define atomic64_dec_return(_v) atomic64_sub_return(1, _v) #define atomic64_dec_and_test(_v) (atomic64_sub_return(1, _v) == 0) -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) #endif /* __ARCH_S390_ATOMIC__ */ diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index f416fd3d2708..07830a316464 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -123,8 +123,6 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u) return c != (u); } -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) - long atomic64_dec_if_positive(atomic64_t *v); #endif /* !(__ARCH_SPARC64_ATOMIC__) */ diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 92212bf0484f..2a33cc17801b 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -295,7 +295,7 @@ static inline int arch_atomic64_add_unless(atomic64_t *v, long long a, return (int)a; } - +#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero static inline int arch_atomic64_inc_not_zero(atomic64_t *v) { int r; diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 6106b59d3260..6f95023894b7 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -207,8 +207,6 @@ static inline bool arch_atomic64_add_unless(atomic64_t *v, long a, long u) return true; } -#define 
arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0) - /* * arch_atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index b8b14cc2df6c..52b7dcb41769 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -205,11 +205,14 @@ static __always_inline s64 atomic64_dec_return(atomic64_t *v) return arch_atomic64_dec_return(v); } +#ifdef arch_atomic64_inc_not_zero +#define atomic64_inc_not_zero atomic64_inc_not_zero static __always_inline s64 atomic64_inc_not_zero(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_inc_not_zero(v); } +#endif static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h index 8d28eb010d0d..e9a94b7a4d0a 100644 --- a/include/asm-generic/atomic64.h +++ b/include/asm-generic/atomic64.h @@ -62,6 +62,5 @@ extern int atomic64_add_unless(atomic64_t *v, long long a, long long u); #define atomic64_dec(v) atomic64_sub(1LL, (v)) #define atomic64_dec_return(v) atomic64_sub_return(1LL, (v)) #define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL) #endif /* _ASM_GENERIC_ATOMIC64_H */ diff --git a/include/linux/atomic.h b/include/linux/atomic.h index ec36a1375c9d..e83f12ec7f2d 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -1048,6 +1048,17 @@ static inline int atomic_dec_if_positive(atomic_t *v) #define atomic64_try_cmpxchg_release atomic64_try_cmpxchg #endif /* atomic64_try_cmpxchg */ +/** + * atomic64_inc_not_zero - increment unless the number is zero + * @v: pointer of type atomic64_t + * + * Atomically increments @v by 1, so long as @v is non-zero. + * Returns non-zero if @v was non-zero, and zero otherwise. 
+ */ +#ifndef atomic64_inc_not_zero +#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) +#endif + #ifndef atomic64_andnot static inline void atomic64_andnot(long long i, atomic64_t *v) {

From patchwork Tue May 29 15:43:34 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137195
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Will Deacon, Vineet Gupta
Subject: [PATCHv2 04/16] atomics/treewide: make atomic_fetch_add_unless() optional
Date: Tue, 29 May 2018 16:43:34 +0100
Message-Id: <20180529154346.3168-5-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com>
References: <20180529154346.3168-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Several architectures have a near-identical implementation based on
atomic_read() and atomic_cmpxchg() that we can instead define in
<linux/atomic.h>, so let's do so, using something close to the existing
x86 implementation with try_cmpxchg().

Where an architecture provides its own atomic_fetch_add_unless(), it
must define a preprocessor symbol for it. The instrumented atomics are
updated accordingly.

Note that arch/arc's existing atomic_fetch_add_unless() had redundant
barriers, as these are already present in its atomic_cmpxchg()
implementation.

There should be no functional change as a result of this patch.
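A sketch of the resulting generic fallback, assuming it takes the
try_cmpxchg()-based shape described above (compare the x86 implementation
removed in the diff below); this is not a verbatim copy of the patch:

----
/* Assumed shape of the <linux/atomic.h> fallback, modelled on the x86
 * version removed below. */
#ifndef atomic_fetch_add_unless
static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	do {
		if (unlikely(c == u))
			break;
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return c;
}
#endif
----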
Signed-off-by: Mark Rutland Reviewed-by: Geert Uytterhoeven Acked-by: Geert Uytterhoeven Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon Cc: Vineet Gupta --- arch/alpha/include/asm/atomic.h | 2 +- arch/arc/include/asm/atomic.h | 28 ---------------------------- arch/arm/include/asm/atomic.h | 11 +---------- arch/arm64/include/asm/atomic.h | 1 - arch/h8300/include/asm/atomic.h | 1 + arch/hexagon/include/asm/atomic.h | 1 + arch/ia64/include/asm/atomic.h | 16 ---------------- arch/m68k/include/asm/atomic.h | 15 --------------- arch/mips/include/asm/atomic.h | 24 ------------------------ arch/parisc/include/asm/atomic.h | 24 ------------------------ arch/powerpc/include/asm/atomic.h | 1 + arch/riscv/include/asm/atomic.h | 1 + arch/s390/include/asm/atomic.h | 15 --------------- arch/sh/include/asm/atomic.h | 25 ------------------------- arch/sparc/include/asm/atomic_32.h | 2 ++ arch/sparc/include/asm/atomic_64.h | 15 --------------- arch/x86/include/asm/atomic.h | 21 --------------------- arch/xtensa/include/asm/atomic.h | 24 ------------------------ include/asm-generic/atomic-instrumented.h | 4 +++- include/asm-generic/atomic.h | 11 ----------- include/linux/atomic.h | 23 +++++++++++++++++++++++ 21 files changed, 34 insertions(+), 231 deletions(-) -- 2.11.0 Acked-by: Palmer Dabbelt diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index eb0f25e4c5dd..4a800a3424a3 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -235,7 +235,7 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) smp_mb(); return old; } - +#define atomic_fetch_add_unless atomic_fetch_add_unless /** * atomic64_add_unless - add unless the number is a given value diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 1406825b5e7d..60da80481c5d 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -308,34 +308,6 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. 
- * Returns the old value of @v - */ -#define atomic_fetch_add_unless(v, a, u) \ -({ \ - int c, old; \ - \ - /* \ - * Explicit full memory barrier needed before/after as \ - * LLOCK/SCOND thmeselves don't provide any such semantics \ - */ \ - smp_mb(); \ - \ - c = atomic_read(v); \ - while (c != (u) && (old = atomic_cmpxchg((v), c, c + (a))) != c)\ - c = old; \ - \ - smp_mb(); \ - \ - c; \ -}) - #define atomic_inc(v) atomic_add(1, v) #define atomic_dec(v) atomic_sub(1, v) diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 02f3894faa48..74460aa00fa0 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -156,6 +156,7 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) return oldval; } +#define atomic_fetch_add_unless atomic_fetch_add_unless #else /* ARM_ARCH_6 */ @@ -215,16 +216,6 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return ret; } -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - - c = atomic_read(v); - while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c) - c = old; - return c; -} - #endif /* __LINUX_ARM_ARCH__ */ #define ATOMIC_OPS(op, c_op, asm_op) \ diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index ad50412889c5..22c8c43d6689 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -125,7 +125,6 @@ #define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) #define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) #define atomic_add_negative(i, v) (atomic_add_return((i), (v)) < 0) -#define atomic_fetch_add_unless(v, a, u) ___atomic_add_unless(v, a, u,) #define atomic_andnot atomic_andnot /* diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h index 4465cfc30a3a..250bb3baac23 100644 --- a/arch/h8300/include/asm/atomic.h +++ b/arch/h8300/include/asm/atomic.h @@ -106,5 +106,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) arch_local_irq_restore(flags); return ret; } +#define atomic_fetch_add_unless atomic_fetch_add_unless #endif /* __ARCH_H8300_ATOMIC __ */ diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index d2feeba93c44..86c67e9adbfa 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -196,6 +196,7 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) ); return __oldval; } +#define atomic_fetch_add_unless atomic_fetch_add_unless #define atomic_inc(v) atomic_add(1, (v)) #define atomic_dec(v) atomic_sub(1, (v)) diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 93d48b823220..cfe44086338e 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -215,22 +215,6 @@ ATOMIC64_FETCH_OP(xor, ^) (cmpxchg(&((v)->counter), old, new)) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - - static __inline__ long atomic64_add_unless(atomic64_t *v, long a, long u) { long c, old; diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h index 8022d9ea1213..596882cda224 100644 --- a/arch/m68k/include/asm/atomic.h +++ b/arch/m68k/include/asm/atomic.h @@ -211,19 +211,4 @@ static inline int 
atomic_add_negative(int i, atomic_t *v) return c != 0; } -static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #endif /* __ARCH_M68K_ATOMIC __ */ diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 502e691c6393..794734e730d9 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -274,30 +274,6 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), (new))) -/** - * atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #define atomic_dec_return(v) atomic_sub_return(1, (v)) #define atomic_inc_return(v) atomic_add_return(1, (v)) diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index 3fd0243bf405..b2b6261d05e7 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -77,30 +77,6 @@ static __inline__ int atomic_read(const atomic_t *v) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) -/** - * atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. 
- */ -static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #define ATOMIC_OP(op, c_op) \ static __inline__ void atomic_##op(int i, atomic_t *v) \ { \ diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index e59620ee4f6b..b5646c079c16 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -248,6 +248,7 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) return t; } +#define atomic_fetch_add_unless atomic_fetch_add_unless /** * atomic_inc_not_zero - increment unless the number is zero diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 18259e90f57e..5f161daefcd2 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -349,6 +349,7 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) : "memory"); return prev; } +#define atomic_fetch_add_unless atomic_fetch_add_unless #ifndef CONFIG_GENERIC_ATOMIC64 static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u) diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index 66dac30a4fe1..26c6b713a7a3 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -90,21 +90,6 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) return __atomic_cmpxchg(&v->counter, old, new); } -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == u)) - break; - old = atomic_cmpxchg(v, c, c + a); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #define ATOMIC64_INIT(i) { (i) } static inline long atomic64_read(const atomic64_t *v) diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h index ef45931ebac5..422fac764ca1 100644 --- a/arch/sh/include/asm/atomic.h +++ b/arch/sh/include/asm/atomic.h @@ -45,31 +45,6 @@ #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) -/** - * atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. 
- */ -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - - return c; -} - #endif /* CONFIG_CPU_J2 */ #endif /* __ASM_SH_ATOMIC_H */ diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h index a58f4b43bcc7..9d7a15acc0c5 100644 --- a/arch/sparc/include/asm/atomic_32.h +++ b/arch/sparc/include/asm/atomic_32.h @@ -30,6 +30,8 @@ int atomic_xchg(atomic_t *, int); int atomic_fetch_add_unless(atomic_t *, int, int); void atomic_set(atomic_t *, int); +#define atomic_fetch_add_unless atomic_fetch_add_unless + #define atomic_set_release(v, i) atomic_set((v), (i)) #define atomic_read(v) READ_ONCE((v)->counter) diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 07830a316464..e4f1c93db31f 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -89,21 +89,6 @@ static inline int atomic_xchg(atomic_t *v, int new) return xchg(&v->counter, new); } -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #define atomic64_cmpxchg(v, o, n) \ ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 84ed0bd76aef..616327ac9d39 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -253,27 +253,6 @@ static inline int arch_atomic_fetch_xor(int i, atomic_t *v) return val; } -/** - * arch_atomic_fetch_add_unless - add unless the number is already a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as @v was not already @u. - * Returns the old value of @v. - */ -static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c = arch_atomic_read(v); - - do { - if (unlikely(c == u)) - break; - } while (!arch_atomic_try_cmpxchg(v, &c, c + a)); - - return c; -} - #ifdef CONFIG_X86_32 # include #else diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h index 4188e56c06c9..f4c9f82c40c6 100644 --- a/arch/xtensa/include/asm/atomic.h +++ b/arch/xtensa/include/asm/atomic.h @@ -274,30 +274,6 @@ ATOMIC_OPS(xor) #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) -/** - * atomic_fetch_add_unless - add unless the number is a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. 
- */ -static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c; -} - #endif /* __KERNEL__ */ #endif /* _XTENSA_ATOMIC_H */ diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index 52b7dcb41769..6e0818c182e2 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -84,12 +84,14 @@ static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 ne } #endif +#ifdef arch_atomic_fetch_add_unless +#define atomic_fetch_add_unless atomic_fetch_add_unless static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { kasan_check_write(v, sizeof(*v)); return arch_atomic_fetch_add_unless(v, a, u); } - +#endif static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h index 10051ed6d088..757e45821220 100644 --- a/include/asm-generic/atomic.h +++ b/include/asm-generic/atomic.h @@ -221,15 +221,4 @@ static inline void atomic_dec(atomic_t *v) #define atomic_xchg(ptr, v) (xchg(&(ptr)->counter, (v))) #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), (old), (new))) -#ifndef atomic_fetch_add_unless -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c, old; - c = atomic_read(v); - while (c != u && (old = atomic_cmpxchg(v, c, c + a)) != c) - c = old; - return c; -} -#endif - #endif /* __ASM_GENERIC_ATOMIC_H */ diff --git a/include/linux/atomic.h b/include/linux/atomic.h index e83f12ec7f2d..1105c0b37f27 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -520,6 +520,29 @@ #endif /* xchg_relaxed */ /** + * atomic_fetch_add_unless - add unless the number is already a given value + * @v: pointer of type atomic_t + * @a: the amount to add to v... + * @u: ...unless v is equal to u. + * + * Atomically adds @a to @v, so long as @v was not already @u. + * Returns original value of @v + */ +#ifndef atomic_fetch_add_unless +static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + int c = atomic_read(v); + + do { + if (unlikely(c == u)) + break; + } while (!atomic_try_cmpxchg(v, &c, c + a)); + + return c; +} +#endif + +/** * atomic_add_unless - add unless the number is already a given value * @v: pointer of type atomic_t * @a: the amount to add to v... 
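Before the atomic64 patches later in the series, it may help to see the semantics being standardised on in isolation. The following is a self-contained userspace C11 model (invented model_* names, not the kernel implementation): atomic_fetch_add_unless() returns the value observed before any add, and the boolean atomic_add_unless() is derived from it.

----
/* Standalone C11 model of the renamed operation's semantics (not kernel code). */
#include <stdatomic.h>
#include <stdio.h>

/* Add @a to @v unless @v currently holds @u; return the value seen beforehand. */
static int model_fetch_add_unless(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	do {
		if (c == u)
			break;
		/* On failure the exchange reloads the current value into c. */
	} while (!atomic_compare_exchange_strong(v, &c, c + a));

	return c;
}

/* The boolean form used by most callers is derived from the fetched value. */
static int model_add_unless(atomic_int *v, int a, int u)
{
	return model_fetch_add_unless(v, a, u) != u;
}

int main(void)
{
	atomic_int v = 2;
	int old = model_fetch_add_unless(&v, 1, 0);	/* 2 != 0: add, v -> 3 */
	int added = model_add_unless(&v, 1, 3);		/* v == 3 == u: no add */

	printf("old=%d added=%d final=%d\n", old, added, atomic_load(&v));
	return 0;
}
----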
From patchwork Tue May 29 15:43:35 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137194 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4216544lji; Tue, 29 May 2018 08:48:04 -0700 (PDT) X-Google-Smtp-Source: AB8JxZrX0zBuks1swjKlpof4B+mwUfpqZvGVBRiA9OaHw/N9oU0ORZlsx+1ohXMGkFy2nWR1uBoy X-Received: by 2002:a63:b307:: with SMTP id i7-v6mr14250364pgf.448.1527608884152; Tue, 29 May 2018 08:48:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608884; cv=none; d=google.com; s=arc-20160816; b=yuV6Ga2mIk+lS74UOa1uE4bkMPNFDs/9h9yQMmWP/+IeaR7ll/M2JYLYNkvtVNmk+C sN8hf0p5n+z2zXLLxqkJQsY/3aO/bomh6MF6UjQKuVgm1ai85vJNS3y6ceoTszo1/rmo IRhvkzQ2wCu3tbDwlT6h4fyA83vypP0wGKl2jAdss/METBYIRe+bPQZ7tJ/NauPvtito eKvDwstToRt/glTQ6hT6HxU8h8nDsPiSbL1GIKs5yopqcVRJJWBJu0wEfbbVCFo51ukY yBY17ZN/zt0n40Ljf6XZzN4HL87Pg+C6ChK9dkpG3RwwOk+/cyaAdlNasWPSRPdhzCSj gGVA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=tIb8M3HHkq5jyOYY2EofNBFtGj04uwlggnhRXeOefu4=; b=G1NSs+wXkFX7djnH2zdNSkL2VXMe1FEFHgoZzNOr8DLAXyQMjPj7A2JfvEqXyb2dMN 9IPKjzDwWL4WFmmU7I0WYD3oGsq0OzQr4NrVZyIWopoRb6mcI6+lJrMhZHay00c8GUFA oCK8OXaXsNZZMedAFlo/Bl/q4k4DmAJTcf6I3NUKQ1xd/kvUIZF3yg2YUr0mspWYGILB 7qE8qPaTeujJu0Ek8bJJ/uabC7xSjKje9CBV238Rk74S3tG1x00cZJhgLKRTa1ylBUry 2lOhR7kd/g8HqtMbHw6+bOGvmuS1sz4u0t4E3wV5HZmMrfiwAa1h99FpNjz66+9dw7wp kVWQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id g15-v6si12383194plq.242.2018.05.29.08.48.03; Tue, 29 May 2018 08:48:04 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S936908AbeE2PsB (ORCPT + 30 others); Tue, 29 May 2018 11:48:01 -0400 Received: from foss.arm.com ([217.140.101.70]:43412 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S934948AbeE2PoN (ORCPT ); Tue, 29 May 2018 11:44:13 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C12AE80D; Tue, 29 May 2018 08:44:12 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 470B03F25D; Tue, 29 May 2018 08:44:10 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon , Arnd Bergmann , Richard Henderson , Ivan Kokshaysky , Matt Turner , Vineet Gupta , Russell King , Benjamin Herrenschmidt , Paul Mackerras , Michael Ellerman , Palmer Dabbelt , Albert Ou Subject: [PATCHv2 05/16] atomics: prepare for atomic64_fetch_add_unless() Date: Tue, 29 May 2018 16:43:35 +0100 Message-Id: <20180529154346.3168-6-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Currently architectures must implement atomic_fetch_add_unless(), with common code providing atomic_add_unless(). Architectures must also implement atomic64_add_unless() directly, with no corresponding atomic64_fetch_add_unless(). This divergence is unfortunate, and means that the APIs for atomic_t, atomic64_t, and atomic_long_t differ. In preparation for unifying things, with architectures providing atomic64_fetch_add_unless, this patch adds a generic atomic64_add_unless() which will use atomic64_fetch_add_unless(). The instrumented atomics are updated to take this case into account. There should be no functional change as a result of this patch.
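As an aside, the detection this patch relies on works because a provider defines a preprocessor symbol with the same name as its function. A minimal sketch of that idiom, using hypothetical my_* names and a single-threaded stand-in rather than the kernel headers:

----
/*
 * Sketch of the "#define foo foo" detection idiom (hypothetical my_* names,
 * single-threaded stand-in; not the kernel headers themselves).
 */
#include <stdio.h>

/* A provider defines the function and advertises it to the preprocessor: */
static long long my_fetch_add_unless(long long *v, long long a, long long u)
{
	long long old = *v;

	if (old != u)
		*v += a;
	return old;
}
#define my_fetch_add_unless my_fetch_add_unless

/* Generic code can then supply the boolean wrapper only when advertised: */
#ifdef my_fetch_add_unless
static int my_add_unless(long long *v, long long a, long long u)
{
	return my_fetch_add_unless(v, a, u) != u;
}
#endif

int main(void)
{
	long long v = 1;
	int added = my_add_unless(&v, 1, 0);	/* 1 != 0: add happens, v -> 2 */

	printf("added=%d v=%lld\n", added, v);
	return 0;
}
----

A plain C function is invisible to #ifdef; the self-referential define is what lets the generic header make its wrapper conditional.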
Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon Cc: Arnd Bergmann Cc: Richard Henderson Cc: Ivan Kokshaysky Cc: Matt Turner Cc: Vineet Gupta Cc: Russell King Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Cc: Michael Ellerman Cc: Palmer Dabbelt Cc: Albert Ou --- include/asm-generic/atomic-instrumented.h | 9 +++++++++ include/linux/atomic.h | 16 ++++++++++++++++ 2 files changed, 25 insertions(+) -- 2.11.0 diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index 6e0818c182e2..e22d7e5f4ce7 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -93,11 +93,20 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) } #endif +#ifdef arch_atomic64_fetch_add_unless +#define atomic64_fetch_add_unless atomic64_fetch_add_unless +static __always_inline int atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_add_unless(v, a, u); +} +#else static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_add_unless(v, a, u); } +#endif static __always_inline void atomic_inc(atomic_t *v) { diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 1105c0b37f27..8d93209052e1 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -1072,6 +1072,22 @@ static inline int atomic_dec_if_positive(atomic_t *v) #endif /* atomic64_try_cmpxchg */ /** + * atomic64_add_unless - add unless the number is already a given value + * @v: pointer of type atomic_t + * @a: the amount to add to v... + * @u: ...unless v is equal to u. + * + * Atomically adds @a to @v, so long as @v was not already @u. + * Returns non-zero if @v was not @u, and zero otherwise. 
+ */ +#ifdef atomic64_fetch_add_unless +static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) +{ + return atomic64_fetch_add_unless(v, a, u) != u; +} +#endif + +/** * atomic64_inc_not_zero - increment unless the number is zero * @v: pointer of type atomic64_t * From patchwork Tue May 29 15:43:36 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137193 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4216284lji; Tue, 29 May 2018 08:47:48 -0700 (PDT) X-Google-Smtp-Source: AB8JxZokGm7Xs9xr0G/NHR4exL77Y6kAWTwYWW7YTpx3Cfy7EEgnJ3X3fN4QSNG8btn8O8Iiyjf+ X-Received: by 2002:a63:7d47:: with SMTP id m7-v6mr14292256pgn.443.1527608868851; Tue, 29 May 2018 08:47:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608868; cv=none; d=google.com; s=arc-20160816; b=JMP2pvOzuB2Puecn36z3VrgEOcvMHA0C4aAtJOIE2BwCo1RvGT96HqCL21jBLTQWXB VUAvU56C2M/j16KNmjIV/p1d/F23A8r/DuUuU6WB+gxEx6hTo5giQMOx4paAyYcd11k7 J1sJb8hfqLdw5IdWoo2qqHQ91+v22GABV5ctJr222gGYbPUG+uzoVI6OaMt4Uzl0OWDT oAFLvk24W7acnnawjbOUj7DVUVyJNYewTGLx7OwHFSPGQLXoiUYWgaZzJkvn5hTeIgmb 8muYDxhaNylKjkZPUDiQxCnjklJt9VVz13EvVud+ghk623ZvTMbkbslj7D5dp0nlEfJW i0TA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=jwm+QnNT6wzAfIvMD/yS5Uhf0X+zv9nSyomVjCWJXoE=; b=sOTOfchK+uOj5+GrDkz527kU/+m06U7/AGWR2PpD4YG9p1mhjZkckONqwFkE9nVi64 oydEB0ziHJ4TA38wzOHeywJxT6LiG9kOkW+lgEKngbpA3xfWqJCetn3sSoL54QvzhZ9m 8CdcJDLnJjOAHTmreBBUqwPjCTjEWyAkvGn4BLkzsf7X7VMy7WYSw8Dc15erCBLjhNMU bd4bYvDLsD2N7TH0htsJ52XXP6Di4Ih0v24miqh+IJsDN4RIbUOnl9GFuo/jbF+Whtq3 5/WGwIZjk7SsJpw9L/mkKYlG3kurYg/eS2jA4qxMg2Ba+/Hl0XKGgC8o+BRUV17CU7LI gPWQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id v4-v6si32205684plp.580.2018.05.29.08.47.48; Tue, 29 May 2018 08:47:48 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S936902AbeE2Prq (ORCPT + 30 others); Tue, 29 May 2018 11:47:46 -0400 Received: from foss.arm.com ([217.140.101.70]:43434 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S936801AbeE2PoP (ORCPT ); Tue, 29 May 2018 11:44:15 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A878215AD; Tue, 29 May 2018 08:44:14 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9AFF43F25D; Tue, 29 May 2018 08:44:13 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon , Arnd Bergmann Subject: [PATCHv2 06/16] atomics/generic: define atomic64_fetch_add_unless() Date: Tue, 29 May 2018 16:43:36 +0100 Message-Id: <20180529154346.3168-7-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the generic implementation of atomic64_add_unless() into a generic implementation of atomic64_fetch_add_unless(). A wrapper in will build atomic_add_unless() atop of this, provided it is given a preprocessor definition. No functional change is intended as a result of this patch. 
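For illustration, the converted routine boils down to the following userspace sketch, where a single pthread mutex stands in for the kernel's hashed raw spinlock and IRQ-save locking (simplified, hypothetical names; not the kernel code):

----
/*
 * Userspace analogue of the lock-based conversion: a pthread mutex stands
 * in for the kernel's hashed raw spinlock and IRQ disabling.
 */
#include <pthread.h>
#include <stdio.h>

struct my_atomic64 {
	long long counter;
	pthread_mutex_t lock;
};

/* Return the value seen under the lock; add only when it differs from @u. */
static long long my_fetch_add_unless(struct my_atomic64 *v, long long a,
				     long long u)
{
	long long val;

	pthread_mutex_lock(&v->lock);
	val = v->counter;
	if (val != u)
		v->counter += a;
	pthread_mutex_unlock(&v->lock);

	return val;
}

int main(void)
{
	struct my_atomic64 v = { 40, PTHREAD_MUTEX_INITIALIZER };
	long long old = my_fetch_add_unless(&v, 2, 0);	/* 40 != 0: add */

	printf("old=%lld new=%lld\n", old, v.counter);	/* old=40 new=42 */
	return 0;
}
----

Returning the observed value rather than a 0/1 flag is what allows the generic header to derive atomic64_add_unless() as a plain '!= u' comparison.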
Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon Cc: Arnd Bergmann --- include/asm-generic/atomic64.h | 3 ++- lib/atomic64.c | 13 ++++++------- 2 files changed, 8 insertions(+), 8 deletions(-) -- 2.11.0 diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h index e9a94b7a4d0a..b0aa425361ad 100644 --- a/include/asm-generic/atomic64.h +++ b/include/asm-generic/atomic64.h @@ -52,7 +52,8 @@ ATOMIC64_OPS(xor) extern long long atomic64_dec_if_positive(atomic64_t *v); extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n); extern long long atomic64_xchg(atomic64_t *v, long long new); -extern int atomic64_add_unless(atomic64_t *v, long long a, long long u); +extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u); +#define atomic64_fetch_add_unless atomic64_fetch_add_unless #define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) #define atomic64_inc(v) atomic64_add(1LL, (v)) diff --git a/lib/atomic64.c b/lib/atomic64.c index 53c2d5edc826..fb2a38af5ae8 100644 --- a/lib/atomic64.c +++ b/lib/atomic64.c @@ -178,18 +178,17 @@ long long atomic64_xchg(atomic64_t *v, long long new) } EXPORT_SYMBOL(atomic64_xchg); -int atomic64_add_unless(atomic64_t *v, long long a, long long u) +long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u) { unsigned long flags; raw_spinlock_t *lock = lock_addr(v); - int ret = 0; + long long val; raw_spin_lock_irqsave(lock, flags); - if (v->counter != u) { + val = v->counter; + if (val != u) v->counter += a; - ret = 1; - } raw_spin_unlock_irqrestore(lock, flags); - return ret; + return val; } -EXPORT_SYMBOL(atomic64_add_unless); +EXPORT_SYMBOL(atomic64_fetch_add_unless); From patchwork Tue May 29 15:43:37 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137183 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4212571lji; Tue, 29 May 2018 08:44:31 -0700 (PDT) X-Google-Smtp-Source: AB8JxZq/rpRLAZOhvaOL/3ThwvDV0fwHMX8Op2O5DpTXwbRyJIch0mv9edvbL1pVaeSxFakVf4Cj X-Received: by 2002:a65:5288:: with SMTP id y8-v6mr14407877pgp.69.1527608671771; Tue, 29 May 2018 08:44:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608671; cv=none; d=google.com; s=arc-20160816; b=ynjuNkRBmKur1EFV9Oo+vecg4YbnPvzwtO34chWtV1gFS5+3KEIxCoz5u2Q7zvutvG FxYWDL/mw8DLv+3crwh835JZDJZPxQvDWqvybxpMzOR5ZKCHHb6nWTJsH5uAjqNE560b nP6NK9h9dWD+GcaynGbaBy1etr9rKfvIuawA7zHarppQzCs0nEtTog07oGv4IdG/4Vd0 d0emFU8lOgfuQ6zmplXSuF43lkvu47rBKrQ39OKZg+tXYueacG7sVXDTaMddAdqm14tK e2xCP/3PVEIzL3eLC7MThE7J9efYEJ/Ed/S60YJVzCI0/cdCJQmA4p2STvEJZ41a7yEa lG3g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=mvnZaW+9DU6Q8s7XclYHZv8MXUruZYwECFgresxVNjE=; b=pL20ZbxoyswRCl20RatgAbl7+hfEb1cF+3SHSZZpsEcdgqhiBQ5bBzbifUS5fkplRk DZ1xobQaCV4YjFDneTeovC9kL9HImYuoZsDSOxwTVvvFTg8Cuirnbw/1VWxP1s8GjG9E vzQeY4lmqJAQBXWmqGSFEu1mWZcY01wQ/BQh+kvhH3YDMkbHxz7O73wsXWmEBfxASmza LWY7sPImFJeptaijL5qXxIomrQKYX0ddO0RQXbsx6vtBX0Yp0A0H8oKlBNtc93AlRda3 81XqxOv4MYpUYd9SJBF/YRjgIw9EXQgSek4sQz4AwNwhk5gspfqGpk1Zoyw1BT16W3i+ moSQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) 
smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id x15-v6si26644335pgq.442.2018.05.29.08.44.31; Tue, 29 May 2018 08:44:31 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S936865AbeE2Po3 (ORCPT + 30 others); Tue, 29 May 2018 11:44:29 -0400 Received: from foss.arm.com ([217.140.101.70]:43446 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S964812AbeE2PoR (ORCPT ); Tue, 29 May 2018 11:44:17 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4163515AD; Tue, 29 May 2018 08:44:17 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E14D73F25D; Tue, 29 May 2018 08:44:15 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon , Richard Henderson , Ivan Kokshaysky , Matt Turner Subject: [PATCHv2 07/16] atomics/alpha: define atomic64_fetch_add_unless() Date: Tue, 29 May 2018 16:43:37 +0100 Message-Id: <20180529154346.3168-8-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the arch/alpha implementation of atomic64_add_unless() into an implementation of atomic64_fetch_add_unless(). A wrapper in will build atomic_add_unless() atop of this, provided it is given a preprocessor definition. No functional change is intended as a result of this patch. Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon Cc: Richard Henderson Cc: Ivan Kokshaysky Cc: Matt Turner --- arch/alpha/include/asm/atomic.h | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) -- 2.11.0 diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 4a800a3424a3..dcb7bbeeae02 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -238,35 +238,36 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u) #define atomic_fetch_add_unless atomic_fetch_add_unless /** - * atomic64_add_unless - add unless the number is a given value + * atomic64_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t * @a: the amount to add to v... * @u: ...unless v is equal to u. * * Atomically adds @a to @v, so long as it was not @u. - * Returns true iff @v was not @u. + * Returns the old value of @v. 
*/ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) +static __inline__ int atomic64_fetch_add_unless(atomic64_t *v, long a, long u) { - long c, tmp; + long c, new, old; smp_mb(); __asm__ __volatile__( - "1: ldq_l %[tmp],%[mem]\n" - " cmpeq %[tmp],%[u],%[c]\n" - " addq %[tmp],%[a],%[tmp]\n" + "1: ldq_l %[old],%[mem]\n" + " cmpeq %[old],%[u],%[c]\n" + " addq %[old],%[a],%[new]\n" " bne %[c],2f\n" - " stq_c %[tmp],%[mem]\n" - " beq %[tmp],3f\n" + " stq_c %[new],%[mem]\n" + " beq %[new],3f\n" "2:\n" ".subsection 2\n" "3: br 1b\n" ".previous" - : [tmp] "=&r"(tmp), [c] "=&r"(c) + : [old] "=&r"(old), [new] "=&r"(new), [c] "=&r"(c) : [mem] "m"(*v), [a] "rI"(a), [u] "rI"(u) : "memory"); smp_mb(); - return !c; + return old; } +#define atomic64_fetch_add_unless atomic64_fetch_add_unless /* * atomic64_dec_if_positive - decrement by 1 if old value positive From patchwork Tue May 29 15:43:38 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137184 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4212625lji; Tue, 29 May 2018 08:44:34 -0700 (PDT) X-Google-Smtp-Source: ADUXVKJXzfR0JBTZPD8PIFLvg9kZZmTss80I6YyVAwL7pGGAweIO9MGSwC1Hb8lJQhFGa18uFqmb X-Received: by 2002:a62:a09c:: with SMTP id p28-v6mr12605887pfl.9.1527608674757; Tue, 29 May 2018 08:44:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608674; cv=none; d=google.com; s=arc-20160816; b=oWbPqMlWwqmmhqzELgJSgWD3O71OZogbQnNHcyDkp3GRNSufjG7GEuidGuN9noJ1sL JWFuIbLjBUTTqqd1woRZJw5aEBFd3cr9DMQxQ5lN5PwKEuuyyM1f66TyEuvxvWlbyTDt LbwyXG4RWAzzjA/tvGmoKyjKIzjalUg/3VtTc5VjMsIiZcWWqSmXWuf4MZXMIj8JcJ6T DFHlxL7CjsLiqFOESxEP9OTxSatvJzgYjwZ/en8rzjogZ3Sd94a62Tv757Ac28C7HKIi uTyRPpFzTn2FpRuQ1ltG/0vOjdJFNR+QHPXx6simTbUvMEPyd7cqp2gDk6mQ1hNq11Jr AAzw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=dnh+e0RuGiz1NwDjlEVtC69EhEwxFPdRYbJSOl91fqI=; b=tI1FztNfifdwFTv/X0fuifgPnTlrIHh/+VGsmF8wbu7a+wFMHDjO/CNkVmbqfj5XhX FS8Y+0sUn2XBJKTfAYVKg53Xd04v5Blu2icbLGlIZMNNgDqSK/lA7aXrve3v0DZ3Ik4f HKprdzw5Al0rqtKv2msbVvI057IyF8czUAwkEnKmYtcCj4QZsLcNog3NmbWxX+jgKITR PD10wSyrDWtrVPDhqe5N1d/ocGLnoLFbzmD7IZ+QNHu6SxHaomnkpSkBNehm0Uuz6NqF rBctmcxZtZE6we6LL7x3qmpDosHf0U1JkeNlIB/KEQyYHohnFjJNP2MTemYHsecvQMvA gMtg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id ay5-v6si31673104plb.459.2018.05.29.08.44.34; Tue, 29 May 2018 08:44:34 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S964827AbeE2Poc (ORCPT + 30 others); Tue, 29 May 2018 11:44:32 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:43454 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S964813AbeE2PoT (ORCPT ); Tue, 29 May 2018 11:44:19 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2998615B2; Tue, 29 May 2018 08:44:19 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1C0F63F25D; Tue, 29 May 2018 08:44:17 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon , Vineet Gupta Subject: [PATCHv2 08/16] atomics/arc: define atomic64_fetch_add_unless() Date: Tue, 29 May 2018 16:43:38 +0100 Message-Id: <20180529154346.3168-9-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the arch/arc implementation of atomic64_add_unless() into an implementation of atomic64_fetch_add_unless(). A wrapper in will build atomic_add_unless() atop of this, provided it is given a preprocessor definition. No functional change is intended as a result of this patch. Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon Cc: Vineet Gupta --- arch/arc/include/asm/atomic.h | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) -- 2.11.0 diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 60da80481c5d..335ddc21ed03 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -531,41 +531,40 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) } /** - * atomic64_add_unless - add unless the number is a given value + * atomic64_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t * @a: the amount to add to v... * @u: ...unless v is equal to u. * - * if (v != u) { v += a; ret = 1} else {ret = 0} - * Returns 1 iff @v was not @u (i.e. if add actually happened) + * Atomically adds @a to @v, so long as it was not @u. 
+ * Returns the old value of @v */ -static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) +static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, + long long u) { - long long val; - int op_done; + long long old, temp; smp_mb(); __asm__ __volatile__( "1: llockd %0, [%2] \n" - " mov %1, 1 \n" " brne %L0, %L4, 2f # continue to add since v != u \n" " breq.d %H0, %H4, 3f # return since v == u \n" - " mov %1, 0 \n" "2: \n" - " add.f %L0, %L0, %L3 \n" - " adc %H0, %H0, %H3 \n" - " scondd %0, [%2] \n" + " add.f %L1, %L0, %L3 \n" + " adc %H1, %H0, %H3 \n" + " scondd %1, [%2] \n" " bnz 1b \n" "3: \n" - : "=&r"(val), "=&r" (op_done) + : "=&r"(old), "=&r" (temp) : "r"(&v->counter), "r"(a), "r"(u) : "cc"); /* memory clobber comes from smp_mb() */ smp_mb(); - return op_done; + return old; } +#define atomic64_fetch_add_unless atomic64_fetch_add_unless #define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) #define atomic64_inc(v) atomic64_add(1LL, (v)) From patchwork Tue May 29 15:43:39 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137186 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4212858lji; Tue, 29 May 2018 08:44:47 -0700 (PDT) X-Google-Smtp-Source: AB8JxZoQdmq+XzTVIkRIWV7DZTKxRqwHU6XeyUYiTMu7CleED7vEGWVv3/cMHxM8/opTCRBzd+S4 X-Received: by 2002:a63:7986:: with SMTP id u128-v6mr14191104pgc.127.1527608687180; Tue, 29 May 2018 08:44:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608687; cv=none; d=google.com; s=arc-20160816; b=ZD3un1XGaJeDE6FvK7PF/7xxuDPo87UWtHb6mu13JmKqTIqP5bxgwby3dCslamYF5d mFf9E63iSZzFOdRF733Eh5itMgE/y8SgO9rU8pU8TYQrwVZ+NC9PyuphHoaG906cuJD6 8mAbDZ8iceZz0kPfJq1sl1rFbfQ0Z4ysJjzK7sHu7RfipSjIhKyhlwe1I4UWAR+n0fgT ARq2NoyxZI4ys9U1Jsz/Zijuy1G+tuWjKHquj+qQUHh+kOYz8eg3oWnFHjWA74cFRCLv Zq4GL1fPCAacenS86/cLTtyM/O0ehkZzbDsUvSm3HbC8fR7M2B7WQCpq3w7DElvNzh8b eTRA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=cncY8u0TCAY8f7xv9sX3u6+G4D5dqDsC+rnv8SR6Z4M=; b=IdkfiDlyVrgjqb/Uj+LLZsZCjZYsEEK/VlptKGr6F+42R5/bjLJxy9/Bq+Ww1A754t pCpk9D0BsqTkh+onIfVV6PxJv4DSiPWoA3ErXv19qIGoVLwF/cPLIb46WVbcYJMohvTn JfU7E1m2LShPE/7WavpzaFkGESzF+Mft92gPx6WunaUSF+M5MAznC8SX5yvxdW7ho9Ee BJ/ycvembFmKFvuT73KnYIiv3UaIDjLI/j+JsjVRlfjEAj1EhUpKIBeUmK8vVzZ4EVDB 8WC8eltRxtMl/FV5wjaK0h6zyaqVQc5X4kLvV/TWlPSgC72QrjujXuIEVt7DuQVZWQVf Etfw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id k12-v6si8736967pll.319.2018.05.29.08.44.46; Tue, 29 May 2018 08:44:47 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S935287AbeE2Pom (ORCPT + 30 others); Tue, 29 May 2018 11:44:42 -0400 Received: from foss.arm.com ([217.140.101.70]:43464 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S964814AbeE2PoV (ORCPT ); Tue, 29 May 2018 11:44:21 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E2EDA15BE; Tue, 29 May 2018 08:44:20 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id D593C3F25D; Tue, 29 May 2018 08:44:19 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon , Russell King Subject: [PATCHv2 09/16] atomics/arm: define atomic64_fetch_add_unless() Date: Tue, 29 May 2018 16:43:39 +0100 Message-Id: <20180529154346.3168-10-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the arch/arm implementation of atomic64_add_unless() into an implementation of atomic64_fetch_add_unless(). A wrapper in will build atomic_add_unless() atop of this, provided it is given a preprocessor definition. No functional change is intended as a result of this patch. 
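For readers not fluent in ldrexd/strexd, a portable C11 model of the converted loop's behaviour is sketched below; a weak compare-exchange failure stands in for a failed store-exclusive, and the real code's barrier placement (full barriers around the loop, with the final barrier skipped when nothing was stored) is deliberately glossed over. This is illustrative only, not the ARM implementation:

----
/*
 * Portable C11 model of an LL/SC-style retry loop that returns the old
 * value; a weak compare-exchange failure plays the role of a failed strexd.
 */
#include <stdatomic.h>
#include <stdio.h>

static long long model_fetch_add_unless(_Atomic long long *v, long long a,
					 long long u)
{
	long long old = atomic_load(v);

	do {
		if (old == u)
			break;	/* value matched @u: return without storing */
		/* On failure 'old' is refreshed and the loop retries. */
	} while (!atomic_compare_exchange_weak(v, &old, old + a));

	return old;
}

int main(void)
{
	_Atomic long long v = 5;
	long long old = model_fetch_add_unless(&v, 3, 0);	/* 5 != 0: add */

	printf("old=%lld new=%lld\n", old, atomic_load(&v));	/* 5 and 8 */
	return 0;
}
----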
Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon Cc: Russell King --- arch/arm/include/asm/atomic.h | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) -- 2.11.0 diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 74460aa00fa0..852e1fee72b0 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -486,11 +486,11 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) return result; } -static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) +static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, + long long u) { - long long val; + long long oldval, newval; unsigned long tmp; - int ret = 1; smp_mb(); prefetchw(&v->counter); @@ -499,23 +499,23 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) "1: ldrexd %0, %H0, [%4]\n" " teq %0, %5\n" " teqeq %H0, %H5\n" -" moveq %1, #0\n" " beq 2f\n" -" adds %Q0, %Q0, %Q6\n" -" adc %R0, %R0, %R6\n" -" strexd %2, %0, %H0, [%4]\n" +" adds %Q1, %Q0, %Q6\n" +" adc %R1, %R0, %R6\n" +" strexd %2, %1, %H1, [%4]\n" " teq %2, #0\n" " bne 1b\n" "2:" - : "=&r" (val), "+r" (ret), "=&r" (tmp), "+Qo" (v->counter) + : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter) : "r" (&v->counter), "r" (u), "r" (a) : "cc"); - if (ret) + if (oldval != u) smp_mb(); - return ret; + return oldval; } +#define atomic64_fetch_add_unless atomic64_fetch_add_unless #define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) #define atomic64_inc(v) atomic64_add(1LL, (v)) From patchwork Tue May 29 15:43:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137185 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4212724lji; Tue, 29 May 2018 08:44:39 -0700 (PDT) X-Google-Smtp-Source: AB8JxZr5LZfMZqKAlGCmAj3MsMYf9UqXP+tUIWdFURABx3k/pVoBUuIKuSYKqMluQi6i01urr50+ X-Received: by 2002:a65:498e:: with SMTP id r14-v6mr13907131pgs.78.1527608679667; Tue, 29 May 2018 08:44:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608679; cv=none; d=google.com; s=arc-20160816; b=x4XJNHAIsLCpZS8llJf0k8S1zVpDfGBOx1y2yW4I1d05E7f6lzYT+HaEq81v9lsGxR Vg8wtRr7F5kld8O0r89ASEZK9P2qGC61OyrGgJ3FFzzDBRzDqi8aMZaDMFIzI6Ult4eo Zf/MlOcg7rkoqFlORPM6qncFAEmwqkJbxecfVpTVeFezEDMaESP2/NqV5crWRMC1uysC zfd4Pevh0ACBAr7BeQc7/w9xgUQuQaQ0pOP+b3Zkn4PXhbaiAosVoukqU+v3uz5M66je bVjpKPXB+NAaHTVnzhIoKFuJvftcjbGc9f/9sCzGCVa+ztNzs9iVAZ05UCHNuHzpJHkM a/4w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=MCVD6hQ/VNDrT8n9a2YGOo4Qd4xoBzGI3ErzZMpx9zE=; b=UTt2ARiGR8XP0ErjAtJBAdNRJQKNWBy7u8SC1hRFhU9AjUpz2fhX/sqRjbCiWEV8l3 TqcwYVxbOpEPbIpsPCAIbpNaE0oqGVjpXLYbKtMZfiigz7RSdfvaSbIA4lUR7rNTyIof 7EdQzq08XR4Rj7a3dXPjxtSv7XvpVBbBoUYWFvlLm/GCU1xbX2Nq0HZT0ZK7zkkzq/dl z0cBvxgzYkTOl8M1lAvSfeZnalZVbCQWO7bej6MJs9sMKj0YlKki9haBMPPzvY8IvwyD N2wBBaeDoqEp2Q73sBDC108Cp2qLuQgbPQF5hxllExhbH4UAbB53nCTsNxZv2myww3PY SZNA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id ay5-v6si31673104plb.459.2018.05.29.08.44.39; Tue, 29 May 2018 08:44:39 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S964878AbeE2Poh (ORCPT + 30 others); Tue, 29 May 2018 11:44:37 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:43476 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S964815AbeE2PoX (ORCPT ); Tue, 29 May 2018 11:44:23 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CBC4A15BF; Tue, 29 May 2018 08:44:22 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9B0AF3F25D; Tue, 29 May 2018 08:44:21 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon , Benjamin Herrenschmidt , Paul Mackerras Subject: [PATCHv2 10/16] atomics/powerpc: define atomic64_fetch_add_unless() Date: Tue, 29 May 2018 16:43:40 +0100 Message-Id: <20180529154346.3168-11-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the arch/powerpc implementation of atomic64_add_unless() into an implementation of atomic64_fetch_add_unless(). A wrapper in will build atomic_add_unless() atop of this, provided it is given a preprocessor definition. No functional change is intended as a result of this patch. Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Acked-by: Michael Ellerman Cc: Boqun Feng Cc: Will Deacon Cc: Benjamin Herrenschmidt Cc: Paul Mackerras --- arch/powerpc/include/asm/atomic.h | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) -- 2.11.0 diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index b5646c079c16..233dbf31911c 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -525,7 +525,7 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) #define atomic64_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new)) /** - * atomic64_add_unless - add unless the number is a given value + * atomic64_fetch_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t * @a: the amount to add to v... * @u: ...unless v is equal to u. @@ -533,13 +533,13 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. 
*/ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) +static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) { long t; __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER -"1: ldarx %0,0,%1 # atomic_fetch_add_unless\n\ +"1: ldarx %0,0,%1 # atomic64_fetch_add_unless\n\ cmpd 0,%0,%3 \n\ beq 2f \n\ add %0,%2,%0 \n" @@ -552,8 +552,9 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) : "r" (&v->counter), "r" (a), "r" (u) : "cc", "memory"); - return t != u; + return t; } +#define atomic64_fetch_add_unless atomic64_fetch_add_unless /** * atomic_inc64_not_zero - increment unless the number is zero From patchwork Tue May 29 15:43:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137192 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4215537lji; Tue, 29 May 2018 08:47:11 -0700 (PDT) X-Google-Smtp-Source: AB8JxZr56Wd/WZa5637GnAyU+cViPuU6HHNBS/ZNnv5wKRFIuFIIr4DAAEaCQEXYAc2U0o4oWtHm X-Received: by 2002:a17:902:7283:: with SMTP id d3-v6mr18405213pll.192.1527608831539; Tue, 29 May 2018 08:47:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608831; cv=none; d=google.com; s=arc-20160816; b=KhKUh/IdmSfrd8cK7KQNr5/BZ/Yo7Ufeq3R3IZXocS7BLS8G2rq9y5IlMkyhr2m5Ah P8BlF342oyhtujHlH6/dKQ8aCbmra90N+NZwzwtzSoqDttkjyPJAZy5KcR9WUyloMlC8 RStk3DRr59rHqc3ihUJmI91KyjOs3dlXWSGKdsgswKOa8dUM7WATtfs8ocJIUfR+OB1r v590Go2NZROXE/vc2VoyUdZsWhQiUHlmvtgrJYZPAs0ycbm9WU8O3SP7FdxCASHdRiAA lCyXe0ZgHWyk6afHO90aWWKi21DGqDQoc572UT5LozQIXthxZqTrb3yNwQVrvoIbWZ31 8qYg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=y3olNnpQu5ih5pNXArUNG/eRBNipb3D38UsMnVPwSWg=; b=rd0y+nfle1Dg7YM4S/FooAaZ16T7e+12T7bSrGE6iqr2Y8RCG0ZUuDtONsFJmeHlD1 lSc4vMUJgPnEffLNlq/23udPIQW2BaSrLcE9APwqM8A1E+xIrhI1nCoLYxuStBzksGO8 vCBBT6Zf1zPGwjqydpfVC2xEVzkkHzXZhFFbTnGHSRPfBsO9OCsBfnIWlGesxAVOH19N D/KY7Mp4aYfC/9ajhJnRPHDqq/EZZfLktuzjrWOOr87kVZoUJJQyMt+U8P0ZrJOxV2g/ 9RfaQyo7JbBxa383ELY+3ZwGbm9ZprDR1TeplW8rpwZxtJWxlOXgTDE8hNtdTP4mmOTV Ncgg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id a98-v6si32628430pla.239.2018.05.29.08.47.11; Tue, 29 May 2018 08:47:11 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S936435AbeE2PrJ (ORCPT + 30 others); Tue, 29 May 2018 11:47:09 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:43490 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S936829AbeE2PoZ (ORCPT ); Tue, 29 May 2018 11:44:25 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B439F15AD; Tue, 29 May 2018 08:44:24 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 838933F25D; Tue, 29 May 2018 08:44:23 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon , Palmer Dabbelt , Albert Ou Subject: [PATCHv2 11/16] atomics/riscv: define atomic64_fetch_add_unless() Date: Tue, 29 May 2018 16:43:41 +0100 Message-Id: <20180529154346.3168-12-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the arch/riscv implementation of atomic64_add_unless() into an implementation of atomic64_fetch_add_unless(). A wrapper in will build atomic_add_unless() atop of this, provided it is given a preprocessor definition. No functional change is intended as a result of this patch. 
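At the caller level, these conversions lean on the identity add_unless(v, a, u) == (fetch_add_unless(v, a, u) != u). A single-threaded stand-in (hypothetical my_* names, with concurrency deliberately ignored) showing that identity and the common inc-not-zero usage:

----
/*
 * Single-threaded stand-in for the API relationship only; concurrency is
 * intentionally ignored, and the names are invented for the example.
 */
#include <stdbool.h>
#include <stdio.h>

static long long my_fetch_add_unless(long long *v, long long a, long long u)
{
	long long old = *v;

	if (old != u)
		*v += a;
	return old;
}

static bool my_add_unless(long long *v, long long a, long long u)
{
	return my_fetch_add_unless(v, a, u) != u;
}

/* The classic "take a reference unless it already hit zero" pattern: */
static bool my_inc_not_zero(long long *refs)
{
	return my_add_unless(refs, 1, 0);
}

int main(void)
{
	long long live = 1, dead = 0;
	bool got_live = my_inc_not_zero(&live);	/* true:  live -> 2 */
	bool got_dead = my_inc_not_zero(&dead);	/* false: dead stays 0 */

	printf("%d %d %lld %lld\n", got_live, got_dead, live, dead);
	return 0;
}
----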
Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon Cc: Palmer Dabbelt Cc: Albert Ou --- arch/riscv/include/asm/atomic.h | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) -- 2.11.0 diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 5f161daefcd2..d959bbaaad41 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -352,7 +352,7 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) #define atomic_fetch_add_unless atomic_fetch_add_unless #ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u) +static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) { long prev, rc; @@ -369,11 +369,7 @@ static __always_inline long __atomic64_add_unless(atomic64_t *v, long a, long u) : "memory"); return prev; } - -static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u) -{ - return __atomic64_add_unless(v, a, u) != u; -} +#define atomic64_fetch_add_unless atomic64_fetch_add_unless #endif /* From patchwork Tue May 29 15:43:42 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137191 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4215471lji; Tue, 29 May 2018 08:47:08 -0700 (PDT) X-Google-Smtp-Source: ADUXVKI7l5pfGaa33m4TMPe1SkCREaAAweBJU3Ue1CqxeJ+BuhGriqwdBfFIIji5Lgl9dyn/h0A/ X-Received: by 2002:a63:ab05:: with SMTP id p5-v6mr7045469pgf.280.1527608828119; Tue, 29 May 2018 08:47:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608828; cv=none; d=google.com; s=arc-20160816; b=ZKIgeQHiZEtTKzFl2kKalgWFao64wa9FpTynT9b8sJx3dVNkBYo9Aedf4uR1JgFz4Q WX5WgaTt60lCPDKqGehXBKkO3HlNmzhbgzgVC5IEtDAaMGVFQ6MYTWg9W8ceW2FLdfCF jgWEvpnZV61V/SqlBL7zwETD7jqTdBIfghk2mWSdG5teMj9vjs3ul5zG6pLjwSkLECky DWym71Tiay6dOLGZDNRZHpZS/SOIOvWs+A93aEr/vKuaPuzHmiXeLmwrq2a2dslj6xFY e0ui6DWbSTc8qOFVAI4sl7g+teVdsD6Db+Hze41/q9JL9d71LydDAEjbn/p3ngnvxFcW jozQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=GqqHDhB1DGjcR3C4JgOJRfFMnFovt9ExV1Ao2vyzZTM=; b=ig4XJIbNmjQhALDwYFlNIkt0oyLMY6SLHKPbhSatSxhq/JrYJBodnC7G44Vs2iTTo7 TMpiAY7tiJ0SFhyBmzErpMcQeyv4cqu1UFFATgBYf+qY0saU9zlY/hDkzQHmq1AHgj0D EI+g7OYdZTdwyEiEz9biILWFo706z+JNv9q+aZTUjCkXqP3WBHoKJnVVTb9rCYYNcSfG ZzxaYzeeQyjbe8DP+AWoG+IUKSgnGeu3IbzaskCPiKbfH8zFTCZUBO3ESBOSktck8OKM 5fFTVqcP+Amfa3dL3MChIyNI0RF2YwtkfV44LwEaGMh1pUnvgM4ueXzjYLztVYCS57Ri LyHQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id a98-v6si32628430pla.239.2018.05.29.08.47.07; Tue, 29 May 2018 08:47:08 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S965185AbeE2PrG (ORCPT + 30 others); Tue, 29 May 2018 11:47:06 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:43500 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S936850AbeE2Po0 (ORCPT ); Tue, 29 May 2018 11:44:26 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4E24215B2; Tue, 29 May 2018 08:44:26 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 644B13F25D; Tue, 29 May 2018 08:44:25 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon Subject: [PATCHv2 12/16] atomics/treewide: make atomic64_fetch_add_unless() optional Date: Tue, 29 May 2018 16:43:42 +0100 Message-Id: <20180529154346.3168-13-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Architectures with atomic64_fetch_add_unless provide a preprocessor symbol if they do so, and all other architectures have trivial C implementations of atomic64_add_unless() which are near-identical. Let's unify the trivial definitions of atomic64_fetch_add_unless() in , so that we always have both atomic64_fetch_add_unless() and atomic64_add_unless() with less boilerplate code. This means that atomic64_add_unless() is always implemented in core code, and the instrumented atomics are updated accordingly. There should be no functional change as a result of this patch. 
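The generic fallback added below depends on the try_cmpxchg() contract: on failure the 'old' argument is updated with the value actually observed, so the loop never has to re-read the counter itself. A self-contained C11 model of that contract, with illustrative names rather than the kernel implementation:

----
/*
 * Model of a try_cmpxchg()-style helper: on failure the caller's 'old' is
 * overwritten with the value found, so the retry loop needs no explicit
 * re-read of the counter.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool model_try_cmpxchg64(_Atomic long long *v, long long *old,
				long long new)
{
	/* C11 compare_exchange_strong already has exactly this behaviour. */
	return atomic_compare_exchange_strong(v, old, new);
}

static long long model_fetch_add_unless64(_Atomic long long *v, long long a,
					  long long u)
{
	long long c = atomic_load(v);

	do {
		if (c == u)
			break;
	} while (!model_try_cmpxchg64(v, &c, c + a));	/* failure refreshes c */

	return c;
}

int main(void)
{
	_Atomic long long v = 7;
	long long old = model_fetch_add_unless64(&v, 1, 0);

	printf("old=%lld new=%lld\n", old, atomic_load(&v));	/* 7 and 8 */
	return 0;
}
----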
Signed-off-by: Mark Rutland Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon --- arch/arm64/include/asm/atomic.h | 12 ------------ arch/ia64/include/asm/atomic.h | 15 --------------- arch/mips/include/asm/atomic.h | 24 ------------------------ arch/parisc/include/asm/atomic.h | 24 ------------------------ arch/s390/include/asm/atomic.h | 16 ---------------- arch/sparc/include/asm/atomic_64.h | 15 --------------- arch/x86/include/asm/atomic64_64.h | 19 ------------------- include/asm-generic/atomic-instrumented.h | 6 ------ include/linux/atomic.h | 26 ++++++++++++++++++++++++-- 9 files changed, 24 insertions(+), 133 deletions(-) -- 2.11.0 diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index 22c8c43d6689..82db0e4febd4 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -40,17 +40,6 @@ #include -#define ___atomic_add_unless(v, a, u, sfx) \ -({ \ - typeof((v)->counter) c, old; \ - \ - c = atomic##sfx##_read(v); \ - while (c != (u) && \ - (old = atomic##sfx##_cmpxchg((v), c, c + (a))) != c) \ - c = old; \ - c; \ - }) - #define ATOMIC_INIT(i) { (i) } #define atomic_read(v) READ_ONCE((v)->counter) @@ -200,7 +189,6 @@ #define atomic64_dec_and_test(v) (atomic64_dec_return(v) == 0) #define atomic64_sub_and_test(i, v) (atomic64_sub_return((i), (v)) == 0) #define atomic64_add_negative(i, v) (atomic64_add_return((i), (v)) < 0) -#define atomic64_add_unless(v, a, u) (___atomic_add_unless(v, a, u, 64) != u) #define atomic64_andnot atomic64_andnot #endif diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index cfe44086338e..0f80a3eafaba 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -215,21 +215,6 @@ ATOMIC64_FETCH_OP(xor, ^) (cmpxchg(&((v)->counter), old, new)) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static __inline__ long atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - static __inline__ long atomic64_dec_if_positive(atomic64_t *v) { long c, old, dec; diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 794734e730d9..d42b27df1548 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -596,30 +596,6 @@ static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), (new))) -/** - * atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns true iff @v was not @u. 
- */ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - #define atomic64_dec_return(v) atomic64_sub_return(1, (v)) #define atomic64_inc_return(v) atomic64_add_return(1, (v)) diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index b2b6261d05e7..f53ba2d6ff67 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -257,30 +257,6 @@ atomic64_read(const atomic64_t *v) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -/** - * atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. - */ -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - /* * atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index 26c6b713a7a3..eb9329741bad 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -153,22 +153,6 @@ ATOMIC64_OPS(xor) #undef ATOMIC64_OPS -static inline int atomic64_add_unless(atomic64_t *v, long i, long u) -{ - long c, old; - - c = atomic64_read(v); - for (;;) { - if (unlikely(c == u)) - break; - old = atomic64_cmpxchg(v, c, c + i); - if (likely(old == c)) - break; - c = old; - } - return c != u; -} - static inline long atomic64_dec_if_positive(atomic64_t *v) { long c, old, dec; diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index e4f1c93db31f..458783e99997 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -93,21 +93,6 @@ static inline int atomic_xchg(atomic_t *v, int new) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static inline long atomic64_add_unless(atomic64_t *v, long a, long u) -{ - long c, old; - c = atomic64_read(v); - for (;;) { - if (unlikely(c == (u))) - break; - old = atomic64_cmpxchg((v), c, c + (a)); - if (likely(old == c)) - break; - c = old; - } - return c != (u); -} - long atomic64_dec_if_positive(atomic64_t *v); #endif /* !(__ARCH_SPARC64_ATOMIC__) */ diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 6f95023894b7..7e04b294e6eb 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -188,25 +188,6 @@ static inline long arch_atomic64_xchg(atomic64_t *v, long new) return xchg(&v->counter, new); } -/** - * arch_atomic64_add_unless - add unless the number is a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as it was not @u. - * Returns the old value of @v. 
- */ -static inline bool arch_atomic64_add_unless(atomic64_t *v, long a, long u) -{ - s64 c = arch_atomic64_read(v); - do { - if (unlikely(c == u)) - return false; - } while (!arch_atomic64_try_cmpxchg(v, &c, c + a)); - return true; -} - /* * arch_atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index e22d7e5f4ce7..719168de5c6e 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -100,12 +100,6 @@ static __always_inline int atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u kasan_check_write(v, sizeof(*v)); return arch_atomic64_fetch_add_unless(v, a, u); } -#else -static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_add_unless(v, a, u); -} #endif static __always_inline void atomic_inc(atomic_t *v) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 8d93209052e1..ab534fad09e1 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -1072,6 +1072,30 @@ static inline int atomic_dec_if_positive(atomic_t *v) #endif /* atomic64_try_cmpxchg */ /** + * atomic64_fetch_add_unless - add unless the number is already a given value + * @v: pointer of type atomic64_t + * @a: the amount to add to v... + * @u: ...unless v is equal to u. + * + * Atomically adds @a to @v, so long as @v was not already @u. + * Returns original value of @v + */ +#ifndef atomic64_fetch_add_unless +static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, + long long u) +{ + long long c = atomic64_read(v); + + do { + if (unlikely(c == u)) + break; + } while (!atomic64_try_cmpxchg(v, &c, c + a)); + + return c; +} +#endif + +/** * atomic64_add_unless - add unless the number is already a given value * @v: pointer of type atomic_t * @a: the amount to add to v... @@ -1080,12 +1104,10 @@ static inline int atomic_dec_if_positive(atomic_t *v) * Atomically adds @a to @v, so long as @v was not already @u. * Returns non-zero if @v was not @u, and zero otherwise. 
*/ -#ifdef atomic64_fetch_add_unless static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) { return atomic64_fetch_add_unless(v, a, u) != u; } -#endif /** * atomic64_inc_not_zero - increment unless the number is zero From patchwork Tue May 29 15:43:43 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137190 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4215058lji; Tue, 29 May 2018 08:46:47 -0700 (PDT) X-Google-Smtp-Source: AB8JxZrtapWO+WuJQtrCduwMUmbWKZHJfv4BdDzAI0qwZGgSfeSa5VGP7qsOurXRwHsgZQPgFDVi X-Received: by 2002:a17:902:708a:: with SMTP id z10-v6mr17519226plk.283.1527608807727; Tue, 29 May 2018 08:46:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608807; cv=none; d=google.com; s=arc-20160816; b=uEYfDlIDFzi3LKqCjQr3oDU7b2lN2aZWr2xFo7DURkKGynlz2dkuAypuQnVYzpHxXS HXG+q7AQ40AfG6sTrx8S610QZw2nViJgOmjxfgHIpNUjCKQOcGAZ5ne/qsJPN8jvTKcM 1S/fQNluly3amAkMJuu2/IdGgJ5pmPg2/FT09lJAsqoPQBPksMejgGH/CHC8yStf8rN8 iw7alLNv9PnS7aU6oIHMV+WwxDXOJ2LNgnKPoGSKEs6DYRvb/I5r8EoO5AXhM/pTDDuu CDpGxjTYHcSletn0Whq4QLO42HEQ5j07CChfESUA62K37/Y35l9Pa6l4wSqZqCbKOq+T 80xg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=abbtOanObQS6Gu1zb6okjvkFu9cfhaxrASi0Idqbz14=; b=qGe2jApf4Ws5ac7sC8f+4aqFuH8dcE06YyqOfqsoM1MlO0qib8Jbv+53cUqgNe18Me /XQjDb9pgbZivUKQOqJXTrXKKq9vhZqF0kyvImcIkPwUjod+K7U6D8CGQ9MHgpiAR7QP Ws0vQNvACIyOPb5Snem6wtyL0IgPF8NQwLqt9vI31DONCLD6wlyzXPndNqeo164Uk9tS chckIyUlTQmNXtFQIGN7SPqs1mz+ZwsAb24GMufw826V+JebVpn0qhjLKGr4kEN5CqTI XF0MZrwE73CL4ROvzp9wrL14mcuTwIHtmluYGTrmO0Q7On/sanrq+3OlGp/4fS5r1aeh 06LQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id t1-v6si33484616plb.90.2018.05.29.08.46.47; Tue, 29 May 2018 08:46:47 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S965160AbeE2Pqp (ORCPT + 30 others); Tue, 29 May 2018 11:46:45 -0400 Received: from foss.arm.com ([217.140.101.70]:43508 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S936858AbeE2Po2 (ORCPT ); Tue, 29 May 2018 11:44:28 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 108D880D; Tue, 29 May 2018 08:44:28 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 02A0D3F25D; Tue, 29 May 2018 08:44:26 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Will Deacon Subject: [PATCHv2 13/16] atomics/treewide: make test ops optional Date: Tue, 29 May 2018 16:43:43 +0100 Message-Id: <20180529154346.3168-14-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Some of the atomics return the result of a test applied after the atomic operation, and almost all architectures implement these as trivial wrappers around the underlying atomic. Specifically: * _inc_and_test(v) is (_inc_return(v) == 0) * _dec_and_test(v) is (_dec_return(v) == 0) * _sub_and_test(i, v) is (_sub_return(i, v) == 0) * _add_negative(i, v) is (_add_return(i, v) < 0) Rather than have these definitions duplicated in all architectures, with minor inconsistencies in formatting and documentation, let's make these operations optional, with default fallbacks as above. Implementations must now provide a preprocessor symbol. The instrumented atomics are updated accordingly. Both x86 and m68k have custom implementations, which are left as-is, given preprocessor symbols to avoid being overridden. There should be no functional change as a result of this patch. 
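To make the layering concrete, the three pieces of that pattern, condensed from hunks in this patch (x86's override marker, the instrumented wrapper, and the generic fallback), look like this for _dec_and_test:

----
/* arch/x86/include/asm/atomic.h: the architecture keeps its custom
 * implementation and now advertises it with a #define: */
#define arch_atomic_dec_and_test arch_atomic_dec_and_test
static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
{
	GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
}

/* include/asm-generic/atomic-instrumented.h: the instrumented wrapper
 * is only built when the arch_ op exists, and forwards the #define: */
#ifdef arch_atomic_dec_and_test
#define atomic_dec_and_test atomic_dec_and_test
static __always_inline bool atomic_dec_and_test(atomic_t *v)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_dec_and_test(v);
}
#endif

/* include/linux/atomic.h: generic fallback, compiled only when nothing
 * above has provided (and #defined) the operation: */
#ifndef atomic_dec_and_test
static inline bool atomic_dec_and_test(atomic_t *v)
{
	return atomic_dec_return(v) == 0;
}
#endif
----

This is a condensed sketch of hunks appearing below, not an additional change.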
Signed-off-by: Mark Rutland Acked-by: Geert Uytterhoeven Acked-by: Peter Zijlstra (Intel) Cc: Boqun Feng Cc: Will Deacon --- arch/alpha/include/asm/atomic.h | 12 --- arch/arc/include/asm/atomic.h | 10 --- arch/arm/include/asm/atomic.h | 9 --- arch/arm64/include/asm/atomic.h | 8 -- arch/h8300/include/asm/atomic.h | 5 -- arch/hexagon/include/asm/atomic.h | 5 -- arch/ia64/include/asm/atomic.h | 23 ------ arch/m68k/include/asm/atomic.h | 4 + arch/mips/include/asm/atomic.h | 84 -------------------- arch/parisc/include/asm/atomic.h | 22 ------ arch/powerpc/include/asm/atomic.h | 30 -------- arch/riscv/include/asm/atomic.h | 46 ----------- arch/s390/include/asm/atomic.h | 8 -- arch/sh/include/asm/atomic.h | 4 - arch/sparc/include/asm/atomic_32.h | 15 ---- arch/sparc/include/asm/atomic_64.h | 20 ----- arch/x86/include/asm/atomic.h | 4 + arch/x86/include/asm/atomic64_32.h | 54 ------------- arch/x86/include/asm/atomic64_64.h | 4 + arch/xtensa/include/asm/atomic.h | 42 ---------- include/asm-generic/atomic-instrumented.h | 24 ++++++ include/asm-generic/atomic.h | 9 --- include/asm-generic/atomic64.h | 4 - include/linux/atomic.h | 124 ++++++++++++++++++++++++++++++ 24 files changed, 160 insertions(+), 410 deletions(-) -- 2.11.0 Acked-by: Palmer Dabbelt diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index dcb7bbeeae02..31a4904dc8fc 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -297,24 +297,12 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) return old - 1; } -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) - #define atomic_dec_return(v) atomic_sub_return(1,(v)) #define atomic64_dec_return(v) atomic64_sub_return(1,(v)) #define atomic_inc_return(v) atomic_add_return(1,(v)) #define atomic64_inc_return(v) atomic64_add_return(1,(v)) -#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0) -#define atomic64_sub_and_test(i,v) (atomic64_sub_return((i), (v)) == 0) - -#define atomic_inc_and_test(v) (atomic_add_return(1, (v)) == 0) -#define atomic64_inc_and_test(v) (atomic64_add_return(1, (v)) == 0) - -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, (v)) == 0) - #define atomic_inc(v) atomic_add(1,(v)) #define atomic64_inc(v) atomic64_add(1,(v)) diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 335ddc21ed03..555b0a94b643 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -311,14 +311,8 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) #define atomic_inc(v) atomic_add(1, v) #define atomic_dec(v) atomic_sub(1, v) -#define atomic_inc_and_test(v) (atomic_add_return(1, v) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0) #define atomic_inc_return(v) atomic_add_return(1, (v)) #define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - -#define atomic_add_negative(i, v) (atomic_add_return(i, v) < 0) - #ifdef CONFIG_GENERIC_ATOMIC64 @@ -566,14 +560,10 @@ static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, } #define atomic64_fetch_add_unless atomic64_fetch_add_unless -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) #define atomic64_inc(v) atomic64_add(1LL, (v)) #define atomic64_inc_return(v) atomic64_add_return(1LL, (v)) -#define atomic64_inc_and_test(v) 
(atomic64_inc_return(v) == 0) -#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0) #define atomic64_dec(v) atomic64_sub(1LL, (v)) #define atomic64_dec_return(v) atomic64_sub_return(1LL, (v)) -#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) #endif /* !CONFIG_GENERIC_ATOMIC64 */ diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 852e1fee72b0..35fb7f504daa 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -248,13 +248,8 @@ ATOMIC_OPS(xor, ^=, eor) #define atomic_inc(v) atomic_add(1, v) #define atomic_dec(v) atomic_sub(1, v) -#define atomic_inc_and_test(v) (atomic_add_return(1, v) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0) #define atomic_inc_return_relaxed(v) (atomic_add_return_relaxed(1, v)) #define atomic_dec_return_relaxed(v) (atomic_sub_return_relaxed(1, v)) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - -#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0) #ifndef CONFIG_GENERIC_ATOMIC64 typedef struct { @@ -517,14 +512,10 @@ static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, } #define atomic64_fetch_add_unless atomic64_fetch_add_unless -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) #define atomic64_inc(v) atomic64_add(1LL, (v)) #define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1LL, (v)) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0) #define atomic64_dec(v) atomic64_sub(1LL, (v)) #define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1LL, (v)) -#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) #endif /* !CONFIG_GENERIC_ATOMIC64 */ #endif diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index 82db0e4febd4..edbe53fa3106 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -110,10 +110,6 @@ #define atomic_inc(v) atomic_add(1, (v)) #define atomic_dec(v) atomic_sub(1, (v)) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) -#define atomic_add_negative(i, v) (atomic_add_return((i), (v)) < 0) #define atomic_andnot atomic_andnot /* @@ -185,10 +181,6 @@ #define atomic64_inc(v) atomic64_add(1, (v)) #define atomic64_dec(v) atomic64_sub(1, (v)) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_dec_and_test(v) (atomic64_dec_return(v) == 0) -#define atomic64_sub_and_test(i, v) (atomic64_sub_return((i), (v)) == 0) -#define atomic64_add_negative(i, v) (atomic64_add_return((i), (v)) < 0) #define atomic64_andnot atomic64_andnot #endif diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h index 250bb3baac23..b658b20d2232 100644 --- a/arch/h8300/include/asm/atomic.h +++ b/arch/h8300/include/asm/atomic.h @@ -69,17 +69,12 @@ ATOMIC_OPS(sub, -=) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - #define atomic_inc_return(v) atomic_add_return(1, v) #define atomic_dec_return(v) atomic_sub_return(1, v) #define atomic_inc(v) (void)atomic_inc_return(v) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) #define atomic_dec(v) (void)atomic_dec_return(v) -#define atomic_dec_and_test(v) 
(atomic_dec_return(v) == 0) static inline int atomic_cmpxchg(atomic_t *v, int old, int new) { diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index 86c67e9adbfa..31638f511674 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -201,11 +201,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) #define atomic_inc(v) atomic_add(1, (v)) #define atomic_dec(v) atomic_sub(1, (v)) -#define atomic_inc_and_test(v) (atomic_add_return(1, (v)) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, (v)) == 0) -#define atomic_add_negative(i, v) (atomic_add_return(i, (v)) < 0) - #define atomic_inc_return(v) (atomic_add_return(1, v)) #define atomic_dec_return(v) (atomic_sub_return(1, v)) diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 0f80a3eafaba..e4143c462e65 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -231,34 +231,11 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) return dec; } -/* - * Atomically add I to V and return TRUE if the resulting value is - * negative. - */ -static __inline__ int -atomic_add_negative (int i, atomic_t *v) -{ - return atomic_add_return(i, v) < 0; -} - -static __inline__ long -atomic64_add_negative (__s64 i, atomic64_t *v) -{ - return atomic64_add_return(i, v) < 0; -} - #define atomic_dec_return(v) atomic_sub_return(1, (v)) #define atomic_inc_return(v) atomic_add_return(1, (v)) #define atomic64_dec_return(v) atomic64_sub_return(1, (v)) #define atomic64_inc_return(v) atomic64_add_return(1, (v)) -#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) -#define atomic_inc_and_test(v) (atomic_add_return(1, (v)) == 0) -#define atomic64_sub_and_test(i,v) (atomic64_sub_return((i), (v)) == 0) -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, (v)) == 0) -#define atomic64_inc_and_test(v) (atomic64_add_return(1, (v)) == 0) - #define atomic_add(i,v) (void)atomic_add_return((i), (v)) #define atomic_sub(i,v) (void)atomic_sub_return((i), (v)) #define atomic_inc(v) atomic_add(1, (v)) diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h index 596882cda224..9df09c876fa2 100644 --- a/arch/m68k/include/asm/atomic.h +++ b/arch/m68k/include/asm/atomic.h @@ -138,6 +138,7 @@ static inline int atomic_dec_and_test(atomic_t *v) __asm__ __volatile__("subql #1,%1; seq %0" : "=d" (c), "+m" (*v)); return c != 0; } +#define atomic_dec_and_test atomic_dec_and_test static inline int atomic_dec_and_test_lt(atomic_t *v) { @@ -155,6 +156,7 @@ static inline int atomic_inc_and_test(atomic_t *v) __asm__ __volatile__("addql #1,%1; seq %0" : "=d" (c), "+m" (*v)); return c != 0; } +#define atomic_inc_and_test atomic_inc_and_test #ifdef CONFIG_RMW_INSNS @@ -201,6 +203,7 @@ static inline int atomic_sub_and_test(int i, atomic_t *v) : ASM_DI (i)); return c != 0; } +#define atomic_sub_and_test atomic_sub_and_test static inline int atomic_add_negative(int i, atomic_t *v) { @@ -210,5 +213,6 @@ static inline int atomic_add_negative(int i, atomic_t *v) : ASM_DI (i)); return c != 0; } +#define atomic_add_negative atomic_add_negative #endif /* __ARCH_M68K_ATOMIC __ */ diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index d42b27df1548..fd3008ae164c 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -278,37 
+278,6 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) #define atomic_inc_return(v) atomic_add_return(1, (v)) /* - * atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) - -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - -/* - * atomic_dec_and_test - decrement by 1 and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) - -/* * atomic_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t */ @@ -330,17 +299,6 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) */ #define atomic_dec(v) atomic_sub(1, (v)) -/* - * atomic_add_negative - add and test if negative - * @v: pointer of type atomic_t - * @i: integer value to add - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ -#define atomic_add_negative(i, v) (atomic_add_return(i, (v)) < 0) - #ifdef CONFIG_64BIT #define ATOMIC64_INIT(i) { (i) } @@ -600,37 +558,6 @@ static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) #define atomic64_inc_return(v) atomic64_add_return(1, (v)) /* - * atomic64_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic64_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -#define atomic64_sub_and_test(i, v) (atomic64_sub_return((i), (v)) == 0) - -/* - * atomic64_inc_and_test - increment and test - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) - -/* - * atomic64_dec_and_test - decrement by 1 and test - * @v: pointer of type atomic64_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, (v)) == 0) - -/* * atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic64_t */ @@ -652,17 +579,6 @@ static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) */ #define atomic64_dec(v) atomic64_sub(1, (v)) -/* - * atomic64_add_negative - add and test if negative - * @v: pointer of type atomic64_t - * @i: integer value to add - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. 
- */ -#define atomic64_add_negative(i, v) (atomic64_add_return(i, (v)) < 0) - #endif /* CONFIG_64BIT */ #endif /* _ASM_ATOMIC_H */ diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index f53ba2d6ff67..f85844ff6336 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -142,22 +142,6 @@ ATOMIC_OPS(xor, ^=) #define atomic_inc_return(v) (atomic_add_return( 1,(v))) #define atomic_dec_return(v) (atomic_add_return( -1,(v))) -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) - -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) - -#define atomic_sub_and_test(i,v) (atomic_sub_return((i),(v)) == 0) - #define ATOMIC_INIT(i) { (i) } #ifdef CONFIG_64BIT @@ -246,12 +230,6 @@ atomic64_read(const atomic64_t *v) #define atomic64_inc_return(v) (atomic64_add_return( 1,(v))) #define atomic64_dec_return(v) (atomic64_add_return( -1,(v))) -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) - -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_dec_and_test(v) (atomic64_dec_return(v) == 0) -#define atomic64_sub_and_test(i,v) (atomic64_sub_return((i),(v)) == 0) - /* exported interface */ #define atomic64_cmpxchg(v, o, n) \ ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 233dbf31911c..5d76f05d2be3 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -129,8 +129,6 @@ ATOMIC_OPS(xor, xor) #undef ATOMIC_OP_RETURN_RELAXED #undef ATOMIC_OP -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) - static __inline__ void atomic_inc(atomic_t *v) { int t; @@ -163,16 +161,6 @@ static __inline__ int atomic_inc_return_relaxed(atomic_t *v) return t; } -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - static __inline__ void atomic_dec(atomic_t *v) { int t; @@ -281,9 +269,6 @@ static __inline__ int atomic_inc_not_zero(atomic_t *v) } #define atomic_inc_not_zero(v) atomic_inc_not_zero((v)) -#define atomic_sub_and_test(a, v) (atomic_sub_return((a), (v)) == 0) -#define atomic_dec_and_test(v) (atomic_dec_return((v)) == 0) - /* * Atomically test *v and decrement if it is greater than 0. * The function returns the old value of *v minus 1, even if @@ -413,8 +398,6 @@ ATOMIC64_OPS(xor, xor) #undef ATOMIC64_OP_RETURN_RELAXED #undef ATOMIC64_OP -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) - static __inline__ void atomic64_inc(atomic64_t *v) { long t; @@ -445,16 +428,6 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) return t; } -/* - * atomic64_inc_and_test - increment and test - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. 
- */ -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) - static __inline__ void atomic64_dec(atomic64_t *v) { long t; @@ -488,9 +461,6 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) #define atomic64_inc_return_relaxed atomic64_inc_return_relaxed #define atomic64_dec_return_relaxed atomic64_dec_return_relaxed -#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0) -#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) - /* * Atomically test *v and decrement if it is greater than 0. * The function returns the old value of *v minus 1. diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index d959bbaaad41..68eef0a805ca 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -209,36 +209,6 @@ ATOMIC_OPS(xor, xor, i) #undef ATOMIC_FETCH_OP #undef ATOMIC_OP_RETURN -/* - * The extra atomic operations that are constructed from one of the core - * AMO-based operations above (aside from sub, which is easier to fit above). - * These are required to perform a full barrier, but they're OK this way - * because atomic_*_return is also required to perform a full barrier. - * - */ -#define ATOMIC_OP(op, func_op, comp_op, I, c_type, prefix) \ -static __always_inline \ -bool atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_##func_op##_return(i, v) comp_op I; \ -} - -#ifdef CONFIG_GENERIC_ATOMIC64 -#define ATOMIC_OPS(op, func_op, comp_op, I) \ - ATOMIC_OP(op, func_op, comp_op, I, int, ) -#else -#define ATOMIC_OPS(op, func_op, comp_op, I) \ - ATOMIC_OP(op, func_op, comp_op, I, int, ) \ - ATOMIC_OP(op, func_op, comp_op, I, long, 64) -#endif - -ATOMIC_OPS(add_and_test, add, ==, 0) -ATOMIC_OPS(sub_and_test, sub, ==, 0) -ATOMIC_OPS(add_negative, add, <, 0) - -#undef ATOMIC_OP -#undef ATOMIC_OPS - #define ATOMIC_OP(op, func_op, I, c_type, prefix) \ static __always_inline \ void atomic##prefix##_##op(atomic##prefix##_t *v) \ @@ -315,22 +285,6 @@ ATOMIC_OPS(dec, add, +, -1) #undef ATOMIC_FETCH_OP #undef ATOMIC_OP_RETURN -#define ATOMIC_OP(op, func_op, comp_op, I, prefix) \ -static __always_inline \ -bool atomic##prefix##_##op(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_##func_op##_return(v) comp_op I; \ -} - -ATOMIC_OP(inc_and_test, inc, ==, 0, ) -ATOMIC_OP(dec_and_test, dec, ==, 0, ) -#ifndef CONFIG_GENERIC_ATOMIC64 -ATOMIC_OP(inc_and_test, inc, ==, 0, 64) -ATOMIC_OP(dec_and_test, dec, ==, 0, 64) -#endif - -#undef ATOMIC_OP - /* This is required to provide a full barrier on success. 
*/ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index eb9329741bad..7f5fbd595f01 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -55,17 +55,13 @@ static inline void atomic_add(int i, atomic_t *v) __atomic_add(i, &v->counter); } -#define atomic_add_negative(_i, _v) (atomic_add_return(_i, _v) < 0) #define atomic_inc(_v) atomic_add(1, _v) #define atomic_inc_return(_v) atomic_add_return(1, _v) -#define atomic_inc_and_test(_v) (atomic_add_return(1, _v) == 0) #define atomic_sub(_i, _v) atomic_add(-(int)(_i), _v) #define atomic_sub_return(_i, _v) atomic_add_return(-(int)(_i), _v) #define atomic_fetch_sub(_i, _v) atomic_fetch_add(-(int)(_i), _v) -#define atomic_sub_and_test(_i, _v) (atomic_sub_return(_i, _v) == 0) #define atomic_dec(_v) atomic_sub(1, _v) #define atomic_dec_return(_v) atomic_sub_return(1, _v) -#define atomic_dec_and_test(_v) (atomic_sub_return(1, _v) == 0) #define ATOMIC_OPS(op) \ static inline void atomic_##op(int i, atomic_t *v) \ @@ -170,16 +166,12 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) return dec; } -#define atomic64_add_negative(_i, _v) (atomic64_add_return(_i, _v) < 0) #define atomic64_inc(_v) atomic64_add(1, _v) #define atomic64_inc_return(_v) atomic64_add_return(1, _v) -#define atomic64_inc_and_test(_v) (atomic64_add_return(1, _v) == 0) #define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v) #define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v) #define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v) -#define atomic64_sub_and_test(_i, _v) (atomic64_sub_return(_i, _v) == 0) #define atomic64_dec(_v) atomic64_sub(1, _v) #define atomic64_dec_return(_v) atomic64_sub_return(1, _v) -#define atomic64_dec_and_test(_v) (atomic64_sub_return(1, _v) == 0) #endif /* __ARCH_S390_ATOMIC__ */ diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h index 422fac764ca1..d438494fa112 100644 --- a/arch/sh/include/asm/atomic.h +++ b/arch/sh/include/asm/atomic.h @@ -32,12 +32,8 @@ #include #endif -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) #define atomic_dec_return(v) atomic_sub_return(1, (v)) #define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) -#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0) -#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0) #define atomic_inc(v) atomic_add(1, (v)) #define atomic_dec(v) atomic_sub(1, (v)) diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h index 9d7a15acc0c5..3a26573790c6 100644 --- a/arch/sparc/include/asm/atomic_32.h +++ b/arch/sparc/include/asm/atomic_32.h @@ -51,19 +51,4 @@ void atomic_set(atomic_t *, int); #define atomic_inc_return(v) (atomic_add_return( 1, (v))) #define atomic_dec_return(v) (atomic_add_return( -1, (v))) -#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0) - -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. 
- */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) - #endif /* !(__ARCH_SPARC_ATOMIC__) */ diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 458783e99997..634508282aea 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -56,32 +56,12 @@ ATOMIC_OPS(xor) #define atomic_inc_return(v) atomic_add_return(1, v) #define atomic64_inc_return(v) atomic64_add_return(1, v) -/* - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) - -#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0) -#define atomic64_sub_and_test(i, v) (atomic64_sub_return(i, v) == 0) - -#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0) -#define atomic64_dec_and_test(v) (atomic64_sub_return(1, v) == 0) - #define atomic_inc(v) atomic_add(1, v) #define atomic64_inc(v) atomic64_add(1, v) #define atomic_dec(v) atomic_sub(1, v) #define atomic64_dec(v) atomic64_sub(1, v) -#define atomic_add_negative(i, v) (atomic_add_return(i, v) < 0) -#define atomic64_add_negative(i, v) (atomic64_add_return(i, v) < 0) - #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) static inline int atomic_xchg(atomic_t *v, int new) diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 616327ac9d39..73bda4abe180 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -80,6 +80,7 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v) * true if the result is zero, or false for all * other cases. */ +#define arch_atomic_sub_and_test arch_atomic_sub_and_test static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v) { GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e); @@ -117,6 +118,7 @@ static __always_inline void arch_atomic_dec(atomic_t *v) * returns true if the result is 0, or false for all other * cases. */ +#define arch_atomic_dec_and_test arch_atomic_dec_and_test static __always_inline bool arch_atomic_dec_and_test(atomic_t *v) { GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e); @@ -130,6 +132,7 @@ static __always_inline bool arch_atomic_dec_and_test(atomic_t *v) * and returns true if the result is zero, or false for all * other cases. */ +#define arch_atomic_inc_and_test arch_atomic_inc_and_test static __always_inline bool arch_atomic_inc_and_test(atomic_t *v) { GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e); @@ -144,6 +147,7 @@ static __always_inline bool arch_atomic_inc_and_test(atomic_t *v) * if the result is negative, or false when * result is greater than or equal to zero. 
*/ +#define arch_atomic_add_negative arch_atomic_add_negative static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v) { GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s); diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 2a33cc17801b..a26810d005e0 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -198,20 +198,6 @@ static inline long long arch_atomic64_sub(long long i, atomic64_t *v) } /** - * arch_atomic64_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer to type atomic64_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -static inline int arch_atomic64_sub_and_test(long long i, atomic64_t *v) -{ - return arch_atomic64_sub_return(i, v) == 0; -} - -/** * arch_atomic64_inc - increment atomic64 variable * @v: pointer to type atomic64_t * @@ -236,46 +222,6 @@ static inline void arch_atomic64_dec(atomic64_t *v) } /** - * arch_atomic64_dec_and_test - decrement and test - * @v: pointer to type atomic64_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -static inline int arch_atomic64_dec_and_test(atomic64_t *v) -{ - return arch_atomic64_dec_return(v) == 0; -} - -/** - * atomic64_inc_and_test - increment and test - * @v: pointer to type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -static inline int arch_atomic64_inc_and_test(atomic64_t *v) -{ - return arch_atomic64_inc_return(v) == 0; -} - -/** - * arch_atomic64_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer to type atomic64_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ -static inline int arch_atomic64_add_negative(long long i, atomic64_t *v) -{ - return arch_atomic64_add_return(i, v) < 0; -} - -/** * arch_atomic64_add_unless - add unless the number is a given value * @v: pointer of type atomic64_t * @a: the amount to add to v... diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 7e04b294e6eb..6a65228a3db6 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -71,6 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v) * true if the result is zero, or false for all * other cases. */ +#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v) { GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e); @@ -110,6 +111,7 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v) * returns true if the result is 0, or false for all other * cases. */ +#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test static inline bool arch_atomic64_dec_and_test(atomic64_t *v) { GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e); @@ -123,6 +125,7 @@ static inline bool arch_atomic64_dec_and_test(atomic64_t *v) * and returns true if the result is zero, or false for all * other cases. 
*/ +#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test static inline bool arch_atomic64_inc_and_test(atomic64_t *v) { GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e); @@ -137,6 +140,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v) * if the result is negative, or false when * result is greater than or equal to zero. */ +#define arch_atomic64_add_negative arch_atomic64_add_negative static inline bool arch_atomic64_add_negative(long i, atomic64_t *v) { GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s); diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h index f4c9f82c40c6..332ae4eca737 100644 --- a/arch/xtensa/include/asm/atomic.h +++ b/arch/xtensa/include/asm/atomic.h @@ -198,17 +198,6 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP /** - * atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -#define atomic_sub_and_test(i,v) (atomic_sub_return((i),(v)) == 0) - -/** * atomic_inc - increment atomic variable * @v: pointer of type atomic_t * @@ -240,37 +229,6 @@ ATOMIC_OPS(xor) */ #define atomic_dec_return(v) atomic_sub_return(1,(v)) -/** - * atomic_dec_and_test - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#define atomic_dec_and_test(v) (atomic_sub_return(1,(v)) == 0) - -/** - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#define atomic_inc_and_test(v) (atomic_add_return(1,(v)) == 0) - -/** - * atomic_add_negative - add and test if negative - * @v: pointer of type atomic_t - * @i: integer value to add - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. 
- */ -#define atomic_add_negative(i,v) (atomic_add_return((i),(v)) < 0) - #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index 719168de5c6e..b59edd520e39 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -225,29 +225,41 @@ static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) return arch_atomic64_dec_if_positive(v); } +#ifdef arch_atomic_dec_and_test +#define atomic_dec_and_test atomic_dec_and_test static __always_inline bool atomic_dec_and_test(atomic_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic_dec_and_test(v); } +#endif +#ifdef arch_atomic64_dec_and_test +#define atomic64_dec_and_test atomic64_dec_and_test static __always_inline bool atomic64_dec_and_test(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_dec_and_test(v); } +#endif +#ifdef arch_atomic_inc_and_test +#define atomic_inc_and_test atomic_inc_and_test static __always_inline bool atomic_inc_and_test(atomic_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic_inc_and_test(v); } +#endif +#ifdef arch_atomic64_inc_and_test +#define atomic64_inc_and_test atomic64_inc_and_test static __always_inline bool atomic64_inc_and_test(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_inc_and_test(v); } +#endif static __always_inline int atomic_add_return(int i, atomic_t *v) { @@ -333,29 +345,41 @@ static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) return arch_atomic64_fetch_xor(i, v); } +#ifdef arch_atomic_sub_and_test +#define atomic_sub_and_test atomic_sub_and_test static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic_sub_and_test(i, v); } +#endif +#ifdef arch_atomic64_sub_and_test +#define atomic64_sub_and_test atomic64_sub_and_test static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_sub_and_test(i, v); } +#endif +#ifdef arch_atomic_add_negative +#define atomic_add_negative atomic_add_negative static __always_inline bool atomic_add_negative(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic_add_negative(i, v); } +#endif +#ifdef arch_atomic64_add_negative +#define atomic64_add_negative atomic64_add_negative static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_add_negative(i, v); } +#endif static __always_inline unsigned long cmpxchg_size(volatile void *ptr, unsigned long old, unsigned long new, int size) diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h index 757e45821220..40cab858aaaa 100644 --- a/include/asm-generic/atomic.h +++ b/include/asm-generic/atomic.h @@ -186,11 +186,6 @@ ATOMIC_OP(xor, ^) #include -static inline int atomic_add_negative(int i, atomic_t *v) -{ - return atomic_add_return(i, v) < 0; -} - static inline void atomic_add(int i, atomic_t *v) { atomic_add_return(i, v); @@ -214,10 +209,6 @@ static inline void atomic_dec(atomic_t *v) #define atomic_dec_return(v) atomic_sub_return(1, (v)) #define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) - 
#define atomic_xchg(ptr, v) (xchg(&(ptr)->counter, (v))) #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), (old), (new))) diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h index b0aa425361ad..15fe7cca29ab 100644 --- a/include/asm-generic/atomic64.h +++ b/include/asm-generic/atomic64.h @@ -55,13 +55,9 @@ extern long long atomic64_xchg(atomic64_t *v, long long new); extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u); #define atomic64_fetch_add_unless atomic64_fetch_add_unless -#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0) #define atomic64_inc(v) atomic64_add(1LL, (v)) #define atomic64_inc_return(v) atomic64_add_return(1LL, (v)) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0) #define atomic64_dec(v) atomic64_sub(1LL, (v)) #define atomic64_dec_return(v) atomic64_sub_return(1LL, (v)) -#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0) #endif /* _ASM_GENERIC_ATOMIC64_H */ diff --git a/include/linux/atomic.h b/include/linux/atomic.h index ab534fad09e1..7d41b6fffab6 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -567,6 +567,68 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u) #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0) #endif +/** + * atomic_inc_and_test - increment and test + * @v: pointer of type atomic_t + * + * Atomically increments @v by 1 + * and returns true if the result is zero, or false for all + * other cases. + */ +#ifndef atomic_inc_and_test +static inline bool atomic_inc_and_test(atomic_t *v) +{ + return atomic_inc_return(v) == 0; +} +#endif + +/** + * atomic_dec_and_test - decrement and test + * @v: pointer of type atomic_t + * + * Atomically decrements @v by 1 and + * returns true if the result is 0, or false for all other + * cases. + */ +#ifndef atomic_dec_and_test +static inline bool atomic_dec_and_test(atomic_t *v) +{ + return atomic_dec_return(v) == 0; +} +#endif + +/** + * atomic_sub_and_test - subtract value from variable and test result + * @i: integer value to subtract + * @v: pointer of type atomic_t + * + * Atomically subtracts @i from @v and returns + * true if the result is zero, or false for all + * other cases. + */ +#ifndef atomic_sub_and_test +static inline bool atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_return(i, v) == 0; +} +#endif + +/** + * atomic_add_negative - add and test if negative + * @i: integer value to add + * @v: pointer of type atomic_t + * + * Atomically adds @i to @v and returns true + * if the result is negative, or false when + * result is greater than or equal to zero. + */ +#ifndef atomic_add_negative +static inline bool atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_return(i, v) < 0; +} +#endif + #ifndef atomic_andnot static inline void atomic_andnot(int i, atomic_t *v) { @@ -1120,6 +1182,68 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) #endif +/** + * atomic64_inc_and_test - increment and test + * @v: pointer of type atomic64_t + * + * Atomically increments @v by 1 + * and returns true if the result is zero, or false for all + * other cases. 
+ */ +#ifndef atomic64_inc_and_test +static inline bool atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_return(v) == 0; +} +#endif + +/** + * atomic64_dec_and_test - decrement and test + * @v: pointer of type atomic64_t + * + * Atomically decrements @v by 1 and + * returns true if the result is 0, or false for all other + * cases. + */ +#ifndef atomic64_dec_and_test +static inline bool atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_return(v) == 0; +} +#endif + +/** + * atomic64_sub_and_test - subtract value from variable and test result + * @i: integer value to subtract + * @v: pointer of type atomic64_t + * + * Atomically subtracts @i from @v and returns + * true if the result is zero, or false for all + * other cases. + */ +#ifndef atomic64_sub_and_test +static inline bool atomic64_sub_and_test(long long i, atomic64_t *v) +{ + return atomic64_sub_return(i, v) == 0; +} +#endif + +/** + * atomic64_add_negative - add and test if negative + * @i: integer value to add + * @v: pointer of type atomic64_t + * + * Atomically adds @i to @v and returns true + * if the result is negative, or false when + * result is greater than or equal to zero. + */ +#ifndef atomic64_add_negative +static inline bool atomic64_add_negative(long long i, atomic64_t *v) +{ + return atomic64_add_return(i, v) < 0; +} +#endif + #ifndef atomic64_andnot static inline void atomic64_andnot(long long i, atomic64_t *v) { From patchwork Tue May 29 15:43:44 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137189 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4214586lji; Tue, 29 May 2018 08:46:21 -0700 (PDT) X-Google-Smtp-Source: AB8JxZoDARDbhh44VN2Llf24E1qWIALz2oBZgH1rInC3yJoJ/nsPCzmbp9969A8JV9RQiRZhkY3l X-Received: by 2002:a17:902:8f8b:: with SMTP id z11-v6mr18027585plo.203.1527608781803; Tue, 29 May 2018 08:46:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608781; cv=none; d=google.com; s=arc-20160816; b=HICCt2YLDXdQftKnywVESlw4XDgy2QR3FGTg2PpNHrxUH7c00gKBWaZCdZ7qL7Pi+c 7Yl0r++Qk4gHHPAmvxgV0p9GM6L8iLX/3woMFWZkjZqK+UC1u78Z07Ac2VF/c0Ql/iqE cYAeiE1k+RwOKvjfZJb4qxJlO+mZs/VuWjbOVr6p9z2IaPEl5PZlYQiYU9K8B75cLIHV blSRmAfnjyDoCUq+Nr/L3RvvO901kmWqQJzgBsdts2adaV68gkPxM4vpP5PfTg2HLPuN RtqybrK4XCJ6bd6Ct4AC4VwwVE9gXOop0d9rPV2K9s3aNyQlre1K2eGEUzUwxZrXHDe3 R8+A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=+Y9bMEEcT8FREn6Fvd+7h2yYj8wu8sblN3dn8WWJe5s=; b=YDBF52suYXNdTnRNXqleRtjygcR01t9yLWtRi85y38/HQHL3Tq3VYFf/KIpxDuVuxh +EO0daG9DJwnDYvZLFU6EL9vvy8wid/rA9S4iVLGX056kU3LiUITSzD8pH6eKSjP8rm7 ns/Cr+U1zItpRXxlQDOuWZYAd4RIBstree2+IOAvrlLQZpZhnyzg/9gB4XufL3plJq5r WXIBhBUIreIut1B0SW4GEU33kdT4tZSvFbekgMi4OKumWT4RyneQNa8ApG9irulKL5uO RE1QXpAB+NqdV50IolhCKvJ6XpLhCJOM/+DZVqAaRMAFAWf958cmkoPWrMmnH+Dptl1A oVWQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id v38-v6si33618368plg.283.2018.05.29.08.46.21; Tue, 29 May 2018 08:46:21 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S965082AbeE2PqU (ORCPT + 30 others); Tue, 29 May 2018 11:46:20 -0400 Received: from foss.arm.com ([217.140.101.70]:43512 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S936866AbeE2Poa (ORCPT ); Tue, 29 May 2018 11:44:30 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C19C815AD; Tue, 29 May 2018 08:44:29 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B460B3F25D; Tue, 29 May 2018 08:44:28 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Peter Zijlstra , Will Deacon Subject: [PATCHv2 14/16] atomics/treewide: remove atomic_inc_not_zero_hint() Date: Tue, 29 May 2018 16:43:44 +0100 Message-Id: <20180529154346.3168-15-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org While documentation suggests atomic_inc_not_zero_hint() will perform better than atomic_inc_not_zero(), this is unlikely to be the case. No architectures implement atomic_inc_not_zero_hint() directly, and thus it either falls back to atomic_inc_not_zero(), or a loop using atomic_cmpxchg(). Whenever the hint does not match the value in memory, the repeated use of atomic_cmpxchg() will be more expensive than the read that atomic_inc_not_zero_hint() attempts to avoid. For architectures with LL/SC atomics, a read cannot be avoided, and it would always be better to use atomic_inc_not_zero() directly. For other architectures, their own atomic_inc_not_zero() is likely to be more optimal than an atomic_cmpxchg() loop regardless. Generally, atomic_inc_not_zero_hint() is liable to perform worse than atomic_inc_not_zero(). Further, atomic_inc_not_zero_hint() only exists for atomic_t, and not atomic64_t or atomic_long_t, and there is only one user in the kernel tree. Given all this, let's remove atomic_inc_not_zero_hint(), and migrate the existing user over to atomic_inc_not_zero(). There should be no functional change as a result of this patch. 
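For reference, this is the generic fallback removed below (include/linux/atomic.h), reflowed here with the costly step annotated; the inline notes on the cmpxchg step are editorial and not part of the original source:

----
#ifndef atomic_inc_not_zero_hint
static inline int atomic_inc_not_zero_hint(atomic_t *v, int hint)
{
	int val, c = hint;

	/* sanity test, should be removed by compiler if hint is a constant */
	if (!hint)
		return atomic_inc_not_zero(v);

	do {
		/*
		 * When @hint is stale, this first cmpxchg is a wasted
		 * read-modify-write: it fails and only then yields the
		 * current value, which atomic_inc_not_zero() would have
		 * obtained with a plain read (or for free under LL/SC).
		 */
		val = atomic_cmpxchg(v, c, c + 1);
		if (val == c)
			return 1;
		c = val;
	} while (c);

	return 0;
}
#endif
----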
Signed-off-by: Mark Rutland Cc: Boqun Feng Cc: Peter Zijlstra Cc: Will Deacon --- include/linux/atomic.h | 32 -------------------------------- net/atm/pppoatm.c | 2 +- 2 files changed, 1 insertion(+), 33 deletions(-) -- 2.11.0 diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 7d41b6fffab6..8ec4d120b264 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -656,38 +656,6 @@ static inline int atomic_fetch_andnot_release(int i, atomic_t *v) } #endif -/** - * atomic_inc_not_zero_hint - increment if not null - * @v: pointer of type atomic_t - * @hint: probable value of the atomic before the increment - * - * This version of atomic_inc_not_zero() gives a hint of probable - * value of the atomic. This helps processor to not read the memory - * before doing the atomic read/modify/write cycle, lowering - * number of bus transactions on some arches. - * - * Returns: 0 if increment was not done, 1 otherwise. - */ -#ifndef atomic_inc_not_zero_hint -static inline int atomic_inc_not_zero_hint(atomic_t *v, int hint) -{ - int val, c = hint; - - /* sanity test, should be removed by compiler if hint is a constant */ - if (!hint) - return atomic_inc_not_zero(v); - - do { - val = atomic_cmpxchg(v, c, c + 1); - if (val == c) - return 1; - c = val; - } while (c); - - return 0; -} -#endif - #ifndef atomic_inc_unless_negative static inline int atomic_inc_unless_negative(atomic_t *p) { diff --git a/net/atm/pppoatm.c b/net/atm/pppoatm.c index 21d9d341a619..a4d82afd4586 100644 --- a/net/atm/pppoatm.c +++ b/net/atm/pppoatm.c @@ -244,7 +244,7 @@ static int pppoatm_may_send(struct pppoatm_vcc *pvcc, int size) * the packet count limit, so... */ if (atm_may_send(pvcc->atmvcc, size) && - atomic_inc_not_zero_hint(&pvcc->inflight, NONE_INFLIGHT)) + atomic_inc_not_zero(&pvcc->inflight)) return 1; /* From patchwork Tue May 29 15:43:45 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137187 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4212889lji; Tue, 29 May 2018 08:44:49 -0700 (PDT) X-Google-Smtp-Source: AB8JxZqkoKv1S/x4OXViE6PPg8xYKCtm9HXElc7lEgkTtittZzR4Q6HoIX1XN0xeghDpcMywrZTR X-Received: by 2002:a63:730c:: with SMTP id o12-v6mr14173833pgc.1.1527608689513; Tue, 29 May 2018 08:44:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608689; cv=none; d=google.com; s=arc-20160816; b=K1VuQghhoRLltULEX/MDoMhCp5YVlSgOccDV8jaX1Y0Qy8kRRE5Mb+e/9LVgG7EogB Z5Mthd7viHU283UP+z6pDW7OsCvNrCqBRjdMDeh10el/hedO5TYs052+hWFAfnN8l/tl RIHpoOSAaK2Jv9CVbnndNupZ3UfX6da2G/XFmLvgo0jaY7o6rJeaNhUauKl1M6yRLDwx nR2xa8VIsfJq8bl+pwAF4go1laL9enhkz0v5jVmEGHvONrY3VoPLW6GO/U0NfLVncOGQ O6xMYwZI9AkU0+2Dq9ycsYdZ1+vWSc+gW7qhm42QZIUxrKjqoTo8qbYQ852Gi7H9ocx7 9uvw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=eao77slvRXs00xsvL12mFBRkf2aROcCMxiRE1qD/z0s=; b=uAUaTmrc86X/mHfX0UpX2t6vjFn8iAwjZHROBUm8gCt3Y0IypxTIDYZpgYvWDKUy+I mHave+qM+FN6SzZfQ/VBcS+RSHk48xPcVTdqaWAMRpe8J3cUdN89SFl+/t6jRVyGmFri b/PN0kXZlqDYHYhElzJEYCR3qaPcezy3rP4T7lliuPbA82G/4FOSDauN444dARP7Pnn7 W3EPQ5G15Bny+KVqV9sMD6+V96SVL1vpT0CqYHrxCDQdBAW/fh4NWiVrPcmqrJQvn92Q OMsvOlaa164w+HZmDdVdKNnVYIJkPIs6Za17YpFjy81omv5h7YdSxxQ6A0wjJkJAUny4 saLg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org 
designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id r63-v6si31907323plb.366.2018.05.29.08.44.49; Tue, 29 May 2018 08:44:49 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S936920AbeE2Por (ORCPT + 30 others); Tue, 29 May 2018 11:44:47 -0400 Received: from foss.arm.com ([217.140.101.70]:43508 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S936885AbeE2Poc (ORCPT ); Tue, 29 May 2018 11:44:32 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 80C5280D; Tue, 29 May 2018 08:44:31 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 71F653F25D; Tue, 29 May 2018 08:44:30 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Peter Zijlstra , Will Deacon Subject: [PATCHv2 15/16] atomics/treewide: make unconditional inc/dec ops optional Date: Tue, 29 May 2018 16:43:45 +0100 Message-Id: <20180529154346.3168-16-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Many of the inc/dec ops are mandatory, but for most architectures inc/dec are simply trivial wrappers around their corresponding add/sub ops. Let's make all the inc/dec ops optional, so that we can get rid of these boilerplate wrappers. The instrumented atomics are updated accordingly. There should be no functional change as a result of this patch. 
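The include/linux/atomic.h hunk adding the replacement fallbacks is not reproduced in this excerpt (the diffstat shows 48 lines added there). Assuming it follows the same #ifndef pattern as the test ops in the previous patch, the fallbacks would look roughly like:

----
/* Sketch only; the exact hunk is not shown in this excerpt. */
#ifndef atomic_inc
static inline void atomic_inc(atomic_t *v)
{
	atomic_add(1, v);
}
#endif

#ifndef atomic_dec_return
static inline int atomic_dec_return(atomic_t *v)
{
	return atomic_sub_return(1, v);
}
#endif

#ifndef atomic64_inc_return
static inline long long atomic64_inc_return(atomic64_t *v)
{
	return atomic64_add_return(1, v);
}
#endif
----

matching the per-architecture wrappers (inc is add 1, dec_return is sub_return 1, and so on) that the diff below deletes.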
Signed-off-by: Mark Rutland Cc: Boqun Feng Cc: Peter Zijlstra Cc: Will Deacon --- arch/alpha/include/asm/atomic.h | 12 ----- arch/arc/include/asm/atomic.h | 11 ----- arch/arm/include/asm/atomic.h | 11 ----- arch/arm64/include/asm/atomic.h | 24 ---------- arch/h8300/include/asm/atomic.h | 7 --- arch/hexagon/include/asm/atomic.h | 6 --- arch/ia64/include/asm/atomic.h | 9 ---- arch/m68k/include/asm/atomic.h | 5 +- arch/mips/include/asm/atomic.h | 38 ---------------- arch/parisc/include/asm/atomic.h | 12 ----- arch/powerpc/include/asm/atomic.h | 4 ++ arch/riscv/include/asm/atomic.h | 76 ------------------------------- arch/s390/include/asm/atomic.h | 8 ---- arch/sh/include/asm/atomic.h | 6 --- arch/sparc/include/asm/atomic_32.h | 5 -- arch/sparc/include/asm/atomic_64.h | 12 ----- arch/x86/include/asm/atomic.h | 5 +- arch/x86/include/asm/atomic64_32.h | 4 ++ arch/x86/include/asm/atomic64_64.h | 5 +- arch/xtensa/include/asm/atomic.h | 32 ------------- include/asm-generic/atomic-instrumented.h | 24 ++++++++++ include/asm-generic/atomic.h | 13 ------ include/asm-generic/atomic64.h | 5 -- include/linux/atomic.h | 48 +++++++++++++++++++ 24 files changed, 86 insertions(+), 296 deletions(-) -- 2.11.0 Acked-by: Palmer Dabbelt diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 31a4904dc8fc..9c50cc512a78 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -297,16 +297,4 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) return old - 1; } -#define atomic_dec_return(v) atomic_sub_return(1,(v)) -#define atomic64_dec_return(v) atomic64_sub_return(1,(v)) - -#define atomic_inc_return(v) atomic_add_return(1,(v)) -#define atomic64_inc_return(v) atomic64_add_return(1,(v)) - -#define atomic_inc(v) atomic_add(1,(v)) -#define atomic64_inc(v) atomic64_add(1,(v)) - -#define atomic_dec(v) atomic_sub(1,(v)) -#define atomic64_dec(v) atomic64_sub(1,(v)) - #endif /* _ALPHA_ATOMIC_H */ diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 555b0a94b643..abffdc525b46 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -308,12 +308,6 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_inc(v) atomic_add(1, v) -#define atomic_dec(v) atomic_sub(1, v) - -#define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic_dec_return(v) atomic_sub_return(1, (v)) - #ifdef CONFIG_GENERIC_ATOMIC64 #include @@ -560,11 +554,6 @@ static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, } #define atomic64_fetch_add_unless atomic64_fetch_add_unless -#define atomic64_inc(v) atomic64_add(1LL, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1LL, (v)) -#define atomic64_dec(v) atomic64_sub(1LL, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1LL, (v)) - #endif /* !CONFIG_GENERIC_ATOMIC64 */ #endif /* !__ASSEMBLY__ */ diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 35fb7f504daa..5a58d061d3d2 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -245,12 +245,6 @@ ATOMIC_OPS(xor, ^=, eor) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) -#define atomic_inc(v) atomic_add(1, v) -#define atomic_dec(v) atomic_sub(1, v) - -#define atomic_inc_return_relaxed(v) (atomic_add_return_relaxed(1, v)) -#define atomic_dec_return_relaxed(v) (atomic_sub_return_relaxed(1, v)) - #ifndef CONFIG_GENERIC_ATOMIC64 typedef struct { long long counter; @@ -512,11 +506,6 @@ 
static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, } #define atomic64_fetch_add_unless atomic64_fetch_add_unless -#define atomic64_inc(v) atomic64_add(1LL, (v)) -#define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1LL, (v)) -#define atomic64_dec(v) atomic64_sub(1LL, (v)) -#define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1LL, (v)) - #endif /* !CONFIG_GENERIC_ATOMIC64 */ #endif #endif diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index edbe53fa3106..078f785cd97f 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -50,21 +50,11 @@ #define atomic_add_return_release atomic_add_return_release #define atomic_add_return atomic_add_return -#define atomic_inc_return_relaxed(v) atomic_add_return_relaxed(1, (v)) -#define atomic_inc_return_acquire(v) atomic_add_return_acquire(1, (v)) -#define atomic_inc_return_release(v) atomic_add_return_release(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - #define atomic_sub_return_relaxed atomic_sub_return_relaxed #define atomic_sub_return_acquire atomic_sub_return_acquire #define atomic_sub_return_release atomic_sub_return_release #define atomic_sub_return atomic_sub_return -#define atomic_dec_return_relaxed(v) atomic_sub_return_relaxed(1, (v)) -#define atomic_dec_return_acquire(v) atomic_sub_return_acquire(1, (v)) -#define atomic_dec_return_release(v) atomic_sub_return_release(1, (v)) -#define atomic_dec_return(v) atomic_sub_return(1, (v)) - #define atomic_fetch_add_relaxed atomic_fetch_add_relaxed #define atomic_fetch_add_acquire atomic_fetch_add_acquire #define atomic_fetch_add_release atomic_fetch_add_release @@ -108,8 +98,6 @@ cmpxchg_release(&((v)->counter), (old), (new)) #define atomic_cmpxchg(v, old, new) cmpxchg(&((v)->counter), (old), (new)) -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) #define atomic_andnot atomic_andnot /* @@ -124,21 +112,11 @@ #define atomic64_add_return_release atomic64_add_return_release #define atomic64_add_return atomic64_add_return -#define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1, (v)) -#define atomic64_inc_return_acquire(v) atomic64_add_return_acquire(1, (v)) -#define atomic64_inc_return_release(v) atomic64_add_return_release(1, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) - #define atomic64_sub_return_relaxed atomic64_sub_return_relaxed #define atomic64_sub_return_acquire atomic64_sub_return_acquire #define atomic64_sub_return_release atomic64_sub_return_release #define atomic64_sub_return atomic64_sub_return -#define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1, (v)) -#define atomic64_dec_return_acquire(v) atomic64_sub_return_acquire(1, (v)) -#define atomic64_dec_return_release(v) atomic64_sub_return_release(1, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) - #define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed #define atomic64_fetch_add_acquire atomic64_fetch_add_acquire #define atomic64_fetch_add_release atomic64_fetch_add_release @@ -179,8 +157,6 @@ #define atomic64_cmpxchg_release atomic_cmpxchg_release #define atomic64_cmpxchg atomic_cmpxchg -#define atomic64_inc(v) atomic64_add(1, (v)) -#define atomic64_dec(v) atomic64_sub(1, (v)) #define atomic64_andnot atomic64_andnot #endif diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h index b658b20d2232..532f3409f751 100644 --- a/arch/h8300/include/asm/atomic.h +++ 
b/arch/h8300/include/asm/atomic.h @@ -69,13 +69,6 @@ ATOMIC_OPS(sub, -=) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_inc_return(v) atomic_add_return(1, v) -#define atomic_dec_return(v) atomic_sub_return(1, v) - -#define atomic_inc(v) (void)atomic_inc_return(v) - -#define atomic_dec(v) (void)atomic_dec_return(v) - static inline int atomic_cmpxchg(atomic_t *v, int old, int new) { int ret; diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h index 31638f511674..311b9894ccc8 100644 --- a/arch/hexagon/include/asm/atomic.h +++ b/arch/hexagon/include/asm/atomic.h @@ -198,10 +198,4 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) } #define atomic_fetch_add_unless atomic_fetch_add_unless -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) - -#define atomic_inc_return(v) (atomic_add_return(1, v)) -#define atomic_dec_return(v) (atomic_sub_return(1, v)) - #endif diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index e4143c462e65..46a15a974bed 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -231,19 +231,10 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) return dec; } -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) - #define atomic_add(i,v) (void)atomic_add_return((i), (v)) #define atomic_sub(i,v) (void)atomic_sub_return((i), (v)) -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) #define atomic64_add(i,v) (void)atomic64_add_return((i), (v)) #define atomic64_sub(i,v) (void)atomic64_sub_return((i), (v)) -#define atomic64_inc(v) atomic64_add(1, (v)) -#define atomic64_dec(v) atomic64_sub(1, (v)) #endif /* _ASM_IA64_ATOMIC_H */ diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h index 9df09c876fa2..47228b0d4163 100644 --- a/arch/m68k/include/asm/atomic.h +++ b/arch/m68k/include/asm/atomic.h @@ -126,11 +126,13 @@ static inline void atomic_inc(atomic_t *v) { __asm__ __volatile__("addql #1,%0" : "+m" (*v)); } +#define atomic_inc atomic_inc static inline void atomic_dec(atomic_t *v) { __asm__ __volatile__("subql #1,%0" : "+m" (*v)); } +#define atomic_dec atomic_dec static inline int atomic_dec_and_test(atomic_t *v) { @@ -192,9 +194,6 @@ static inline int atomic_xchg(atomic_t *v, int new) #endif /* !CONFIG_RMW_INSNS */ -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - static inline int atomic_sub_and_test(int i, atomic_t *v) { char c; diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index fd3008ae164c..79be687de4ab 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -274,31 +274,12 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), (new))) -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - /* * atomic_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic_t */ #define atomic_dec_if_positive(v) atomic_sub_if_positive(1, v) -/* - * atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. 
- */ -#define atomic_inc(v) atomic_add(1, (v)) - -/* - * atomic_dec - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ -#define atomic_dec(v) atomic_sub(1, (v)) - #ifdef CONFIG_64BIT #define ATOMIC64_INIT(i) { (i) } @@ -554,31 +535,12 @@ static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), (new))) -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) - /* * atomic64_dec_if_positive - decrement by 1 if old value positive * @v: pointer of type atomic64_t */ #define atomic64_dec_if_positive(v) atomic64_sub_if_positive(1, v) -/* - * atomic64_inc - increment atomic variable - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1. - */ -#define atomic64_inc(v) atomic64_add(1, (v)) - -/* - * atomic64_dec - decrement and test - * @v: pointer of type atomic64_t - * - * Atomically decrements @v by 1. - */ -#define atomic64_dec(v) atomic64_sub(1, (v)) - #endif /* CONFIG_64BIT */ #endif /* _ASM_ATOMIC_H */ diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index f85844ff6336..10bc490327c1 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -136,12 +136,6 @@ ATOMIC_OPS(xor, ^=) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_inc(v) (atomic_add( 1,(v))) -#define atomic_dec(v) (atomic_add( -1,(v))) - -#define atomic_inc_return(v) (atomic_add_return( 1,(v))) -#define atomic_dec_return(v) (atomic_add_return( -1,(v))) - #define ATOMIC_INIT(i) { (i) } #ifdef CONFIG_64BIT @@ -224,12 +218,6 @@ atomic64_read(const atomic64_t *v) return READ_ONCE((v)->counter); } -#define atomic64_inc(v) (atomic64_add( 1,(v))) -#define atomic64_dec(v) (atomic64_add( -1,(v))) - -#define atomic64_inc_return(v) (atomic64_add_return( 1,(v))) -#define atomic64_dec_return(v) (atomic64_add_return( -1,(v))) - /* exported interface */ #define atomic64_cmpxchg(v, o, n) \ ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 5d76f05d2be3..ebaefdee4a57 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -143,6 +143,7 @@ static __inline__ void atomic_inc(atomic_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic_inc atomic_inc static __inline__ int atomic_inc_return_relaxed(atomic_t *v) { @@ -175,6 +176,7 @@ static __inline__ void atomic_dec(atomic_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic_dec atomic_dec static __inline__ int atomic_dec_return_relaxed(atomic_t *v) { @@ -411,6 +413,7 @@ static __inline__ void atomic64_inc(atomic64_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic64_inc atomic64_inc static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) { @@ -441,6 +444,7 @@ static __inline__ void atomic64_dec(atomic64_t *v) : "r" (&v->counter) : "cc", "xer"); } +#define atomic64_dec atomic64_dec static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) { diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 68eef0a805ca..512b89485790 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -209,82 +209,6 @@ ATOMIC_OPS(xor, xor, i) #undef ATOMIC_FETCH_OP #undef ATOMIC_OP_RETURN -#define ATOMIC_OP(op, func_op, I, c_type, prefix) \ -static __always_inline \ -void 
atomic##prefix##_##op(atomic##prefix##_t *v) \ -{ \ - atomic##prefix##_##func_op(I, v); \ -} - -#define ATOMIC_FETCH_OP(op, func_op, I, c_type, prefix) \ -static __always_inline \ -c_type atomic##prefix##_fetch_##op##_relaxed(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##func_op##_relaxed(I, v); \ -} \ -static __always_inline \ -c_type atomic##prefix##_fetch_##op(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##func_op(I, v); \ -} - -#define ATOMIC_OP_RETURN(op, asm_op, c_op, I, c_type, prefix) \ -static __always_inline \ -c_type atomic##prefix##_##op##_return_relaxed(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##op##_relaxed(v) c_op I; \ -} \ -static __always_inline \ -c_type atomic##prefix##_##op##_return(atomic##prefix##_t *v) \ -{ \ - return atomic##prefix##_fetch_##op(v) c_op I; \ -} - -#ifdef CONFIG_GENERIC_ATOMIC64 -#define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_OP( op, asm_op, I, int, ) \ - ATOMIC_FETCH_OP( op, asm_op, I, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, int, ) -#else -#define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_OP( op, asm_op, I, int, ) \ - ATOMIC_FETCH_OP( op, asm_op, I, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, int, ) \ - ATOMIC_OP( op, asm_op, I, long, 64) \ - ATOMIC_FETCH_OP( op, asm_op, I, long, 64) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, long, 64) -#endif - -ATOMIC_OPS(inc, add, +, 1) -ATOMIC_OPS(dec, add, +, -1) - -#define atomic_inc_return_relaxed atomic_inc_return_relaxed -#define atomic_dec_return_relaxed atomic_dec_return_relaxed -#define atomic_inc_return atomic_inc_return -#define atomic_dec_return atomic_dec_return - -#define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed -#define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed -#define atomic_fetch_inc atomic_fetch_inc -#define atomic_fetch_dec atomic_fetch_dec - -#ifndef CONFIG_GENERIC_ATOMIC64 -#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed -#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed -#define atomic64_inc_return atomic64_inc_return -#define atomic64_dec_return atomic64_dec_return - -#define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed -#define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed -#define atomic64_fetch_inc atomic64_fetch_inc -#define atomic64_fetch_dec atomic64_fetch_dec -#endif - -#undef ATOMIC_OPS -#undef ATOMIC_OP -#undef ATOMIC_FETCH_OP -#undef ATOMIC_OP_RETURN - /* This is required to provide a full barrier on success. 
*/ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index 7f5fbd595f01..376e64af951f 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -55,13 +55,9 @@ static inline void atomic_add(int i, atomic_t *v) __atomic_add(i, &v->counter); } -#define atomic_inc(_v) atomic_add(1, _v) -#define atomic_inc_return(_v) atomic_add_return(1, _v) #define atomic_sub(_i, _v) atomic_add(-(int)(_i), _v) #define atomic_sub_return(_i, _v) atomic_add_return(-(int)(_i), _v) #define atomic_fetch_sub(_i, _v) atomic_fetch_add(-(int)(_i), _v) -#define atomic_dec(_v) atomic_sub(1, _v) -#define atomic_dec_return(_v) atomic_sub_return(1, _v) #define ATOMIC_OPS(op) \ static inline void atomic_##op(int i, atomic_t *v) \ @@ -166,12 +162,8 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) return dec; } -#define atomic64_inc(_v) atomic64_add(1, _v) -#define atomic64_inc_return(_v) atomic64_add_return(1, _v) #define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v) #define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v) #define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v) -#define atomic64_dec(_v) atomic64_sub(1, _v) -#define atomic64_dec_return(_v) atomic64_sub_return(1, _v) #endif /* __ARCH_S390_ATOMIC__ */ diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h index d438494fa112..f37b95a80232 100644 --- a/arch/sh/include/asm/atomic.h +++ b/arch/sh/include/asm/atomic.h @@ -32,12 +32,6 @@ #include #endif -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) - #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h index 3a26573790c6..94c930f0bc62 100644 --- a/arch/sparc/include/asm/atomic_32.h +++ b/arch/sparc/include/asm/atomic_32.h @@ -38,8 +38,6 @@ void atomic_set(atomic_t *, int); #define atomic_add(i, v) ((void)atomic_add_return( (int)(i), (v))) #define atomic_sub(i, v) ((void)atomic_add_return(-(int)(i), (v))) -#define atomic_inc(v) ((void)atomic_add_return( 1, (v))) -#define atomic_dec(v) ((void)atomic_add_return( -1, (v))) #define atomic_and(i, v) ((void)atomic_fetch_and((i), (v))) #define atomic_or(i, v) ((void)atomic_fetch_or((i), (v))) @@ -48,7 +46,4 @@ void atomic_set(atomic_t *, int); #define atomic_sub_return(i, v) (atomic_add_return(-(int)(i), (v))) #define atomic_fetch_sub(i, v) (atomic_fetch_add (-(int)(i), (v))) -#define atomic_inc_return(v) (atomic_add_return( 1, (v))) -#define atomic_dec_return(v) (atomic_add_return( -1, (v))) - #endif /* !(__ARCH_SPARC_ATOMIC__) */ diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 634508282aea..304865c7cdbb 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -50,18 +50,6 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -#define atomic_dec_return(v) atomic_sub_return(1, v) -#define atomic64_dec_return(v) atomic64_sub_return(1, v) - -#define atomic_inc_return(v) atomic_add_return(1, v) -#define atomic64_inc_return(v) atomic64_add_return(1, v) - -#define atomic_inc(v) atomic_add(1, v) -#define atomic64_inc(v) atomic64_add(1, v) - -#define atomic_dec(v) atomic_sub(1, v) -#define 
atomic64_dec(v) atomic64_sub(1, v) - #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) static inline int atomic_xchg(atomic_t *v, int new) diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h index 73bda4abe180..823fd2f320cf 100644 --- a/arch/x86/include/asm/atomic.h +++ b/arch/x86/include/asm/atomic.h @@ -92,6 +92,7 @@ static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v) * * Atomically increments @v by 1. */ +#define arch_atomic_inc arch_atomic_inc static __always_inline void arch_atomic_inc(atomic_t *v) { asm volatile(LOCK_PREFIX "incl %0" @@ -104,6 +105,7 @@ static __always_inline void arch_atomic_inc(atomic_t *v) * * Atomically decrements @v by 1. */ +#define arch_atomic_dec arch_atomic_dec static __always_inline void arch_atomic_dec(atomic_t *v) { asm volatile(LOCK_PREFIX "decl %0" @@ -177,9 +179,6 @@ static __always_inline int arch_atomic_sub_return(int i, atomic_t *v) return arch_atomic_add_return(-i, v); } -#define arch_atomic_inc_return(v) (arch_atomic_add_return(1, v)) -#define arch_atomic_dec_return(v) (arch_atomic_sub_return(1, v)) - static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v) { return xadd(&v->counter, i); diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index a26810d005e0..472c7af0ed48 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -158,6 +158,7 @@ static inline long long arch_atomic64_inc_return(atomic64_t *v) "S" (v) : "memory", "ecx"); return a; } +#define arch_atomic64_inc_return arch_atomic64_inc_return static inline long long arch_atomic64_dec_return(atomic64_t *v) { @@ -166,6 +167,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v) "S" (v) : "memory", "ecx"); return a; } +#define arch_atomic64_dec_return arch_atomic64_dec_return /** * arch_atomic64_add - add integer to atomic64 variable @@ -203,6 +205,7 @@ static inline long long arch_atomic64_sub(long long i, atomic64_t *v) * * Atomically increments @v by 1. */ +#define arch_atomic64_inc arch_atomic64_inc static inline void arch_atomic64_inc(atomic64_t *v) { __alternative_atomic64(inc, inc_return, /* no output */, @@ -215,6 +218,7 @@ static inline void arch_atomic64_inc(atomic64_t *v) * * Atomically decrements @v by 1. */ +#define arch_atomic64_dec arch_atomic64_dec static inline void arch_atomic64_dec(atomic64_t *v) { __alternative_atomic64(dec, dec_return, /* no output */, diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 6a65228a3db6..1b282272a801 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -83,6 +83,7 @@ static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v) * * Atomically increments @v by 1. */ +#define arch_atomic64_inc arch_atomic64_inc static __always_inline void arch_atomic64_inc(atomic64_t *v) { asm volatile(LOCK_PREFIX "incq %0" @@ -96,6 +97,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v) * * Atomically decrements @v by 1. 
*/ +#define arch_atomic64_dec arch_atomic64_dec static __always_inline void arch_atomic64_dec(atomic64_t *v) { asm volatile(LOCK_PREFIX "decq %0" @@ -173,9 +175,6 @@ static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v) return xadd(&v->counter, -i); } -#define arch_atomic64_inc_return(v) (arch_atomic64_add_return(1, (v))) -#define arch_atomic64_dec_return(v) (arch_atomic64_sub_return(1, (v))) - static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new) { return arch_cmpxchg(&v->counter, old, new); diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h index 332ae4eca737..7de0149e1cf7 100644 --- a/arch/xtensa/include/asm/atomic.h +++ b/arch/xtensa/include/asm/atomic.h @@ -197,38 +197,6 @@ ATOMIC_OPS(xor) #undef ATOMIC_OP_RETURN #undef ATOMIC_OP -/** - * atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ -#define atomic_inc(v) atomic_add(1,(v)) - -/** - * atomic_inc - increment atomic variable - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1. - */ -#define atomic_inc_return(v) atomic_add_return(1,(v)) - -/** - * atomic_dec - decrement atomic variable - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ -#define atomic_dec(v) atomic_sub(1,(v)) - -/** - * atomic_dec_return - decrement atomic variable - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1. - */ -#define atomic_dec_return(v) atomic_sub_return(1,(v)) - #define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n))) #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index b59edd520e39..8c9887a0cb1d 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -102,29 +102,41 @@ static __always_inline int atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u } #endif +#ifdef arch_atomic_inc +#define atomic_inc atomic_inc static __always_inline void atomic_inc(atomic_t *v) { kasan_check_write(v, sizeof(*v)); arch_atomic_inc(v); } +#endif +#ifdef arch_atomic64_inc +#define atomic64_inc atomic64_inc static __always_inline void atomic64_inc(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); arch_atomic64_inc(v); } +#endif +#ifdef arch_atomic_dec +#define atomic_dec atomic_dec static __always_inline void atomic_dec(atomic_t *v) { kasan_check_write(v, sizeof(*v)); arch_atomic_dec(v); } +#endif +#ifdef arch_atomic64_dec +#define atomic64_dec atomic64_dec static __always_inline void atomic64_dec(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); arch_atomic64_dec(v); } +#endif static __always_inline void atomic_add(int i, atomic_t *v) { @@ -186,29 +198,41 @@ static __always_inline void atomic64_xor(s64 i, atomic64_t *v) arch_atomic64_xor(i, v); } +#ifdef arch_atomic_inc_return +#define atomic_inc_return atomic_inc_return static __always_inline int atomic_inc_return(atomic_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic_inc_return(v); } +#endif +#ifdef arch_atomic64_inc_return +#define atomic64_inc_return atomic64_inc_return static __always_inline s64 atomic64_inc_return(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_inc_return(v); } +#endif +#ifdef arch_atomic_dec_return +#define atomic_dec_return atomic_dec_return static __always_inline int atomic_dec_return(atomic_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic_dec_return(v); } +#endif +#ifdef
arch_atomic64_dec_return +#define atomic64_dec_return atomic64_dec_return static __always_inline s64 atomic64_dec_return(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_dec_return(v); } +#endif #ifdef arch_atomic64_inc_not_zero #define atomic64_inc_not_zero atomic64_inc_not_zero diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h index 40cab858aaaa..13324aa828eb 100644 --- a/include/asm-generic/atomic.h +++ b/include/asm-generic/atomic.h @@ -196,19 +196,6 @@ static inline void atomic_sub(int i, atomic_t *v) atomic_sub_return(i, v); } -static inline void atomic_inc(atomic_t *v) -{ - atomic_add_return(1, v); -} - -static inline void atomic_dec(atomic_t *v) -{ - atomic_sub_return(1, v); -} - -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - #define atomic_xchg(ptr, v) (xchg(&(ptr)->counter, (v))) #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), (old), (new))) diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h index 15fe7cca29ab..1810a0ca96ec 100644 --- a/include/asm-generic/atomic64.h +++ b/include/asm-generic/atomic64.h @@ -55,9 +55,4 @@ extern long long atomic64_xchg(atomic64_t *v, long long new); extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u); #define atomic64_fetch_add_unless atomic64_fetch_add_unless -#define atomic64_inc(v) atomic64_add(1LL, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1LL, (v)) -#define atomic64_dec(v) atomic64_sub(1LL, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1LL, (v)) - #endif /* _ASM_GENERIC_ATOMIC64_H */ diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 8ec4d120b264..d6ec2dc37057 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -95,11 +95,23 @@ #endif #endif /* atomic_add_return_relaxed */ +#ifndef atomic_inc +#define atomic_inc(v) atomic_add(1, (v)) +#endif + /* atomic_inc_return_relaxed */ #ifndef atomic_inc_return_relaxed + +#ifndef atomic_inc_return +#define atomic_inc_return(v) atomic_add_return(1, (v)) +#define atomic_inc_return_relaxed(v) atomic_add_return_relaxed(1, (v)) +#define atomic_inc_return_acquire(v) atomic_add_return_acquire(1, (v)) +#define atomic_inc_return_release(v) atomic_add_return_release(1, (v)) +#else /* atomic_inc_return */ #define atomic_inc_return_relaxed atomic_inc_return #define atomic_inc_return_acquire atomic_inc_return #define atomic_inc_return_release atomic_inc_return +#endif /* atomic_inc_return */ #else /* atomic_inc_return_relaxed */ @@ -143,11 +155,23 @@ #endif #endif /* atomic_sub_return_relaxed */ +#ifndef atomic_dec +#define atomic_dec(v) atomic_sub(1, (v)) +#endif + /* atomic_dec_return_relaxed */ #ifndef atomic_dec_return_relaxed + +#ifndef atomic_dec_return +#define atomic_dec_return(v) atomic_sub_return(1, (v)) +#define atomic_dec_return_relaxed(v) atomic_sub_return_relaxed(1, (v)) +#define atomic_dec_return_acquire(v) atomic_sub_return_acquire(1, (v)) +#define atomic_dec_return_release(v) atomic_sub_return_release(1, (v)) +#else /* atomic_dec_return */ #define atomic_dec_return_relaxed atomic_dec_return #define atomic_dec_return_acquire atomic_dec_return #define atomic_dec_return_release atomic_dec_return +#endif /* atomic_dec_return */ #else /* atomic_dec_return_relaxed */ @@ -745,11 +769,23 @@ static inline int atomic_dec_if_positive(atomic_t *v) #endif #endif /* atomic64_add_return_relaxed */ +#ifndef atomic64_inc +#define atomic64_inc(v) atomic64_add(1, (v)) +#endif + 
/* atomic64_inc_return_relaxed */ #ifndef atomic64_inc_return_relaxed + +#ifndef atomic64_inc_return +#define atomic64_inc_return(v) atomic64_add_return(1, (v)) +#define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1, (v)) +#define atomic64_inc_return_acquire(v) atomic64_add_return_acquire(1, (v)) +#define atomic64_inc_return_release(v) atomic64_add_return_release(1, (v)) +#else /* atomic64_inc_return */ #define atomic64_inc_return_relaxed atomic64_inc_return #define atomic64_inc_return_acquire atomic64_inc_return #define atomic64_inc_return_release atomic64_inc_return +#endif /* atomic64_inc_return */ #else /* atomic64_inc_return_relaxed */ @@ -794,11 +830,23 @@ static inline int atomic_dec_if_positive(atomic_t *v) #endif #endif /* atomic64_sub_return_relaxed */ +#ifndef atomic64_dec +#define atomic64_dec(v) atomic64_sub(1, (v)) +#endif + /* atomic64_dec_return_relaxed */ #ifndef atomic64_dec_return_relaxed + +#ifndef atomic64_dec_return +#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) +#define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1, (v)) +#define atomic64_dec_return_acquire(v) atomic64_sub_return_acquire(1, (v)) +#define atomic64_dec_return_release(v) atomic64_sub_return_release(1, (v)) +#else /* atomic64_dec_return */ #define atomic64_dec_return_relaxed atomic64_dec_return #define atomic64_dec_return_acquire atomic64_dec_return #define atomic64_dec_return_release atomic64_dec_return +#endif /* atomic64_dec_return */ #else /* atomic64_dec_return_relaxed */ From patchwork Tue May 29 15:43:46 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 137188 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4214065lji; Tue, 29 May 2018 08:45:50 -0700 (PDT) X-Google-Smtp-Source: AB8JxZplMahM6DyzRcoL2rlyJuDu/R3t7Q2aTVAMwz8AJ7Sl4zJINMn1bz/fB6GT+iDYwpDS8tdd X-Received: by 2002:a62:3a59:: with SMTP id h86-v6mr16611305pfa.209.1527608749962; Tue, 29 May 2018 08:45:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527608749; cv=none; d=google.com; s=arc-20160816; b=gYFWajw/pDOB+m3sobex31kRe1LaGbFN23omz0wvTIcp/Gjse+O0ZCNSWYyw8dZrfU /rvlgE0iaPDw6lrtJzWhfUHw5DutkJnN5uo0cQ+m6SLPIZOgt1POwwM4iR2dAijja1pb dNCv9zID9uI1Hl4NX3Mmb2hzXCeZJ3R+Xw/k9sRPErWU3pq/FaBciWkNh+Szu1n/FYsZ 2uLeSIWaO0IEdtfiQ+TukP9r4erf8MY1hQHSU/Fp8ny3ozCWrm+GEGgHr2tuOytjAlWh EKrD+rqf3vhxxeFE5zM4dXLafPLvM4loi4TJoLtZ3hK210hUjSao+70G3Go+KyJOgLNo tmiA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=20Z2x4G24xyURAl9cXhq48S/9JbwuRlV3UHAj6Zrwm4=; b=baPU7PDfoPcWtJegMVhLnbNhfrVQzvG63bICtv62MBsMpWC7vVesh+NgkSM+0i/GGu AYdB2Zgz4apEsUvViRoIdKwXpQJxbpJQNONZehMTkjHO3Rkx0F2kycbpZG3qevkBP11L N48sVZ9F8IAZS/iFn1mPflSIIlegmFyAUVXHhYAZ5p4hGGsqxIFXhgwrgOa1Ow+WkJfU eNGl9FOEl3aeC6XOyyegEmjVyvFpwE1zbfqRUDWn9bH+xuDPbywCttfFi27Gz3VL0wa2 90Ps8fz4KUYXZgKhNDWJYCE3CSKh8koVrN9/Bx46wmCHK+/qoRZaxv5gySDsm5VV7/Bj sjbg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id 189-v6si25999725pgi.254.2018.05.29.08.45.49; Tue, 29 May 2018 08:45:49 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S936952AbeE2Pps (ORCPT + 30 others); Tue, 29 May 2018 11:45:48 -0400 Received: from foss.arm.com ([217.140.101.70]:43530 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S964833AbeE2Pog (ORCPT ); Tue, 29 May 2018 11:44:36 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AA23415B2; Tue, 29 May 2018 08:44:35 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.72.51.249]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9CEB83F25D; Tue, 29 May 2018 08:44:34 -0700 (PDT) From: Mark Rutland To: linux-kernel@vger.kernel.org Cc: Mark Rutland , Boqun Feng , Peter Zijlstra , Will Deacon Subject: [PATCHv2 16/16] atomics/treewide: make conditional inc/dec ops optional Date: Tue, 29 May 2018 16:43:46 +0100 Message-Id: <20180529154346.3168-17-mark.rutland@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180529154346.3168-1-mark.rutland@arm.com> References: <20180529154346.3168-1-mark.rutland@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The conditional inc/dec ops differ for atomic_t and atomic64_t: * atomic_inc_unless_negative() is optional for atomic_t, and doesn't exist for atomic64_t. * atomic_dec_unless_positive() is optional for atomic_t, and doesn't exist for atomic64_t. * atomic_dec_if_positive() is optional for atomic_t, and is mandatory for atomic64_t. Let's make these consistently optional for both. At the same time, let's clean up the existing fallbacks to use atomic_try_cmpxchg(). The instrumented atomics are updated accordingly. There should be no functional change as a result of this patch.
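For reference, the shape of the reworked fallbacks can be sketched in user space as below (an approximation only: C11 atomic_compare_exchange_weak() stands in for the kernel's atomic_try_cmpxchg(), the helper names are local to the example, and memory-ordering and unlikely() details are omitted):

----
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Decrement if the old value is positive; returns the old value minus 1
 * even when nothing was decremented, mirroring atomic_dec_if_positive(). */
static int dec_if_positive(atomic_int *v)
{
	int dec, c = atomic_load(v);

	do {
		dec = c - 1;
		if (dec < 0)
			break;
	} while (!atomic_compare_exchange_weak(v, &c, dec));

	return dec;
}

/* Increment unless the value is negative; returns whether it incremented,
 * mirroring atomic_inc_unless_negative(). On CAS failure, 'c' is updated
 * with the current value, so the loop needs no explicit re-read. */
static bool inc_unless_negative(atomic_int *v)
{
	int c = atomic_load(v);

	do {
		if (c < 0)
			return false;
	} while (!atomic_compare_exchange_weak(v, &c, c + 1));

	return true;
}

int main(void)
{
	atomic_int v = 1;

	printf("%d\n", dec_if_positive(&v));     /* 0: decremented 1 -> 0 */
	printf("%d\n", dec_if_positive(&v));     /* -1: 0 left unchanged  */
	printf("%d\n", inc_unless_negative(&v)); /* 1: 0 -> 1             */
	return 0;
}
----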
Signed-off-by: Mark Rutland Cc: Boqun Feng Cc: Peter Zijlstra Cc: Will Deacon --- arch/alpha/include/asm/atomic.h | 1 + arch/arc/include/asm/atomic.h | 1 + arch/arm/include/asm/atomic.h | 1 + arch/arm64/include/asm/atomic.h | 2 + arch/ia64/include/asm/atomic.h | 16 ----- arch/parisc/include/asm/atomic.h | 23 -------- arch/powerpc/include/asm/atomic.h | 1 + arch/s390/include/asm/atomic.h | 17 ------ arch/sparc/include/asm/atomic_64.h | 1 + arch/x86/include/asm/atomic64_32.h | 1 + arch/x86/include/asm/atomic64_64.h | 18 ------ include/asm-generic/atomic-instrumented.h | 3 + include/asm-generic/atomic64.h | 1 + include/linux/atomic.h | 97 +++++++++++++++++++++++-------- 14 files changed, 85 insertions(+), 98 deletions(-) -- 2.11.0 diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 9c50cc512a78..81bf9d15dcc1 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -296,5 +296,6 @@ static inline long atomic64_dec_if_positive(atomic64_t *v) smp_mb(); return old - 1; } +#define atomic64_dec_if_positive atomic64_dec_if_positive #endif /* _ALPHA_ATOMIC_H */ diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index abffdc525b46..261c6a4d3f05 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -517,6 +517,7 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) return val; } +#define atomic64_dec_if_positive atomic64_dec_if_positive /** * atomic64_fetch_add_unless - add unless the number is a given value diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h index 5a58d061d3d2..884c241424fe 100644 --- a/arch/arm/include/asm/atomic.h +++ b/arch/arm/include/asm/atomic.h @@ -474,6 +474,7 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v) return result; } +#define atomic64_dec_if_positive atomic64_dec_if_positive static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u) diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index 078f785cd97f..9bca54dda75c 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -159,5 +159,7 @@ #define atomic64_andnot atomic64_andnot +#define atomic64_dec_if_positive atomic64_dec_if_positive + #endif #endif diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h index 46a15a974bed..206530d0751b 100644 --- a/arch/ia64/include/asm/atomic.h +++ b/arch/ia64/include/asm/atomic.h @@ -215,22 +215,6 @@ ATOMIC64_FETCH_OP(xor, ^) (cmpxchg(&((v)->counter), old, new)) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static __inline__ long atomic64_dec_if_positive(atomic64_t *v) -{ - long c, old, dec; - c = atomic64_read(v); - for (;;) { - dec = c - 1; - if (unlikely(dec < 0)) - break; - old = atomic64_cmpxchg((v), c, dec); - if (likely(old == c)) - break; - c = old; - } - return dec; -} - #define atomic_add(i,v) (void)atomic_add_return((i), (v)) #define atomic_sub(i,v) (void)atomic_sub_return((i), (v)) diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h index 10bc490327c1..118953d41763 100644 --- a/arch/parisc/include/asm/atomic.h +++ b/arch/parisc/include/asm/atomic.h @@ -223,29 +223,6 @@ atomic64_read(const atomic64_t *v) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -/* - * atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - * - * The function returns 
the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. - */ -static inline long atomic64_dec_if_positive(atomic64_t *v) -{ - long c, old, dec; - c = atomic64_read(v); - for (;;) { - dec = c - 1; - if (unlikely(dec < 0)) - break; - old = atomic64_cmpxchg((v), c, dec); - if (likely(old == c)) - break; - c = old; - } - return dec; -} - #endif /* !CONFIG_64BIT */ diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index ebaefdee4a57..a0156cb43d1f 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -488,6 +488,7 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) return t; } +#define atomic64_dec_if_positive atomic64_dec_if_positive #define atomic64_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) #define atomic64_cmpxchg_relaxed(v, o, n) \ diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index 376e64af951f..fd20ab5d4cf7 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -145,23 +145,6 @@ ATOMIC64_OPS(xor) #undef ATOMIC64_OPS -static inline long atomic64_dec_if_positive(atomic64_t *v) -{ - long c, old, dec; - - c = atomic64_read(v); - for (;;) { - dec = c - 1; - if (unlikely(dec < 0)) - break; - old = atomic64_cmpxchg((v), c, dec); - if (likely(old == c)) - break; - c = old; - } - return dec; -} - #define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v) #define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v) #define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v) diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 304865c7cdbb..6963482c81d8 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -62,5 +62,6 @@ static inline int atomic_xchg(atomic_t *v, int new) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) long atomic64_dec_if_positive(atomic64_t *v); +#define atomic64_dec_if_positive atomic64_dec_if_positive #endif /* !(__ARCH_SPARC64_ATOMIC__) */ diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 472c7af0ed48..ef959f02d070 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -254,6 +254,7 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v) return r; } +#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive static inline long long arch_atomic64_dec_if_positive(atomic64_t *v) { long long r; diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index 1b282272a801..849f1c566a11 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -191,24 +191,6 @@ static inline long arch_atomic64_xchg(atomic64_t *v, long new) return xchg(&v->counter, new); } -/* - * arch_atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic_t - * - * The function returns the old value of *v minus 1, even if - * the atomic variable, v, was not decremented. 
- */ -static inline long arch_atomic64_dec_if_positive(atomic64_t *v) -{ - s64 dec, c = arch_atomic64_read(v); - do { - dec = c - 1; - if (unlikely(dec < 0)) - break; - } while (!arch_atomic64_try_cmpxchg(v, &c, dec)); - return dec; -} - static inline void arch_atomic64_and(long i, atomic64_t *v) { asm volatile(LOCK_PREFIX "andq %1,%0" diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index 8c9887a0cb1d..1458de7ece87 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -243,11 +243,14 @@ static __always_inline s64 atomic64_inc_not_zero(atomic64_t *v) } #endif +#ifdef arch_atomic64_dec_if_positive +#define atomic64_dec_if_positive atomic64_dec_if_positive static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); return arch_atomic64_dec_if_positive(v); } +#endif #ifdef arch_atomic_dec_and_test #define atomic_dec_and_test atomic_dec_and_test diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h index 1810a0ca96ec..296f1b3c6661 100644 --- a/include/asm-generic/atomic64.h +++ b/include/asm-generic/atomic64.h @@ -50,6 +50,7 @@ ATOMIC64_OPS(xor) #undef ATOMIC64_OP extern long long atomic64_dec_if_positive(atomic64_t *v); +#define atomic64_dec_if_positive atomic64_dec_if_positive extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n); extern long long atomic64_xchg(atomic64_t *v, long long new); extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u); diff --git a/include/linux/atomic.h b/include/linux/atomic.h index d6ec2dc37057..9c1b0769a846 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -681,28 +681,30 @@ static inline int atomic_fetch_andnot_release(int i, atomic_t *v) #endif #ifndef atomic_inc_unless_negative -static inline int atomic_inc_unless_negative(atomic_t *p) +static inline bool atomic_inc_unless_negative(atomic_t *v) { - int v, v1; - for (v = 0; v >= 0; v = v1) { - v1 = atomic_cmpxchg(p, v, v + 1); - if (likely(v1 == v)) - return 1; - } - return 0; + int c = atomic_read(v); + + do { + if (unlikely(c < 0)) + return false; + } while (!atomic_try_cmpxchg(v, &c, c + 1)); + + return true; } #endif #ifndef atomic_dec_unless_positive -static inline int atomic_dec_unless_positive(atomic_t *p) +static inline bool atomic_dec_unless_positive(atomic_t *v) { - int v, v1; - for (v = 0; v <= 0; v = v1) { - v1 = atomic_cmpxchg(p, v, v - 1); - if (likely(v1 == v)) - return 1; - } - return 0; + int c = atomic_read(v); + + do { + if (unlikely(c > 0)) + return false; + } while (!atomic_try_cmpxchg(v, &c, c - 1)); + + return true; } #endif @@ -716,17 +718,14 @@ static inline int atomic_dec_unless_positive(atomic_t *p) #ifndef atomic_dec_if_positive static inline int atomic_dec_if_positive(atomic_t *v) { - int c, old, dec; - c = atomic_read(v); - for (;;) { + int dec, c = atomic_read(v); + + do { dec = c - 1; if (unlikely(dec < 0)) break; - old = atomic_cmpxchg((v), c, dec); - if (likely(old == c)) - break; - c = old; - } + } while (!atomic_try_cmpxchg(v, &c, dec)); + return dec; } #endif @@ -1287,6 +1286,56 @@ static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v } #endif +#ifndef atomic64_inc_unless_negative +static inline bool atomic64_inc_unless_negative(atomic64_t *v) +{ + long long c = atomic64_read(v); + + do { + if (unlikely(c < 0)) + return false; + } while (!atomic64_try_cmpxchg(v, &c, c + 1)); + + return true; +} +#endif + +#ifndef 
atomic64_dec_unless_positive +static inline bool atomic64_dec_unless_positive(atomic64_t *v) +{ + long long c = atomic64_read(v); + + do { + if (unlikely(c > 0)) + return false; + } while (!atomic64_try_cmpxchg(v, &c, c - 1)); + + return true; +} +#endif + +/* + * atomic64_dec_if_positive - decrement by 1 if old value positive + * @v: pointer of type atomic64_t + * + * The function returns the old value of *v minus 1, even if + * the atomic64 variable, v, was not decremented. + */ +#ifndef atomic64_dec_if_positive +static inline long long atomic64_dec_if_positive(atomic64_t *v) +{ + long long dec, c = atomic64_read(v); + + do { + dec = c - 1; + if (unlikely(dec < 0)) + break; + } while (!atomic64_try_cmpxchg(v, &c, dec)); + + return dec; +} +#endif + #define atomic64_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c)) #include
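To round this off, here is a small usage sketch of the dec-if-positive convention provided by the generic atomic64 fallback above: because the return value is old-minus-1 even when no decrement happened, a negative result cleanly signals "nothing left". Again a user-space analogue with C11 atomics; the slot-pool naming is invented for the example:

----
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Same loop shape as the generic fallback above, on a C11 atomic_long. */
static long dec_if_positive(atomic_long *v)
{
	long dec, c = atomic_load(v);

	do {
		dec = c - 1;
		if (dec < 0)
			break;
	} while (!atomic_compare_exchange_weak(v, &c, dec));

	return dec;
}

/* Take one slot from the pool if any remain; a negative result means
 * the pool was already empty and was left untouched. */
static bool try_take_slot(atomic_long *pool)
{
	return dec_if_positive(pool) >= 0;
}

int main(void)
{
	atomic_long pool = 2;

	printf("%d\n", try_take_slot(&pool)); /* 1: 2 -> 1            */
	printf("%d\n", try_take_slot(&pool)); /* 1: 1 -> 0            */
	printf("%d\n", try_take_slot(&pool)); /* 0: empty, stays at 0 */
	return 0;
}
----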