From patchwork Mon Jun 18 10:19:08 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 138878
From: Mark Rutland
To: linux-kernel@vger.kernel.org, will.deacon@arm.com, peterz@infradead.org,
    boqun.feng@gmail.com
Cc: mingo@kernel.org, Mark Rutland, Arnd Bergmann, Richard Henderson,
    Ivan Kokshaysky, Matt Turner, Vineet Gupta, Russell King,
    Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
    Palmer Dabbelt, Albert Ou
Subject: [PATCHv3 07/18] atomics: prepare for atomic64_fetch_add_unless()
Date: Mon, 18 Jun 2018 11:19:08 +0100
Message-Id: <20180618101919.51973-8-mark.rutland@arm.com>
In-Reply-To: <20180618101919.51973-1-mark.rutland@arm.com>
References: <20180618101919.51973-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Currently architectures must implement atomic_fetch_add_unless(), with
common code providing atomic_add_unless(). Architectures must also
implement atomic64_add_unless() directly, with no corresponding
atomic64_fetch_add_unless().

This divergence is unfortunate, and means that the APIs for atomic_t,
atomic64_t, and atomic_long_t differ.

In preparation for unifying things, with architectures providing
atomic64_fetch_add_unless(), this patch adds a generic
atomic64_add_unless() which will use atomic64_fetch_add_unless(). The
instrumented atomics are updated to take this case into account.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Acked-by: Peter Zijlstra (Intel)
Cc: Boqun Feng
Cc: Will Deacon
Cc: Arnd Bergmann
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Vineet Gupta
Cc: Russell King
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Palmer Dabbelt
Cc: Albert Ou
---
 include/asm-generic/atomic-instrumented.h |  9 +++++++++
 include/linux/atomic.h                    | 16 ++++++++++++++++
 2 files changed, 25 insertions(+)

-- 
2.11.0

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 1f9b2a767d3c..ab011e1a02fc 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -93,11 +93,20 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #endif
 
+#ifdef arch_atomic64_fetch_add_unless
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+static __always_inline int atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	kasan_check_write(v, sizeof(*v));
+	return arch_atomic64_fetch_add_unless(v, a, u);
+}
+#else
 static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
+#endif
 
 static __always_inline void atomic_inc(atomic_t *v)
 {
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index b89ba36cab94..3c03de648007 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -1043,6 +1043,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #endif /* atomic64_try_cmpxchg */
 
 /**
+ * atomic64_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, if @v was not already @u.
+ * Returns true if the addition was done.
+ */
+#ifdef atomic64_fetch_add_unless
+static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
+{
+	return atomic64_fetch_add_unless(v, a, u) != u;
+}
+#endif
+
+/**
  * atomic64_inc_not_zero - increment unless the number is zero
  * @v: pointer of type atomic64_t
  *
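[Editor's note] For readers unfamiliar with the API being generalized: the
semantics of fetch_add_unless, and the add_unless wrapper the patch derives
from it, can be sketched in plain C. This is a hedged userspace analogue
using GCC/Clang __atomic builtins, not the kernel implementation; the
my_atomic64_* names are invented here for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Userspace sketch of the fetch_add_unless pattern: atomically add @a
 * to *v unless *v equals @u, and return the value *v held beforehand. */
static int64_t my_atomic64_fetch_add_unless(int64_t *v, int64_t a, int64_t u)
{
	int64_t c = __atomic_load_n(v, __ATOMIC_RELAXED);

	do {
		if (c == u)
			break;	/* value is @u: do not add */
	} while (!__atomic_compare_exchange_n(v, &c, c + a, false,
					      __ATOMIC_SEQ_CST,
					      __ATOMIC_RELAXED));

	return c;	/* old value; a return of @u means no add happened */
}

/* The generic atomic64_add_unless() the patch adds reduces to exactly
 * this comparison against the fetched old value. */
static bool my_atomic64_add_unless(int64_t *v, int64_t a, int64_t u)
{
	return my_atomic64_fetch_add_unless(v, a, u) != u;
}
```

This illustrates why providing fetch_add_unless is sufficient: the boolean
add_unless variant carries strictly less information (the old value is
discarded), so common code can derive it once for all architectures.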