From patchwork Wed May 23 13:35:25 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 136654
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon, Arnd Bergmann,
    Richard Henderson, Ivan Kokshaysky, Matt Turner, Vineet Gupta,
    Russell King, Benjamin Herrenschmidt, Paul Mackerras,
    Michael Ellerman, Palmer Dabbelt, Albert Ou
Subject: [PATCH 05/13] atomics: prepare for atomic64_fetch_add_unless()
Date: Wed, 23 May 2018 14:35:25 +0100
Message-Id: <20180523133533.1076-6-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180523133533.1076-1-mark.rutland@arm.com>
References: <20180523133533.1076-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Currently all architectures must implement atomic_fetch_add_unless(),
with common code providing atomic_add_unless(). Architectures must also
implement atomic64_add_unless() directly, with no corresponding
atomic64_fetch_add_unless().

This divergence is unfortunate, and means that the APIs for atomic_t,
atomic64_t, and atomic_long_t differ.

In preparation for unifying things, with architectures providing
atomic64_fetch_add_unless(), this patch adds a generic
atomic64_add_unless() which will use atomic64_fetch_add_unless(). The
instrumented atomics are updated to take this case into account.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: Arnd Bergmann
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Vineet Gupta
Cc: Russell King
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Palmer Dabbelt
Cc: Albert Ou
---
 include/asm-generic/atomic-instrumented.h |  9 +++++++++
 include/linux/atomic.h                    | 16 ++++++++++++++++
 2 files changed, 25 insertions(+)

-- 
2.11.0

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 6e0818c182e2..e22d7e5f4ce7 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -93,11 +93,20 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #endif
 
+#ifdef arch_atomic64_fetch_add_unless
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	kasan_check_write(v, sizeof(*v));
+	return arch_atomic64_fetch_add_unless(v, a, u);
+}
+#else
 static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
+#endif
 
 static __always_inline void atomic_inc(atomic_t *v)
 {
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 1105c0b37f27..8d93209052e1 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -1072,6 +1072,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #endif /* atomic64_try_cmpxchg */
 
 /**
+ * atomic64_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as @v was not already @u.
+ * Returns non-zero if @v was not @u, and zero otherwise.
+ */
+#ifdef atomic64_fetch_add_unless
+static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
+{
+	return atomic64_fetch_add_unless(v, a, u) != u;
+}
+#endif
+
+/**
  * atomic64_inc_not_zero - increment unless the number is zero
  * @v: pointer of type atomic64_t
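For reviewers unfamiliar with the fetch_add_unless idiom, here is a minimal
userspace sketch of the semantics the generic atomic64_add_unless() above
relies on. This is not kernel code: it models arch_atomic64_fetch_add_unless()
with C11 atomics and a cmpxchg loop (the `model_` names are hypothetical),
purely to illustrate why `fetch_add_unless(v, a, u) != u` is equivalent to the
old bool-returning add_unless().

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Model of atomic64_fetch_add_unless(): atomically add @a to @v, unless
 * @v is already @u. Returns the *old* value of @v in either case, using
 * the usual compare-exchange retry loop.
 */
static int64_t model_fetch_add_unless(_Atomic int64_t *v, int64_t a, int64_t u)
{
	int64_t c = atomic_load(v);

	do {
		if (c == u)
			break;	/* value is @u: do not add, return it */
	} while (!atomic_compare_exchange_weak(v, &c, c + a));

	/* On cmpxchg failure, c was reloaded with the current value. */
	return c;
}

/*
 * atomic64_add_unless() then falls out as a thin wrapper: the add happened
 * iff the old value was not @u.
 */
static bool model_add_unless(_Atomic int64_t *v, int64_t a, int64_t u)
{
	return model_fetch_add_unless(v, a, u) != u;
}
```

The equivalence holds because the fetch variant returns the old value: when
that old value equals @u no add was performed, and in every other case the
add was performed exactly once.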