From patchwork Tue May 29 18:07:45 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137203
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon, Arnd Bergmann
Subject: [PATCH 6/7] atomics: switch to generated atomic-long
Date: Tue, 29 May 2018 19:07:45 +0100
Message-Id: <20180529180746.29684-7-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

As a step towards ensuring the atomic* APIs are consistent, let's switch
to wrappers generated by gen-atomic-long.sh, using the same table that
gen-atomic-fallbacks.sh uses to fill in gaps in the atomic_* and
atomic64_* APIs.

These are checked in rather than generated with Kbuild, since:

* This allows inspection of the atomics with git grep and ctags on a
  pristine tree, which Linus strongly prefers being able to do.

* The fallbacks are not expected to change very often, and are not
  affected by machine details or configuration options, so regenerating
  them for *every* build is somewhat wasteful.

* These are included by files required *very* early in the build
  process (e.g. for generating bounds.h), and we'd rather not
  complicate the top-level Kbuild file.

Other than *_INIT() and *_cond_read_acquire(), all API functions are
implemented as static inline C functions, ensuring consistent type
promotion and/or truncation without requiring explicit casts to be
applied to parameters or return values.

Since we typedef atomic_long_t to either atomic_t or atomic64_t, we know
these types are equivalent, and don't require explicit casts between
them. However, as the try_cmpxchg*() functions take a pointer for the
'old' parameter, which may be an int or s64, an explicit cast is
generated for this.

There should be no functional change as a result of this patch (i.e.
existing code should not be affected). However, this introduces a number
of functions into the atomic_long_* API, bringing it into line with the
atomic_* and atomic64_* APIs.
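To illustrate the shape of the generated wrappers, on 64-bit the header
below boils down to trivial static inlines such as:

  static inline long
  atomic_long_add_return(long i, atomic_long_t *v)
  {
  	return atomic64_add_return(i, v);
  }

  static inline bool
  atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
  {
  	return atomic64_try_cmpxchg(v, (s64 *)old, new);
  }

with the 32-bit side delegating to atomic_add_return() and
atomic_try_cmpxchg() (casting 'old' to int *) in the same way.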
Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: Arnd Bergmann
---
 include/asm-generic/atomic-long.h | 1150 ++++++++++++++++++++++++++++++-------
 1 file changed, 955 insertions(+), 195 deletions(-)

Note: this is generated code. Please see patch 3 for the code which
generated it, along with rationale in the cover letter.

Mark.

-- 
2.11.0

diff --git a/include/asm-generic/atomic-long.h b/include/asm-generic/atomic-long.h
index 34a028a7bcc5..e007bf80f693 100644
--- a/include/asm-generic/atomic-long.h
+++ b/include/asm-generic/atomic-long.h
@@ -1,250 +1,1010 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by ./scripts/atomic/gen-atomic-long.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
 #ifndef _ASM_GENERIC_ATOMIC_LONG_H
 #define _ASM_GENERIC_ATOMIC_LONG_H
-/*
- * Copyright (C) 2005 Silicon Graphics, Inc.
- *	Christoph Lameter
- *
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
- */
 
 #include <asm/types.h>
 
-/*
- * Suppport for atomic_long_t
- *
- * Casts for parameters are avoided for existing atomic functions in order to
- * avoid issues with cast-as-lval under gcc 4.x and other limitations that the
- * macros of a platform may have.
- */
+#ifdef CONFIG_64BIT
+typedef atomic64_t atomic_long_t;
+#define ATOMIC_LONG_INIT(i)		ATOMIC64_INIT(i)
+#define atomic_long_cond_read_acquire	atomic64_cond_read_acquire
+#else
+typedef atomic_t atomic_long_t;
+#define ATOMIC_LONG_INIT(i)		ATOMIC_INIT(i)
+#define atomic_long_cond_read_acquire	atomic_cond_read_acquire
+#endif
 
-#if BITS_PER_LONG == 64
+#ifdef CONFIG_64BIT
 
-typedef atomic64_t atomic_long_t;
+static inline long
+atomic_long_read(const atomic_long_t *v)
+{
+	return atomic64_read(v);
+}
 
-#define ATOMIC_LONG_INIT(i)	ATOMIC64_INIT(i)
-#define ATOMIC_LONG_PFX(x)	atomic64 ## x
+static inline long
+atomic_long_read_acquire(const atomic_long_t *v)
+{
+	return atomic64_read_acquire(v);
+}
 
-#else
+static inline void
+atomic_long_set(atomic_long_t *v, long i)
+{
+	atomic64_set(v, i);
+}
 
-typedef atomic_t atomic_long_t;
+static inline void
+atomic_long_set_release(atomic_long_t *v, long i)
+{
+	atomic64_set_release(v, i);
+}
 
-#define ATOMIC_LONG_INIT(i)	ATOMIC_INIT(i)
-#define ATOMIC_LONG_PFX(x)	atomic ## x
+static inline void
+atomic_long_add(long i, atomic_long_t *v)
+{
+	atomic64_add(i, v);
+}
 
-#endif
+static inline long
+atomic_long_add_return(long i, atomic_long_t *v)
+{
+	return atomic64_add_return(i, v);
+}
+
+static inline long
+atomic_long_add_return_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_add_return_acquire(i, v);
+}
+
+static inline long
+atomic_long_add_return_release(long i, atomic_long_t *v)
+{
+	return atomic64_add_return_release(i, v);
+}
+
+static inline long
+atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_add_return_relaxed(i, v);
+}
+
+static inline long
+atomic_long_fetch_add(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_release(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add_relaxed(i, v);
+}
+
+static inline void
+atomic_long_sub(long i, atomic_long_t *v)
+{
+	atomic64_sub(i, v);
+}
+
+static inline long
+atomic_long_sub_return(long i, atomic_long_t *v)
+{
+	return atomic64_sub_return(i, v);
+}
+
+static inline long
+atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_sub_return_acquire(i, v);
+}
+
+static inline long
+atomic_long_sub_return_release(long i, atomic_long_t *v)
+{
+	return atomic64_sub_return_release(i, v);
+}
+
+static inline long
+atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_sub_return_relaxed(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_sub(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_sub_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_sub_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_sub_relaxed(i, v);
+}
+
+static inline void
+atomic_long_inc(atomic_long_t *v)
+{
+	atomic64_inc(v);
+}
+
+static inline long
+atomic_long_inc_return(atomic_long_t *v)
+{
+	return atomic64_inc_return(v);
+}
+
+static inline long
+atomic_long_inc_return_acquire(atomic_long_t *v)
+{
+	return atomic64_inc_return_acquire(v);
+}
+
+static inline long
+atomic_long_inc_return_release(atomic_long_t *v)
+{
+	return atomic64_inc_return_release(v);
+}
+
+static inline long
+atomic_long_inc_return_relaxed(atomic_long_t *v)
+{
+	return atomic64_inc_return_relaxed(v);
+}
+
+static inline long
+atomic_long_fetch_inc(atomic_long_t *v)
+{
+	return atomic64_fetch_inc(v);
+}
+
+static inline long
+atomic_long_fetch_inc_acquire(atomic_long_t *v)
+{
+	return atomic64_fetch_inc_acquire(v);
+}
+
+static inline long
+atomic_long_fetch_inc_release(atomic_long_t *v)
+{
+	return atomic64_fetch_inc_release(v);
+}
+
+static inline long
+atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+{
+	return atomic64_fetch_inc_relaxed(v);
+}
+
+static inline void
+atomic_long_dec(atomic_long_t *v)
+{
+	atomic64_dec(v);
+}
+
+static inline long
+atomic_long_dec_return(atomic_long_t *v)
+{
+	return atomic64_dec_return(v);
+}
+
+static inline long
+atomic_long_dec_return_acquire(atomic_long_t *v)
+{
+	return atomic64_dec_return_acquire(v);
+}
+
+static inline long
+atomic_long_dec_return_release(atomic_long_t *v)
+{
+	return atomic64_dec_return_release(v);
+}
+
+static inline long
+atomic_long_dec_return_relaxed(atomic_long_t *v)
+{
+	return atomic64_dec_return_relaxed(v);
+}
+
+static inline long
+atomic_long_fetch_dec(atomic_long_t *v)
+{
+	return atomic64_fetch_dec(v);
+}
+
+static inline long
+atomic_long_fetch_dec_acquire(atomic_long_t *v)
+{
+	return atomic64_fetch_dec_acquire(v);
+}
+
+static inline long
+atomic_long_fetch_dec_release(atomic_long_t *v)
+{
+	return atomic64_fetch_dec_release(v);
+}
+
+static inline long
+atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+{
+	return atomic64_fetch_dec_relaxed(v);
+}
+
+static inline void
+atomic_long_and(long i, atomic_long_t *v)
+{
+	atomic64_and(i, v);
+}
+
+static inline long
+atomic_long_fetch_and(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_and(i, v);
+}
+
+static inline long
+atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_and_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_and_release(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_and_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_and_relaxed(i, v);
+}
+
+static inline void
+atomic_long_andnot(long i, atomic_long_t *v)
+{
+	atomic64_andnot(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_andnot(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_andnot_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_andnot_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_andnot_relaxed(i, v);
+}
+
+static inline void
+atomic_long_or(long i, atomic_long_t *v)
+{
+	atomic64_or(i, v);
+}
+
+static inline long
+atomic_long_fetch_or(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_or(i, v);
+}
+
+static inline long
+atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_or_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_or_release(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_or_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_or_relaxed(i, v);
+}
+
+static inline void
+atomic_long_xor(long i, atomic_long_t *v)
+{
+	atomic64_xor(i, v);
+}
 
-#define ATOMIC_LONG_READ_OP(mo) \
-static inline long atomic_long_read##mo(const atomic_long_t *l) \
-{ \
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \
-	\
-	return (long)ATOMIC_LONG_PFX(_read##mo)(v); \
-}
-ATOMIC_LONG_READ_OP()
-ATOMIC_LONG_READ_OP(_acquire)
-
-#undef ATOMIC_LONG_READ_OP
-
-#define ATOMIC_LONG_SET_OP(mo) \
-static inline void atomic_long_set##mo(atomic_long_t *l, long i) \
-{ \
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \
-	\
-	ATOMIC_LONG_PFX(_set##mo)(v, i); \
-}
-ATOMIC_LONG_SET_OP()
-ATOMIC_LONG_SET_OP(_release)
-
-#undef ATOMIC_LONG_SET_OP
-
-#define ATOMIC_LONG_ADD_SUB_OP(op, mo) \
-static inline long \
-atomic_long_##op##_return##mo(long i, atomic_long_t *l) \
-{ \
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \
-	\
-	return (long)ATOMIC_LONG_PFX(_##op##_return##mo)(i, v); \
-}
-ATOMIC_LONG_ADD_SUB_OP(add,)
-ATOMIC_LONG_ADD_SUB_OP(add, _relaxed)
-ATOMIC_LONG_ADD_SUB_OP(add, _acquire)
-ATOMIC_LONG_ADD_SUB_OP(add, _release)
-ATOMIC_LONG_ADD_SUB_OP(sub,)
-ATOMIC_LONG_ADD_SUB_OP(sub, _relaxed)
-ATOMIC_LONG_ADD_SUB_OP(sub, _acquire)
-ATOMIC_LONG_ADD_SUB_OP(sub, _release)
-
-#undef ATOMIC_LONG_ADD_SUB_OP
-
-#define atomic_long_cmpxchg_relaxed(l, old, new) \
-	(ATOMIC_LONG_PFX(_cmpxchg_relaxed)((ATOMIC_LONG_PFX(_t) *)(l), \
-		(old), (new)))
-#define atomic_long_cmpxchg_acquire(l, old, new) \
-	(ATOMIC_LONG_PFX(_cmpxchg_acquire)((ATOMIC_LONG_PFX(_t) *)(l), \
-		(old), (new)))
-#define atomic_long_cmpxchg_release(l, old, new) \
-	(ATOMIC_LONG_PFX(_cmpxchg_release)((ATOMIC_LONG_PFX(_t) *)(l), \
-		(old), (new)))
-#define atomic_long_cmpxchg(l, old, new) \
-	(ATOMIC_LONG_PFX(_cmpxchg)((ATOMIC_LONG_PFX(_t) *)(l), (old), (new)))
-
-#define atomic_long_xchg_relaxed(v, new) \
-	(ATOMIC_LONG_PFX(_xchg_relaxed)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
-#define atomic_long_xchg_acquire(v, new) \
-	(ATOMIC_LONG_PFX(_xchg_acquire)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
-#define atomic_long_xchg_release(v, new) \
-	(ATOMIC_LONG_PFX(_xchg_release)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
-#define atomic_long_xchg(v, new) \
-	(ATOMIC_LONG_PFX(_xchg)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
-
-static __always_inline void atomic_long_inc(atomic_long_t *l)
-{
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
-
-	ATOMIC_LONG_PFX(_inc)(v);
+static inline long
+atomic_long_fetch_xor(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_xor(i, v);
 }
-
-static __always_inline void atomic_long_dec(atomic_long_t *l)
+
+static inline long
+atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
 {
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
+	return atomic64_fetch_xor_acquire(i, v);
+}
 
-	ATOMIC_LONG_PFX(_dec)(v);
+static inline long
+atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_xor_release(i, v);
 }
 
-#define ATOMIC_LONG_FETCH_OP(op, mo) \
-static inline long \
-atomic_long_fetch_##op##mo(long i, atomic_long_t *l) \
-{ \
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \
-	\
-	return (long)ATOMIC_LONG_PFX(_fetch_##op##mo)(i, v); \
+static inline long
+atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_xor_relaxed(i, v);
 }
-ATOMIC_LONG_FETCH_OP(add, )
-ATOMIC_LONG_FETCH_OP(add, _relaxed)
-ATOMIC_LONG_FETCH_OP(add, _acquire)
-ATOMIC_LONG_FETCH_OP(add, _release)
-ATOMIC_LONG_FETCH_OP(sub, )
-ATOMIC_LONG_FETCH_OP(sub, _relaxed)
-ATOMIC_LONG_FETCH_OP(sub, _acquire)
-ATOMIC_LONG_FETCH_OP(sub, _release)
-ATOMIC_LONG_FETCH_OP(and, )
-ATOMIC_LONG_FETCH_OP(and, _relaxed)
-ATOMIC_LONG_FETCH_OP(and, _acquire)
-ATOMIC_LONG_FETCH_OP(and, _release)
-ATOMIC_LONG_FETCH_OP(andnot, )
-ATOMIC_LONG_FETCH_OP(andnot, _relaxed)
-ATOMIC_LONG_FETCH_OP(andnot, _acquire)
-ATOMIC_LONG_FETCH_OP(andnot, _release)
-ATOMIC_LONG_FETCH_OP(or, )
-ATOMIC_LONG_FETCH_OP(or, _relaxed)
-ATOMIC_LONG_FETCH_OP(or, _acquire)
-ATOMIC_LONG_FETCH_OP(or, _release)
-ATOMIC_LONG_FETCH_OP(xor, )
-ATOMIC_LONG_FETCH_OP(xor, _relaxed)
-ATOMIC_LONG_FETCH_OP(xor, _acquire)
-ATOMIC_LONG_FETCH_OP(xor, _release)
+static inline long
+atomic_long_xchg(atomic_long_t *v, long i)
+{
+	return atomic64_xchg(v, i);
+}
 
-#undef ATOMIC_LONG_FETCH_OP
+static inline long
+atomic_long_xchg_acquire(atomic_long_t *v, long i)
+{
+	return atomic64_xchg_acquire(v, i);
+}
 
-#define ATOMIC_LONG_FETCH_INC_DEC_OP(op, mo) \
-static inline long \
-atomic_long_fetch_##op##mo(atomic_long_t *l) \
-{ \
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \
-	\
-	return (long)ATOMIC_LONG_PFX(_fetch_##op##mo)(v); \
+static inline long
+atomic_long_xchg_release(atomic_long_t *v, long i)
+{
+	return atomic64_xchg_release(v, i);
 }
-ATOMIC_LONG_FETCH_INC_DEC_OP(inc,)
-ATOMIC_LONG_FETCH_INC_DEC_OP(inc, _relaxed)
-ATOMIC_LONG_FETCH_INC_DEC_OP(inc, _acquire)
-ATOMIC_LONG_FETCH_INC_DEC_OP(inc, _release)
-ATOMIC_LONG_FETCH_INC_DEC_OP(dec,)
-ATOMIC_LONG_FETCH_INC_DEC_OP(dec, _relaxed)
-ATOMIC_LONG_FETCH_INC_DEC_OP(dec, _acquire)
-ATOMIC_LONG_FETCH_INC_DEC_OP(dec, _release)
+static inline long
+atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+{
+	return atomic64_xchg_relaxed(v, i);
+}
 
-#undef ATOMIC_LONG_FETCH_INC_DEC_OP
+static inline long
+atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+{
+	return atomic64_cmpxchg(v, old, new);
+}
 
-#define ATOMIC_LONG_OP(op) \
-static __always_inline void \
-atomic_long_##op(long i, atomic_long_t *l) \
-{ \
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \
-	\
-	ATOMIC_LONG_PFX(_##op)(i, v); \
+static inline long
+atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+{
+	return atomic64_cmpxchg_acquire(v, old, new);
 }
-ATOMIC_LONG_OP(add)
-ATOMIC_LONG_OP(sub)
-ATOMIC_LONG_OP(and)
-ATOMIC_LONG_OP(andnot)
-ATOMIC_LONG_OP(or)
-ATOMIC_LONG_OP(xor)
+static inline long
+atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+{
+	return atomic64_cmpxchg_release(v, old, new);
+}
 
-#undef ATOMIC_LONG_OP
+static inline long
+atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+{
+	return atomic64_cmpxchg_relaxed(v, old, new);
+}
 
-static inline int atomic_long_sub_and_test(long i, atomic_long_t *l)
+static inline bool
+atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
 {
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
+	return atomic64_try_cmpxchg(v, (s64 *)old, new);
+}
 
-	return ATOMIC_LONG_PFX(_sub_and_test)(i, v);
+static inline bool
+atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+{
+	return atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
 }
 
-static inline int atomic_long_dec_and_test(atomic_long_t *l)
+static inline bool
+atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
 {
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
+	return atomic64_try_cmpxchg_release(v, (s64 *)old, new);
+}
 
-	return ATOMIC_LONG_PFX(_dec_and_test)(v);
+static inline bool
+atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+{
+	return atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
 }
 
-static inline int atomic_long_inc_and_test(atomic_long_t *l)
+static inline bool
+atomic_long_sub_and_test(long i, atomic_long_t *v)
 {
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
+	return atomic64_sub_and_test(i, v);
+}
 
-	return ATOMIC_LONG_PFX(_inc_and_test)(v);
+static inline bool
+atomic_long_dec_and_test(atomic_long_t *v)
+{
+	return atomic64_dec_and_test(v);
 }
 
-static inline int atomic_long_add_negative(long i, atomic_long_t *l)
+static inline bool
+atomic_long_inc_and_test(atomic_long_t *v)
 {
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
+	return atomic64_inc_and_test(v);
+}
 
-	return ATOMIC_LONG_PFX(_add_negative)(i, v);
+static inline bool
+atomic_long_add_negative(long i, atomic_long_t *v)
+{
+	return atomic64_add_negative(i, v);
 }
 
-#define ATOMIC_LONG_INC_DEC_OP(op, mo) \
-static inline long \
-atomic_long_##op##_return##mo(atomic_long_t *l) \
-{ \
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \
-	\
-	return (long)ATOMIC_LONG_PFX(_##op##_return##mo)(v); \
+static inline long
+atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+{
+	return atomic64_fetch_add_unless(v, a, u);
}
-ATOMIC_LONG_INC_DEC_OP(inc,)
-ATOMIC_LONG_INC_DEC_OP(inc, _relaxed)
-ATOMIC_LONG_INC_DEC_OP(inc, _acquire)
-ATOMIC_LONG_INC_DEC_OP(inc, _release)
-ATOMIC_LONG_INC_DEC_OP(dec,)
-ATOMIC_LONG_INC_DEC_OP(dec, _relaxed)
-ATOMIC_LONG_INC_DEC_OP(dec, _acquire)
-ATOMIC_LONG_INC_DEC_OP(dec, _release)
-#undef ATOMIC_LONG_INC_DEC_OP
+static inline bool
+atomic_long_add_unless(atomic_long_t *v, long a, long u)
+{
+	return atomic64_add_unless(v, a, u);
+}
+
+static inline bool
+atomic_long_inc_not_zero(atomic_long_t *v)
+{
+	return atomic64_inc_not_zero(v);
+}
+
+static inline bool
+atomic_long_inc_unless_negative(atomic_long_t *v)
+{
+	return atomic64_inc_unless_negative(v);
+}
+
+static inline bool
+atomic_long_dec_unless_positive(atomic_long_t *v)
+{
+	return atomic64_dec_unless_positive(v);
+}
+
+static inline long
+atomic_long_dec_if_positive(atomic_long_t *v)
+{
+	return atomic64_dec_if_positive(v);
+}
 
-static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
+#else /* CONFIG_64BIT */
+
+static inline long
+atomic_long_read(const atomic_long_t *v)
+{
+	return atomic_read(v);
+}
+
+static inline long
+atomic_long_read_acquire(const atomic_long_t *v)
+{
+	return atomic_read_acquire(v);
+}
+
+static inline void
+atomic_long_set(atomic_long_t *v, long i)
 {
-	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
+	atomic_set(v, i);
+}
 
-	return (long)ATOMIC_LONG_PFX(_add_unless)(v, a, u);
+static inline void
+atomic_long_set_release(atomic_long_t *v, long i)
+{
+	atomic_set_release(v, i);
 }
 
-#define atomic_long_inc_not_zero(l) \
-	ATOMIC_LONG_PFX(_inc_not_zero)((ATOMIC_LONG_PFX(_t) *)(l))
+static inline void
+atomic_long_add(long i, atomic_long_t *v)
+{
+	atomic_add(i, v);
+}
+
+static inline long
+atomic_long_add_return(long i, atomic_long_t *v)
+{
+	return atomic_add_return(i, v);
+}
 
-#define atomic_long_cond_read_acquire(v, c) \
-	ATOMIC_LONG_PFX(_cond_read_acquire)((ATOMIC_LONG_PFX(_t) *)(v), (c))
+static inline long
+atomic_long_add_return_acquire(long i, atomic_long_t *v)
+{
+	return atomic_add_return_acquire(i, v);
+}
+
+static inline long
+atomic_long_add_return_release(long i, atomic_long_t *v)
+{
+	return atomic_add_return_release(i, v);
+}
+
+static inline long
+atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_add_return_relaxed(i, v);
+}
+
+static inline long
+atomic_long_fetch_add(long i, atomic_long_t *v)
+{
+	return atomic_fetch_add(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+{
+	return atomic_fetch_add_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_release(long i, atomic_long_t *v)
+{
+	return atomic_fetch_add_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_fetch_add_relaxed(i, v);
+}
+
+static inline void
+atomic_long_sub(long i, atomic_long_t *v)
+{
+	atomic_sub(i, v);
+}
+
+static inline long
+atomic_long_sub_return(long i, atomic_long_t *v)
+{
+	return atomic_sub_return(i, v);
+}
+
+static inline long
+atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+{
+	return atomic_sub_return_acquire(i, v);
+}
+
+static inline long
+atomic_long_sub_return_release(long i, atomic_long_t *v)
+{
+	return atomic_sub_return_release(i, v);
+}
+
+static inline long
+atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_sub_return_relaxed(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub(long i, atomic_long_t *v)
+{
+	return atomic_fetch_sub(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+{
+	return atomic_fetch_sub_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+{
+	return atomic_fetch_sub_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_fetch_sub_relaxed(i, v);
+}
+
+static inline void
+atomic_long_inc(atomic_long_t *v)
+{
+	atomic_inc(v);
+}
+
+static inline long
+atomic_long_inc_return(atomic_long_t *v)
+{
+	return atomic_inc_return(v);
+}
+
+static inline long
+atomic_long_inc_return_acquire(atomic_long_t *v)
+{
+	return atomic_inc_return_acquire(v);
+}
+
+static inline long
+atomic_long_inc_return_release(atomic_long_t *v)
+{
+	return atomic_inc_return_release(v);
+}
+
+static inline long
+atomic_long_inc_return_relaxed(atomic_long_t *v)
+{
+	return atomic_inc_return_relaxed(v);
+}
+
+static inline long
+atomic_long_fetch_inc(atomic_long_t *v)
+{
+	return atomic_fetch_inc(v);
+}
+
+static inline long
+atomic_long_fetch_inc_acquire(atomic_long_t *v)
+{
+	return atomic_fetch_inc_acquire(v);
+}
+
+static inline long
+atomic_long_fetch_inc_release(atomic_long_t *v)
+{
+	return atomic_fetch_inc_release(v);
+}
+
+static inline long
+atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+{
+	return atomic_fetch_inc_relaxed(v);
+}
+
+static inline void
+atomic_long_dec(atomic_long_t *v)
+{
+	atomic_dec(v);
+}
+
+static inline long
+atomic_long_dec_return(atomic_long_t *v)
+{
+	return atomic_dec_return(v);
+}
+
+static inline long
+atomic_long_dec_return_acquire(atomic_long_t *v)
+{
+	return atomic_dec_return_acquire(v);
+}
+
+static inline long
+atomic_long_dec_return_release(atomic_long_t *v)
+{
+	return atomic_dec_return_release(v);
+}
+
+static inline long
+atomic_long_dec_return_relaxed(atomic_long_t *v)
+{
+	return atomic_dec_return_relaxed(v);
+}
+
+static inline long
+atomic_long_fetch_dec(atomic_long_t *v)
+{
+	return atomic_fetch_dec(v);
+}
+
+static inline long
+atomic_long_fetch_dec_acquire(atomic_long_t *v)
+{
+	return atomic_fetch_dec_acquire(v);
+}
+
+static inline long
+atomic_long_fetch_dec_release(atomic_long_t *v)
+{
+	return atomic_fetch_dec_release(v);
+}
+
+static inline long
+atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+{
+	return atomic_fetch_dec_relaxed(v);
+}
+
+static inline void
+atomic_long_and(long i, atomic_long_t *v)
+{
+	atomic_and(i, v);
+}
+
+static inline long
+atomic_long_fetch_and(long i, atomic_long_t *v)
+{
+	return atomic_fetch_and(i, v);
+}
+
+static inline long
+atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+{
+	return atomic_fetch_and_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_and_release(long i, atomic_long_t *v)
+{
+	return atomic_fetch_and_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_fetch_and_relaxed(i, v);
+}
+
+static inline void
+atomic_long_andnot(long i, atomic_long_t *v)
+{
+	atomic_andnot(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot(long i, atomic_long_t *v)
+{
+	return atomic_fetch_andnot(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+{
+	return atomic_fetch_andnot_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+{
+	return atomic_fetch_andnot_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_fetch_andnot_relaxed(i, v);
+}
+
+static inline void
+atomic_long_or(long i, atomic_long_t *v)
+{
+	atomic_or(i, v);
+}
+
+static inline long
+atomic_long_fetch_or(long i, atomic_long_t *v)
+{
+	return atomic_fetch_or(i, v);
+}
+
+static inline long
+atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+{
+	return atomic_fetch_or_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_or_release(long i, atomic_long_t *v)
+{
+	return atomic_fetch_or_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_fetch_or_relaxed(i, v);
+}
+
+static inline void
+atomic_long_xor(long i, atomic_long_t *v)
+{
+	atomic_xor(i, v);
+}
+
+static inline long
+atomic_long_fetch_xor(long i, atomic_long_t *v)
+{
+	return atomic_fetch_xor(i, v);
+}
+
+static inline long
+atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+{
+	return atomic_fetch_xor_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+{
+	return atomic_fetch_xor_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+{
+	return atomic_fetch_xor_relaxed(i, v);
+}
+
+static inline long
+atomic_long_xchg(atomic_long_t *v, long i)
+{
+	return atomic_xchg(v, i);
+}
+
+static inline long
+atomic_long_xchg_acquire(atomic_long_t *v, long i)
+{
+	return atomic_xchg_acquire(v, i);
+}
+
+static inline long
+atomic_long_xchg_release(atomic_long_t *v, long i)
+{
+	return atomic_xchg_release(v, i);
+}
+
+static inline long
+atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+{
+	return atomic_xchg_relaxed(v, i);
+}
+
+static inline long
+atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg(v, old, new);
+}
+
+static inline long
+atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg_acquire(v, old, new);
+}
+
+static inline long
+atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg_release(v, old, new);
+}
+
+static inline long
+atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg_acquire(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg_release(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_sub_and_test(long i, atomic_long_t *v)
+{
+	return atomic_sub_and_test(i, v);
+}
+
+static inline bool
+atomic_long_dec_and_test(atomic_long_t *v)
+{
+	return atomic_dec_and_test(v);
+}
+
+static inline bool
+atomic_long_inc_and_test(atomic_long_t *v)
+{
+	return atomic_inc_and_test(v);
+}
+
+static inline bool
+atomic_long_add_negative(long i, atomic_long_t *v)
+{
+	return atomic_add_negative(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+{
+	return atomic_fetch_add_unless(v, a, u);
+}
+
+static inline bool
+atomic_long_add_unless(atomic_long_t *v, long a, long u)
+{
+	return atomic_add_unless(v, a, u);
+}
+
+static inline bool
+atomic_long_inc_not_zero(atomic_long_t *v)
+{
+	return atomic_inc_not_zero(v);
+}
+
+static inline bool
+atomic_long_inc_unless_negative(atomic_long_t *v)
+{
+	return atomic_inc_unless_negative(v);
+}
+
+static inline bool
+atomic_long_dec_unless_positive(atomic_long_t *v)
+{
+	return atomic_dec_unless_positive(v);
+}
+
+static inline long
+atomic_long_dec_if_positive(atomic_long_t *v)
+{
+	return atomic_dec_if_positive(v);
+}
 
-#endif /* _ASM_GENERIC_ATOMIC_LONG_H */
+#endif /* CONFIG_64BIT */
+#endif /* _ASM_GENERIC_ATOMIC_LONG_H */