From patchwork Tue May 29 18:07:40 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137205
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon, Greg Kroah-Hartman, Jiri Slaby
Subject: [PATCH 1/7] atomics/tty: add missing atomic_long_t * cast
Date: Tue, 29 May 2018 19:07:40 +0100
Message-Id: <20180529180746.29684-2-mark.rutland@arm.com>
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>

In ldsem_cmpxchg() a pointer to unsigned long is passed to
atomic_long_cmpxchg(), which expects a pointer to atomic_long_t.

In preparation for making the atomic_long_* APIs type safe, add a cast
before passing the value to atomic_long_cmpxchg(). The same is already
done in ldsem_atomic_update() when it calls atomic_long_add_return().
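As an illustration of the mismatch (a sketch only; the struct and
function here are hypothetical stand-ins for tty_ldsem's ld_semaphore
and ldsem_cmpxchg()):

	#include <linux/atomic.h>

	struct ldsem_like {
		long count;	/* a plain long, not an atomic_long_t */
	};

	static inline int ldsem_like_cmpxchg(long *old, long new,
					     struct ldsem_like *sem)
	{
		/*
		 * atomic_long_cmpxchg() takes an atomic_long_t *, so
		 * passing a bare &sem->count stops building once the
		 * atomic_long_* APIs become type safe; the cast bridges
		 * the gap until the field itself is converted.
		 */
		long tmp = atomic_long_cmpxchg((atomic_long_t *)&sem->count,
					       *old, new);

		return tmp == *old;
	}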
Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: Greg Kroah-Hartman
Cc: Jiri Slaby
---
 drivers/tty/tty_ldsem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--
2.11.0

Acked-by: Greg Kroah-Hartman
Reported-by: Mark Rutland
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Mark Rutland

diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c
index 37a91b3df980..5f8aef97973f 100644
--- a/drivers/tty/tty_ldsem.c
+++ b/drivers/tty/tty_ldsem.c
@@ -86,7 +86,7 @@ static inline long ldsem_atomic_update(long delta, struct ld_semaphore *sem)
  */
 static inline int ldsem_cmpxchg(long *old, long new, struct ld_semaphore *sem)
 {
-	long tmp = atomic_long_cmpxchg(&sem->count, *old, new);
+	long tmp = atomic_long_cmpxchg((atomic_long_t *)&sem->count, *old, new);
 	if (tmp == *old) {
 		*old = new;
 		return 1;

From patchwork Tue May 29 18:07:41 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137199
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon, Andrea Parri
Subject: [PATCH 2/7] atomics/treewide: rework ordering barriers
Date: Tue, 29 May 2018 19:07:41 +0100
Message-Id: <20180529180746.29684-3-mark.rutland@arm.com>
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>

Currently architectures can override __atomic_op_*() to define the
barriers used before/after a relaxed atomic when used to build
acquire/release/fence variants.

This has the unfortunate property of requiring the architecture to
define the full wrapper for the atomics, rather than just the barriers
they care about, and gets in the way of generating atomics which can be
easily read.

Instead, this patch has architectures define an optional set of
barriers, __atomic_mb_{before,after}_{acquire,release,fence}(), which
<linux/atomic.h> uses to build the wrappers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: Andrea Parri
---
 arch/alpha/include/asm/atomic.h   |  8 ++++----
 arch/powerpc/include/asm/atomic.h | 17 +++++------------
 arch/riscv/include/asm/atomic.h   | 17 +++++------------
 include/linux/atomic.h            | 38 ++++++++++++++++++++++----------------
 4 files changed, 36 insertions(+), 44 deletions(-)

I'm aware that the naming of these barriers is potentially bad, and
something like atomic_acquire_mb()/atomic_release_mb() might be better.
I only really care that the core code can use the barriers directly
rather than this being hidden in an architecture's __atomic_op_*().

Peter/Will, I'll defer to your judgement on naming.

Thanks,
Mark.
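To make the intent concrete, here is a minimal sketch (not taken from
the diff; ARCH_ACQUIRE_BARRIER is a stand-in for an architecture's
barrier instruction, in the style of PPC_ACQUIRE_BARRIER and
RISCV_ACQUIRE_BARRIER below). The architecture supplies only the
barrier hook, and the common wrapper from this patch composes it with
the relaxed op:

	/* arch header: define only the barrier that differs */
	#define __atomic_mb__after_acquire()				\
		__asm__ __volatile__(ARCH_ACQUIRE_BARRIER ::: "memory")

	/* include/linux/atomic.h (from this patch): common wrapper */
	#define __atomic_op_acquire(op, args...)			\
	({								\
		typeof(op##_relaxed(args)) __ret = op##_relaxed(args);	\
		__atomic_mb__after_acquire();				\
		__ret;							\
	})

An architecture that defines no hooks gets the default
smp_mb__{before,after}_atomic() barriers, preserving today's behaviour.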
-- 2.11.0 diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 81bf9d15dcc1..aba549a0b147 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -18,11 +18,11 @@ * To ensure dependency ordering is preserved for the _relaxed and * _release atomics, an smp_read_barrier_depends() is unconditionally * inserted into the _relaxed variants, which are used to build the - * barriered versions. To avoid redundant back-to-back fences, we can - * define the _acquire and _fence versions explicitly. + * barriered versions. Avoid redundant back-to-back fences in the + * _acquire and _fence versions. */ -#define __atomic_op_acquire(op, args...) op##_relaxed(args) -#define __atomic_op_fence __atomic_op_release +#define __atomic_mb__after_acquire() +#define __atomic_mb__after_fence() #define ATOMIC_INIT(i) { (i) } #define ATOMIC64_INIT(i) { (i) } diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index a0156cb43d1f..39342a1602e9 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -18,18 +18,11 @@ * a "bne-" instruction at the end, so an isync is enough as a acquire barrier * on the platform without lwsync. */ -#define __atomic_op_acquire(op, args...) \ -({ \ - typeof(op##_relaxed(args)) __ret = op##_relaxed(args); \ - __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory"); \ - __ret; \ -}) - -#define __atomic_op_release(op, args...) \ -({ \ - __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory"); \ - op##_relaxed(args); \ -}) +#define __atomic_mb__after_acquire() \ + __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory") + +#define __atomic_mb__before_release() \ + __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory") static __inline__ int atomic_read(const atomic_t *v) { diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index 512b89485790..1d168857638a 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -25,18 +25,11 @@ #define ATOMIC_INIT(i) { (i) } -#define __atomic_op_acquire(op, args...) \ -({ \ - typeof(op##_relaxed(args)) __ret = op##_relaxed(args); \ - __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory"); \ - __ret; \ -}) - -#define __atomic_op_release(op, args...) \ -({ \ - __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory"); \ - op##_relaxed(args); \ -}) +#define __atomic_mb__after_acquire() \ + __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory") + +#define __atomic_mb__before_release() \ + __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory"); static __always_inline int atomic_read(const atomic_t *v) { diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 9c1b0769a846..580d15b10bfd 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -36,40 +36,46 @@ * barriers on top of the relaxed variant. In the case where the relaxed * variant is already fully ordered, no additional barriers are needed. * - * Besides, if an arch has a special barrier for acquire/release, it could - * implement its own __atomic_op_* and use the same framework for building - * variants - * - * If an architecture overrides __atomic_op_acquire() it will probably want - * to define smp_mb__after_spinlock(). + * If an architecture overrides __atomic_mb_after_acquire() it will probably + * want to define smp_mb__after_spinlock(). 
 */

-#ifndef __atomic_op_acquire
+#ifndef __atomic_mb__after_acquire
+#define __atomic_mb__after_acquire smp_mb__after_atomic
+#endif
+
+#ifndef __atomic_mb__before_release
+#define __atomic_mb__before_release smp_mb__before_atomic
+#endif
+
+#ifndef __atomic_mb__before_fence
+#define __atomic_mb__before_fence smp_mb__before_atomic
+#endif
+
+#ifndef __atomic_mb__after_fence
+#define __atomic_mb__after_fence smp_mb__after_atomic
+#endif
+
 #define __atomic_op_acquire(op, args...)				\
 ({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
-	smp_mb__after_atomic();						\
+	__atomic_mb__after_acquire();					\
	__ret;								\
 })
-#endif

-#ifndef __atomic_op_release
 #define __atomic_op_release(op, args...)				\
 ({									\
-	smp_mb__before_atomic();					\
+	__atomic_mb__before_release();					\
	op##_relaxed(args);						\
 })
-#endif

-#ifndef __atomic_op_fence
 #define __atomic_op_fence(op, args...)					\
 ({									\
	typeof(op##_relaxed(args)) __ret;				\
-	smp_mb__before_atomic();					\
+	__atomic_mb__before_fence();					\
	__ret = op##_relaxed(args);					\
-	smp_mb__after_atomic();						\
+	__atomic_mb__after_fence();					\
	__ret;								\
 })
-#endif

 /* atomic_add_return_relaxed */
 #ifndef atomic_add_return_relaxed

From patchwork Tue May 29 18:07:42 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137200
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon
Subject: [PATCH 3/7] atomics: separate basic {cmp,}xchg()
Date: Tue, 29 May 2018 19:07:42 +0100
Message-Id: <20180529180746.29684-4-mark.rutland@arm.com>
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>

Currently the basic {cmp,}xchg() definitions live in the middle of the
atomic_t definitions. Let's pull them earlier so that the file has, in
order:

* __atomic_op_*() definitions
* {cmp,}xchg() definitions
* atomic_t definitions
* atomic64_t definitions
* atomic_long_t definitions

... which will make subsequent rework less messy.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
---
 include/linux/atomic.h | 138 ++++++++++++++++++++++++-------------------------
 1 file changed, 69 insertions(+), 69 deletions(-)

--
2.11.0

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 580d15b10bfd..28c4aa3c1a4c 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -77,6 +77,75 @@
	__ret;								\
 })

+/* cmpxchg_relaxed */
+#ifndef cmpxchg_relaxed
+#define cmpxchg_relaxed		cmpxchg
+#define cmpxchg_acquire		cmpxchg
+#define cmpxchg_release		cmpxchg
+
+#else /* cmpxchg_relaxed */
+
+#ifndef cmpxchg_acquire
+#define cmpxchg_acquire(...)						\
+	__atomic_op_acquire(cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef cmpxchg_release
+#define cmpxchg_release(...)						\
+	__atomic_op_release(cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef cmpxchg
+#define cmpxchg(...)							\
+	__atomic_op_fence(cmpxchg, __VA_ARGS__)
+#endif
+#endif /* cmpxchg_relaxed */
+
+/* cmpxchg64_relaxed */
+#ifndef cmpxchg64_relaxed
+#define cmpxchg64_relaxed	cmpxchg64
+#define cmpxchg64_acquire	cmpxchg64
+#define cmpxchg64_release	cmpxchg64
+
+#else /* cmpxchg64_relaxed */
+
+#ifndef cmpxchg64_acquire
+#define cmpxchg64_acquire(...)						\
+	__atomic_op_acquire(cmpxchg64, __VA_ARGS__)
+#endif
+
+#ifndef cmpxchg64_release
+#define cmpxchg64_release(...)
\ + __atomic_op_release(cmpxchg64, __VA_ARGS__) +#endif + +#ifndef cmpxchg64 +#define cmpxchg64(...) \ + __atomic_op_fence(cmpxchg64, __VA_ARGS__) +#endif +#endif /* cmpxchg64_relaxed */ + +/* xchg_relaxed */ +#ifndef xchg_relaxed +#define xchg_relaxed xchg +#define xchg_acquire xchg +#define xchg_release xchg + +#else /* xchg_relaxed */ + +#ifndef xchg_acquire +#define xchg_acquire(...) __atomic_op_acquire(xchg, __VA_ARGS__) +#endif + +#ifndef xchg_release +#define xchg_release(...) __atomic_op_release(xchg, __VA_ARGS__) +#endif + +#ifndef xchg +#define xchg(...) __atomic_op_fence(xchg, __VA_ARGS__) +#endif +#endif /* xchg_relaxed */ + /* atomic_add_return_relaxed */ #ifndef atomic_add_return_relaxed #define atomic_add_return_relaxed atomic_add_return @@ -480,75 +549,6 @@ #define atomic_try_cmpxchg_release atomic_try_cmpxchg #endif /* atomic_try_cmpxchg */ -/* cmpxchg_relaxed */ -#ifndef cmpxchg_relaxed -#define cmpxchg_relaxed cmpxchg -#define cmpxchg_acquire cmpxchg -#define cmpxchg_release cmpxchg - -#else /* cmpxchg_relaxed */ - -#ifndef cmpxchg_acquire -#define cmpxchg_acquire(...) \ - __atomic_op_acquire(cmpxchg, __VA_ARGS__) -#endif - -#ifndef cmpxchg_release -#define cmpxchg_release(...) \ - __atomic_op_release(cmpxchg, __VA_ARGS__) -#endif - -#ifndef cmpxchg -#define cmpxchg(...) \ - __atomic_op_fence(cmpxchg, __VA_ARGS__) -#endif -#endif /* cmpxchg_relaxed */ - -/* cmpxchg64_relaxed */ -#ifndef cmpxchg64_relaxed -#define cmpxchg64_relaxed cmpxchg64 -#define cmpxchg64_acquire cmpxchg64 -#define cmpxchg64_release cmpxchg64 - -#else /* cmpxchg64_relaxed */ - -#ifndef cmpxchg64_acquire -#define cmpxchg64_acquire(...) \ - __atomic_op_acquire(cmpxchg64, __VA_ARGS__) -#endif - -#ifndef cmpxchg64_release -#define cmpxchg64_release(...) \ - __atomic_op_release(cmpxchg64, __VA_ARGS__) -#endif - -#ifndef cmpxchg64 -#define cmpxchg64(...) \ - __atomic_op_fence(cmpxchg64, __VA_ARGS__) -#endif -#endif /* cmpxchg64_relaxed */ - -/* xchg_relaxed */ -#ifndef xchg_relaxed -#define xchg_relaxed xchg -#define xchg_acquire xchg -#define xchg_release xchg - -#else /* xchg_relaxed */ - -#ifndef xchg_acquire -#define xchg_acquire(...) __atomic_op_acquire(xchg, __VA_ARGS__) -#endif - -#ifndef xchg_release -#define xchg_release(...) __atomic_op_release(xchg, __VA_ARGS__) -#endif - -#ifndef xchg -#define xchg(...) 
__atomic_op_fence(xchg, __VA_ARGS__)
-#endif
-#endif /* xchg_relaxed */
-
 /**
  * atomic_fetch_add_unless - add unless the number is already a given value
  * @v: pointer of type atomic_t

From patchwork Tue May 29 18:07:43 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137204
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon
Subject: [PATCH 4/7] atomics: add common header generation files
Date: Tue, 29 May 2018 19:07:43 +0100
Message-Id: <20180529180746.29684-5-mark.rutland@arm.com>
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>

To minimize repetition, to allow for future rework, and to ensure
regularity of the various atomic APIs, we'd like to automatically
generate (the bulk of) a number of headers related to atomics.

This patch adds the infrastructure to do so, leaving actual conversion
of headers to subsequent patches. This infrastructure consists of:

* atomics.tbl - a table describing the functions in the atomics API,
  with names, prototypes, and metadata describing the variants that
  exist (e.g. fetch/return, acquire/release/relaxed). Note that the
  return type is dependent on the particular variant.

* atomic-tbl.sh - a library of routines useful for dealing with
  atomics.tbl (e.g. querying which variants exist, or generating
  argument/parameter lists for a given function variant).

* gen-atomic-fallback.sh - a script which generates a header of
  fallbacks, covering cases where architectures omit certain functions
  (e.g. omitting relaxed variants).

* gen-atomic-long.sh - a script which generates wrappers providing the
  atomic_long API atop of the relevant atomic or atomic64 API, ensuring
  the APIs are consistent.

* gen-atomic-instrumented.sh - a script which generates atomic*
  wrappers atop of arch_atomic* functions, with automatically generated
  KASAN instrumentation.

* fallbacks/* - a set of fallback implementations for atomics, which
  should be used when no implementation of a given atomic is provided.
  These are used by gen-atomic-fallback.sh to generate fallbacks, and
  these are also used by other scripts to determine the set of optional
  atomics (as required to generate preprocessor guards correctly).
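For example, the atomics.tbl line "inc vRF v" declares void, _return,
and fetch_ variants of inc, and gen-atomic-fallback.sh expands the
fallbacks/inc template into the following (shown here for the
atomic_t/int instantiation; this matches the generated header added in
patch 5):

	#ifndef atomic_inc
	static inline void
	atomic_inc(atomic_t *v)
	{
		atomic_add(1, v);
	}
	#define atomic_inc atomic_inc
	#endif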
Fallbacks may use the following variables:

${atomic}	atomic prefix: atomic/atomic64/atomic_long, which can be
		used to derive the atomic type, and to prefix functions
${int}		integer type: int/s64/long
${pfx}		variant prefix, e.g. fetch_
${name}		base function name, e.g. add
${sfx}		variant suffix, e.g. _return
${order}	order suffix, e.g. _relaxed
${atomicname}	full name, e.g. atomic64_fetch_add_relaxed
${ret}		return type of the function, e.g. void
${retstmt}	a return statement (with a trailing space), unless the
		variant returns void
${params}	parameter list for the function declaration, e.g.
		"int i, atomic_t *v"
${args}		argument list for invoking the function, e.g. "i, v"

... for clarity, ${ret}, ${retstmt}, ${params}, and ${args} are
open-coded for fallbacks where these do not vary, or are critical to
understanding the logic of the fallback.

The MAINTAINERS entry for the atomic infrastructure is updated to cover
the new scripts.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
---
 MAINTAINERS                                  |   1 +
 scripts/atomic/atomic-tbl.sh                 | 186 +++++++++++++++++++++
 scripts/atomic/atomics.tbl                   |  41 ++++++
 scripts/atomic/fallbacks/acquire             |   9 ++
 scripts/atomic/fallbacks/add_negative        |  16 +++
 scripts/atomic/fallbacks/add_unless          |  16 +++
 scripts/atomic/fallbacks/andnot              |   7 +
 scripts/atomic/fallbacks/dec                 |   7 +
 scripts/atomic/fallbacks/dec_and_test        |  15 +++
 scripts/atomic/fallbacks/dec_if_positive     |  15 +++
 scripts/atomic/fallbacks/dec_unless_positive |  14 ++
 scripts/atomic/fallbacks/fence               |  11 ++
 scripts/atomic/fallbacks/fetch_add_unless    |  23 ++++
 scripts/atomic/fallbacks/inc                 |   7 +
 scripts/atomic/fallbacks/inc_and_test        |  15 +++
 scripts/atomic/fallbacks/inc_not_zero        |  14 ++
 scripts/atomic/fallbacks/inc_unless_negative |  14 ++
 scripts/atomic/fallbacks/read_acquire        |   7 +
 scripts/atomic/fallbacks/release             |   8 ++
 scripts/atomic/fallbacks/set_release         |   7 +
 scripts/atomic/fallbacks/sub_and_test        |  16 +++
 scripts/atomic/fallbacks/try_cmpxchg         |  11 ++
 scripts/atomic/gen-atomic-fallback.sh        | 144 ++++++++++++++++
 scripts/atomic/gen-atomic-instrumented.sh    | 122 +++++++++++++
 scripts/atomic/gen-atomic-long.sh            |  98 ++++++++++
 25 files changed, 824 insertions(+)
 create mode 100755 scripts/atomic/atomic-tbl.sh
 create mode 100644 scripts/atomic/atomics.tbl
 create mode 100644 scripts/atomic/fallbacks/acquire
 create mode 100644 scripts/atomic/fallbacks/add_negative
 create mode 100644 scripts/atomic/fallbacks/add_unless
 create mode 100644 scripts/atomic/fallbacks/andnot
 create mode 100644 scripts/atomic/fallbacks/dec
 create mode 100644 scripts/atomic/fallbacks/dec_and_test
 create mode 100644 scripts/atomic/fallbacks/dec_if_positive
 create mode 100644 scripts/atomic/fallbacks/dec_unless_positive
 create mode 100644 scripts/atomic/fallbacks/fence
 create mode 100644 scripts/atomic/fallbacks/fetch_add_unless
 create mode 100644 scripts/atomic/fallbacks/inc
 create mode 100644 scripts/atomic/fallbacks/inc_and_test
 create mode 100644 scripts/atomic/fallbacks/inc_not_zero
 create mode 100644 scripts/atomic/fallbacks/inc_unless_negative
 create mode 100644 scripts/atomic/fallbacks/read_acquire
 create mode 100644 scripts/atomic/fallbacks/release
 create mode 100644 scripts/atomic/fallbacks/set_release
 create mode 100644 scripts/atomic/fallbacks/sub_and_test
 create mode 100644 scripts/atomic/fallbacks/try_cmpxchg
 create mode 100755 scripts/atomic/gen-atomic-fallback.sh
 create mode 100755 scripts/atomic/gen-atomic-instrumented.sh
 create
mode 100755 scripts/atomic/gen-atomic-long.sh -- 2.11.0 diff --git a/MAINTAINERS b/MAINTAINERS index 078fd80f664f..575fddb13208 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2492,6 +2492,7 @@ L: linux-kernel@vger.kernel.org S: Maintained F: arch/*/include/asm/atomic*.h F: include/*/atomic*.h +F: scripts/atomic/ ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER M: Bradley Grove diff --git a/scripts/atomic/atomic-tbl.sh b/scripts/atomic/atomic-tbl.sh new file mode 100755 index 000000000000..8a5edc1133e3 --- /dev/null +++ b/scripts/atomic/atomic-tbl.sh @@ -0,0 +1,186 @@ +#!/bin/sh + +# helpers for dealing with atomics.tbl + +#meta_in(meta, match) +meta_in() +{ + case "$1" in + [$2]) return 0;; + esac + + return 1 +} + +#meta_has_ret(meta) +meta_has_ret() +{ + meta_in "$1" "bBiIfFlR" +} + +#meta_has_acquire(meta) +meta_has_acquire() +{ + meta_in "$1" "BFIlR" +} + +#meta_has_release(meta) +meta_has_release() +{ + meta_in "$1" "BFIRs" +} + +#meta_has_relaxed(meta) +meta_has_relaxed() +{ + meta_in "$1" "BFIR" +} + +#find_fallback_template(pfx, name, sfx, order) +find_fallback_template() +{ + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift + + local base="" + local file="" + + # We may have fallbacks for a specific case (e.g. read_acquire()), or + # an entire class, e.g. *inc*(). + # + # Start at the most specific, and fall back to the most general. Once + # we find a specific fallback, don't bother looking for more. + for base in "${pfx}${name}${sfx}${order}" "${name}"; do + file="${ATOMICDIR}/fallbacks/${base}" + + if [ -f "${file}" ]; then + printf "${file}" + break + fi + done +} + +#gen_ret_type(meta, int) +gen_ret_type() { + local meta="$1"; shift + local int="$1"; shift + + case "${meta}" in + [sv]) printf "void";; + [bB]) printf "bool";; + [aiIlfFR]) printf "${int}";; + esac +} + +#gen_ret_stmt(meta) +gen_ret_stmt() +{ + if meta_has_ret "${meta}"; then + printf "return "; + fi +} + +# gen_param_name(arg) +gen_param_name() +{ + # strip off the leading 'c' for 'cv' + local name="${1#c}" + printf "${name#*:}" +} + +# gen_param_type(arg, int, atomic) +gen_param_type() +{ + local type="${1%%:*}"; shift + local int="$1"; shift + local atomic="$1"; shift + + case "${type}" in + i) type="${int} ";; + I) type="${int} *";; + v) type="${atomic}_t *";; + cv) type="const ${atomic}_t *";; + esac + + printf "${type}" +} + +#gen_param(arg, int, atomic) +gen_param() +{ + local arg="$1"; shift + local int="$1"; shift + local atomic="$1"; shift + local name="$(gen_param_name "${arg}")" + local type="$(gen_param_type "${arg}" "${int}" "${atomic}")" + + printf "${type}${name}" +} + +#gen_params(int, atomic, arg...) +gen_params() +{ + local int="$1"; shift + local atomic="$1"; shift + + while [ "$#" -gt 0 ]; do + gen_param "$1" "${int}" "${atomic}" + [ "$#" -gt 1 ] && printf ", " + shift; + done +} + +#gen_args(arg...) +gen_args() +{ + while [ "$#" -gt 0 ]; do + printf "$(gen_param_name "$1")" + [ "$#" -gt 1 ] && printf ", " + shift; + done +} + +#gen_proto_order_variants(meta, pfx, name, sfx, ...) 
+gen_proto_order_variants() +{ + local meta="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@" + + if meta_has_acquire "${meta}"; then + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@" + fi + if meta_has_release "${meta}"; then + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@" + fi + if meta_has_relaxed "${meta}"; then + gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@" + fi +} + +#gen_proto_variants(meta, name, ...) +gen_proto_variants() +{ + local meta="$1"; shift + local name="$1"; shift + local pfx="" + local sfx="" + + meta_in "${meta}" "fF" && pfx="fetch_" + meta_in "${meta}" "R" && sfx="_return" + + gen_proto_order_variants "${meta}" "${pfx}" "${name}" "${sfx}" "$@" +} + +#gen_proto(meta, ...) +gen_proto() { + local meta="$1"; shift + for m in $(echo "${meta}" | fold -w1); do + gen_proto_variants "${m}" "$@" + done +} diff --git a/scripts/atomic/atomics.tbl b/scripts/atomic/atomics.tbl new file mode 100644 index 000000000000..fba27fbc8cfe --- /dev/null +++ b/scripts/atomic/atomics.tbl @@ -0,0 +1,41 @@ +# name meta args... +# +# Where meta contains a string of variants to generate. +# Upper-case implies _{acquire,release,relaxed} variants. +# Valid meta values are: +# * B/b - bool: returns bool +# * v - void: returns void +# * I/i - int: returns base type +# * R - return: returns base type (has _return variants) +# * F/f - fetch: returns base type (has fetch_ variants) +# * l - load: returns base type (has _acquire order variant) +# * s - store: returns void (has _release order variant) +# +# Where args contains list of type[:name], where type is: +# * cv - const pointer to atomic base type (atomic_t/atomic64_t/atomic_long_t) +# * v - pointer to atomic base type (atomic_t/atomic64_t/atomic_long_t) +# * i - base type (int/s64/long) +# * I - pointer to base type (int/s64/long) +# +read l cv +set s v i +add vRF i v +sub vRF i v +inc vRF v +dec vRF v +and vF i v +andnot vF i v +or vF i v +xor vF i v +xchg I v i +cmpxchg I v i:old i:new +try_cmpxchg B v I:old i:new +sub_and_test b i v +dec_and_test b v +inc_and_test b v +add_negative b i v +add_unless fb v i:a i:u +inc_not_zero b v +inc_unless_negative b v +dec_unless_positive b v +dec_if_positive i v diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire new file mode 100644 index 000000000000..76f834ba2cd1 --- /dev/null +++ b/scripts/atomic/fallbacks/acquire @@ -0,0 +1,9 @@ +cat < 0) + return false; + } while (!${atomic}_try_cmpxchg(v, &c, c - 1)); + + return true; +} +EOF diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence new file mode 100644 index 000000000000..645956389e46 --- /dev/null +++ b/scripts/atomic/fallbacks/fence @@ -0,0 +1,11 @@ +cat <counter); +} +EOF diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release new file mode 100644 index 000000000000..88acb45ed123 --- /dev/null +++ b/scripts/atomic/fallbacks/release @@ -0,0 +1,8 @@ +cat <counter, i); +} +EOF diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test new file mode 100644 index 000000000000..289ef17a2d7a --- /dev/null +++ b/scripts/atomic/fallbacks/sub_and_test @@ -0,0 +1,16 @@ +cat <counter, (c)) + +#ifdef CONFIG_GENERIC_ATOMIC64 +#include +#endif + +EOF + +grep '^[a-z]' "$1" | while read name meta args; do + gen_proto "${meta}" "${name}" "atomic64" "s64" 
${args} +done + +cat <counter, (c)) + +#endif /* _LINUX_ATOMIC_FALLBACK_H */ +EOF diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh new file mode 100755 index 000000000000..b209a67f633d --- /dev/null +++ b/scripts/atomic/gen-atomic-instrumented.sh @@ -0,0 +1,122 @@ +#!/bin/sh + +ATOMICDIR=$(dirname $0) + +. ${ATOMICDIR}/atomic-tbl.sh + +#gen_param_check(arg) +gen_param_check() +{ + local arg="$1"; shift + local type="${arg%%:*}" + local name="$(gen_param_name "${arg}")" + local rw="write" + + case "${type#c}" in + i) return;; + esac + + # We don't write to constant parameters + [ ${type#c} != ${type} ] && rw="read" + + printf "\tkasan_check_${rw}(${name}, sizeof(*${name}));\n" +} + +#gen_param_check(arg...) +gen_params_checks() +{ + while [ "$#" -gt 0 ]; do + gen_param_check "$1" + shift; + done +} + +# gen_guard(meta, atomic, pfx, name, sfx, order) +gen_guard() +{ + local meta="$1"; shift + local atomic="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift + + local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}" + + local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")" + + # We definitely need a preprocessor symbol for this atomic if it is an + # ordering variant, or if there's a generic fallback. + # as are base variants if there's a fallback + if [ ! -z "${order}" ] || [ ! -z "${template}" ]; then + printf "defined(${atomicname})" + return + fi + + # If this is a base variant, but a relaxed variant *may* exist, then we + # only have a preprocessor symbol if the relaxed variant isn't defined + if meta_has_relaxed "${meta}"; then + printf "!defined(${atomicname}_relaxed) || defined(${atomicname})" + fi +} + +#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...) +gen_proto_order_variant() +{ + local meta="$1"; shift + local pfx="$1"; shift + local name="$1"; shift + local sfx="$1"; shift + local order="$1"; shift + local atomic="$1"; shift + local int="$1"; shift + + local atomicname="${atomic}_${pfx}${name}${sfx}${order}" + + local guard="$(gen_guard "${meta}" "${atomic}" "${pfx}" "${name}" "${sfx}" "${order}")" + + local ret="$(gen_ret_type "${meta}" "${int}")" + local params="$(gen_params "${int}" "${atomic}" "$@")" + local checks="$(gen_params_checks "$@")" + local args="$(gen_args "$@")" + local retstmt="$(gen_ret_stmt "${meta}")" + + [ ! 
-z "${guard}" ] && printf "#if ${guard}\n" + +cat < + +#ifdef CONFIG_64BIT +typedef atomic64_t atomic_long_t; +#define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i) +#define atomic_long_cond_read_acquire atomic64_cond_read_acquire +#else +typedef atomic_t atomic_long_t; +#define ATOMIC_LONG_INIT(i) ATOMIC_INIT(i) +#define atomic_long_cond_read_acquire atomic_cond_read_acquire +#endif + +#ifdef CONFIG_64BIT + +EOF + +grep '^[a-z]' "$1" | while read name meta args; do + gen_proto "${meta}" "${name}" "atomic64" "s64" ${args} +done + +cat < X-Patchwork-Id: 137201 Delivered-To: patch@linaro.org Received: by 2002:a2e:9706:0:0:0:0:0 with SMTP id r6-v6csp4363677lji; Tue, 29 May 2018 11:08:20 -0700 (PDT) X-Google-Smtp-Source: ADUXVKKhy7CKqLwiyShzxIDHg5ntzDjA9JpsfKEq/10k+oFKG66JfRvikBs9s2SzdWYkhzdCy958 X-Received: by 2002:a63:2c0f:: with SMTP id s15-v6mr11136760pgs.75.1527617299969; Tue, 29 May 2018 11:08:19 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527617299; cv=none; d=google.com; s=arc-20160816; b=foLS9n325kakpe3kb58nKEhHKQyRppz9WmUEx/+3DmCuG/S19RX7n/QAFk58xWNGF6 hHLTZ6T9VFTnSZu6gAvaN3X/mUrJxQsn4i+ridFUn/3TY/4mtllgalBsV+9zjWS3T09o s6hzvlbK8XRnaniyD/a1w3BeYUW3JMb8mhZFZuCJtLLYTR6ncStktfEs0iqNbnFhbDWp aD/Ptodne3k78VA4b0cFEsyqpZ4gYVDY/NQ+xymfLSm65DfHVjGbZ4lkA4TgKpXK+QxG MScOGyBuagMFVDgJrYtSKAtspBezSNVCyY0qiot4ywhCoBSG/7q/9cFcYIORAfZ7gOdW g0gA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=YNpTiAUPE413/0kTl83qEL8Sh+6ar/qiGlrRIOiiQ3Q=; b=v7SUNcNUGPDhB+DkQS1cmcIU7GvpBoqI2qFZHPq1TXp3QPQzCA+4mjpSi5xB2u3L02 OE9oenJpx8mg1bMVTv3jUm/YSby0SsLIYcZELHEVqdFshEllUm7M+RTO/t278UvYWMtU xNraYAylTpyecXgb1xmnKBiaY0e1RPdpx5RWY1nzAc1B8dV0UuUsotNGpM66bi4bewIV EhNMhclPS1kW+0BXnR7JH3q63/XoK2Dc0Un8FAQns2s0JkC+u0EK+MMKcr2ZiCoTj03n XU95re/gXESGprrUthQPIdbdgr8zW2bXovcI5XtPmghrYV9iSjeJDjC5SJrhRQjCdqHu GI/w== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon
Subject: [PATCH 5/7] atomics: switch to generated fallbacks
Date: Tue, 29 May 2018 19:07:44 +0100
Message-Id: <20180529180746.29684-6-mark.rutland@arm.com>
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>

As a step to ensuring the atomic* APIs are consistent, switch to
fallbacks generated by gen-atomic-fallback.sh.

These are checked in rather than generated with Kbuild, since:

* This allows inspection of the atomics with git grep and ctags on a
  pristine tree, which Linus strongly prefers being able to do.

* The fallbacks are not expected to change very often, and are not
  affected by machine details or configuration options, so regenerating
  them for *every* build is somewhat wasteful.

* These are included by files required *very* early in the build
  process (e.g. for generating bounds.h), and we'd rather not
  complicate the top-level Kbuild file.

The new fallback header should be equivalent to the old fallbacks in
<linux/atomic.h>, but it is formatted a little differently due to
scripting ensuring regularity, and things are somewhat larger as
fallbacks are now expanded in place as static inline functions, with
the return type consistently on a separate line to try to keep each
line at a sensible length.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
---
 include/linux/atomic-fallback.h | 2223 +++++++++++++++++++++++++++++++++++++++
 include/linux/atomic.h          | 1206 +--------------------
 2 files changed, 2224 insertions(+), 1205 deletions(-)
 create mode 100644 include/linux/atomic-fallback.h

Note: this is generated code. Please see patch 4 for the code which
generated it, along with rationale in the cover letter.

Mark.
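As a usage sketch (the surrounding functions are hypothetical;
atomic_inc() and atomic_dec_and_test() are the real APIs), callers
cannot tell whether an operation is architecture-provided or a
generated fallback:

	#include <linux/atomic.h>

	static atomic_t example_refs = ATOMIC_INIT(1);

	static void example_get(void)
	{
		/* arch op, or the generated atomic_inc() fallback */
		atomic_inc(&example_refs);
	}

	static bool example_put(void)
	{
		/* likewise falls back if the arch omits dec_and_test */
		return atomic_dec_and_test(&example_refs);
	}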
-- 2.11.0 diff --git a/include/linux/atomic-fallback.h b/include/linux/atomic-fallback.h new file mode 100644 index 000000000000..bc6fc213e177 --- /dev/null +++ b/include/linux/atomic-fallback.h @@ -0,0 +1,2223 @@ +// SPDX-License-Identifier: GPL-2.0 + +// Generated by ./scripts/atomic/gen-atomic-fallback.sh +// DO NOT MODIFY THIS FILE DIRECTLY + +#ifndef _LINUX_ATOMIC_FALLBACK_H +#define _LINUX_ATOMIC_FALLBACK_H + +#ifndef atomic_read_acquire +static inline int +atomic_read_acquire(const atomic_t *v) +{ + return smp_load_acquire(&(v)->counter); +} +#define atomic_read_acquire atomic_read_acquire +#endif + +#ifndef atomic_set_release +static inline void +atomic_set_release(atomic_t *v, int i) +{ + smp_store_release(&(v)->counter, i); +} +#define atomic_set_release atomic_set_release +#endif + +#ifndef atomic_add_return_relaxed +#define atomic_add_return_acquire atomic_add_return +#define atomic_add_return_release atomic_add_return +#define atomic_add_return_relaxed atomic_add_return +#else /* atomic_add_return_relaxed */ + +#ifndef atomic_add_return_acquire +static inline int +atomic_add_return_acquire(int i, atomic_t *v) +{ + int ret = atomic_add_return_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_add_return_acquire atomic_add_return_acquire +#endif + +#ifndef atomic_add_return_release +static inline int +atomic_add_return_release(int i, atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_add_return_relaxed(i, v); +} +#define atomic_add_return_release atomic_add_return_release +#endif + +#ifndef atomic_add_return +static inline int +atomic_add_return(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_add_return_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_add_return atomic_add_return +#endif + +#endif /* atomic_add_return_relaxed */ + +#ifndef atomic_fetch_add_relaxed +#define atomic_fetch_add_acquire atomic_fetch_add +#define atomic_fetch_add_release atomic_fetch_add +#define atomic_fetch_add_relaxed atomic_fetch_add +#else /* atomic_fetch_add_relaxed */ + +#ifndef atomic_fetch_add_acquire +static inline int +atomic_fetch_add_acquire(int i, atomic_t *v) +{ + int ret = atomic_fetch_add_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_add_acquire atomic_fetch_add_acquire +#endif + +#ifndef atomic_fetch_add_release +static inline int +atomic_fetch_add_release(int i, atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_add_relaxed(i, v); +} +#define atomic_fetch_add_release atomic_fetch_add_release +#endif + +#ifndef atomic_fetch_add +static inline int +atomic_fetch_add(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_add_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_add atomic_fetch_add +#endif + +#endif /* atomic_fetch_add_relaxed */ + +#ifndef atomic_sub_return_relaxed +#define atomic_sub_return_acquire atomic_sub_return +#define atomic_sub_return_release atomic_sub_return +#define atomic_sub_return_relaxed atomic_sub_return +#else /* atomic_sub_return_relaxed */ + +#ifndef atomic_sub_return_acquire +static inline int +atomic_sub_return_acquire(int i, atomic_t *v) +{ + int ret = atomic_sub_return_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_sub_return_acquire atomic_sub_return_acquire +#endif + +#ifndef atomic_sub_return_release +static inline int +atomic_sub_return_release(int i, atomic_t *v) +{ + 
__atomic_mb__before_release(); + return atomic_sub_return_relaxed(i, v); +} +#define atomic_sub_return_release atomic_sub_return_release +#endif + +#ifndef atomic_sub_return +static inline int +atomic_sub_return(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_sub_return_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_sub_return atomic_sub_return +#endif + +#endif /* atomic_sub_return_relaxed */ + +#ifndef atomic_fetch_sub_relaxed +#define atomic_fetch_sub_acquire atomic_fetch_sub +#define atomic_fetch_sub_release atomic_fetch_sub +#define atomic_fetch_sub_relaxed atomic_fetch_sub +#else /* atomic_fetch_sub_relaxed */ + +#ifndef atomic_fetch_sub_acquire +static inline int +atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + int ret = atomic_fetch_sub_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire +#endif + +#ifndef atomic_fetch_sub_release +static inline int +atomic_fetch_sub_release(int i, atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_sub_relaxed(i, v); +} +#define atomic_fetch_sub_release atomic_fetch_sub_release +#endif + +#ifndef atomic_fetch_sub +static inline int +atomic_fetch_sub(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_sub_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_sub atomic_fetch_sub +#endif + +#endif /* atomic_fetch_sub_relaxed */ + +#ifndef atomic_inc +static inline void +atomic_inc(atomic_t *v) +{ + atomic_add(1, v); +} +#define atomic_inc atomic_inc +#endif + +#ifndef atomic_inc_return_relaxed +#ifdef atomic_inc_return +#define atomic_inc_return_acquire atomic_inc_return +#define atomic_inc_return_release atomic_inc_return +#define atomic_inc_return_relaxed atomic_inc_return +#endif /* atomic_inc_return */ + +#ifndef atomic_inc_return +static inline int +atomic_inc_return(atomic_t *v) +{ + return atomic_add_return(1, v); +} +#define atomic_inc_return atomic_inc_return +#endif + +#ifndef atomic_inc_return_acquire +static inline int +atomic_inc_return_acquire(atomic_t *v) +{ + return atomic_add_return_acquire(1, v); +} +#define atomic_inc_return_acquire atomic_inc_return_acquire +#endif + +#ifndef atomic_inc_return_release +static inline int +atomic_inc_return_release(atomic_t *v) +{ + return atomic_add_return_release(1, v); +} +#define atomic_inc_return_release atomic_inc_return_release +#endif + +#ifndef atomic_inc_return_relaxed +static inline int +atomic_inc_return_relaxed(atomic_t *v) +{ + return atomic_add_return_relaxed(1, v); +} +#define atomic_inc_return_relaxed atomic_inc_return_relaxed +#endif + +#else /* atomic_inc_return_relaxed */ + +#ifndef atomic_inc_return_acquire +static inline int +atomic_inc_return_acquire(atomic_t *v) +{ + int ret = atomic_inc_return_relaxed(v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_inc_return_acquire atomic_inc_return_acquire +#endif + +#ifndef atomic_inc_return_release +static inline int +atomic_inc_return_release(atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_inc_return_relaxed(v); +} +#define atomic_inc_return_release atomic_inc_return_release +#endif + +#ifndef atomic_inc_return +static inline int +atomic_inc_return(atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_inc_return_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_inc_return atomic_inc_return +#endif + +#endif /* atomic_inc_return_relaxed */ 
+ +#ifndef atomic_fetch_inc_relaxed +#ifdef atomic_fetch_inc +#define atomic_fetch_inc_acquire atomic_fetch_inc +#define atomic_fetch_inc_release atomic_fetch_inc +#define atomic_fetch_inc_relaxed atomic_fetch_inc +#endif /* atomic_fetch_inc */ + +#ifndef atomic_fetch_inc +static inline int +atomic_fetch_inc(atomic_t *v) +{ + return atomic_fetch_add(1, v); +} +#define atomic_fetch_inc atomic_fetch_inc +#endif + +#ifndef atomic_fetch_inc_acquire +static inline int +atomic_fetch_inc_acquire(atomic_t *v) +{ + return atomic_fetch_add_acquire(1, v); +} +#define atomic_fetch_inc_acquire atomic_fetch_inc_acquire +#endif + +#ifndef atomic_fetch_inc_release +static inline int +atomic_fetch_inc_release(atomic_t *v) +{ + return atomic_fetch_add_release(1, v); +} +#define atomic_fetch_inc_release atomic_fetch_inc_release +#endif + +#ifndef atomic_fetch_inc_relaxed +static inline int +atomic_fetch_inc_relaxed(atomic_t *v) +{ + return atomic_fetch_add_relaxed(1, v); +} +#define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed +#endif + +#else /* atomic_fetch_inc_relaxed */ + +#ifndef atomic_fetch_inc_acquire +static inline int +atomic_fetch_inc_acquire(atomic_t *v) +{ + int ret = atomic_fetch_inc_relaxed(v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_inc_acquire atomic_fetch_inc_acquire +#endif + +#ifndef atomic_fetch_inc_release +static inline int +atomic_fetch_inc_release(atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_inc_relaxed(v); +} +#define atomic_fetch_inc_release atomic_fetch_inc_release +#endif + +#ifndef atomic_fetch_inc +static inline int +atomic_fetch_inc(atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_inc_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_inc atomic_fetch_inc +#endif + +#endif /* atomic_fetch_inc_relaxed */ + +#ifndef atomic_dec +static inline void +atomic_dec(atomic_t *v) +{ + atomic_sub(1, v); +} +#define atomic_dec atomic_dec +#endif + +#ifndef atomic_dec_return_relaxed +#ifdef atomic_dec_return +#define atomic_dec_return_acquire atomic_dec_return +#define atomic_dec_return_release atomic_dec_return +#define atomic_dec_return_relaxed atomic_dec_return +#endif /* atomic_dec_return */ + +#ifndef atomic_dec_return +static inline int +atomic_dec_return(atomic_t *v) +{ + return atomic_sub_return(1, v); +} +#define atomic_dec_return atomic_dec_return +#endif + +#ifndef atomic_dec_return_acquire +static inline int +atomic_dec_return_acquire(atomic_t *v) +{ + return atomic_sub_return_acquire(1, v); +} +#define atomic_dec_return_acquire atomic_dec_return_acquire +#endif + +#ifndef atomic_dec_return_release +static inline int +atomic_dec_return_release(atomic_t *v) +{ + return atomic_sub_return_release(1, v); +} +#define atomic_dec_return_release atomic_dec_return_release +#endif + +#ifndef atomic_dec_return_relaxed +static inline int +atomic_dec_return_relaxed(atomic_t *v) +{ + return atomic_sub_return_relaxed(1, v); +} +#define atomic_dec_return_relaxed atomic_dec_return_relaxed +#endif + +#else /* atomic_dec_return_relaxed */ + +#ifndef atomic_dec_return_acquire +static inline int +atomic_dec_return_acquire(atomic_t *v) +{ + int ret = atomic_dec_return_relaxed(v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_dec_return_acquire atomic_dec_return_acquire +#endif + +#ifndef atomic_dec_return_release +static inline int +atomic_dec_return_release(atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_dec_return_relaxed(v); +} +#define 
atomic_dec_return_release atomic_dec_return_release +#endif + +#ifndef atomic_dec_return +static inline int +atomic_dec_return(atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_dec_return_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_dec_return atomic_dec_return +#endif + +#endif /* atomic_dec_return_relaxed */ + +#ifndef atomic_fetch_dec_relaxed +#ifdef atomic_fetch_dec +#define atomic_fetch_dec_acquire atomic_fetch_dec +#define atomic_fetch_dec_release atomic_fetch_dec +#define atomic_fetch_dec_relaxed atomic_fetch_dec +#endif /* atomic_fetch_dec */ + +#ifndef atomic_fetch_dec +static inline int +atomic_fetch_dec(atomic_t *v) +{ + return atomic_fetch_sub(1, v); +} +#define atomic_fetch_dec atomic_fetch_dec +#endif + +#ifndef atomic_fetch_dec_acquire +static inline int +atomic_fetch_dec_acquire(atomic_t *v) +{ + return atomic_fetch_sub_acquire(1, v); +} +#define atomic_fetch_dec_acquire atomic_fetch_dec_acquire +#endif + +#ifndef atomic_fetch_dec_release +static inline int +atomic_fetch_dec_release(atomic_t *v) +{ + return atomic_fetch_sub_release(1, v); +} +#define atomic_fetch_dec_release atomic_fetch_dec_release +#endif + +#ifndef atomic_fetch_dec_relaxed +static inline int +atomic_fetch_dec_relaxed(atomic_t *v) +{ + return atomic_fetch_sub_relaxed(1, v); +} +#define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed +#endif + +#else /* atomic_fetch_dec_relaxed */ + +#ifndef atomic_fetch_dec_acquire +static inline int +atomic_fetch_dec_acquire(atomic_t *v) +{ + int ret = atomic_fetch_dec_relaxed(v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_dec_acquire atomic_fetch_dec_acquire +#endif + +#ifndef atomic_fetch_dec_release +static inline int +atomic_fetch_dec_release(atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_dec_relaxed(v); +} +#define atomic_fetch_dec_release atomic_fetch_dec_release +#endif + +#ifndef atomic_fetch_dec +static inline int +atomic_fetch_dec(atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_dec_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_dec atomic_fetch_dec +#endif + +#endif /* atomic_fetch_dec_relaxed */ + +#ifndef atomic_fetch_and_relaxed +#define atomic_fetch_and_acquire atomic_fetch_and +#define atomic_fetch_and_release atomic_fetch_and +#define atomic_fetch_and_relaxed atomic_fetch_and +#else /* atomic_fetch_and_relaxed */ + +#ifndef atomic_fetch_and_acquire +static inline int +atomic_fetch_and_acquire(int i, atomic_t *v) +{ + int ret = atomic_fetch_and_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_and_acquire atomic_fetch_and_acquire +#endif + +#ifndef atomic_fetch_and_release +static inline int +atomic_fetch_and_release(int i, atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_and_relaxed(i, v); +} +#define atomic_fetch_and_release atomic_fetch_and_release +#endif + +#ifndef atomic_fetch_and +static inline int +atomic_fetch_and(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_and_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_and atomic_fetch_and +#endif + +#endif /* atomic_fetch_and_relaxed */ + +#ifndef atomic_andnot +static inline void +atomic_andnot(int i, atomic_t *v) +{ + atomic_and(~i, v); +} +#define atomic_andnot atomic_andnot +#endif + +#ifndef atomic_fetch_andnot_relaxed +#ifdef atomic_fetch_andnot +#define atomic_fetch_andnot_acquire 
atomic_fetch_andnot +#define atomic_fetch_andnot_release atomic_fetch_andnot +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot +#endif /* atomic_fetch_andnot */ + +#ifndef atomic_fetch_andnot +static inline int +atomic_fetch_andnot(int i, atomic_t *v) +{ + return atomic_fetch_and(~i, v); +} +#define atomic_fetch_andnot atomic_fetch_andnot +#endif + +#ifndef atomic_fetch_andnot_acquire +static inline int +atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return atomic_fetch_and_acquire(~i, v); +} +#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire +#endif + +#ifndef atomic_fetch_andnot_release +static inline int +atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return atomic_fetch_and_release(~i, v); +} +#define atomic_fetch_andnot_release atomic_fetch_andnot_release +#endif + +#ifndef atomic_fetch_andnot_relaxed +static inline int +atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_and_relaxed(~i, v); +} +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#endif + +#else /* atomic_fetch_andnot_relaxed */ + +#ifndef atomic_fetch_andnot_acquire +static inline int +atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + int ret = atomic_fetch_andnot_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire +#endif + +#ifndef atomic_fetch_andnot_release +static inline int +atomic_fetch_andnot_release(int i, atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_andnot_relaxed(i, v); +} +#define atomic_fetch_andnot_release atomic_fetch_andnot_release +#endif + +#ifndef atomic_fetch_andnot +static inline int +atomic_fetch_andnot(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_andnot_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_andnot atomic_fetch_andnot +#endif + +#endif /* atomic_fetch_andnot_relaxed */ + +#ifndef atomic_fetch_or_relaxed +#define atomic_fetch_or_acquire atomic_fetch_or +#define atomic_fetch_or_release atomic_fetch_or +#define atomic_fetch_or_relaxed atomic_fetch_or +#else /* atomic_fetch_or_relaxed */ + +#ifndef atomic_fetch_or_acquire +static inline int +atomic_fetch_or_acquire(int i, atomic_t *v) +{ + int ret = atomic_fetch_or_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_or_acquire atomic_fetch_or_acquire +#endif + +#ifndef atomic_fetch_or_release +static inline int +atomic_fetch_or_release(int i, atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_or_relaxed(i, v); +} +#define atomic_fetch_or_release atomic_fetch_or_release +#endif + +#ifndef atomic_fetch_or +static inline int +atomic_fetch_or(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_or_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_or atomic_fetch_or +#endif + +#endif /* atomic_fetch_or_relaxed */ + +#ifndef atomic_fetch_xor_relaxed +#define atomic_fetch_xor_acquire atomic_fetch_xor +#define atomic_fetch_xor_release atomic_fetch_xor +#define atomic_fetch_xor_relaxed atomic_fetch_xor +#else /* atomic_fetch_xor_relaxed */ + +#ifndef atomic_fetch_xor_acquire +static inline int +atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + int ret = atomic_fetch_xor_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire +#endif + +#ifndef atomic_fetch_xor_release +static inline int +atomic_fetch_xor_release(int i, 
atomic_t *v) +{ + __atomic_mb__before_release(); + return atomic_fetch_xor_relaxed(i, v); +} +#define atomic_fetch_xor_release atomic_fetch_xor_release +#endif + +#ifndef atomic_fetch_xor +static inline int +atomic_fetch_xor(int i, atomic_t *v) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_fetch_xor_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_fetch_xor atomic_fetch_xor +#endif + +#endif /* atomic_fetch_xor_relaxed */ + +#ifndef atomic_xchg_relaxed +#define atomic_xchg_acquire atomic_xchg +#define atomic_xchg_release atomic_xchg +#define atomic_xchg_relaxed atomic_xchg +#else /* atomic_xchg_relaxed */ + +#ifndef atomic_xchg_acquire +static inline int +atomic_xchg_acquire(atomic_t *v, int i) +{ + int ret = atomic_xchg_relaxed(v, i); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_xchg_acquire atomic_xchg_acquire +#endif + +#ifndef atomic_xchg_release +static inline int +atomic_xchg_release(atomic_t *v, int i) +{ + __atomic_mb__before_release(); + return atomic_xchg_relaxed(v, i); +} +#define atomic_xchg_release atomic_xchg_release +#endif + +#ifndef atomic_xchg +static inline int +atomic_xchg(atomic_t *v, int i) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_xchg_relaxed(v, i); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_xchg atomic_xchg +#endif + +#endif /* atomic_xchg_relaxed */ + +#ifndef atomic_cmpxchg_relaxed +#define atomic_cmpxchg_acquire atomic_cmpxchg +#define atomic_cmpxchg_release atomic_cmpxchg +#define atomic_cmpxchg_relaxed atomic_cmpxchg +#else /* atomic_cmpxchg_relaxed */ + +#ifndef atomic_cmpxchg_acquire +static inline int +atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + int ret = atomic_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire +#endif + +#ifndef atomic_cmpxchg_release +static inline int +atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + __atomic_mb__before_release(); + return atomic_cmpxchg_relaxed(v, old, new); +} +#define atomic_cmpxchg_release atomic_cmpxchg_release +#endif + +#ifndef atomic_cmpxchg +static inline int +atomic_cmpxchg(atomic_t *v, int old, int new) +{ + int ret; + __atomic_mb__before_fence(); + ret = atomic_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_cmpxchg atomic_cmpxchg +#endif + +#endif /* atomic_cmpxchg_relaxed */ + +#ifndef atomic_try_cmpxchg_relaxed +#ifdef atomic_try_cmpxchg +#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg +#define atomic_try_cmpxchg_release atomic_try_cmpxchg +#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg +#endif /* atomic_try_cmpxchg */ + +#ifndef atomic_try_cmpxchg +static inline bool +atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + int r, o = *old; + r = atomic_cmpxchg(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} +#define atomic_try_cmpxchg atomic_try_cmpxchg +#endif + +#ifndef atomic_try_cmpxchg_acquire +static inline bool +atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + int r, o = *old; + r = atomic_cmpxchg_acquire(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} +#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire +#endif + +#ifndef atomic_try_cmpxchg_release +static inline bool +atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + int r, o = *old; + r = atomic_cmpxchg_release(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} 
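The point of the try_cmpxchg() family above is that, unlike cmpxchg(), it returns a success boolean and writes the value it actually observed back through @old on failure, so retry loops need no explicit re-read. A minimal user-space sketch of the same idiom, using C11 <stdatomic.h> rather than the kernel's atomic_t (the helper names below are illustrative only, not part of this patch):

#include <stdatomic.h>
#include <stdbool.h>

/* C11 compare-exchange already has try_cmpxchg() semantics: on
 * failure it stores the observed value back into *old. */
static bool try_cmpxchg_int(atomic_int *v, int *old, int new)
{
	return atomic_compare_exchange_strong(v, old, new);
}

/* Typical caller: the loop never re-reads *v by hand, because a
 * failed exchange refreshes 'c' for the next iteration. */
static bool inc_unless_negative_int(atomic_int *v)
{
	int c = atomic_load(v);

	do {
		if (c < 0)
			return false;
	} while (!try_cmpxchg_int(v, &c, c + 1));

	return true;
}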
+#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release +#endif + +#ifndef atomic_try_cmpxchg_relaxed +static inline bool +atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + int r, o = *old; + r = atomic_cmpxchg_relaxed(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} +#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed +#endif + +#else /* atomic_try_cmpxchg_relaxed */ + +#ifndef atomic_try_cmpxchg_acquire +static inline bool +atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + bool ret = atomic_try_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire +#endif + +#ifndef atomic_try_cmpxchg_release +static inline bool +atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + __atomic_mb__before_release(); + return atomic_try_cmpxchg_relaxed(v, old, new); +} +#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release +#endif + +#ifndef atomic_try_cmpxchg +static inline bool +atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + bool ret; + __atomic_mb__before_fence(); + ret = atomic_try_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_fence(); + return ret; +} +#define atomic_try_cmpxchg atomic_try_cmpxchg +#endif + +#endif /* atomic_try_cmpxchg_relaxed */ + +#ifndef atomic_sub_and_test +/** + * atomic_sub_and_test - subtract value from variable and test result + * @i: integer value to subtract + * @v: pointer of type atomic_t + * + * Atomically subtracts @i from @v and returns + * true if the result is zero, or false for all + * other cases. + */ +static inline bool +atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_return(i, v) == 0; +} +#define atomic_sub_and_test atomic_sub_and_test +#endif + +#ifndef atomic_dec_and_test +/** + * atomic_dec_and_test - decrement and test + * @v: pointer of type atomic_t + * + * Atomically decrements @v by 1 and + * returns true if the result is 0, or false for all other + * cases. + */ +static inline bool +atomic_dec_and_test(atomic_t *v) +{ + return atomic_dec_return(v) == 0; +} +#define atomic_dec_and_test atomic_dec_and_test +#endif + +#ifndef atomic_inc_and_test +/** + * atomic_inc_and_test - increment and test + * @v: pointer of type atomic_t + * + * Atomically increments @v by 1 + * and returns true if the result is zero, or false for all + * other cases. + */ +static inline bool +atomic_inc_and_test(atomic_t *v) +{ + return atomic_inc_return(v) == 0; +} +#define atomic_inc_and_test atomic_inc_and_test +#endif + +#ifndef atomic_add_negative +/** + * atomic_add_negative - add and test if negative + * @i: integer value to add + * @v: pointer of type atomic_t + * + * Atomically adds @i to @v and returns true + * if the result is negative, or false when + * result is greater than or equal to zero. + */ +static inline bool +atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_return(i, v) < 0; +} +#define atomic_add_negative atomic_add_negative +#endif + +#ifndef atomic_fetch_add_unless +/** + * atomic_fetch_add_unless - add unless the number is already a given value + * @v: pointer of type atomic_t + * @a: the amount to add to v... + * @u: ...unless v is equal to u. + * + * Atomically adds @a to @v, so long as @v was not already @u. 
+ * Returns original value of @v
+ */
+static inline int
+atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+	int c = atomic_read(v);
+
+	do {
+		if (unlikely(c == u))
+			break;
+	} while (!atomic_try_cmpxchg(v, &c, c + a));
+
+	return c;
+}
+#define atomic_fetch_add_unless atomic_fetch_add_unless
+#endif
+
+#ifndef atomic_add_unless
+/**
+ * atomic_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as @v was not already @u.
+ * Returns non-zero if @v was not @u, and zero otherwise.
+ */
+static inline bool
+atomic_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_fetch_add_unless(v, a, u) != u;
+}
+#define atomic_add_unless atomic_add_unless
+#endif
+
+#ifndef atomic_inc_not_zero
+/**
+ * atomic_inc_not_zero - increment unless the number is zero
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1, so long as @v is non-zero.
+ * Returns non-zero if @v was non-zero, and zero otherwise.
+ */
+static inline bool
+atomic_inc_not_zero(atomic_t *v)
+{
+	return atomic_add_unless(v, 1, 0);
+}
+#define atomic_inc_not_zero atomic_inc_not_zero
+#endif
+
+#ifndef atomic_inc_unless_negative
+static inline bool
+atomic_inc_unless_negative(atomic_t *v)
+{
+	int c = atomic_read(v);
+
+	do {
+		if (c < 0)
+			return false;
+	} while (!atomic_try_cmpxchg(v, &c, c + 1));
+
+	return true;
+}
+#define atomic_inc_unless_negative atomic_inc_unless_negative
+#endif
+
+#ifndef atomic_dec_unless_positive
+static inline bool
+atomic_dec_unless_positive(atomic_t *v)
+{
+	int c = atomic_read(v);
+
+	do {
+		if (c > 0)
+			return false;
+	} while (!atomic_try_cmpxchg(v, &c, c - 1));
+
+	return true;
+}
+#define atomic_dec_unless_positive atomic_dec_unless_positive
+#endif
+
+#ifndef atomic_dec_if_positive
+static inline int
+atomic_dec_if_positive(atomic_t *v)
+{
+	int dec, c = atomic_read(v);
+
+	do {
+		dec = c - 1;
+		if (dec < 0)
+			break;
+	} while (!atomic_try_cmpxchg(v, &c, dec));
+
+	return dec;
+}
+#define atomic_dec_if_positive atomic_dec_if_positive
+#endif
+
+#define atomic_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c))
+
+#ifdef CONFIG_GENERIC_ATOMIC64
+#include <asm-generic/atomic64.h>
+#endif
+
+#ifndef atomic64_read_acquire
+static inline s64
+atomic64_read_acquire(const atomic64_t *v)
+{
+	return smp_load_acquire(&(v)->counter);
+}
+#define atomic64_read_acquire atomic64_read_acquire
+#endif
+
+#ifndef atomic64_set_release
+static inline void
+atomic64_set_release(atomic64_t *v, s64 i)
+{
+	smp_store_release(&(v)->counter, i);
+}
+#define atomic64_set_release atomic64_set_release
+#endif
+
+#ifndef atomic64_add_return_relaxed
+#define atomic64_add_return_acquire atomic64_add_return
+#define atomic64_add_return_release atomic64_add_return
+#define atomic64_add_return_relaxed atomic64_add_return
+#else /* atomic64_add_return_relaxed */
+
+#ifndef atomic64_add_return_acquire
+static inline s64
+atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+	s64 ret = atomic64_add_return_relaxed(i, v);
+	__atomic_mb__after_acquire();
+	return ret;
+}
+#define atomic64_add_return_acquire atomic64_add_return_acquire
+#endif
+
+#ifndef atomic64_add_return_release
+static inline s64
+atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+	__atomic_mb__before_release();
+	return atomic64_add_return_relaxed(i, v);
+}
+#define atomic64_add_return_release atomic64_add_return_release
+#endif
+
+#ifndef atomic64_add_return
+static inline s64
+atomic64_add_return(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_add_return_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_add_return atomic64_add_return +#endif + +#endif /* atomic64_add_return_relaxed */ + +#ifndef atomic64_fetch_add_relaxed +#define atomic64_fetch_add_acquire atomic64_fetch_add +#define atomic64_fetch_add_release atomic64_fetch_add +#define atomic64_fetch_add_relaxed atomic64_fetch_add +#else /* atomic64_fetch_add_relaxed */ + +#ifndef atomic64_fetch_add_acquire +static inline s64 +atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + s64 ret = atomic64_fetch_add_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire +#endif + +#ifndef atomic64_fetch_add_release +static inline s64 +atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_add_relaxed(i, v); +} +#define atomic64_fetch_add_release atomic64_fetch_add_release +#endif + +#ifndef atomic64_fetch_add +static inline s64 +atomic64_fetch_add(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_add_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_add atomic64_fetch_add +#endif + +#endif /* atomic64_fetch_add_relaxed */ + +#ifndef atomic64_sub_return_relaxed +#define atomic64_sub_return_acquire atomic64_sub_return +#define atomic64_sub_return_release atomic64_sub_return +#define atomic64_sub_return_relaxed atomic64_sub_return +#else /* atomic64_sub_return_relaxed */ + +#ifndef atomic64_sub_return_acquire +static inline s64 +atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + s64 ret = atomic64_sub_return_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_sub_return_acquire atomic64_sub_return_acquire +#endif + +#ifndef atomic64_sub_return_release +static inline s64 +atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_sub_return_relaxed(i, v); +} +#define atomic64_sub_return_release atomic64_sub_return_release +#endif + +#ifndef atomic64_sub_return +static inline s64 +atomic64_sub_return(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_sub_return_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_sub_return atomic64_sub_return +#endif + +#endif /* atomic64_sub_return_relaxed */ + +#ifndef atomic64_fetch_sub_relaxed +#define atomic64_fetch_sub_acquire atomic64_fetch_sub +#define atomic64_fetch_sub_release atomic64_fetch_sub +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub +#else /* atomic64_fetch_sub_relaxed */ + +#ifndef atomic64_fetch_sub_acquire +static inline s64 +atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + s64 ret = atomic64_fetch_sub_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire +#endif + +#ifndef atomic64_fetch_sub_release +static inline s64 +atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_sub_relaxed(i, v); +} +#define atomic64_fetch_sub_release atomic64_fetch_sub_release +#endif + +#ifndef atomic64_fetch_sub +static inline s64 +atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_sub_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_sub 
atomic64_fetch_sub +#endif + +#endif /* atomic64_fetch_sub_relaxed */ + +#ifndef atomic64_inc +static inline void +atomic64_inc(atomic64_t *v) +{ + atomic64_add(1, v); +} +#define atomic64_inc atomic64_inc +#endif + +#ifndef atomic64_inc_return_relaxed +#ifdef atomic64_inc_return +#define atomic64_inc_return_acquire atomic64_inc_return +#define atomic64_inc_return_release atomic64_inc_return +#define atomic64_inc_return_relaxed atomic64_inc_return +#endif /* atomic64_inc_return */ + +#ifndef atomic64_inc_return +static inline s64 +atomic64_inc_return(atomic64_t *v) +{ + return atomic64_add_return(1, v); +} +#define atomic64_inc_return atomic64_inc_return +#endif + +#ifndef atomic64_inc_return_acquire +static inline s64 +atomic64_inc_return_acquire(atomic64_t *v) +{ + return atomic64_add_return_acquire(1, v); +} +#define atomic64_inc_return_acquire atomic64_inc_return_acquire +#endif + +#ifndef atomic64_inc_return_release +static inline s64 +atomic64_inc_return_release(atomic64_t *v) +{ + return atomic64_add_return_release(1, v); +} +#define atomic64_inc_return_release atomic64_inc_return_release +#endif + +#ifndef atomic64_inc_return_relaxed +static inline s64 +atomic64_inc_return_relaxed(atomic64_t *v) +{ + return atomic64_add_return_relaxed(1, v); +} +#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed +#endif + +#else /* atomic64_inc_return_relaxed */ + +#ifndef atomic64_inc_return_acquire +static inline s64 +atomic64_inc_return_acquire(atomic64_t *v) +{ + s64 ret = atomic64_inc_return_relaxed(v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_inc_return_acquire atomic64_inc_return_acquire +#endif + +#ifndef atomic64_inc_return_release +static inline s64 +atomic64_inc_return_release(atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_inc_return_relaxed(v); +} +#define atomic64_inc_return_release atomic64_inc_return_release +#endif + +#ifndef atomic64_inc_return +static inline s64 +atomic64_inc_return(atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_inc_return_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_inc_return atomic64_inc_return +#endif + +#endif /* atomic64_inc_return_relaxed */ + +#ifndef atomic64_fetch_inc_relaxed +#ifdef atomic64_fetch_inc +#define atomic64_fetch_inc_acquire atomic64_fetch_inc +#define atomic64_fetch_inc_release atomic64_fetch_inc +#define atomic64_fetch_inc_relaxed atomic64_fetch_inc +#endif /* atomic64_fetch_inc */ + +#ifndef atomic64_fetch_inc +static inline s64 +atomic64_fetch_inc(atomic64_t *v) +{ + return atomic64_fetch_add(1, v); +} +#define atomic64_fetch_inc atomic64_fetch_inc +#endif + +#ifndef atomic64_fetch_inc_acquire +static inline s64 +atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return atomic64_fetch_add_acquire(1, v); +} +#define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire +#endif + +#ifndef atomic64_fetch_inc_release +static inline s64 +atomic64_fetch_inc_release(atomic64_t *v) +{ + return atomic64_fetch_add_release(1, v); +} +#define atomic64_fetch_inc_release atomic64_fetch_inc_release +#endif + +#ifndef atomic64_fetch_inc_relaxed +static inline s64 +atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return atomic64_fetch_add_relaxed(1, v); +} +#define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed +#endif + +#else /* atomic64_fetch_inc_relaxed */ + +#ifndef atomic64_fetch_inc_acquire +static inline s64 +atomic64_fetch_inc_acquire(atomic64_t *v) +{ + s64 ret = atomic64_fetch_inc_relaxed(v); + 
__atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire +#endif + +#ifndef atomic64_fetch_inc_release +static inline s64 +atomic64_fetch_inc_release(atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_inc_relaxed(v); +} +#define atomic64_fetch_inc_release atomic64_fetch_inc_release +#endif + +#ifndef atomic64_fetch_inc +static inline s64 +atomic64_fetch_inc(atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_inc_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_inc atomic64_fetch_inc +#endif + +#endif /* atomic64_fetch_inc_relaxed */ + +#ifndef atomic64_dec +static inline void +atomic64_dec(atomic64_t *v) +{ + atomic64_sub(1, v); +} +#define atomic64_dec atomic64_dec +#endif + +#ifndef atomic64_dec_return_relaxed +#ifdef atomic64_dec_return +#define atomic64_dec_return_acquire atomic64_dec_return +#define atomic64_dec_return_release atomic64_dec_return +#define atomic64_dec_return_relaxed atomic64_dec_return +#endif /* atomic64_dec_return */ + +#ifndef atomic64_dec_return +static inline s64 +atomic64_dec_return(atomic64_t *v) +{ + return atomic64_sub_return(1, v); +} +#define atomic64_dec_return atomic64_dec_return +#endif + +#ifndef atomic64_dec_return_acquire +static inline s64 +atomic64_dec_return_acquire(atomic64_t *v) +{ + return atomic64_sub_return_acquire(1, v); +} +#define atomic64_dec_return_acquire atomic64_dec_return_acquire +#endif + +#ifndef atomic64_dec_return_release +static inline s64 +atomic64_dec_return_release(atomic64_t *v) +{ + return atomic64_sub_return_release(1, v); +} +#define atomic64_dec_return_release atomic64_dec_return_release +#endif + +#ifndef atomic64_dec_return_relaxed +static inline s64 +atomic64_dec_return_relaxed(atomic64_t *v) +{ + return atomic64_sub_return_relaxed(1, v); +} +#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed +#endif + +#else /* atomic64_dec_return_relaxed */ + +#ifndef atomic64_dec_return_acquire +static inline s64 +atomic64_dec_return_acquire(atomic64_t *v) +{ + s64 ret = atomic64_dec_return_relaxed(v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_dec_return_acquire atomic64_dec_return_acquire +#endif + +#ifndef atomic64_dec_return_release +static inline s64 +atomic64_dec_return_release(atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_dec_return_relaxed(v); +} +#define atomic64_dec_return_release atomic64_dec_return_release +#endif + +#ifndef atomic64_dec_return +static inline s64 +atomic64_dec_return(atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_dec_return_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_dec_return atomic64_dec_return +#endif + +#endif /* atomic64_dec_return_relaxed */ + +#ifndef atomic64_fetch_dec_relaxed +#ifdef atomic64_fetch_dec +#define atomic64_fetch_dec_acquire atomic64_fetch_dec +#define atomic64_fetch_dec_release atomic64_fetch_dec +#define atomic64_fetch_dec_relaxed atomic64_fetch_dec +#endif /* atomic64_fetch_dec */ + +#ifndef atomic64_fetch_dec +static inline s64 +atomic64_fetch_dec(atomic64_t *v) +{ + return atomic64_fetch_sub(1, v); +} +#define atomic64_fetch_dec atomic64_fetch_dec +#endif + +#ifndef atomic64_fetch_dec_acquire +static inline s64 +atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return atomic64_fetch_sub_acquire(1, v); +} +#define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire +#endif + +#ifndef atomic64_fetch_dec_release +static 
inline s64 +atomic64_fetch_dec_release(atomic64_t *v) +{ + return atomic64_fetch_sub_release(1, v); +} +#define atomic64_fetch_dec_release atomic64_fetch_dec_release +#endif + +#ifndef atomic64_fetch_dec_relaxed +static inline s64 +atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return atomic64_fetch_sub_relaxed(1, v); +} +#define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed +#endif + +#else /* atomic64_fetch_dec_relaxed */ + +#ifndef atomic64_fetch_dec_acquire +static inline s64 +atomic64_fetch_dec_acquire(atomic64_t *v) +{ + s64 ret = atomic64_fetch_dec_relaxed(v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire +#endif + +#ifndef atomic64_fetch_dec_release +static inline s64 +atomic64_fetch_dec_release(atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_dec_relaxed(v); +} +#define atomic64_fetch_dec_release atomic64_fetch_dec_release +#endif + +#ifndef atomic64_fetch_dec +static inline s64 +atomic64_fetch_dec(atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_dec_relaxed(v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_dec atomic64_fetch_dec +#endif + +#endif /* atomic64_fetch_dec_relaxed */ + +#ifndef atomic64_fetch_and_relaxed +#define atomic64_fetch_and_acquire atomic64_fetch_and +#define atomic64_fetch_and_release atomic64_fetch_and +#define atomic64_fetch_and_relaxed atomic64_fetch_and +#else /* atomic64_fetch_and_relaxed */ + +#ifndef atomic64_fetch_and_acquire +static inline s64 +atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + s64 ret = atomic64_fetch_and_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire +#endif + +#ifndef atomic64_fetch_and_release +static inline s64 +atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_and_relaxed(i, v); +} +#define atomic64_fetch_and_release atomic64_fetch_and_release +#endif + +#ifndef atomic64_fetch_and +static inline s64 +atomic64_fetch_and(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_and_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_and atomic64_fetch_and +#endif + +#endif /* atomic64_fetch_and_relaxed */ + +#ifndef atomic64_andnot +static inline void +atomic64_andnot(s64 i, atomic64_t *v) +{ + atomic64_and(~i, v); +} +#define atomic64_andnot atomic64_andnot +#endif + +#ifndef atomic64_fetch_andnot_relaxed +#ifdef atomic64_fetch_andnot +#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot +#define atomic64_fetch_andnot_release atomic64_fetch_andnot +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot +#endif /* atomic64_fetch_andnot */ + +#ifndef atomic64_fetch_andnot +static inline s64 +atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and(~i, v); +} +#define atomic64_fetch_andnot atomic64_fetch_andnot +#endif + +#ifndef atomic64_fetch_andnot_acquire +static inline s64 +atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_acquire(~i, v); +} +#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire +#endif + +#ifndef atomic64_fetch_andnot_release +static inline s64 +atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_release(~i, v); +} +#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release +#endif + +#ifndef atomic64_fetch_andnot_relaxed +static 
inline s64 +atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_relaxed(~i, v); +} +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#endif + +#else /* atomic64_fetch_andnot_relaxed */ + +#ifndef atomic64_fetch_andnot_acquire +static inline s64 +atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + s64 ret = atomic64_fetch_andnot_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire +#endif + +#ifndef atomic64_fetch_andnot_release +static inline s64 +atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_andnot_relaxed(i, v); +} +#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release +#endif + +#ifndef atomic64_fetch_andnot +static inline s64 +atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_andnot_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_andnot atomic64_fetch_andnot +#endif + +#endif /* atomic64_fetch_andnot_relaxed */ + +#ifndef atomic64_fetch_or_relaxed +#define atomic64_fetch_or_acquire atomic64_fetch_or +#define atomic64_fetch_or_release atomic64_fetch_or +#define atomic64_fetch_or_relaxed atomic64_fetch_or +#else /* atomic64_fetch_or_relaxed */ + +#ifndef atomic64_fetch_or_acquire +static inline s64 +atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + s64 ret = atomic64_fetch_or_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire +#endif + +#ifndef atomic64_fetch_or_release +static inline s64 +atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_or_relaxed(i, v); +} +#define atomic64_fetch_or_release atomic64_fetch_or_release +#endif + +#ifndef atomic64_fetch_or +static inline s64 +atomic64_fetch_or(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_or_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_or atomic64_fetch_or +#endif + +#endif /* atomic64_fetch_or_relaxed */ + +#ifndef atomic64_fetch_xor_relaxed +#define atomic64_fetch_xor_acquire atomic64_fetch_xor +#define atomic64_fetch_xor_release atomic64_fetch_xor +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor +#else /* atomic64_fetch_xor_relaxed */ + +#ifndef atomic64_fetch_xor_acquire +static inline s64 +atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + s64 ret = atomic64_fetch_xor_relaxed(i, v); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire +#endif + +#ifndef atomic64_fetch_xor_release +static inline s64 +atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + __atomic_mb__before_release(); + return atomic64_fetch_xor_relaxed(i, v); +} +#define atomic64_fetch_xor_release atomic64_fetch_xor_release +#endif + +#ifndef atomic64_fetch_xor +static inline s64 +atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_fetch_xor_relaxed(i, v); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_fetch_xor atomic64_fetch_xor +#endif + +#endif /* atomic64_fetch_xor_relaxed */ + +#ifndef atomic64_xchg_relaxed +#define atomic64_xchg_acquire atomic64_xchg +#define atomic64_xchg_release atomic64_xchg +#define atomic64_xchg_relaxed atomic64_xchg +#else /* atomic64_xchg_relaxed */ + +#ifndef 
atomic64_xchg_acquire +static inline s64 +atomic64_xchg_acquire(atomic64_t *v, s64 i) +{ + s64 ret = atomic64_xchg_relaxed(v, i); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_xchg_acquire atomic64_xchg_acquire +#endif + +#ifndef atomic64_xchg_release +static inline s64 +atomic64_xchg_release(atomic64_t *v, s64 i) +{ + __atomic_mb__before_release(); + return atomic64_xchg_relaxed(v, i); +} +#define atomic64_xchg_release atomic64_xchg_release +#endif + +#ifndef atomic64_xchg +static inline s64 +atomic64_xchg(atomic64_t *v, s64 i) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_xchg_relaxed(v, i); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_xchg atomic64_xchg +#endif + +#endif /* atomic64_xchg_relaxed */ + +#ifndef atomic64_cmpxchg_relaxed +#define atomic64_cmpxchg_acquire atomic64_cmpxchg +#define atomic64_cmpxchg_release atomic64_cmpxchg +#define atomic64_cmpxchg_relaxed atomic64_cmpxchg +#else /* atomic64_cmpxchg_relaxed */ + +#ifndef atomic64_cmpxchg_acquire +static inline s64 +atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + s64 ret = atomic64_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire +#endif + +#ifndef atomic64_cmpxchg_release +static inline s64 +atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + __atomic_mb__before_release(); + return atomic64_cmpxchg_relaxed(v, old, new); +} +#define atomic64_cmpxchg_release atomic64_cmpxchg_release +#endif + +#ifndef atomic64_cmpxchg +static inline s64 +atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + s64 ret; + __atomic_mb__before_fence(); + ret = atomic64_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_cmpxchg atomic64_cmpxchg +#endif + +#endif /* atomic64_cmpxchg_relaxed */ + +#ifndef atomic64_try_cmpxchg_relaxed +#ifdef atomic64_try_cmpxchg +#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg +#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg +#define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg +#endif /* atomic64_try_cmpxchg */ + +#ifndef atomic64_try_cmpxchg +static inline bool +atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + s64 r, o = *old; + r = atomic64_cmpxchg(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} +#define atomic64_try_cmpxchg atomic64_try_cmpxchg +#endif + +#ifndef atomic64_try_cmpxchg_acquire +static inline bool +atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + s64 r, o = *old; + r = atomic64_cmpxchg_acquire(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} +#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire +#endif + +#ifndef atomic64_try_cmpxchg_release +static inline bool +atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + s64 r, o = *old; + r = atomic64_cmpxchg_release(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} +#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release +#endif + +#ifndef atomic64_try_cmpxchg_relaxed +static inline bool +atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +{ + s64 r, o = *old; + r = atomic64_cmpxchg_relaxed(v, o, new); + if (unlikely(r != o)) + *old = r; + return likely(r == o); +} +#define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed +#endif + +#else /* atomic64_try_cmpxchg_relaxed */ + +#ifndef atomic64_try_cmpxchg_acquire +static inline bool 
+atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + bool ret = atomic64_try_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_acquire(); + return ret; +} +#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire +#endif + +#ifndef atomic64_try_cmpxchg_release +static inline bool +atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + __atomic_mb__before_release(); + return atomic64_try_cmpxchg_relaxed(v, old, new); +} +#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release +#endif + +#ifndef atomic64_try_cmpxchg +static inline bool +atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + bool ret; + __atomic_mb__before_fence(); + ret = atomic64_try_cmpxchg_relaxed(v, old, new); + __atomic_mb__after_fence(); + return ret; +} +#define atomic64_try_cmpxchg atomic64_try_cmpxchg +#endif + +#endif /* atomic64_try_cmpxchg_relaxed */ + +#ifndef atomic64_sub_and_test +/** + * atomic64_sub_and_test - subtract value from variable and test result + * @i: integer value to subtract + * @v: pointer of type atomic64_t + * + * Atomically subtracts @i from @v and returns + * true if the result is zero, or false for all + * other cases. + */ +static inline bool +atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return atomic64_sub_return(i, v) == 0; +} +#define atomic64_sub_and_test atomic64_sub_and_test +#endif + +#ifndef atomic64_dec_and_test +/** + * atomic64_dec_and_test - decrement and test + * @v: pointer of type atomic64_t + * + * Atomically decrements @v by 1 and + * returns true if the result is 0, or false for all other + * cases. + */ +static inline bool +atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_return(v) == 0; +} +#define atomic64_dec_and_test atomic64_dec_and_test +#endif + +#ifndef atomic64_inc_and_test +/** + * atomic64_inc_and_test - increment and test + * @v: pointer of type atomic64_t + * + * Atomically increments @v by 1 + * and returns true if the result is zero, or false for all + * other cases. + */ +static inline bool +atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_return(v) == 0; +} +#define atomic64_inc_and_test atomic64_inc_and_test +#endif + +#ifndef atomic64_add_negative +/** + * atomic64_add_negative - add and test if negative + * @i: integer value to add + * @v: pointer of type atomic64_t + * + * Atomically adds @i to @v and returns true + * if the result is negative, or false when + * result is greater than or equal to zero. + */ +static inline bool +atomic64_add_negative(s64 i, atomic64_t *v) +{ + return atomic64_add_return(i, v) < 0; +} +#define atomic64_add_negative atomic64_add_negative +#endif + +#ifndef atomic64_fetch_add_unless +/** + * atomic64_fetch_add_unless - add unless the number is already a given value + * @v: pointer of type atomic64_t + * @a: the amount to add to v... + * @u: ...unless v is equal to u. + * + * Atomically adds @a to @v, so long as @v was not already @u. + * Returns original value of @v + */ +static inline s64 +atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + s64 c = atomic64_read(v); + + do { + if (unlikely(c == u)) + break; + } while (!atomic64_try_cmpxchg(v, &c, c + a)); + + return c; +} +#define atomic64_fetch_add_unless atomic64_fetch_add_unless +#endif + +#ifndef atomic64_add_unless +/** + * atomic64_add_unless - add unless the number is already a given value + * @v: pointer of type atomic64_t + * @a: the amount to add to v... + * @u: ...unless v is equal to u. + * + * Atomically adds @a to @v, so long as @v was not already @u. 
+ * Returns non-zero if @v was not @u, and zero otherwise. + */ +static inline bool +atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + return atomic64_fetch_add_unless(v, a, u) != u; +} +#define atomic64_add_unless atomic64_add_unless +#endif + +#ifndef atomic64_inc_not_zero +/** + * atomic64_inc_not_zero - increment unless the number is zero + * @v: pointer of type atomic64_t + * + * Atomically increments @v by 1, so long as @v is non-zero. + * Returns non-zero if @v was non-zero, and zero otherwise. + */ +static inline bool +atomic64_inc_not_zero(atomic64_t *v) +{ + return atomic64_add_unless(v, 1, 0); +} +#define atomic64_inc_not_zero atomic64_inc_not_zero +#endif + +#ifndef atomic64_inc_unless_negative +static inline bool +atomic64_inc_unless_negative(atomic64_t *v) +{ + s64 c = atomic64_read(v); + + do { + if (c < 0) + return false; + } while (!atomic64_try_cmpxchg(v, &c, c + 1)); + + return true; +} +#define atomic64_inc_unless_negative atomic64_inc_unless_negative +#endif + +#ifndef atomic64_dec_unless_positive +static inline bool +atomic64_dec_unless_positive(atomic64_t *v) +{ + s64 c = atomic64_read(v); + + do { + if (c > 0) + return false; + } while (!atomic64_try_cmpxchg(v, &c, c - 1)); + + return true; +} +#define atomic64_dec_unless_positive atomic64_dec_unless_positive +#endif + +#ifndef atomic64_dec_if_positive +static inline s64 +atomic64_dec_if_positive(atomic64_t *v) +{ + s64 dec, c = atomic64_read(v); + + do { + dec = c - 1; + if (dec < 0) + break; + } while (!atomic64_try_cmpxchg(v, &c, dec)); + + return dec; +} +#define atomic64_dec_if_positive atomic64_dec_if_positive +#endif + +#define atomic64_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c)) + +#endif /* _LINUX_ATOMIC_FALLBACK_H */ diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 28c4aa3c1a4c..9d42786ac939 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -23,14 +23,6 @@ * See Documentation/memory-barriers.txt for ACQUIRE/RELEASE definitions. */ -#ifndef atomic_read_acquire -#define atomic_read_acquire(v) smp_load_acquire(&(v)->counter) -#endif - -#ifndef atomic_set_release -#define atomic_set_release(v, i) smp_store_release(&(v)->counter, (i)) -#endif - /* * The idea here is to build acquire/release variants by adding explicit * barriers on top of the relaxed variant. In the case where the relaxed @@ -146,1203 +138,7 @@ #endif #endif /* xchg_relaxed */ -/* atomic_add_return_relaxed */ -#ifndef atomic_add_return_relaxed -#define atomic_add_return_relaxed atomic_add_return -#define atomic_add_return_acquire atomic_add_return -#define atomic_add_return_release atomic_add_return - -#else /* atomic_add_return_relaxed */ - -#ifndef atomic_add_return_acquire -#define atomic_add_return_acquire(...) \ - __atomic_op_acquire(atomic_add_return, __VA_ARGS__) -#endif - -#ifndef atomic_add_return_release -#define atomic_add_return_release(...) \ - __atomic_op_release(atomic_add_return, __VA_ARGS__) -#endif - -#ifndef atomic_add_return -#define atomic_add_return(...) 
\ - __atomic_op_fence(atomic_add_return, __VA_ARGS__) -#endif -#endif /* atomic_add_return_relaxed */ - -#ifndef atomic_inc -#define atomic_inc(v) atomic_add(1, (v)) -#endif - -/* atomic_inc_return_relaxed */ -#ifndef atomic_inc_return_relaxed - -#ifndef atomic_inc_return -#define atomic_inc_return(v) atomic_add_return(1, (v)) -#define atomic_inc_return_relaxed(v) atomic_add_return_relaxed(1, (v)) -#define atomic_inc_return_acquire(v) atomic_add_return_acquire(1, (v)) -#define atomic_inc_return_release(v) atomic_add_return_release(1, (v)) -#else /* atomic_inc_return */ -#define atomic_inc_return_relaxed atomic_inc_return -#define atomic_inc_return_acquire atomic_inc_return -#define atomic_inc_return_release atomic_inc_return -#endif /* atomic_inc_return */ - -#else /* atomic_inc_return_relaxed */ - -#ifndef atomic_inc_return_acquire -#define atomic_inc_return_acquire(...) \ - __atomic_op_acquire(atomic_inc_return, __VA_ARGS__) -#endif - -#ifndef atomic_inc_return_release -#define atomic_inc_return_release(...) \ - __atomic_op_release(atomic_inc_return, __VA_ARGS__) -#endif - -#ifndef atomic_inc_return -#define atomic_inc_return(...) \ - __atomic_op_fence(atomic_inc_return, __VA_ARGS__) -#endif -#endif /* atomic_inc_return_relaxed */ - -/* atomic_sub_return_relaxed */ -#ifndef atomic_sub_return_relaxed -#define atomic_sub_return_relaxed atomic_sub_return -#define atomic_sub_return_acquire atomic_sub_return -#define atomic_sub_return_release atomic_sub_return - -#else /* atomic_sub_return_relaxed */ - -#ifndef atomic_sub_return_acquire -#define atomic_sub_return_acquire(...) \ - __atomic_op_acquire(atomic_sub_return, __VA_ARGS__) -#endif - -#ifndef atomic_sub_return_release -#define atomic_sub_return_release(...) \ - __atomic_op_release(atomic_sub_return, __VA_ARGS__) -#endif - -#ifndef atomic_sub_return -#define atomic_sub_return(...) \ - __atomic_op_fence(atomic_sub_return, __VA_ARGS__) -#endif -#endif /* atomic_sub_return_relaxed */ - -#ifndef atomic_dec -#define atomic_dec(v) atomic_sub(1, (v)) -#endif - -/* atomic_dec_return_relaxed */ -#ifndef atomic_dec_return_relaxed - -#ifndef atomic_dec_return -#define atomic_dec_return(v) atomic_sub_return(1, (v)) -#define atomic_dec_return_relaxed(v) atomic_sub_return_relaxed(1, (v)) -#define atomic_dec_return_acquire(v) atomic_sub_return_acquire(1, (v)) -#define atomic_dec_return_release(v) atomic_sub_return_release(1, (v)) -#else /* atomic_dec_return */ -#define atomic_dec_return_relaxed atomic_dec_return -#define atomic_dec_return_acquire atomic_dec_return -#define atomic_dec_return_release atomic_dec_return -#endif /* atomic_dec_return */ - -#else /* atomic_dec_return_relaxed */ - -#ifndef atomic_dec_return_acquire -#define atomic_dec_return_acquire(...) \ - __atomic_op_acquire(atomic_dec_return, __VA_ARGS__) -#endif - -#ifndef atomic_dec_return_release -#define atomic_dec_return_release(...) \ - __atomic_op_release(atomic_dec_return, __VA_ARGS__) -#endif - -#ifndef atomic_dec_return -#define atomic_dec_return(...) \ - __atomic_op_fence(atomic_dec_return, __VA_ARGS__) -#endif -#endif /* atomic_dec_return_relaxed */ - - -/* atomic_fetch_add_relaxed */ -#ifndef atomic_fetch_add_relaxed -#define atomic_fetch_add_relaxed atomic_fetch_add -#define atomic_fetch_add_acquire atomic_fetch_add -#define atomic_fetch_add_release atomic_fetch_add - -#else /* atomic_fetch_add_relaxed */ - -#ifndef atomic_fetch_add_acquire -#define atomic_fetch_add_acquire(...) 
\ - __atomic_op_acquire(atomic_fetch_add, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_add_release -#define atomic_fetch_add_release(...) \ - __atomic_op_release(atomic_fetch_add, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_add -#define atomic_fetch_add(...) \ - __atomic_op_fence(atomic_fetch_add, __VA_ARGS__) -#endif -#endif /* atomic_fetch_add_relaxed */ - -/* atomic_fetch_inc_relaxed */ -#ifndef atomic_fetch_inc_relaxed - -#ifndef atomic_fetch_inc -#define atomic_fetch_inc(v) atomic_fetch_add(1, (v)) -#define atomic_fetch_inc_relaxed(v) atomic_fetch_add_relaxed(1, (v)) -#define atomic_fetch_inc_acquire(v) atomic_fetch_add_acquire(1, (v)) -#define atomic_fetch_inc_release(v) atomic_fetch_add_release(1, (v)) -#else /* atomic_fetch_inc */ -#define atomic_fetch_inc_relaxed atomic_fetch_inc -#define atomic_fetch_inc_acquire atomic_fetch_inc -#define atomic_fetch_inc_release atomic_fetch_inc -#endif /* atomic_fetch_inc */ - -#else /* atomic_fetch_inc_relaxed */ - -#ifndef atomic_fetch_inc_acquire -#define atomic_fetch_inc_acquire(...) \ - __atomic_op_acquire(atomic_fetch_inc, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_inc_release -#define atomic_fetch_inc_release(...) \ - __atomic_op_release(atomic_fetch_inc, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_inc -#define atomic_fetch_inc(...) \ - __atomic_op_fence(atomic_fetch_inc, __VA_ARGS__) -#endif -#endif /* atomic_fetch_inc_relaxed */ - -/* atomic_fetch_sub_relaxed */ -#ifndef atomic_fetch_sub_relaxed -#define atomic_fetch_sub_relaxed atomic_fetch_sub -#define atomic_fetch_sub_acquire atomic_fetch_sub -#define atomic_fetch_sub_release atomic_fetch_sub - -#else /* atomic_fetch_sub_relaxed */ - -#ifndef atomic_fetch_sub_acquire -#define atomic_fetch_sub_acquire(...) \ - __atomic_op_acquire(atomic_fetch_sub, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_sub_release -#define atomic_fetch_sub_release(...) \ - __atomic_op_release(atomic_fetch_sub, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_sub -#define atomic_fetch_sub(...) \ - __atomic_op_fence(atomic_fetch_sub, __VA_ARGS__) -#endif -#endif /* atomic_fetch_sub_relaxed */ - -/* atomic_fetch_dec_relaxed */ -#ifndef atomic_fetch_dec_relaxed - -#ifndef atomic_fetch_dec -#define atomic_fetch_dec(v) atomic_fetch_sub(1, (v)) -#define atomic_fetch_dec_relaxed(v) atomic_fetch_sub_relaxed(1, (v)) -#define atomic_fetch_dec_acquire(v) atomic_fetch_sub_acquire(1, (v)) -#define atomic_fetch_dec_release(v) atomic_fetch_sub_release(1, (v)) -#else /* atomic_fetch_dec */ -#define atomic_fetch_dec_relaxed atomic_fetch_dec -#define atomic_fetch_dec_acquire atomic_fetch_dec -#define atomic_fetch_dec_release atomic_fetch_dec -#endif /* atomic_fetch_dec */ - -#else /* atomic_fetch_dec_relaxed */ - -#ifndef atomic_fetch_dec_acquire -#define atomic_fetch_dec_acquire(...) \ - __atomic_op_acquire(atomic_fetch_dec, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_dec_release -#define atomic_fetch_dec_release(...) \ - __atomic_op_release(atomic_fetch_dec, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_dec -#define atomic_fetch_dec(...) \ - __atomic_op_fence(atomic_fetch_dec, __VA_ARGS__) -#endif -#endif /* atomic_fetch_dec_relaxed */ - -/* atomic_fetch_or_relaxed */ -#ifndef atomic_fetch_or_relaxed -#define atomic_fetch_or_relaxed atomic_fetch_or -#define atomic_fetch_or_acquire atomic_fetch_or -#define atomic_fetch_or_release atomic_fetch_or - -#else /* atomic_fetch_or_relaxed */ - -#ifndef atomic_fetch_or_acquire -#define atomic_fetch_or_acquire(...) 
\ - __atomic_op_acquire(atomic_fetch_or, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_or_release -#define atomic_fetch_or_release(...) \ - __atomic_op_release(atomic_fetch_or, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_or -#define atomic_fetch_or(...) \ - __atomic_op_fence(atomic_fetch_or, __VA_ARGS__) -#endif -#endif /* atomic_fetch_or_relaxed */ - -/* atomic_fetch_and_relaxed */ -#ifndef atomic_fetch_and_relaxed -#define atomic_fetch_and_relaxed atomic_fetch_and -#define atomic_fetch_and_acquire atomic_fetch_and -#define atomic_fetch_and_release atomic_fetch_and - -#else /* atomic_fetch_and_relaxed */ - -#ifndef atomic_fetch_and_acquire -#define atomic_fetch_and_acquire(...) \ - __atomic_op_acquire(atomic_fetch_and, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_and_release -#define atomic_fetch_and_release(...) \ - __atomic_op_release(atomic_fetch_and, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_and -#define atomic_fetch_and(...) \ - __atomic_op_fence(atomic_fetch_and, __VA_ARGS__) -#endif -#endif /* atomic_fetch_and_relaxed */ - -#ifdef atomic_andnot -/* atomic_fetch_andnot_relaxed */ -#ifndef atomic_fetch_andnot_relaxed -#define atomic_fetch_andnot_relaxed atomic_fetch_andnot -#define atomic_fetch_andnot_acquire atomic_fetch_andnot -#define atomic_fetch_andnot_release atomic_fetch_andnot - -#else /* atomic_fetch_andnot_relaxed */ - -#ifndef atomic_fetch_andnot_acquire -#define atomic_fetch_andnot_acquire(...) \ - __atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_andnot_release -#define atomic_fetch_andnot_release(...) \ - __atomic_op_release(atomic_fetch_andnot, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_andnot -#define atomic_fetch_andnot(...) \ - __atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__) -#endif -#endif /* atomic_fetch_andnot_relaxed */ -#endif /* atomic_andnot */ - -/* atomic_fetch_xor_relaxed */ -#ifndef atomic_fetch_xor_relaxed -#define atomic_fetch_xor_relaxed atomic_fetch_xor -#define atomic_fetch_xor_acquire atomic_fetch_xor -#define atomic_fetch_xor_release atomic_fetch_xor - -#else /* atomic_fetch_xor_relaxed */ - -#ifndef atomic_fetch_xor_acquire -#define atomic_fetch_xor_acquire(...) \ - __atomic_op_acquire(atomic_fetch_xor, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_xor_release -#define atomic_fetch_xor_release(...) \ - __atomic_op_release(atomic_fetch_xor, __VA_ARGS__) -#endif - -#ifndef atomic_fetch_xor -#define atomic_fetch_xor(...) \ - __atomic_op_fence(atomic_fetch_xor, __VA_ARGS__) -#endif -#endif /* atomic_fetch_xor_relaxed */ - - -/* atomic_xchg_relaxed */ -#ifndef atomic_xchg_relaxed -#define atomic_xchg_relaxed atomic_xchg -#define atomic_xchg_acquire atomic_xchg -#define atomic_xchg_release atomic_xchg - -#else /* atomic_xchg_relaxed */ - -#ifndef atomic_xchg_acquire -#define atomic_xchg_acquire(...) \ - __atomic_op_acquire(atomic_xchg, __VA_ARGS__) -#endif - -#ifndef atomic_xchg_release -#define atomic_xchg_release(...) \ - __atomic_op_release(atomic_xchg, __VA_ARGS__) -#endif - -#ifndef atomic_xchg -#define atomic_xchg(...) \ - __atomic_op_fence(atomic_xchg, __VA_ARGS__) -#endif -#endif /* atomic_xchg_relaxed */ - -/* atomic_cmpxchg_relaxed */ -#ifndef atomic_cmpxchg_relaxed -#define atomic_cmpxchg_relaxed atomic_cmpxchg -#define atomic_cmpxchg_acquire atomic_cmpxchg -#define atomic_cmpxchg_release atomic_cmpxchg - -#else /* atomic_cmpxchg_relaxed */ - -#ifndef atomic_cmpxchg_acquire -#define atomic_cmpxchg_acquire(...) 
\ - __atomic_op_acquire(atomic_cmpxchg, __VA_ARGS__) -#endif - -#ifndef atomic_cmpxchg_release -#define atomic_cmpxchg_release(...) \ - __atomic_op_release(atomic_cmpxchg, __VA_ARGS__) -#endif - -#ifndef atomic_cmpxchg -#define atomic_cmpxchg(...) \ - __atomic_op_fence(atomic_cmpxchg, __VA_ARGS__) -#endif -#endif /* atomic_cmpxchg_relaxed */ - -#ifndef atomic_try_cmpxchg - -#define __atomic_try_cmpxchg(type, _p, _po, _n) \ -({ \ - typeof(_po) __po = (_po); \ - typeof(*(_po)) __r, __o = *__po; \ - __r = atomic_cmpxchg##type((_p), __o, (_n)); \ - if (unlikely(__r != __o)) \ - *__po = __r; \ - likely(__r == __o); \ -}) - -#define atomic_try_cmpxchg(_p, _po, _n) __atomic_try_cmpxchg(, _p, _po, _n) -#define atomic_try_cmpxchg_relaxed(_p, _po, _n) __atomic_try_cmpxchg(_relaxed, _p, _po, _n) -#define atomic_try_cmpxchg_acquire(_p, _po, _n) __atomic_try_cmpxchg(_acquire, _p, _po, _n) -#define atomic_try_cmpxchg_release(_p, _po, _n) __atomic_try_cmpxchg(_release, _p, _po, _n) - -#else /* atomic_try_cmpxchg */ -#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg -#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg -#define atomic_try_cmpxchg_release atomic_try_cmpxchg -#endif /* atomic_try_cmpxchg */ - -/** - * atomic_fetch_add_unless - add unless the number is already a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as @v was not already @u. - * Returns original value of @v - */ -#ifndef atomic_fetch_add_unless -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - int c = atomic_read(v); - - do { - if (unlikely(c == u)) - break; - } while (!atomic_try_cmpxchg(v, &c, c + a)); - - return c; -} -#endif - -/** - * atomic_add_unless - add unless the number is already a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as @v was not already @u. - * Returns non-zero if @v was not @u, and zero otherwise. - */ -static inline int atomic_add_unless(atomic_t *v, int a, int u) -{ - return atomic_fetch_add_unless(v, a, u) != u; -} - -/** - * atomic_inc_not_zero - increment unless the number is zero - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1, so long as @v is non-zero. - * Returns non-zero if @v was non-zero, and zero otherwise. - */ -#ifndef atomic_inc_not_zero -#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0) -#endif - -/** - * atomic_inc_and_test - increment and test - * @v: pointer of type atomic_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#ifndef atomic_inc_and_test -static inline bool atomic_inc_and_test(atomic_t *v) -{ - return atomic_inc_return(v) == 0; -} -#endif - -/** - * atomic_dec_and_test - decrement and test - * @v: pointer of type atomic_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#ifndef atomic_dec_and_test -static inline bool atomic_dec_and_test(atomic_t *v) -{ - return atomic_dec_return(v) == 0; -} -#endif - -/** - * atomic_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. 
- */
-#ifndef atomic_sub_and_test
-static inline bool atomic_sub_and_test(int i, atomic_t *v)
-{
-	return atomic_sub_return(i, v) == 0;
-}
-#endif
-
-/**
- * atomic_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
-#ifndef atomic_add_negative
-static inline bool atomic_add_negative(int i, atomic_t *v)
-{
-	return atomic_add_return(i, v) < 0;
-}
-#endif
-
-#ifndef atomic_andnot
-static inline void atomic_andnot(int i, atomic_t *v)
-{
-	atomic_and(~i, v);
-}
-
-static inline int atomic_fetch_andnot(int i, atomic_t *v)
-{
-	return atomic_fetch_and(~i, v);
-}
-
-static inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v)
-{
-	return atomic_fetch_and_relaxed(~i, v);
-}
-
-static inline int atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
-	return atomic_fetch_and_acquire(~i, v);
-}
-
-static inline int atomic_fetch_andnot_release(int i, atomic_t *v)
-{
-	return atomic_fetch_and_release(~i, v);
-}
-#endif
-
-#ifndef atomic_inc_unless_negative
-static inline bool atomic_inc_unless_negative(atomic_t *v)
-{
-	int c = atomic_read(v);
-
-	do {
-		if (unlikely(c < 0))
-			return false;
-	} while (!atomic_try_cmpxchg(v, &c, c + 1));
-
-	return true;
-}
-#endif
-
-#ifndef atomic_dec_unless_positive
-static inline bool atomic_dec_unless_positive(atomic_t *v)
-{
-	int c = atomic_read(v);
-
-	do {
-		if (unlikely(c > 0))
-			return false;
-	} while (!atomic_try_cmpxchg(v, &c, c - 1));
-
-	return true;
-}
-#endif
-
-/*
- * atomic_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
-#ifndef atomic_dec_if_positive
-static inline int atomic_dec_if_positive(atomic_t *v)
-{
-	int dec, c = atomic_read(v);
-
-	do {
-		dec = c - 1;
-		if (unlikely(dec < 0))
-			break;
-	} while (!atomic_try_cmpxchg(v, &c, dec));
-
-	return dec;
-}
-#endif
-
-#define atomic_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
-
-#ifdef CONFIG_GENERIC_ATOMIC64
-#include <asm-generic/atomic64.h>
-#endif
-
-#ifndef atomic64_read_acquire
-#define atomic64_read_acquire(v)	smp_load_acquire(&(v)->counter)
-#endif
-
-#ifndef atomic64_set_release
-#define atomic64_set_release(v, i)	smp_store_release(&(v)->counter, (i))
-#endif
-
-/* atomic64_add_return_relaxed */
-#ifndef atomic64_add_return_relaxed
-#define atomic64_add_return_relaxed	atomic64_add_return
-#define atomic64_add_return_acquire	atomic64_add_return
-#define atomic64_add_return_release	atomic64_add_return
-
-#else /* atomic64_add_return_relaxed */
-
-#ifndef atomic64_add_return_acquire
-#define atomic64_add_return_acquire(...)	\
-	__atomic_op_acquire(atomic64_add_return, __VA_ARGS__)
-#endif
-
-#ifndef atomic64_add_return_release
-#define atomic64_add_return_release(...)	\
-	__atomic_op_release(atomic64_add_return, __VA_ARGS__)
-#endif
-
-#ifndef atomic64_add_return
-#define atomic64_add_return(...)
\ - __atomic_op_fence(atomic64_add_return, __VA_ARGS__) -#endif -#endif /* atomic64_add_return_relaxed */ - -#ifndef atomic64_inc -#define atomic64_inc(v) atomic64_add(1, (v)) -#endif - -/* atomic64_inc_return_relaxed */ -#ifndef atomic64_inc_return_relaxed - -#ifndef atomic64_inc_return -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) -#define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1, (v)) -#define atomic64_inc_return_acquire(v) atomic64_add_return_acquire(1, (v)) -#define atomic64_inc_return_release(v) atomic64_add_return_release(1, (v)) -#else /* atomic64_inc_return */ -#define atomic64_inc_return_relaxed atomic64_inc_return -#define atomic64_inc_return_acquire atomic64_inc_return -#define atomic64_inc_return_release atomic64_inc_return -#endif /* atomic64_inc_return */ - -#else /* atomic64_inc_return_relaxed */ - -#ifndef atomic64_inc_return_acquire -#define atomic64_inc_return_acquire(...) \ - __atomic_op_acquire(atomic64_inc_return, __VA_ARGS__) -#endif - -#ifndef atomic64_inc_return_release -#define atomic64_inc_return_release(...) \ - __atomic_op_release(atomic64_inc_return, __VA_ARGS__) -#endif - -#ifndef atomic64_inc_return -#define atomic64_inc_return(...) \ - __atomic_op_fence(atomic64_inc_return, __VA_ARGS__) -#endif -#endif /* atomic64_inc_return_relaxed */ - - -/* atomic64_sub_return_relaxed */ -#ifndef atomic64_sub_return_relaxed -#define atomic64_sub_return_relaxed atomic64_sub_return -#define atomic64_sub_return_acquire atomic64_sub_return -#define atomic64_sub_return_release atomic64_sub_return - -#else /* atomic64_sub_return_relaxed */ - -#ifndef atomic64_sub_return_acquire -#define atomic64_sub_return_acquire(...) \ - __atomic_op_acquire(atomic64_sub_return, __VA_ARGS__) -#endif - -#ifndef atomic64_sub_return_release -#define atomic64_sub_return_release(...) \ - __atomic_op_release(atomic64_sub_return, __VA_ARGS__) -#endif - -#ifndef atomic64_sub_return -#define atomic64_sub_return(...) \ - __atomic_op_fence(atomic64_sub_return, __VA_ARGS__) -#endif -#endif /* atomic64_sub_return_relaxed */ - -#ifndef atomic64_dec -#define atomic64_dec(v) atomic64_sub(1, (v)) -#endif - -/* atomic64_dec_return_relaxed */ -#ifndef atomic64_dec_return_relaxed - -#ifndef atomic64_dec_return -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) -#define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1, (v)) -#define atomic64_dec_return_acquire(v) atomic64_sub_return_acquire(1, (v)) -#define atomic64_dec_return_release(v) atomic64_sub_return_release(1, (v)) -#else /* atomic64_dec_return */ -#define atomic64_dec_return_relaxed atomic64_dec_return -#define atomic64_dec_return_acquire atomic64_dec_return -#define atomic64_dec_return_release atomic64_dec_return -#endif /* atomic64_dec_return */ - -#else /* atomic64_dec_return_relaxed */ - -#ifndef atomic64_dec_return_acquire -#define atomic64_dec_return_acquire(...) \ - __atomic_op_acquire(atomic64_dec_return, __VA_ARGS__) -#endif - -#ifndef atomic64_dec_return_release -#define atomic64_dec_return_release(...) \ - __atomic_op_release(atomic64_dec_return, __VA_ARGS__) -#endif - -#ifndef atomic64_dec_return -#define atomic64_dec_return(...) 
\ - __atomic_op_fence(atomic64_dec_return, __VA_ARGS__) -#endif -#endif /* atomic64_dec_return_relaxed */ - - -/* atomic64_fetch_add_relaxed */ -#ifndef atomic64_fetch_add_relaxed -#define atomic64_fetch_add_relaxed atomic64_fetch_add -#define atomic64_fetch_add_acquire atomic64_fetch_add -#define atomic64_fetch_add_release atomic64_fetch_add - -#else /* atomic64_fetch_add_relaxed */ - -#ifndef atomic64_fetch_add_acquire -#define atomic64_fetch_add_acquire(...) \ - __atomic_op_acquire(atomic64_fetch_add, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_add_release -#define atomic64_fetch_add_release(...) \ - __atomic_op_release(atomic64_fetch_add, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_add -#define atomic64_fetch_add(...) \ - __atomic_op_fence(atomic64_fetch_add, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_add_relaxed */ - -/* atomic64_fetch_inc_relaxed */ -#ifndef atomic64_fetch_inc_relaxed - -#ifndef atomic64_fetch_inc -#define atomic64_fetch_inc(v) atomic64_fetch_add(1, (v)) -#define atomic64_fetch_inc_relaxed(v) atomic64_fetch_add_relaxed(1, (v)) -#define atomic64_fetch_inc_acquire(v) atomic64_fetch_add_acquire(1, (v)) -#define atomic64_fetch_inc_release(v) atomic64_fetch_add_release(1, (v)) -#else /* atomic64_fetch_inc */ -#define atomic64_fetch_inc_relaxed atomic64_fetch_inc -#define atomic64_fetch_inc_acquire atomic64_fetch_inc -#define atomic64_fetch_inc_release atomic64_fetch_inc -#endif /* atomic64_fetch_inc */ - -#else /* atomic64_fetch_inc_relaxed */ - -#ifndef atomic64_fetch_inc_acquire -#define atomic64_fetch_inc_acquire(...) \ - __atomic_op_acquire(atomic64_fetch_inc, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_inc_release -#define atomic64_fetch_inc_release(...) \ - __atomic_op_release(atomic64_fetch_inc, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_inc -#define atomic64_fetch_inc(...) \ - __atomic_op_fence(atomic64_fetch_inc, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_inc_relaxed */ - -/* atomic64_fetch_sub_relaxed */ -#ifndef atomic64_fetch_sub_relaxed -#define atomic64_fetch_sub_relaxed atomic64_fetch_sub -#define atomic64_fetch_sub_acquire atomic64_fetch_sub -#define atomic64_fetch_sub_release atomic64_fetch_sub - -#else /* atomic64_fetch_sub_relaxed */ - -#ifndef atomic64_fetch_sub_acquire -#define atomic64_fetch_sub_acquire(...) \ - __atomic_op_acquire(atomic64_fetch_sub, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_sub_release -#define atomic64_fetch_sub_release(...) \ - __atomic_op_release(atomic64_fetch_sub, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_sub -#define atomic64_fetch_sub(...) \ - __atomic_op_fence(atomic64_fetch_sub, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_sub_relaxed */ - -/* atomic64_fetch_dec_relaxed */ -#ifndef atomic64_fetch_dec_relaxed - -#ifndef atomic64_fetch_dec -#define atomic64_fetch_dec(v) atomic64_fetch_sub(1, (v)) -#define atomic64_fetch_dec_relaxed(v) atomic64_fetch_sub_relaxed(1, (v)) -#define atomic64_fetch_dec_acquire(v) atomic64_fetch_sub_acquire(1, (v)) -#define atomic64_fetch_dec_release(v) atomic64_fetch_sub_release(1, (v)) -#else /* atomic64_fetch_dec */ -#define atomic64_fetch_dec_relaxed atomic64_fetch_dec -#define atomic64_fetch_dec_acquire atomic64_fetch_dec -#define atomic64_fetch_dec_release atomic64_fetch_dec -#endif /* atomic64_fetch_dec */ - -#else /* atomic64_fetch_dec_relaxed */ - -#ifndef atomic64_fetch_dec_acquire -#define atomic64_fetch_dec_acquire(...) 
\ - __atomic_op_acquire(atomic64_fetch_dec, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_dec_release -#define atomic64_fetch_dec_release(...) \ - __atomic_op_release(atomic64_fetch_dec, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_dec -#define atomic64_fetch_dec(...) \ - __atomic_op_fence(atomic64_fetch_dec, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_dec_relaxed */ - -/* atomic64_fetch_or_relaxed */ -#ifndef atomic64_fetch_or_relaxed -#define atomic64_fetch_or_relaxed atomic64_fetch_or -#define atomic64_fetch_or_acquire atomic64_fetch_or -#define atomic64_fetch_or_release atomic64_fetch_or - -#else /* atomic64_fetch_or_relaxed */ - -#ifndef atomic64_fetch_or_acquire -#define atomic64_fetch_or_acquire(...) \ - __atomic_op_acquire(atomic64_fetch_or, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_or_release -#define atomic64_fetch_or_release(...) \ - __atomic_op_release(atomic64_fetch_or, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_or -#define atomic64_fetch_or(...) \ - __atomic_op_fence(atomic64_fetch_or, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_or_relaxed */ - -/* atomic64_fetch_and_relaxed */ -#ifndef atomic64_fetch_and_relaxed -#define atomic64_fetch_and_relaxed atomic64_fetch_and -#define atomic64_fetch_and_acquire atomic64_fetch_and -#define atomic64_fetch_and_release atomic64_fetch_and - -#else /* atomic64_fetch_and_relaxed */ - -#ifndef atomic64_fetch_and_acquire -#define atomic64_fetch_and_acquire(...) \ - __atomic_op_acquire(atomic64_fetch_and, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_and_release -#define atomic64_fetch_and_release(...) \ - __atomic_op_release(atomic64_fetch_and, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_and -#define atomic64_fetch_and(...) \ - __atomic_op_fence(atomic64_fetch_and, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_and_relaxed */ - -#ifdef atomic64_andnot -/* atomic64_fetch_andnot_relaxed */ -#ifndef atomic64_fetch_andnot_relaxed -#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot -#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot -#define atomic64_fetch_andnot_release atomic64_fetch_andnot - -#else /* atomic64_fetch_andnot_relaxed */ - -#ifndef atomic64_fetch_andnot_acquire -#define atomic64_fetch_andnot_acquire(...) \ - __atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_andnot_release -#define atomic64_fetch_andnot_release(...) \ - __atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_andnot -#define atomic64_fetch_andnot(...) \ - __atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_andnot_relaxed */ -#endif /* atomic64_andnot */ - -/* atomic64_fetch_xor_relaxed */ -#ifndef atomic64_fetch_xor_relaxed -#define atomic64_fetch_xor_relaxed atomic64_fetch_xor -#define atomic64_fetch_xor_acquire atomic64_fetch_xor -#define atomic64_fetch_xor_release atomic64_fetch_xor - -#else /* atomic64_fetch_xor_relaxed */ - -#ifndef atomic64_fetch_xor_acquire -#define atomic64_fetch_xor_acquire(...) \ - __atomic_op_acquire(atomic64_fetch_xor, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_xor_release -#define atomic64_fetch_xor_release(...) \ - __atomic_op_release(atomic64_fetch_xor, __VA_ARGS__) -#endif - -#ifndef atomic64_fetch_xor -#define atomic64_fetch_xor(...) 
\ - __atomic_op_fence(atomic64_fetch_xor, __VA_ARGS__) -#endif -#endif /* atomic64_fetch_xor_relaxed */ - - -/* atomic64_xchg_relaxed */ -#ifndef atomic64_xchg_relaxed -#define atomic64_xchg_relaxed atomic64_xchg -#define atomic64_xchg_acquire atomic64_xchg -#define atomic64_xchg_release atomic64_xchg - -#else /* atomic64_xchg_relaxed */ - -#ifndef atomic64_xchg_acquire -#define atomic64_xchg_acquire(...) \ - __atomic_op_acquire(atomic64_xchg, __VA_ARGS__) -#endif - -#ifndef atomic64_xchg_release -#define atomic64_xchg_release(...) \ - __atomic_op_release(atomic64_xchg, __VA_ARGS__) -#endif - -#ifndef atomic64_xchg -#define atomic64_xchg(...) \ - __atomic_op_fence(atomic64_xchg, __VA_ARGS__) -#endif -#endif /* atomic64_xchg_relaxed */ - -/* atomic64_cmpxchg_relaxed */ -#ifndef atomic64_cmpxchg_relaxed -#define atomic64_cmpxchg_relaxed atomic64_cmpxchg -#define atomic64_cmpxchg_acquire atomic64_cmpxchg -#define atomic64_cmpxchg_release atomic64_cmpxchg - -#else /* atomic64_cmpxchg_relaxed */ - -#ifndef atomic64_cmpxchg_acquire -#define atomic64_cmpxchg_acquire(...) \ - __atomic_op_acquire(atomic64_cmpxchg, __VA_ARGS__) -#endif - -#ifndef atomic64_cmpxchg_release -#define atomic64_cmpxchg_release(...) \ - __atomic_op_release(atomic64_cmpxchg, __VA_ARGS__) -#endif - -#ifndef atomic64_cmpxchg -#define atomic64_cmpxchg(...) \ - __atomic_op_fence(atomic64_cmpxchg, __VA_ARGS__) -#endif -#endif /* atomic64_cmpxchg_relaxed */ - -#ifndef atomic64_try_cmpxchg - -#define __atomic64_try_cmpxchg(type, _p, _po, _n) \ -({ \ - typeof(_po) __po = (_po); \ - typeof(*(_po)) __r, __o = *__po; \ - __r = atomic64_cmpxchg##type((_p), __o, (_n)); \ - if (unlikely(__r != __o)) \ - *__po = __r; \ - likely(__r == __o); \ -}) - -#define atomic64_try_cmpxchg(_p, _po, _n) __atomic64_try_cmpxchg(, _p, _po, _n) -#define atomic64_try_cmpxchg_relaxed(_p, _po, _n) __atomic64_try_cmpxchg(_relaxed, _p, _po, _n) -#define atomic64_try_cmpxchg_acquire(_p, _po, _n) __atomic64_try_cmpxchg(_acquire, _p, _po, _n) -#define atomic64_try_cmpxchg_release(_p, _po, _n) __atomic64_try_cmpxchg(_release, _p, _po, _n) - -#else /* atomic64_try_cmpxchg */ -#define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg -#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg -#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg -#endif /* atomic64_try_cmpxchg */ - -/** - * atomic64_fetch_add_unless - add unless the number is already a given value - * @v: pointer of type atomic64_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as @v was not already @u. - * Returns original value of @v - */ -#ifndef atomic64_fetch_add_unless -static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a, - long long u) -{ - long long c = atomic64_read(v); - - do { - if (unlikely(c == u)) - break; - } while (!atomic64_try_cmpxchg(v, &c, c + a)); - - return c; -} -#endif - -/** - * atomic64_add_unless - add unless the number is already a given value - * @v: pointer of type atomic_t - * @a: the amount to add to v... - * @u: ...unless v is equal to u. - * - * Atomically adds @a to @v, so long as @v was not already @u. - * Returns non-zero if @v was not @u, and zero otherwise. - */ -static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) -{ - return atomic64_fetch_add_unless(v, a, u) != u; -} - -/** - * atomic64_inc_not_zero - increment unless the number is zero - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1, so long as @v is non-zero. 
- * Returns non-zero if @v was non-zero, and zero otherwise. - */ -#ifndef atomic64_inc_not_zero -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) -#endif - -/** - * atomic64_inc_and_test - increment and test - * @v: pointer of type atomic64_t - * - * Atomically increments @v by 1 - * and returns true if the result is zero, or false for all - * other cases. - */ -#ifndef atomic64_inc_and_test -static inline bool atomic64_inc_and_test(atomic64_t *v) -{ - return atomic64_inc_return(v) == 0; -} -#endif - -/** - * atomic64_dec_and_test - decrement and test - * @v: pointer of type atomic64_t - * - * Atomically decrements @v by 1 and - * returns true if the result is 0, or false for all other - * cases. - */ -#ifndef atomic64_dec_and_test -static inline bool atomic64_dec_and_test(atomic64_t *v) -{ - return atomic64_dec_return(v) == 0; -} -#endif - -/** - * atomic64_sub_and_test - subtract value from variable and test result - * @i: integer value to subtract - * @v: pointer of type atomic64_t - * - * Atomically subtracts @i from @v and returns - * true if the result is zero, or false for all - * other cases. - */ -#ifndef atomic64_sub_and_test -static inline bool atomic64_sub_and_test(long long i, atomic64_t *v) -{ - return atomic64_sub_return(i, v) == 0; -} -#endif - -/** - * atomic64_add_negative - add and test if negative - * @i: integer value to add - * @v: pointer of type atomic64_t - * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. - */ -#ifndef atomic64_add_negative -static inline bool atomic64_add_negative(long long i, atomic64_t *v) -{ - return atomic64_add_return(i, v) < 0; -} -#endif - -#ifndef atomic64_andnot -static inline void atomic64_andnot(long long i, atomic64_t *v) -{ - atomic64_and(~i, v); -} - -static inline long long atomic64_fetch_andnot(long long i, atomic64_t *v) -{ - return atomic64_fetch_and(~i, v); -} - -static inline long long atomic64_fetch_andnot_relaxed(long long i, atomic64_t *v) -{ - return atomic64_fetch_and_relaxed(~i, v); -} - -static inline long long atomic64_fetch_andnot_acquire(long long i, atomic64_t *v) -{ - return atomic64_fetch_and_acquire(~i, v); -} - -static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v) -{ - return atomic64_fetch_and_release(~i, v); -} -#endif - -#ifndef atomic64_inc_unless_negative -static inline bool atomic64_inc_unless_negative(atomic64_t *v) -{ - long long c = atomic64_read(v); - - do { - if (unlikely(c < 0)) - return false; - } while (!atomic64_try_cmpxchg(v, &c, c + 1)); - - return true; -} -#endif - -#ifndef atomic64_dec_unless_positive -static inline bool atomic64_dec_unless_positive(atomic64_t *v) -{ - long long c = atomic64_read(v); - - do { - if (unlikely(c > 0)) - return false; - } while (!atomic64_try_cmpxchg(v, &c, c - 1)); - - return true; -} -#endif - -/* - * atomic64_dec_if_positive - decrement by 1 if old value positive - * @v: pointer of type atomic64_t - * - * The function returns the old value of *v minus 1, even if - * the atomic64 variable, v, was not decremented. 
- */
-#ifndef atomic64_dec_if_positive
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
-{
-	long long dec, c = atomic64_read(v);
-
-	do {
-		dec = c - 1;
-		if (unlikely(dec < 0))
-			break;
-	} while (!atomic64_try_cmpxchg(v, &c, dec));
-
-	return dec;
-}
-#endif
-
-#define atomic64_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
+#include <linux/atomic-fallback.h>

 #include <asm-generic/atomic-long.h>

From patchwork Tue May 29 18:07:45 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137203
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon, Arnd Bergmann
Subject: [PATCH 6/7] atomics: switch to generated atomic-long
Date: Tue, 29 May 2018 19:07:45 +0100
Message-Id: <20180529180746.29684-7-mark.rutland@arm.com>
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

As a step towards ensuring the atomic* APIs are consistent, let's
switch to wrappers generated by gen-atomic-long.sh, using the same
table that gen-atomic-fallback.sh uses to fill in gaps in the atomic_*
and atomic64_* APIs.

These are checked in rather than generated with Kbuild, since:

* This allows inspection of the atomics with git grep and ctags on a
  pristine tree, which Linus strongly prefers being able to do.

* The fallbacks are not expected to change very often, and are not
  affected by machine details or configuration options, so regenerating
  them for *every* build is somewhat wasteful.

* These are included by files required *very* early in the build
  process (e.g. for generating bounds.h), and we'd rather not
  complicate the top-level Kbuild file.

Other than *_INIT() and *_cond_read_acquire(), all API functions are
implemented as static inline C functions, ensuring consistent type
promotion and/or truncation without requiring explicit casts to be
applied to parameters or return values.

Since we typedef atomic_long_t to either atomic_t or atomic64_t, we
know these types are equivalent, and don't require explicit casts
between them. However, as the try_cmpxchg*() functions take a pointer
for the 'old' parameter, which may be an int or s64, an explicit cast
is generated for this.

There should be no functional change as a result of this patch (i.e.
existing code should not be affected). However, this introduces a
number of functions into the atomic_long_* API, bringing it into line
with the atomic_* and atomic64_* APIs.
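For illustration, a minimal caller sketch (hypothetical, not part of
this series) using only the generated API in the diff below; on
failure, try_cmpxchg() writes the value it observed back through 'old',
so the loop does not need to re-read the variable:

  static long example_add(atomic_long_t *v, long delta)
  {
  	long old = atomic_long_read(v);

  	/* On failure, 'old' is updated with the value observed in *v. */
  	while (!atomic_long_try_cmpxchg(v, &old, old + delta))
  		;

  	return old + delta;
  }

Here 'old' is a plain long, with the (s64 *) or (int *) cast confined
to the generated wrapper rather than duplicated at each call site.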
Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: Arnd Bergmann
---
 include/asm-generic/atomic-long.h | 1150 ++++++++++++++++++++++++++++++-------
 1 file changed, 955 insertions(+), 195 deletions(-)

Note: this is generated code. Please see patch 3 for the code which
generated it, along with rationale in the cover letter.

Mark.

-- 
2.11.0

diff --git a/include/asm-generic/atomic-long.h b/include/asm-generic/atomic-long.h
index 34a028a7bcc5..e007bf80f693 100644
--- a/include/asm-generic/atomic-long.h
+++ b/include/asm-generic/atomic-long.h
@@ -1,250 +1,1010 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by ./scripts/atomic/gen-atomic-long.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
 #ifndef _ASM_GENERIC_ATOMIC_LONG_H
 #define _ASM_GENERIC_ATOMIC_LONG_H
-/*
- * Copyright (C) 2005 Silicon Graphics, Inc.
- * Christoph Lameter
- *
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
- */
 #include <asm/types.h>
-/*
- * Suppport for atomic_long_t
- *
- * Casts for parameters are avoided for existing atomic functions in order to
- * avoid issues with cast-as-lval under gcc 4.x and other limitations that the
- * macros of a platform may have.
- */
+#ifdef CONFIG_64BIT
+typedef atomic64_t atomic_long_t;
+#define ATOMIC_LONG_INIT(i)		ATOMIC64_INIT(i)
+#define atomic_long_cond_read_acquire	atomic64_cond_read_acquire
+#else
+typedef atomic_t atomic_long_t;
+#define ATOMIC_LONG_INIT(i)		ATOMIC_INIT(i)
+#define atomic_long_cond_read_acquire	atomic_cond_read_acquire
+#endif
-#if BITS_PER_LONG == 64
+#ifdef CONFIG_64BIT
-typedef atomic64_t atomic_long_t;
+static inline long
+atomic_long_read(const atomic_long_t *v)
+{
+	return atomic64_read(v);
+}
-#define ATOMIC_LONG_INIT(i)	ATOMIC64_INIT(i)
-#define ATOMIC_LONG_PFX(x)	atomic64 ## x
+static inline long
+atomic_long_read_acquire(const atomic_long_t *v)
+{
+	return atomic64_read_acquire(v);
+}
-#else
+static inline void
+atomic_long_set(atomic_long_t *v, long i)
+{
+	atomic64_set(v, i);
+}
-typedef atomic_t atomic_long_t;
+static inline void
+atomic_long_set_release(atomic_long_t *v, long i)
+{
+	atomic64_set_release(v, i);
+}
-#define ATOMIC_LONG_INIT(i)	ATOMIC_INIT(i)
-#define ATOMIC_LONG_PFX(x)	atomic ## x
+static inline void
+atomic_long_add(long i, atomic_long_t *v)
+{
+	atomic64_add(i, v);
+}
-#endif
+static inline long
+atomic_long_add_return(long i, atomic_long_t *v)
+{
+	return atomic64_add_return(i, v);
+}
+
+static inline long
+atomic_long_add_return_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_add_return_acquire(i, v);
+}
+
+static inline long
+atomic_long_add_return_release(long i, atomic_long_t *v)
+{
+	return atomic64_add_return_release(i, v);
+}
+
+static inline long
+atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_add_return_relaxed(i, v);
+}
+
+static inline long
+atomic_long_fetch_add(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add_acquire(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_release(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add_release(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+{
+	return atomic64_fetch_add_relaxed(i, v);
+}
+
+static inline void
+atomic_long_sub(long i, atomic_long_t *v)
+{
+	atomic64_sub(i, v);
+}
+
+static inline long
+atomic_long_sub_return(long i,
atomic_long_t *v) +{ + return atomic64_sub_return(i, v); +} + +static inline long +atomic_long_sub_return_acquire(long i, atomic_long_t *v) +{ + return atomic64_sub_return_acquire(i, v); +} + +static inline long +atomic_long_sub_return_release(long i, atomic_long_t *v) +{ + return atomic64_sub_return_release(i, v); +} + +static inline long +atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +{ + return atomic64_sub_return_relaxed(i, v); +} + +static inline long +atomic_long_fetch_sub(long i, atomic_long_t *v) +{ + return atomic64_fetch_sub(i, v); +} + +static inline long +atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +{ + return atomic64_fetch_sub_acquire(i, v); +} + +static inline long +atomic_long_fetch_sub_release(long i, atomic_long_t *v) +{ + return atomic64_fetch_sub_release(i, v); +} + +static inline long +atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +{ + return atomic64_fetch_sub_relaxed(i, v); +} + +static inline void +atomic_long_inc(atomic_long_t *v) +{ + atomic64_inc(v); +} + +static inline long +atomic_long_inc_return(atomic_long_t *v) +{ + return atomic64_inc_return(v); +} + +static inline long +atomic_long_inc_return_acquire(atomic_long_t *v) +{ + return atomic64_inc_return_acquire(v); +} + +static inline long +atomic_long_inc_return_release(atomic_long_t *v) +{ + return atomic64_inc_return_release(v); +} + +static inline long +atomic_long_inc_return_relaxed(atomic_long_t *v) +{ + return atomic64_inc_return_relaxed(v); +} + +static inline long +atomic_long_fetch_inc(atomic_long_t *v) +{ + return atomic64_fetch_inc(v); +} + +static inline long +atomic_long_fetch_inc_acquire(atomic_long_t *v) +{ + return atomic64_fetch_inc_acquire(v); +} + +static inline long +atomic_long_fetch_inc_release(atomic_long_t *v) +{ + return atomic64_fetch_inc_release(v); +} + +static inline long +atomic_long_fetch_inc_relaxed(atomic_long_t *v) +{ + return atomic64_fetch_inc_relaxed(v); +} + +static inline void +atomic_long_dec(atomic_long_t *v) +{ + atomic64_dec(v); +} + +static inline long +atomic_long_dec_return(atomic_long_t *v) +{ + return atomic64_dec_return(v); +} + +static inline long +atomic_long_dec_return_acquire(atomic_long_t *v) +{ + return atomic64_dec_return_acquire(v); +} + +static inline long +atomic_long_dec_return_release(atomic_long_t *v) +{ + return atomic64_dec_return_release(v); +} + +static inline long +atomic_long_dec_return_relaxed(atomic_long_t *v) +{ + return atomic64_dec_return_relaxed(v); +} + +static inline long +atomic_long_fetch_dec(atomic_long_t *v) +{ + return atomic64_fetch_dec(v); +} + +static inline long +atomic_long_fetch_dec_acquire(atomic_long_t *v) +{ + return atomic64_fetch_dec_acquire(v); +} + +static inline long +atomic_long_fetch_dec_release(atomic_long_t *v) +{ + return atomic64_fetch_dec_release(v); +} + +static inline long +atomic_long_fetch_dec_relaxed(atomic_long_t *v) +{ + return atomic64_fetch_dec_relaxed(v); +} + +static inline void +atomic_long_and(long i, atomic_long_t *v) +{ + atomic64_and(i, v); +} + +static inline long +atomic_long_fetch_and(long i, atomic_long_t *v) +{ + return atomic64_fetch_and(i, v); +} + +static inline long +atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +{ + return atomic64_fetch_and_acquire(i, v); +} + +static inline long +atomic_long_fetch_and_release(long i, atomic_long_t *v) +{ + return atomic64_fetch_and_release(i, v); +} + +static inline long +atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +{ + return atomic64_fetch_and_relaxed(i, v); +} + +static inline void 
+atomic_long_andnot(long i, atomic_long_t *v) +{ + atomic64_andnot(i, v); +} + +static inline long +atomic_long_fetch_andnot(long i, atomic_long_t *v) +{ + return atomic64_fetch_andnot(i, v); +} + +static inline long +atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +{ + return atomic64_fetch_andnot_acquire(i, v); +} + +static inline long +atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +{ + return atomic64_fetch_andnot_release(i, v); +} + +static inline long +atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +{ + return atomic64_fetch_andnot_relaxed(i, v); +} + +static inline void +atomic_long_or(long i, atomic_long_t *v) +{ + atomic64_or(i, v); +} + +static inline long +atomic_long_fetch_or(long i, atomic_long_t *v) +{ + return atomic64_fetch_or(i, v); +} + +static inline long +atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +{ + return atomic64_fetch_or_acquire(i, v); +} + +static inline long +atomic_long_fetch_or_release(long i, atomic_long_t *v) +{ + return atomic64_fetch_or_release(i, v); +} + +static inline long +atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +{ + return atomic64_fetch_or_relaxed(i, v); +} + +static inline void +atomic_long_xor(long i, atomic_long_t *v) +{ + atomic64_xor(i, v); +} -#define ATOMIC_LONG_READ_OP(mo) \ -static inline long atomic_long_read##mo(const atomic_long_t *l) \ -{ \ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \ - \ - return (long)ATOMIC_LONG_PFX(_read##mo)(v); \ -} -ATOMIC_LONG_READ_OP() -ATOMIC_LONG_READ_OP(_acquire) - -#undef ATOMIC_LONG_READ_OP - -#define ATOMIC_LONG_SET_OP(mo) \ -static inline void atomic_long_set##mo(atomic_long_t *l, long i) \ -{ \ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \ - \ - ATOMIC_LONG_PFX(_set##mo)(v, i); \ -} -ATOMIC_LONG_SET_OP() -ATOMIC_LONG_SET_OP(_release) - -#undef ATOMIC_LONG_SET_OP - -#define ATOMIC_LONG_ADD_SUB_OP(op, mo) \ -static inline long \ -atomic_long_##op##_return##mo(long i, atomic_long_t *l) \ -{ \ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \ - \ - return (long)ATOMIC_LONG_PFX(_##op##_return##mo)(i, v); \ -} -ATOMIC_LONG_ADD_SUB_OP(add,) -ATOMIC_LONG_ADD_SUB_OP(add, _relaxed) -ATOMIC_LONG_ADD_SUB_OP(add, _acquire) -ATOMIC_LONG_ADD_SUB_OP(add, _release) -ATOMIC_LONG_ADD_SUB_OP(sub,) -ATOMIC_LONG_ADD_SUB_OP(sub, _relaxed) -ATOMIC_LONG_ADD_SUB_OP(sub, _acquire) -ATOMIC_LONG_ADD_SUB_OP(sub, _release) - -#undef ATOMIC_LONG_ADD_SUB_OP - -#define atomic_long_cmpxchg_relaxed(l, old, new) \ - (ATOMIC_LONG_PFX(_cmpxchg_relaxed)((ATOMIC_LONG_PFX(_t) *)(l), \ - (old), (new))) -#define atomic_long_cmpxchg_acquire(l, old, new) \ - (ATOMIC_LONG_PFX(_cmpxchg_acquire)((ATOMIC_LONG_PFX(_t) *)(l), \ - (old), (new))) -#define atomic_long_cmpxchg_release(l, old, new) \ - (ATOMIC_LONG_PFX(_cmpxchg_release)((ATOMIC_LONG_PFX(_t) *)(l), \ - (old), (new))) -#define atomic_long_cmpxchg(l, old, new) \ - (ATOMIC_LONG_PFX(_cmpxchg)((ATOMIC_LONG_PFX(_t) *)(l), (old), (new))) - -#define atomic_long_xchg_relaxed(v, new) \ - (ATOMIC_LONG_PFX(_xchg_relaxed)((ATOMIC_LONG_PFX(_t) *)(v), (new))) -#define atomic_long_xchg_acquire(v, new) \ - (ATOMIC_LONG_PFX(_xchg_acquire)((ATOMIC_LONG_PFX(_t) *)(v), (new))) -#define atomic_long_xchg_release(v, new) \ - (ATOMIC_LONG_PFX(_xchg_release)((ATOMIC_LONG_PFX(_t) *)(v), (new))) -#define atomic_long_xchg(v, new) \ - (ATOMIC_LONG_PFX(_xchg)((ATOMIC_LONG_PFX(_t) *)(v), (new))) - -static __always_inline void atomic_long_inc(atomic_long_t *l) -{ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; - - 
ATOMIC_LONG_PFX(_inc)(v); +static inline long +atomic_long_fetch_xor(long i, atomic_long_t *v) +{ + return atomic64_fetch_xor(i, v); } - -static __always_inline void atomic_long_dec(atomic_long_t *l) + +static inline long +atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; + return atomic64_fetch_xor_acquire(i, v); +} - ATOMIC_LONG_PFX(_dec)(v); +static inline long +atomic_long_fetch_xor_release(long i, atomic_long_t *v) +{ + return atomic64_fetch_xor_release(i, v); } -#define ATOMIC_LONG_FETCH_OP(op, mo) \ -static inline long \ -atomic_long_fetch_##op##mo(long i, atomic_long_t *l) \ -{ \ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \ - \ - return (long)ATOMIC_LONG_PFX(_fetch_##op##mo)(i, v); \ +static inline long +atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +{ + return atomic64_fetch_xor_relaxed(i, v); } -ATOMIC_LONG_FETCH_OP(add, ) -ATOMIC_LONG_FETCH_OP(add, _relaxed) -ATOMIC_LONG_FETCH_OP(add, _acquire) -ATOMIC_LONG_FETCH_OP(add, _release) -ATOMIC_LONG_FETCH_OP(sub, ) -ATOMIC_LONG_FETCH_OP(sub, _relaxed) -ATOMIC_LONG_FETCH_OP(sub, _acquire) -ATOMIC_LONG_FETCH_OP(sub, _release) -ATOMIC_LONG_FETCH_OP(and, ) -ATOMIC_LONG_FETCH_OP(and, _relaxed) -ATOMIC_LONG_FETCH_OP(and, _acquire) -ATOMIC_LONG_FETCH_OP(and, _release) -ATOMIC_LONG_FETCH_OP(andnot, ) -ATOMIC_LONG_FETCH_OP(andnot, _relaxed) -ATOMIC_LONG_FETCH_OP(andnot, _acquire) -ATOMIC_LONG_FETCH_OP(andnot, _release) -ATOMIC_LONG_FETCH_OP(or, ) -ATOMIC_LONG_FETCH_OP(or, _relaxed) -ATOMIC_LONG_FETCH_OP(or, _acquire) -ATOMIC_LONG_FETCH_OP(or, _release) -ATOMIC_LONG_FETCH_OP(xor, ) -ATOMIC_LONG_FETCH_OP(xor, _relaxed) -ATOMIC_LONG_FETCH_OP(xor, _acquire) -ATOMIC_LONG_FETCH_OP(xor, _release) +static inline long +atomic_long_xchg(atomic_long_t *v, long i) +{ + return atomic64_xchg(v, i); +} -#undef ATOMIC_LONG_FETCH_OP +static inline long +atomic_long_xchg_acquire(atomic_long_t *v, long i) +{ + return atomic64_xchg_acquire(v, i); +} -#define ATOMIC_LONG_FETCH_INC_DEC_OP(op, mo) \ -static inline long \ -atomic_long_fetch_##op##mo(atomic_long_t *l) \ -{ \ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \ - \ - return (long)ATOMIC_LONG_PFX(_fetch_##op##mo)(v); \ +static inline long +atomic_long_xchg_release(atomic_long_t *v, long i) +{ + return atomic64_xchg_release(v, i); } -ATOMIC_LONG_FETCH_INC_DEC_OP(inc,) -ATOMIC_LONG_FETCH_INC_DEC_OP(inc, _relaxed) -ATOMIC_LONG_FETCH_INC_DEC_OP(inc, _acquire) -ATOMIC_LONG_FETCH_INC_DEC_OP(inc, _release) -ATOMIC_LONG_FETCH_INC_DEC_OP(dec,) -ATOMIC_LONG_FETCH_INC_DEC_OP(dec, _relaxed) -ATOMIC_LONG_FETCH_INC_DEC_OP(dec, _acquire) -ATOMIC_LONG_FETCH_INC_DEC_OP(dec, _release) +static inline long +atomic_long_xchg_relaxed(atomic_long_t *v, long i) +{ + return atomic64_xchg_relaxed(v, i); +} -#undef ATOMIC_LONG_FETCH_INC_DEC_OP +static inline long +atomic_long_cmpxchg(atomic_long_t *v, long old, long new) +{ + return atomic64_cmpxchg(v, old, new); +} -#define ATOMIC_LONG_OP(op) \ -static __always_inline void \ -atomic_long_##op(long i, atomic_long_t *l) \ -{ \ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \ - \ - ATOMIC_LONG_PFX(_##op)(i, v); \ +static inline long +atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) +{ + return atomic64_cmpxchg_acquire(v, old, new); } -ATOMIC_LONG_OP(add) -ATOMIC_LONG_OP(sub) -ATOMIC_LONG_OP(and) -ATOMIC_LONG_OP(andnot) -ATOMIC_LONG_OP(or) -ATOMIC_LONG_OP(xor) +static inline long +atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) +{ + return 
atomic64_cmpxchg_release(v, old, new); +} -#undef ATOMIC_LONG_OP +static inline long +atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) +{ + return atomic64_cmpxchg_relaxed(v, old, new); +} -static inline int atomic_long_sub_and_test(long i, atomic_long_t *l) +static inline bool +atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; + return atomic64_try_cmpxchg(v, (s64 *)old, new); +} - return ATOMIC_LONG_PFX(_sub_and_test)(i, v); +static inline bool +atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) +{ + return atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); } -static inline int atomic_long_dec_and_test(atomic_long_t *l) +static inline bool +atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; + return atomic64_try_cmpxchg_release(v, (s64 *)old, new); +} - return ATOMIC_LONG_PFX(_dec_and_test)(v); +static inline bool +atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) +{ + return atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); } -static inline int atomic_long_inc_and_test(atomic_long_t *l) +static inline bool +atomic_long_sub_and_test(long i, atomic_long_t *v) { - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; + return atomic64_sub_and_test(i, v); +} - return ATOMIC_LONG_PFX(_inc_and_test)(v); +static inline bool +atomic_long_dec_and_test(atomic_long_t *v) +{ + return atomic64_dec_and_test(v); } -static inline int atomic_long_add_negative(long i, atomic_long_t *l) +static inline bool +atomic_long_inc_and_test(atomic_long_t *v) { - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; + return atomic64_inc_and_test(v); +} - return ATOMIC_LONG_PFX(_add_negative)(i, v); +static inline bool +atomic_long_add_negative(long i, atomic_long_t *v) +{ + return atomic64_add_negative(i, v); } -#define ATOMIC_LONG_INC_DEC_OP(op, mo) \ -static inline long \ -atomic_long_##op##_return##mo(atomic_long_t *l) \ -{ \ - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; \ - \ - return (long)ATOMIC_LONG_PFX(_##op##_return##mo)(v); \ +static inline long +atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) +{ + return atomic64_fetch_add_unless(v, a, u); } -ATOMIC_LONG_INC_DEC_OP(inc,) -ATOMIC_LONG_INC_DEC_OP(inc, _relaxed) -ATOMIC_LONG_INC_DEC_OP(inc, _acquire) -ATOMIC_LONG_INC_DEC_OP(inc, _release) -ATOMIC_LONG_INC_DEC_OP(dec,) -ATOMIC_LONG_INC_DEC_OP(dec, _relaxed) -ATOMIC_LONG_INC_DEC_OP(dec, _acquire) -ATOMIC_LONG_INC_DEC_OP(dec, _release) -#undef ATOMIC_LONG_INC_DEC_OP +static inline bool +atomic_long_add_unless(atomic_long_t *v, long a, long u) +{ + return atomic64_add_unless(v, a, u); +} + +static inline bool +atomic_long_inc_not_zero(atomic_long_t *v) +{ + return atomic64_inc_not_zero(v); +} + +static inline bool +atomic_long_inc_unless_negative(atomic_long_t *v) +{ + return atomic64_inc_unless_negative(v); +} + +static inline bool +atomic_long_dec_unless_positive(atomic_long_t *v) +{ + return atomic64_dec_unless_positive(v); +} + +static inline long +atomic_long_dec_if_positive(atomic_long_t *v) +{ + return atomic64_dec_if_positive(v); +} -static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u) +#else /* CONFIG_64BIT */ + +static inline long +atomic_long_read(const atomic_long_t *v) +{ + return atomic_read(v); +} + +static inline long +atomic_long_read_acquire(const atomic_long_t *v) +{ + return atomic_read_acquire(v); +} + +static inline void +atomic_long_set(atomic_long_t 
*v, long i) { - ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l; + atomic_set(v, i); +} - return (long)ATOMIC_LONG_PFX(_add_unless)(v, a, u); +static inline void +atomic_long_set_release(atomic_long_t *v, long i) +{ + atomic_set_release(v, i); } -#define atomic_long_inc_not_zero(l) \ - ATOMIC_LONG_PFX(_inc_not_zero)((ATOMIC_LONG_PFX(_t) *)(l)) +static inline void +atomic_long_add(long i, atomic_long_t *v) +{ + atomic_add(i, v); +} + +static inline long +atomic_long_add_return(long i, atomic_long_t *v) +{ + return atomic_add_return(i, v); +} -#define atomic_long_cond_read_acquire(v, c) \ - ATOMIC_LONG_PFX(_cond_read_acquire)((ATOMIC_LONG_PFX(_t) *)(v), (c)) +static inline long +atomic_long_add_return_acquire(long i, atomic_long_t *v) +{ + return atomic_add_return_acquire(i, v); +} + +static inline long +atomic_long_add_return_release(long i, atomic_long_t *v) +{ + return atomic_add_return_release(i, v); +} + +static inline long +atomic_long_add_return_relaxed(long i, atomic_long_t *v) +{ + return atomic_add_return_relaxed(i, v); +} + +static inline long +atomic_long_fetch_add(long i, atomic_long_t *v) +{ + return atomic_fetch_add(i, v); +} + +static inline long +atomic_long_fetch_add_acquire(long i, atomic_long_t *v) +{ + return atomic_fetch_add_acquire(i, v); +} + +static inline long +atomic_long_fetch_add_release(long i, atomic_long_t *v) +{ + return atomic_fetch_add_release(i, v); +} + +static inline long +atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) +{ + return atomic_fetch_add_relaxed(i, v); +} + +static inline void +atomic_long_sub(long i, atomic_long_t *v) +{ + atomic_sub(i, v); +} + +static inline long +atomic_long_sub_return(long i, atomic_long_t *v) +{ + return atomic_sub_return(i, v); +} + +static inline long +atomic_long_sub_return_acquire(long i, atomic_long_t *v) +{ + return atomic_sub_return_acquire(i, v); +} + +static inline long +atomic_long_sub_return_release(long i, atomic_long_t *v) +{ + return atomic_sub_return_release(i, v); +} + +static inline long +atomic_long_sub_return_relaxed(long i, atomic_long_t *v) +{ + return atomic_sub_return_relaxed(i, v); +} + +static inline long +atomic_long_fetch_sub(long i, atomic_long_t *v) +{ + return atomic_fetch_sub(i, v); +} + +static inline long +atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) +{ + return atomic_fetch_sub_acquire(i, v); +} + +static inline long +atomic_long_fetch_sub_release(long i, atomic_long_t *v) +{ + return atomic_fetch_sub_release(i, v); +} + +static inline long +atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) +{ + return atomic_fetch_sub_relaxed(i, v); +} + +static inline void +atomic_long_inc(atomic_long_t *v) +{ + atomic_inc(v); +} + +static inline long +atomic_long_inc_return(atomic_long_t *v) +{ + return atomic_inc_return(v); +} + +static inline long +atomic_long_inc_return_acquire(atomic_long_t *v) +{ + return atomic_inc_return_acquire(v); +} + +static inline long +atomic_long_inc_return_release(atomic_long_t *v) +{ + return atomic_inc_return_release(v); +} + +static inline long +atomic_long_inc_return_relaxed(atomic_long_t *v) +{ + return atomic_inc_return_relaxed(v); +} + +static inline long +atomic_long_fetch_inc(atomic_long_t *v) +{ + return atomic_fetch_inc(v); +} + +static inline long +atomic_long_fetch_inc_acquire(atomic_long_t *v) +{ + return atomic_fetch_inc_acquire(v); +} + +static inline long +atomic_long_fetch_inc_release(atomic_long_t *v) +{ + return atomic_fetch_inc_release(v); +} + +static inline long +atomic_long_fetch_inc_relaxed(atomic_long_t *v) +{ + 
return atomic_fetch_inc_relaxed(v); +} + +static inline void +atomic_long_dec(atomic_long_t *v) +{ + atomic_dec(v); +} + +static inline long +atomic_long_dec_return(atomic_long_t *v) +{ + return atomic_dec_return(v); +} + +static inline long +atomic_long_dec_return_acquire(atomic_long_t *v) +{ + return atomic_dec_return_acquire(v); +} + +static inline long +atomic_long_dec_return_release(atomic_long_t *v) +{ + return atomic_dec_return_release(v); +} + +static inline long +atomic_long_dec_return_relaxed(atomic_long_t *v) +{ + return atomic_dec_return_relaxed(v); +} + +static inline long +atomic_long_fetch_dec(atomic_long_t *v) +{ + return atomic_fetch_dec(v); +} + +static inline long +atomic_long_fetch_dec_acquire(atomic_long_t *v) +{ + return atomic_fetch_dec_acquire(v); +} + +static inline long +atomic_long_fetch_dec_release(atomic_long_t *v) +{ + return atomic_fetch_dec_release(v); +} + +static inline long +atomic_long_fetch_dec_relaxed(atomic_long_t *v) +{ + return atomic_fetch_dec_relaxed(v); +} + +static inline void +atomic_long_and(long i, atomic_long_t *v) +{ + atomic_and(i, v); +} + +static inline long +atomic_long_fetch_and(long i, atomic_long_t *v) +{ + return atomic_fetch_and(i, v); +} + +static inline long +atomic_long_fetch_and_acquire(long i, atomic_long_t *v) +{ + return atomic_fetch_and_acquire(i, v); +} + +static inline long +atomic_long_fetch_and_release(long i, atomic_long_t *v) +{ + return atomic_fetch_and_release(i, v); +} + +static inline long +atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) +{ + return atomic_fetch_and_relaxed(i, v); +} + +static inline void +atomic_long_andnot(long i, atomic_long_t *v) +{ + atomic_andnot(i, v); +} + +static inline long +atomic_long_fetch_andnot(long i, atomic_long_t *v) +{ + return atomic_fetch_andnot(i, v); +} + +static inline long +atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) +{ + return atomic_fetch_andnot_acquire(i, v); +} + +static inline long +atomic_long_fetch_andnot_release(long i, atomic_long_t *v) +{ + return atomic_fetch_andnot_release(i, v); +} + +static inline long +atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) +{ + return atomic_fetch_andnot_relaxed(i, v); +} + +static inline void +atomic_long_or(long i, atomic_long_t *v) +{ + atomic_or(i, v); +} + +static inline long +atomic_long_fetch_or(long i, atomic_long_t *v) +{ + return atomic_fetch_or(i, v); +} + +static inline long +atomic_long_fetch_or_acquire(long i, atomic_long_t *v) +{ + return atomic_fetch_or_acquire(i, v); +} + +static inline long +atomic_long_fetch_or_release(long i, atomic_long_t *v) +{ + return atomic_fetch_or_release(i, v); +} + +static inline long +atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) +{ + return atomic_fetch_or_relaxed(i, v); +} + +static inline void +atomic_long_xor(long i, atomic_long_t *v) +{ + atomic_xor(i, v); +} + +static inline long +atomic_long_fetch_xor(long i, atomic_long_t *v) +{ + return atomic_fetch_xor(i, v); +} + +static inline long +atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) +{ + return atomic_fetch_xor_acquire(i, v); +} + +static inline long +atomic_long_fetch_xor_release(long i, atomic_long_t *v) +{ + return atomic_fetch_xor_release(i, v); +} + +static inline long +atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) +{ + return atomic_fetch_xor_relaxed(i, v); +} + +static inline long +atomic_long_xchg(atomic_long_t *v, long i) +{ + return atomic_xchg(v, i); +} + +static inline long +atomic_long_xchg_acquire(atomic_long_t *v, long i) +{ + return 
atomic_xchg_acquire(v, i);
+}
+
+static inline long
+atomic_long_xchg_release(atomic_long_t *v, long i)
+{
+	return atomic_xchg_release(v, i);
+}
+
+static inline long
+atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+{
+	return atomic_xchg_relaxed(v, i);
+}
+
+static inline long
+atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg(v, old, new);
+}
+
+static inline long
+atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg_acquire(v, old, new);
+}
+
+static inline long
+atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg_release(v, old, new);
+}
+
+static inline long
+atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+{
+	return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg_acquire(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg_release(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+{
+	return atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+}
+
+static inline bool
+atomic_long_sub_and_test(long i, atomic_long_t *v)
+{
+	return atomic_sub_and_test(i, v);
+}
+
+static inline bool
+atomic_long_dec_and_test(atomic_long_t *v)
+{
+	return atomic_dec_and_test(v);
+}
+
+static inline bool
+atomic_long_inc_and_test(atomic_long_t *v)
+{
+	return atomic_inc_and_test(v);
+}
+
+static inline bool
+atomic_long_add_negative(long i, atomic_long_t *v)
+{
+	return atomic_add_negative(i, v);
+}
+
+static inline long
+atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+{
+	return atomic_fetch_add_unless(v, a, u);
+}
+
+static inline bool
+atomic_long_add_unless(atomic_long_t *v, long a, long u)
+{
+	return atomic_add_unless(v, a, u);
+}
+
+static inline bool
+atomic_long_inc_not_zero(atomic_long_t *v)
+{
+	return atomic_inc_not_zero(v);
+}
+
+static inline bool
+atomic_long_inc_unless_negative(atomic_long_t *v)
+{
+	return atomic_inc_unless_negative(v);
+}
+
+static inline bool
+atomic_long_dec_unless_positive(atomic_long_t *v)
+{
+	return atomic_dec_unless_positive(v);
+}
+
+static inline long
+atomic_long_dec_if_positive(atomic_long_t *v)
+{
+	return atomic_dec_if_positive(v);
+}
-#endif	/* _ASM_GENERIC_ATOMIC_LONG_H */
+#endif /* CONFIG_64BIT */
+#endif /* _ASM_GENERIC_ATOMIC_LONG_H */

From patchwork Tue May 29 18:07:46 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137202
From: Mark Rutland
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Peter Zijlstra, Will Deacon, Arnd Bergmann,
    Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov
Subject: [PATCH 7/7] atomics: switch to generated instrumentation
Date: Tue, 29 May 2018 19:07:46 +0100
Message-Id: <20180529180746.29684-8-mark.rutland@arm.com>
In-Reply-To: <20180529180746.29684-1-mark.rutland@arm.com>
References: <20180529180746.29684-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

As a step towards ensuring the atomic* APIs are consistent, let's
switch to wrappers generated by gen-atomic-instrumented.sh, using the
same table used to generate the fallbacks and atomic-long wrappers.

These are checked in rather than generated with Kbuild, since:

* This allows inspection of the atomics with git grep and ctags on a
  pristine tree, which Linus strongly prefers being able to do.

* The fallbacks are not expected to change very often, and are not
  affected by machine details or configuration options, so regenerating
  them for *every* build is somewhat wasteful.
* These are included by files required *very* early in the build
  process (e.g. for generating bounds.h), and we'd rather not
  complicate the top-level Kbuild file.

Generating the atomic headers means that the instrumented wrappers
will remain in sync with the rest of the atomic APIs, and we gain all
the ordering variants of each atomic without having to manually expand
them all. The KASAN checks are automatically generated based on the
function parameters defined in atomics.tbl.

Note that try_cmpxchg() now correctly treats 'old' as a parameter that
may be written to, and not only read, as the hand-written
instrumentation assumed.

Other than try_cmpxchg, there should be no functional change as a
result of this patch (i.e. existing code should not be affected).
However, this change will allow architectures with relaxed atomics to
gain instrumentation.

Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: Arnd Bergmann
Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Dmitry Vyukov
---
 include/asm-generic/atomic-instrumented-atomic.h | 1629 ++++++++++++++++++++++
 include/asm-generic/atomic-instrumented.h        |  394 +-----
 2 files changed, 1631 insertions(+), 392 deletions(-)
 create mode 100644 include/asm-generic/atomic-instrumented-atomic.h

Note: this is generated code. Please see patch 3 for the code which
generated it, along with rationale in the cover letter.

Mark.

-- 
2.11.0

diff --git a/include/asm-generic/atomic-instrumented-atomic.h b/include/asm-generic/atomic-instrumented-atomic.h
new file mode 100644
index 000000000000..95f9cce977dd
--- /dev/null
+++ b/include/asm-generic/atomic-instrumented-atomic.h
@@ -0,0 +1,1629 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by ./scripts/atomic/gen-atomic-instrumented.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+#ifndef _ASM_GENERIC_ATOMIC_INSTRUMENTED_ATOMIC_H
+#define _ASM_GENERIC_ATOMIC_INSTRUMENTED_ATOMIC_H
+
+static inline int
+atomic_read(const atomic_t *v)
+{
+	kasan_check_read(v, sizeof(*v));
+	return arch_atomic_read(v);
+}
+#define atomic_read atomic_read
+
+#if defined(arch_atomic_read_acquire)
+static inline int
+atomic_read_acquire(const atomic_t *v)
+{
+	kasan_check_read(v, sizeof(*v));
+	return arch_atomic_read_acquire(v);
+}
+#define atomic_read_acquire atomic_read_acquire
+#endif
+
+static inline void
+atomic_set(atomic_t *v, int i)
+{
+	kasan_check_write(v, sizeof(*v));
+	arch_atomic_set(v, i);
+}
+#define atomic_set atomic_set
+
+#if defined(arch_atomic_set_release)
+static inline void
+atomic_set_release(atomic_t *v, int i)
+{
+	kasan_check_write(v, sizeof(*v));
+	arch_atomic_set_release(v, i);
+}
+#define atomic_set_release atomic_set_release
+#endif
+
+static inline void
+atomic_add(int i, atomic_t *v)
+{
+	kasan_check_write(v, sizeof(*v));
+	arch_atomic_add(i, v);
+}
+#define atomic_add atomic_add
+
+#if !defined(arch_atomic_add_return_relaxed) || defined(arch_atomic_add_return)
+static inline int
+atomic_add_return(int i, atomic_t *v)
+{
+	kasan_check_write(v, sizeof(*v));
+	return arch_atomic_add_return(i, v);
+}
+#define atomic_add_return atomic_add_return
+#endif
+
+#if defined(arch_atomic_add_return_acquire)
+static inline int
+atomic_add_return_acquire(int i, atomic_t *v)
+{
+	kasan_check_write(v, sizeof(*v));
+	return arch_atomic_add_return_acquire(i, v);
+}
+#define atomic_add_return_acquire atomic_add_return_acquire
+#endif
+
+#if defined(arch_atomic_add_return_release)
+static inline int
+atomic_add_return_release(int i, atomic_t *v)
+{
+	kasan_check_write(v, sizeof(*v));
+	return
arch_atomic_add_return_release(i, v); +} +#define atomic_add_return_release atomic_add_return_release +#endif + +#if defined(arch_atomic_add_return_relaxed) +static inline int +atomic_add_return_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_add_return_relaxed(i, v); +} +#define atomic_add_return_relaxed atomic_add_return_relaxed +#endif + +#if !defined(arch_atomic_fetch_add_relaxed) || defined(arch_atomic_fetch_add) +static inline int +atomic_fetch_add(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_add(i, v); +} +#define atomic_fetch_add atomic_fetch_add +#endif + +#if defined(arch_atomic_fetch_add_acquire) +static inline int +atomic_fetch_add_acquire(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_add_acquire(i, v); +} +#define atomic_fetch_add_acquire atomic_fetch_add_acquire +#endif + +#if defined(arch_atomic_fetch_add_release) +static inline int +atomic_fetch_add_release(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_add_release(i, v); +} +#define atomic_fetch_add_release atomic_fetch_add_release +#endif + +#if defined(arch_atomic_fetch_add_relaxed) +static inline int +atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_add_relaxed(i, v); +} +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#endif + +static inline void +atomic_sub(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic_sub(i, v); +} +#define atomic_sub atomic_sub + +#if !defined(arch_atomic_sub_return_relaxed) || defined(arch_atomic_sub_return) +static inline int +atomic_sub_return(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_sub_return(i, v); +} +#define atomic_sub_return atomic_sub_return +#endif + +#if defined(arch_atomic_sub_return_acquire) +static inline int +atomic_sub_return_acquire(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_sub_return_acquire(i, v); +} +#define atomic_sub_return_acquire atomic_sub_return_acquire +#endif + +#if defined(arch_atomic_sub_return_release) +static inline int +atomic_sub_return_release(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_sub_return_release(i, v); +} +#define atomic_sub_return_release atomic_sub_return_release +#endif + +#if defined(arch_atomic_sub_return_relaxed) +static inline int +atomic_sub_return_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_sub_return_relaxed(i, v); +} +#define atomic_sub_return_relaxed atomic_sub_return_relaxed +#endif + +#if !defined(arch_atomic_fetch_sub_relaxed) || defined(arch_atomic_fetch_sub) +static inline int +atomic_fetch_sub(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_sub(i, v); +} +#define atomic_fetch_sub atomic_fetch_sub +#endif + +#if defined(arch_atomic_fetch_sub_acquire) +static inline int +atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_sub_acquire(i, v); +} +#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire +#endif + +#if defined(arch_atomic_fetch_sub_release) +static inline int +atomic_fetch_sub_release(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_sub_release(i, v); +} +#define atomic_fetch_sub_release atomic_fetch_sub_release +#endif + +#if defined(arch_atomic_fetch_sub_relaxed) +static inline int 
+atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_sub_relaxed(i, v); +} +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed +#endif + +#if defined(arch_atomic_inc) +static inline void +atomic_inc(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic_inc(v); +} +#define atomic_inc atomic_inc +#endif + +#if defined(arch_atomic_inc_return) +static inline int +atomic_inc_return(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_inc_return(v); +} +#define atomic_inc_return atomic_inc_return +#endif + +#if defined(arch_atomic_inc_return_acquire) +static inline int +atomic_inc_return_acquire(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_inc_return_acquire(v); +} +#define atomic_inc_return_acquire atomic_inc_return_acquire +#endif + +#if defined(arch_atomic_inc_return_release) +static inline int +atomic_inc_return_release(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_inc_return_release(v); +} +#define atomic_inc_return_release atomic_inc_return_release +#endif + +#if defined(arch_atomic_inc_return_relaxed) +static inline int +atomic_inc_return_relaxed(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_inc_return_relaxed(v); +} +#define atomic_inc_return_relaxed atomic_inc_return_relaxed +#endif + +#if defined(arch_atomic_fetch_inc) +static inline int +atomic_fetch_inc(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_inc(v); +} +#define atomic_fetch_inc atomic_fetch_inc +#endif + +#if defined(arch_atomic_fetch_inc_acquire) +static inline int +atomic_fetch_inc_acquire(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_inc_acquire(v); +} +#define atomic_fetch_inc_acquire atomic_fetch_inc_acquire +#endif + +#if defined(arch_atomic_fetch_inc_release) +static inline int +atomic_fetch_inc_release(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_inc_release(v); +} +#define atomic_fetch_inc_release atomic_fetch_inc_release +#endif + +#if defined(arch_atomic_fetch_inc_relaxed) +static inline int +atomic_fetch_inc_relaxed(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_inc_relaxed(v); +} +#define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed +#endif + +#if defined(arch_atomic_dec) +static inline void +atomic_dec(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic_dec(v); +} +#define atomic_dec atomic_dec +#endif + +#if defined(arch_atomic_dec_return) +static inline int +atomic_dec_return(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_dec_return(v); +} +#define atomic_dec_return atomic_dec_return +#endif + +#if defined(arch_atomic_dec_return_acquire) +static inline int +atomic_dec_return_acquire(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_dec_return_acquire(v); +} +#define atomic_dec_return_acquire atomic_dec_return_acquire +#endif + +#if defined(arch_atomic_dec_return_release) +static inline int +atomic_dec_return_release(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_dec_return_release(v); +} +#define atomic_dec_return_release atomic_dec_return_release +#endif + +#if defined(arch_atomic_dec_return_relaxed) +static inline int +atomic_dec_return_relaxed(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_dec_return_relaxed(v); +} +#define atomic_dec_return_relaxed atomic_dec_return_relaxed +#endif + 
+#if defined(arch_atomic_fetch_dec) +static inline int +atomic_fetch_dec(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_dec(v); +} +#define atomic_fetch_dec atomic_fetch_dec +#endif + +#if defined(arch_atomic_fetch_dec_acquire) +static inline int +atomic_fetch_dec_acquire(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_dec_acquire(v); +} +#define atomic_fetch_dec_acquire atomic_fetch_dec_acquire +#endif + +#if defined(arch_atomic_fetch_dec_release) +static inline int +atomic_fetch_dec_release(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_dec_release(v); +} +#define atomic_fetch_dec_release atomic_fetch_dec_release +#endif + +#if defined(arch_atomic_fetch_dec_relaxed) +static inline int +atomic_fetch_dec_relaxed(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_dec_relaxed(v); +} +#define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed +#endif + +static inline void +atomic_and(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic_and(i, v); +} +#define atomic_and atomic_and + +#if !defined(arch_atomic_fetch_and_relaxed) || defined(arch_atomic_fetch_and) +static inline int +atomic_fetch_and(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_and(i, v); +} +#define atomic_fetch_and atomic_fetch_and +#endif + +#if defined(arch_atomic_fetch_and_acquire) +static inline int +atomic_fetch_and_acquire(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_and_acquire(i, v); +} +#define atomic_fetch_and_acquire atomic_fetch_and_acquire +#endif + +#if defined(arch_atomic_fetch_and_release) +static inline int +atomic_fetch_and_release(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_and_release(i, v); +} +#define atomic_fetch_and_release atomic_fetch_and_release +#endif + +#if defined(arch_atomic_fetch_and_relaxed) +static inline int +atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_and_relaxed(i, v); +} +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#endif + +#if defined(arch_atomic_andnot) +static inline void +atomic_andnot(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic_andnot(i, v); +} +#define atomic_andnot atomic_andnot +#endif + +#if defined(arch_atomic_fetch_andnot) +static inline int +atomic_fetch_andnot(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_andnot(i, v); +} +#define atomic_fetch_andnot atomic_fetch_andnot +#endif + +#if defined(arch_atomic_fetch_andnot_acquire) +static inline int +atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_andnot_acquire(i, v); +} +#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire +#endif + +#if defined(arch_atomic_fetch_andnot_release) +static inline int +atomic_fetch_andnot_release(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_andnot_release(i, v); +} +#define atomic_fetch_andnot_release atomic_fetch_andnot_release +#endif + +#if defined(arch_atomic_fetch_andnot_relaxed) +static inline int +atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_andnot_relaxed(i, v); +} +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#endif + +static inline void +atomic_or(int i, atomic_t *v) +{ + 
kasan_check_write(v, sizeof(*v)); + arch_atomic_or(i, v); +} +#define atomic_or atomic_or + +#if !defined(arch_atomic_fetch_or_relaxed) || defined(arch_atomic_fetch_or) +static inline int +atomic_fetch_or(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_or(i, v); +} +#define atomic_fetch_or atomic_fetch_or +#endif + +#if defined(arch_atomic_fetch_or_acquire) +static inline int +atomic_fetch_or_acquire(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_or_acquire(i, v); +} +#define atomic_fetch_or_acquire atomic_fetch_or_acquire +#endif + +#if defined(arch_atomic_fetch_or_release) +static inline int +atomic_fetch_or_release(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_or_release(i, v); +} +#define atomic_fetch_or_release atomic_fetch_or_release +#endif + +#if defined(arch_atomic_fetch_or_relaxed) +static inline int +atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_or_relaxed(i, v); +} +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#endif + +static inline void +atomic_xor(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic_xor(i, v); +} +#define atomic_xor atomic_xor + +#if !defined(arch_atomic_fetch_xor_relaxed) || defined(arch_atomic_fetch_xor) +static inline int +atomic_fetch_xor(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_xor(i, v); +} +#define atomic_fetch_xor atomic_fetch_xor +#endif + +#if defined(arch_atomic_fetch_xor_acquire) +static inline int +atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_xor_acquire(i, v); +} +#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire +#endif + +#if defined(arch_atomic_fetch_xor_release) +static inline int +atomic_fetch_xor_release(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_xor_release(i, v); +} +#define atomic_fetch_xor_release atomic_fetch_xor_release +#endif + +#if defined(arch_atomic_fetch_xor_relaxed) +static inline int +atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_xor_relaxed(i, v); +} +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed +#endif + +#if !defined(arch_atomic_xchg_relaxed) || defined(arch_atomic_xchg) +static inline int +atomic_xchg(atomic_t *v, int i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_xchg(v, i); +} +#define atomic_xchg atomic_xchg +#endif + +#if defined(arch_atomic_xchg_acquire) +static inline int +atomic_xchg_acquire(atomic_t *v, int i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_xchg_acquire(v, i); +} +#define atomic_xchg_acquire atomic_xchg_acquire +#endif + +#if defined(arch_atomic_xchg_release) +static inline int +atomic_xchg_release(atomic_t *v, int i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_xchg_release(v, i); +} +#define atomic_xchg_release atomic_xchg_release +#endif + +#if defined(arch_atomic_xchg_relaxed) +static inline int +atomic_xchg_relaxed(atomic_t *v, int i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_xchg_relaxed(v, i); +} +#define atomic_xchg_relaxed atomic_xchg_relaxed +#endif + +#if !defined(arch_atomic_cmpxchg_relaxed) || defined(arch_atomic_cmpxchg) +static inline int +atomic_cmpxchg(atomic_t *v, int old, int new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_cmpxchg(v, old, new); +} +#define 
atomic_cmpxchg atomic_cmpxchg +#endif + +#if defined(arch_atomic_cmpxchg_acquire) +static inline int +atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_cmpxchg_acquire(v, old, new); +} +#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire +#endif + +#if defined(arch_atomic_cmpxchg_release) +static inline int +atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_cmpxchg_release(v, old, new); +} +#define atomic_cmpxchg_release atomic_cmpxchg_release +#endif + +#if defined(arch_atomic_cmpxchg_relaxed) +static inline int +atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_cmpxchg_relaxed(v, old, new); +} +#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed +#endif + +#if defined(arch_atomic_try_cmpxchg) +static inline bool +atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic_try_cmpxchg(v, old, new); +} +#define atomic_try_cmpxchg atomic_try_cmpxchg +#endif + +#if defined(arch_atomic_try_cmpxchg_acquire) +static inline bool +atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic_try_cmpxchg_acquire(v, old, new); +} +#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire +#endif + +#if defined(arch_atomic_try_cmpxchg_release) +static inline bool +atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic_try_cmpxchg_release(v, old, new); +} +#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release +#endif + +#if defined(arch_atomic_try_cmpxchg_relaxed) +static inline bool +atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic_try_cmpxchg_relaxed(v, old, new); +} +#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed +#endif + +#if defined(arch_atomic_sub_and_test) +static inline bool +atomic_sub_and_test(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_sub_and_test(i, v); +} +#define atomic_sub_and_test atomic_sub_and_test +#endif + +#if defined(arch_atomic_dec_and_test) +static inline bool +atomic_dec_and_test(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_dec_and_test(v); +} +#define atomic_dec_and_test atomic_dec_and_test +#endif + +#if defined(arch_atomic_inc_and_test) +static inline bool +atomic_inc_and_test(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_inc_and_test(v); +} +#define atomic_inc_and_test atomic_inc_and_test +#endif + +#if defined(arch_atomic_add_negative) +static inline bool +atomic_add_negative(int i, atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_add_negative(i, v); +} +#define atomic_add_negative atomic_add_negative +#endif + +#if defined(arch_atomic_fetch_add_unless) +static inline int +atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_fetch_add_unless(v, a, u); +} +#define atomic_fetch_add_unless atomic_fetch_add_unless +#endif + +#if defined(arch_atomic_add_unless) +static inline bool +atomic_add_unless(atomic_t *v, int a, int u) +{ + kasan_check_write(v, sizeof(*v)); + return 
arch_atomic_add_unless(v, a, u); +} +#define atomic_add_unless atomic_add_unless +#endif + +#if defined(arch_atomic_inc_not_zero) +static inline bool +atomic_inc_not_zero(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_inc_not_zero(v); +} +#define atomic_inc_not_zero atomic_inc_not_zero +#endif + +#if defined(arch_atomic_inc_unless_negative) +static inline bool +atomic_inc_unless_negative(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_inc_unless_negative(v); +} +#define atomic_inc_unless_negative atomic_inc_unless_negative +#endif + +#if defined(arch_atomic_dec_unless_positive) +static inline bool +atomic_dec_unless_positive(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_dec_unless_positive(v); +} +#define atomic_dec_unless_positive atomic_dec_unless_positive +#endif + +#if defined(arch_atomic_dec_if_positive) +static inline int +atomic_dec_if_positive(atomic_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic_dec_if_positive(v); +} +#define atomic_dec_if_positive atomic_dec_if_positive +#endif + +static inline s64 +atomic64_read(const atomic64_t *v) +{ + kasan_check_read(v, sizeof(*v)); + return arch_atomic64_read(v); +} +#define atomic64_read atomic64_read + +#if defined(arch_atomic64_read_acquire) +static inline s64 +atomic64_read_acquire(const atomic64_t *v) +{ + kasan_check_read(v, sizeof(*v)); + return arch_atomic64_read_acquire(v); +} +#define atomic64_read_acquire atomic64_read_acquire +#endif + +static inline void +atomic64_set(atomic64_t *v, s64 i) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_set(v, i); +} +#define atomic64_set atomic64_set + +#if defined(arch_atomic64_set_release) +static inline void +atomic64_set_release(atomic64_t *v, s64 i) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_set_release(v, i); +} +#define atomic64_set_release atomic64_set_release +#endif + +static inline void +atomic64_add(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_add(i, v); +} +#define atomic64_add atomic64_add + +#if !defined(arch_atomic64_add_return_relaxed) || defined(arch_atomic64_add_return) +static inline s64 +atomic64_add_return(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_add_return(i, v); +} +#define atomic64_add_return atomic64_add_return +#endif + +#if defined(arch_atomic64_add_return_acquire) +static inline s64 +atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_add_return_acquire(i, v); +} +#define atomic64_add_return_acquire atomic64_add_return_acquire +#endif + +#if defined(arch_atomic64_add_return_release) +static inline s64 +atomic64_add_return_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_add_return_release(i, v); +} +#define atomic64_add_return_release atomic64_add_return_release +#endif + +#if defined(arch_atomic64_add_return_relaxed) +static inline s64 +atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_add_return_relaxed(i, v); +} +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#endif + +#if !defined(arch_atomic64_fetch_add_relaxed) || defined(arch_atomic64_fetch_add) +static inline s64 +atomic64_fetch_add(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_add(i, v); +} +#define atomic64_fetch_add atomic64_fetch_add +#endif + +#if defined(arch_atomic64_fetch_add_acquire) 
+static inline s64 +atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_add_acquire(i, v); +} +#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire +#endif + +#if defined(arch_atomic64_fetch_add_release) +static inline s64 +atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_add_release(i, v); +} +#define atomic64_fetch_add_release atomic64_fetch_add_release +#endif + +#if defined(arch_atomic64_fetch_add_relaxed) +static inline s64 +atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_add_relaxed(i, v); +} +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#endif + +static inline void +atomic64_sub(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_sub(i, v); +} +#define atomic64_sub atomic64_sub + +#if !defined(arch_atomic64_sub_return_relaxed) || defined(arch_atomic64_sub_return) +static inline s64 +atomic64_sub_return(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_sub_return(i, v); +} +#define atomic64_sub_return atomic64_sub_return +#endif + +#if defined(arch_atomic64_sub_return_acquire) +static inline s64 +atomic64_sub_return_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_sub_return_acquire(i, v); +} +#define atomic64_sub_return_acquire atomic64_sub_return_acquire +#endif + +#if defined(arch_atomic64_sub_return_release) +static inline s64 +atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_sub_return_release(i, v); +} +#define atomic64_sub_return_release atomic64_sub_return_release +#endif + +#if defined(arch_atomic64_sub_return_relaxed) +static inline s64 +atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_sub_return_relaxed(i, v); +} +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed +#endif + +#if !defined(arch_atomic64_fetch_sub_relaxed) || defined(arch_atomic64_fetch_sub) +static inline s64 +atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_sub(i, v); +} +#define atomic64_fetch_sub atomic64_fetch_sub +#endif + +#if defined(arch_atomic64_fetch_sub_acquire) +static inline s64 +atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_sub_acquire(i, v); +} +#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire +#endif + +#if defined(arch_atomic64_fetch_sub_release) +static inline s64 +atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_sub_release(i, v); +} +#define atomic64_fetch_sub_release atomic64_fetch_sub_release +#endif + +#if defined(arch_atomic64_fetch_sub_relaxed) +static inline s64 +atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_sub_relaxed(i, v); +} +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed +#endif + +#if defined(arch_atomic64_inc) +static inline void +atomic64_inc(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_inc(v); +} +#define atomic64_inc atomic64_inc +#endif + +#if defined(arch_atomic64_inc_return) +static inline s64 +atomic64_inc_return(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return 
arch_atomic64_inc_return(v); +} +#define atomic64_inc_return atomic64_inc_return +#endif + +#if defined(arch_atomic64_inc_return_acquire) +static inline s64 +atomic64_inc_return_acquire(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_inc_return_acquire(v); +} +#define atomic64_inc_return_acquire atomic64_inc_return_acquire +#endif + +#if defined(arch_atomic64_inc_return_release) +static inline s64 +atomic64_inc_return_release(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_inc_return_release(v); +} +#define atomic64_inc_return_release atomic64_inc_return_release +#endif + +#if defined(arch_atomic64_inc_return_relaxed) +static inline s64 +atomic64_inc_return_relaxed(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_inc_return_relaxed(v); +} +#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed +#endif + +#if defined(arch_atomic64_fetch_inc) +static inline s64 +atomic64_fetch_inc(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_inc(v); +} +#define atomic64_fetch_inc atomic64_fetch_inc +#endif + +#if defined(arch_atomic64_fetch_inc_acquire) +static inline s64 +atomic64_fetch_inc_acquire(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_inc_acquire(v); +} +#define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire +#endif + +#if defined(arch_atomic64_fetch_inc_release) +static inline s64 +atomic64_fetch_inc_release(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_inc_release(v); +} +#define atomic64_fetch_inc_release atomic64_fetch_inc_release +#endif + +#if defined(arch_atomic64_fetch_inc_relaxed) +static inline s64 +atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_inc_relaxed(v); +} +#define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed +#endif + +#if defined(arch_atomic64_dec) +static inline void +atomic64_dec(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_dec(v); +} +#define atomic64_dec atomic64_dec +#endif + +#if defined(arch_atomic64_dec_return) +static inline s64 +atomic64_dec_return(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_dec_return(v); +} +#define atomic64_dec_return atomic64_dec_return +#endif + +#if defined(arch_atomic64_dec_return_acquire) +static inline s64 +atomic64_dec_return_acquire(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_dec_return_acquire(v); +} +#define atomic64_dec_return_acquire atomic64_dec_return_acquire +#endif + +#if defined(arch_atomic64_dec_return_release) +static inline s64 +atomic64_dec_return_release(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_dec_return_release(v); +} +#define atomic64_dec_return_release atomic64_dec_return_release +#endif + +#if defined(arch_atomic64_dec_return_relaxed) +static inline s64 +atomic64_dec_return_relaxed(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_dec_return_relaxed(v); +} +#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed +#endif + +#if defined(arch_atomic64_fetch_dec) +static inline s64 +atomic64_fetch_dec(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_dec(v); +} +#define atomic64_fetch_dec atomic64_fetch_dec +#endif + +#if defined(arch_atomic64_fetch_dec_acquire) +static inline s64 +atomic64_fetch_dec_acquire(atomic64_t *v) +{ + kasan_check_write(v, 
sizeof(*v)); + return arch_atomic64_fetch_dec_acquire(v); +} +#define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire +#endif + +#if defined(arch_atomic64_fetch_dec_release) +static inline s64 +atomic64_fetch_dec_release(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_dec_release(v); +} +#define atomic64_fetch_dec_release atomic64_fetch_dec_release +#endif + +#if defined(arch_atomic64_fetch_dec_relaxed) +static inline s64 +atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_dec_relaxed(v); +} +#define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed +#endif + +static inline void +atomic64_and(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_and(i, v); +} +#define atomic64_and atomic64_and + +#if !defined(arch_atomic64_fetch_and_relaxed) || defined(arch_atomic64_fetch_and) +static inline s64 +atomic64_fetch_and(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_and(i, v); +} +#define atomic64_fetch_and atomic64_fetch_and +#endif + +#if defined(arch_atomic64_fetch_and_acquire) +static inline s64 +atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_and_acquire(i, v); +} +#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire +#endif + +#if defined(arch_atomic64_fetch_and_release) +static inline s64 +atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_and_release(i, v); +} +#define atomic64_fetch_and_release atomic64_fetch_and_release +#endif + +#if defined(arch_atomic64_fetch_and_relaxed) +static inline s64 +atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_and_relaxed(i, v); +} +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#endif + +#if defined(arch_atomic64_andnot) +static inline void +atomic64_andnot(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_andnot(i, v); +} +#define atomic64_andnot atomic64_andnot +#endif + +#if defined(arch_atomic64_fetch_andnot) +static inline s64 +atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_andnot(i, v); +} +#define atomic64_fetch_andnot atomic64_fetch_andnot +#endif + +#if defined(arch_atomic64_fetch_andnot_acquire) +static inline s64 +atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_andnot_acquire(i, v); +} +#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire +#endif + +#if defined(arch_atomic64_fetch_andnot_release) +static inline s64 +atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_andnot_release(i, v); +} +#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release +#endif + +#if defined(arch_atomic64_fetch_andnot_relaxed) +static inline s64 +atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_andnot_relaxed(i, v); +} +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#endif + +static inline void +atomic64_or(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_or(i, v); +} +#define atomic64_or atomic64_or + +#if !defined(arch_atomic64_fetch_or_relaxed) || defined(arch_atomic64_fetch_or) +static inline 
s64 +atomic64_fetch_or(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_or(i, v); +} +#define atomic64_fetch_or atomic64_fetch_or +#endif + +#if defined(arch_atomic64_fetch_or_acquire) +static inline s64 +atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_or_acquire(i, v); +} +#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire +#endif + +#if defined(arch_atomic64_fetch_or_release) +static inline s64 +atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_or_release(i, v); +} +#define atomic64_fetch_or_release atomic64_fetch_or_release +#endif + +#if defined(arch_atomic64_fetch_or_relaxed) +static inline s64 +atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_or_relaxed(i, v); +} +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#endif + +static inline void +atomic64_xor(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + arch_atomic64_xor(i, v); +} +#define atomic64_xor atomic64_xor + +#if !defined(arch_atomic64_fetch_xor_relaxed) || defined(arch_atomic64_fetch_xor) +static inline s64 +atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_xor(i, v); +} +#define atomic64_fetch_xor atomic64_fetch_xor +#endif + +#if defined(arch_atomic64_fetch_xor_acquire) +static inline s64 +atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_xor_acquire(i, v); +} +#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire +#endif + +#if defined(arch_atomic64_fetch_xor_release) +static inline s64 +atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_xor_release(i, v); +} +#define atomic64_fetch_xor_release atomic64_fetch_xor_release +#endif + +#if defined(arch_atomic64_fetch_xor_relaxed) +static inline s64 +atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_xor_relaxed(i, v); +} +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed +#endif + +#if !defined(arch_atomic64_xchg_relaxed) || defined(arch_atomic64_xchg) +static inline s64 +atomic64_xchg(atomic64_t *v, s64 i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_xchg(v, i); +} +#define atomic64_xchg atomic64_xchg +#endif + +#if defined(arch_atomic64_xchg_acquire) +static inline s64 +atomic64_xchg_acquire(atomic64_t *v, s64 i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_xchg_acquire(v, i); +} +#define atomic64_xchg_acquire atomic64_xchg_acquire +#endif + +#if defined(arch_atomic64_xchg_release) +static inline s64 +atomic64_xchg_release(atomic64_t *v, s64 i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_xchg_release(v, i); +} +#define atomic64_xchg_release atomic64_xchg_release +#endif + +#if defined(arch_atomic64_xchg_relaxed) +static inline s64 +atomic64_xchg_relaxed(atomic64_t *v, s64 i) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_xchg_relaxed(v, i); +} +#define atomic64_xchg_relaxed atomic64_xchg_relaxed +#endif + +#if !defined(arch_atomic64_cmpxchg_relaxed) || defined(arch_atomic64_cmpxchg) +static inline s64 +atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_cmpxchg(v, old, new); +} +#define atomic64_cmpxchg 
atomic64_cmpxchg +#endif + +#if defined(arch_atomic64_cmpxchg_acquire) +static inline s64 +atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_cmpxchg_acquire(v, old, new); +} +#define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire +#endif + +#if defined(arch_atomic64_cmpxchg_release) +static inline s64 +atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_cmpxchg_release(v, old, new); +} +#define atomic64_cmpxchg_release atomic64_cmpxchg_release +#endif + +#if defined(arch_atomic64_cmpxchg_relaxed) +static inline s64 +atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_cmpxchg_relaxed(v, old, new); +} +#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed +#endif + +#if defined(arch_atomic64_try_cmpxchg) +static inline bool +atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic64_try_cmpxchg(v, old, new); +} +#define atomic64_try_cmpxchg atomic64_try_cmpxchg +#endif + +#if defined(arch_atomic64_try_cmpxchg_acquire) +static inline bool +atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic64_try_cmpxchg_acquire(v, old, new); +} +#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire +#endif + +#if defined(arch_atomic64_try_cmpxchg_release) +static inline bool +atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic64_try_cmpxchg_release(v, old, new); +} +#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release +#endif + +#if defined(arch_atomic64_try_cmpxchg_relaxed) +static inline bool +atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +{ + kasan_check_write(v, sizeof(*v)); + kasan_check_write(old, sizeof(*old)); + return arch_atomic64_try_cmpxchg_relaxed(v, old, new); +} +#define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed +#endif + +#if defined(arch_atomic64_sub_and_test) +static inline bool +atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_sub_and_test(i, v); +} +#define atomic64_sub_and_test atomic64_sub_and_test +#endif + +#if defined(arch_atomic64_dec_and_test) +static inline bool +atomic64_dec_and_test(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_dec_and_test(v); +} +#define atomic64_dec_and_test atomic64_dec_and_test +#endif + +#if defined(arch_atomic64_inc_and_test) +static inline bool +atomic64_inc_and_test(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_inc_and_test(v); +} +#define atomic64_inc_and_test atomic64_inc_and_test +#endif + +#if defined(arch_atomic64_add_negative) +static inline bool +atomic64_add_negative(s64 i, atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_add_negative(i, v); +} +#define atomic64_add_negative atomic64_add_negative +#endif + +#if defined(arch_atomic64_fetch_add_unless) +static inline s64 +atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_fetch_add_unless(v, a, u); +} +#define atomic64_fetch_add_unless atomic64_fetch_add_unless +#endif + +#if defined(arch_atomic64_add_unless) 
+static inline bool +atomic64_add_unless(atomic64_t *v, s64 a, s64 u) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_add_unless(v, a, u); +} +#define atomic64_add_unless atomic64_add_unless +#endif + +#if defined(arch_atomic64_inc_not_zero) +static inline bool +atomic64_inc_not_zero(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_inc_not_zero(v); +} +#define atomic64_inc_not_zero atomic64_inc_not_zero +#endif + +#if defined(arch_atomic64_inc_unless_negative) +static inline bool +atomic64_inc_unless_negative(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_inc_unless_negative(v); +} +#define atomic64_inc_unless_negative atomic64_inc_unless_negative +#endif + +#if defined(arch_atomic64_dec_unless_positive) +static inline bool +atomic64_dec_unless_positive(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_dec_unless_positive(v); +} +#define atomic64_dec_unless_positive atomic64_dec_unless_positive +#endif + +#if defined(arch_atomic64_dec_if_positive) +static inline s64 +atomic64_dec_if_positive(atomic64_t *v) +{ + kasan_check_write(v, sizeof(*v)); + return arch_atomic64_dec_if_positive(v); +} +#define atomic64_dec_if_positive atomic64_dec_if_positive +#endif + +#endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_ATOMIC_H */ diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index 1458de7ece87..a26317c97132 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -16,398 +16,6 @@ #include #include -static __always_inline int atomic_read(const atomic_t *v) -{ - kasan_check_read(v, sizeof(*v)); - return arch_atomic_read(v); -} - -static __always_inline s64 atomic64_read(const atomic64_t *v) -{ - kasan_check_read(v, sizeof(*v)); - return arch_atomic64_read(v); -} - -static __always_inline void atomic_set(atomic_t *v, int i) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_set(v, i); -} - -static __always_inline void atomic64_set(atomic64_t *v, s64 i) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_set(v, i); -} - -static __always_inline int atomic_xchg(atomic_t *v, int i) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_xchg(v, i); -} - -static __always_inline s64 atomic64_xchg(atomic64_t *v, s64 i) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_xchg(v, i); -} - -static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_cmpxchg(v, old, new); -} - -static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg(v, old, new); -} - -#ifdef arch_atomic_try_cmpxchg -#define atomic_try_cmpxchg atomic_try_cmpxchg -static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) -{ - kasan_check_write(v, sizeof(*v)); - kasan_check_read(old, sizeof(*old)); - return arch_atomic_try_cmpxchg(v, old, new); -} -#endif - -#ifdef arch_atomic64_try_cmpxchg -#define atomic64_try_cmpxchg atomic64_try_cmpxchg -static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) -{ - kasan_check_write(v, sizeof(*v)); - kasan_check_read(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg(v, old, new); -} -#endif - -#ifdef arch_atomic_fetch_add_unless -#define atomic_fetch_add_unless atomic_fetch_add_unless -static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) -{ - kasan_check_write(v, 
sizeof(*v)); - return arch_atomic_fetch_add_unless(v, a, u); -} -#endif - -#ifdef arch_atomic64_fetch_add_unless -#define atomic64_fetch_add_unless atomic64_fetch_add_unless -static __always_inline int atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_add_unless(v, a, u); -} -#endif - -#ifdef arch_atomic_inc -#define atomic_inc atomic_inc -static __always_inline void atomic_inc(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_inc(v); -} -#endif - -#ifdef arch_atomic64_inc -#define atomic64_inc atomic64_inc -static __always_inline void atomic64_inc(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_inc(v); -} -#endif - -#ifdef arch_atomic_dec -#define atomic_dec atomic_dec -static __always_inline void atomic_dec(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_dec(v); -} -#endif - -#ifdef atch_atomic64_dec -#define atomic64_dec -static __always_inline void atomic64_dec(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_dec(v); -} -#endif - -static __always_inline void atomic_add(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_add(i, v); -} - -static __always_inline void atomic64_add(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_add(i, v); -} - -static __always_inline void atomic_sub(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_sub(i, v); -} - -static __always_inline void atomic64_sub(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_sub(i, v); -} - -static __always_inline void atomic_and(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_and(i, v); -} - -static __always_inline void atomic64_and(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_and(i, v); -} - -static __always_inline void atomic_or(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_or(i, v); -} - -static __always_inline void atomic64_or(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_or(i, v); -} - -static __always_inline void atomic_xor(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_xor(i, v); -} - -static __always_inline void atomic64_xor(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_xor(i, v); -} - -#ifdef arch_atomic_inc_return -#define atomic_inc_return atomic_inc_return -static __always_inline int atomic_inc_return(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_inc_return(v); -} -#endif - -#ifdef arch_atomic64_in_return -#define atomic64_inc_return atomic64_inc_return -static __always_inline s64 atomic64_inc_return(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_inc_return(v); -} -#endif - -#ifdef arch_atomic_dec_return -#define atomic_dec_return atomic_dec_return -static __always_inline int atomic_dec_return(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_dec_return(v); -} -#endif - -#ifdef arch_atomic64_dec_return -#define atomic64_dec_return atomic64_dec_return -static __always_inline s64 atomic64_dec_return(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_dec_return(v); -} -#endif - -#ifdef arch_atomic64_inc_not_zero -#define atomic64_inc_not_zero atomic64_inc_not_zero -static __always_inline s64 atomic64_inc_not_zero(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return 
arch_atomic64_inc_not_zero(v); -} -#endif - -#ifdef arch_atomic64_dec_if_positive -#define atomic64_dec_if_positive atomic64_dec_if_positive -static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_dec_if_positive(v); -} -#endif - -#ifdef arch_atomic_dec_and_test -#define atomic_dec_and_test atomic_dec_and_test -static __always_inline bool atomic_dec_and_test(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_dec_and_test(v); -} -#endif - -#ifdef arch_atomic64_dec_and_test -#define atomic64_dec_and_test atomic64_dec_and_test -static __always_inline bool atomic64_dec_and_test(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_dec_and_test(v); -} -#endif - -#ifdef arch_atomic_inc_and_test -#define atomic_inc_and_test atomic_inc_and_test -static __always_inline bool atomic_inc_and_test(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_inc_and_test(v); -} -#endif - -#ifdef arch_atomic64_inc_and_test -#define atomic64_inc_and_test atomic64_inc_and_test -static __always_inline bool atomic64_inc_and_test(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_inc_and_test(v); -} -#endif - -static __always_inline int atomic_add_return(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_add_return(i, v); -} - -static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_add_return(i, v); -} - -static __always_inline int atomic_sub_return(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_sub_return(i, v); -} - -static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_sub_return(i, v); -} - -static __always_inline int atomic_fetch_add(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_add(i, v); -} - -static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_add(i, v); -} - -static __always_inline int atomic_fetch_sub(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_sub(i, v); -} - -static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub(i, v); -} - -static __always_inline int atomic_fetch_and(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_and(i, v); -} - -static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_and(i, v); -} - -static __always_inline int atomic_fetch_or(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_or(i, v); -} - -static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_or(i, v); -} - -static __always_inline int atomic_fetch_xor(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_xor(i, v); -} - -static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor(i, v); -} - -#ifdef arch_atomic_sub_and_test -#define atomic_sub_and_test atomic_sub_and_test -static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) -{ - kasan_check_write(v, 
sizeof(*v));
-	return arch_atomic_sub_and_test(i, v);
-}
-#endif
-
-#ifdef arch_atomic64_sub_and_test
-#define atomic64_sub_and_test atomic64_sub_and_test
-static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v)
-{
-	kasan_check_write(v, sizeof(*v));
-	return arch_atomic64_sub_and_test(i, v);
-}
-#endif
-
-#ifdef arch_atomic_add_negative
-#define atomic_add_negative atomic_add_negative
-static __always_inline bool atomic_add_negative(int i, atomic_t *v)
-{
-	kasan_check_write(v, sizeof(*v));
-	return arch_atomic_add_negative(i, v);
-}
-#endif
-
-#ifdef arch_atomic64_add_negative
-#define atomic64_add_negative atomic64_add_negative
-static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v)
-{
-	kasan_check_write(v, sizeof(*v));
-	return arch_atomic64_add_negative(i, v);
-}
-#endif
-
 static __always_inline unsigned long
 cmpxchg_size(volatile void *ptr, unsigned long old, unsigned long new,
 	     int size)
 {
@@ -532,4 +140,6 @@ cmpxchg64_local_size(volatile u64 *ptr, u64 old, u64 new)
 	arch_cmpxchg_double_local((p1), (p2), (o1), (o2), (n1), (n2));	\
 })
 
+#include <asm-generic/atomic-instrumented-atomic.h>
+
 #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
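
For readers unfamiliar with the try_cmpxchg() semantics discussed in the
commit message: on failure, try_cmpxchg() writes the value it observed in
*v back into *old, which is why the generated wrappers instrument 'old'
with kasan_check_write() where the hand-written instrumentation only used
kasan_check_read(). A minimal sketch of a typical caller, assuming the
usual atomic_t API; this example is illustrative only and not part of the
patch:

	/* Increment v unless it is zero; illustrative example only. */
	static bool example_inc_unless_zero(atomic_t *v)
	{
		int old = atomic_read(v);

		do {
			if (old == 0)
				return false;
			/*
			 * On failure, atomic_try_cmpxchg() updates 'old'
			 * with the current value of *v, so 'old' really
			 * is written to -- hence kasan_check_write().
			 */
		} while (!atomic_try_cmpxchg(v, &old, old + 1));

		return true;
	}

The guards of the form
#if !defined(arch_atomic_add_return_relaxed) || defined(arch_atomic_add_return)
mirror the fallback generation: an architecture either provides the
fully-ordered op directly, or provides only the _relaxed form and the
fallback headers build the ordered variants from it, so the wrapper is
emitted in exactly the cases where the arch_ op exists.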