From patchwork Fri May 4 17:39:32 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 135012
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com,
    catalin.marinas@arm.com, dvyukov@google.com, mark.rutland@arm.com,
    mingo@kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 1/6] locking/atomic, asm-generic: instrument ordering variants
Date: Fri, 4 May 2018 18:39:32 +0100
Message-Id: <20180504173937.25300-2-mark.rutland@arm.com>
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>

Currently, <asm-generic/atomic-instrumented.h> only instruments the fully
ordered variants of the atomic functions, ignoring the
{relaxed,acquire,release} ordering variants.

This patch reworks the header to instrument all ordering variants of the
atomic functions, so that architectures implementing these are instrumented
appropriately.

To minimise repetition, a macro is used to generate each variant from a
common template. The {full,relaxed,acquire,release} ordering variants are
then built from this template wherever the architecture provides an
implementation.

To stick to an 80 column limit while keeping the templates legible, the
return type and function name of each template are split over two lines. For
consistency, this is done even when not strictly necessary.
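As an illustration (derived from the template in the diff below, not an
addition to the patch), expanding INSTR_ATOMIC_XCHG(_acquire) produces a
wrapper equivalent to:

static __always_inline int
atomic_xchg_acquire(atomic_t *v, int i)
{
	/* KASAN sees the access to the atomic variable... */
	kasan_check_write(v, sizeof(*v));
	/* ...before the architecture's uninstrumented implementation runs. */
	return arch_atomic_xchg_acquire(v, i);
}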
Signed-off-by: Mark Rutland Cc: Andrey Ryabinin Cc: Boqun Feng Cc: Dmitry Vyukov Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Will Deacon --- include/asm-generic/atomic-instrumented.h | 1195 ++++++++++++++++++++++++----- 1 file changed, 1008 insertions(+), 187 deletions(-) -- 2.11.0 Signed-off-by: Andrea Parri diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index ec07f23678ea..26f0e3098442 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -40,171 +40,664 @@ static __always_inline void atomic64_set(atomic64_t *v, s64 i) arch_atomic64_set(v, i); } -static __always_inline int atomic_xchg(atomic_t *v, int i) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_xchg(v, i); +#define INSTR_ATOMIC_XCHG(order) \ +static __always_inline int \ +atomic_xchg##order(atomic_t *v, int i) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_xchg##order(v, i); \ } -static __always_inline s64 atomic64_xchg(atomic64_t *v, s64 i) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_xchg(v, i); +INSTR_ATOMIC_XCHG() + +#ifdef arch_atomic_xchg_relaxed +INSTR_ATOMIC_XCHG(_relaxed) +#define atomic_xchg_relaxed atomic_xchg_relaxed +#endif + +#ifdef arch_atomic_xchg_acquire +INSTR_ATOMIC_XCHG(_acquire) +#define atomic_xchg_acquire atomic_xchg_acquire +#endif + +#ifdef arch_atomic_xchg_release +INSTR_ATOMIC_XCHG(_release) +#define atomic_xchg_release atomic_xchg_release +#endif + +#define INSTR_ATOMIC64_XCHG(order) \ +static __always_inline s64 \ +atomic64_xchg##order(atomic64_t *v, s64 i) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_xchg##order(v, i); \ } -static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_cmpxchg(v, old, new); +INSTR_ATOMIC64_XCHG() + +#ifdef arch_atomic64_xchg_relaxed +INSTR_ATOMIC64_XCHG(_relaxed) +#define atomic64_xchg_relaxed atomic64_xchg_relaxed +#endif + +#ifdef arch_atomic64_xchg_acquire +INSTR_ATOMIC64_XCHG(_acquire) +#define atomic64_xchg_acquire atomic64_xchg_acquire +#endif + +#ifdef arch_atomic64_xchg_release +INSTR_ATOMIC64_XCHG(_release) +#define atomic64_xchg_release atomic64_xchg_release +#endif + +#define INSTR_ATOMIC_CMPXCHG(order) \ +static __always_inline int \ +atomic_cmpxchg##order(atomic_t *v, int old, int new) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_cmpxchg##order(v, old, new); \ } -static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_cmpxchg(v, old, new); +INSTR_ATOMIC_CMPXCHG() + +#ifdef arch_atomic_cmpxchg_relaxed +INSTR_ATOMIC_CMPXCHG(_relaxed) +#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed +#endif + +#ifdef arch_atomic_cmpxchg_acquire +INSTR_ATOMIC_CMPXCHG(_acquire) +#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire +#endif + +#ifdef arch_atomic_cmpxchg_release +INSTR_ATOMIC_CMPXCHG(_release) +#define atomic_cmpxchg_release atomic_cmpxchg_release +#endif + +#define INSTR_ATOMIC64_CMPXCHG(order) \ +static __always_inline s64 \ +atomic64_cmpxchg##order(atomic64_t *v, s64 old, s64 new) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_cmpxchg##order(v, old, new); \ +} + +INSTR_ATOMIC64_CMPXCHG() + +#ifdef arch_atomic64_cmpxchg_relaxed +INSTR_ATOMIC64_CMPXCHG(_relaxed) +#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed +#endif + +#ifdef arch_atomic64_cmpxchg_acquire 
+INSTR_ATOMIC64_CMPXCHG(_acquire) +#define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire +#endif + +#ifdef arch_atomic64_cmpxchg_release +INSTR_ATOMIC64_CMPXCHG(_release) +#define atomic64_cmpxchg_release atomic64_cmpxchg_release +#endif + +#define INSTR_ATOMIC_TRY_CMPXCHG(order) \ +static __always_inline bool \ +atomic_try_cmpxchg##order(atomic_t *v, int *old, int new) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + kasan_check_read(old, sizeof(*old)); \ + return arch_atomic_try_cmpxchg##order(v, old, new); \ } #ifdef arch_atomic_try_cmpxchg +INSTR_ATOMIC_TRY_CMPXCHG() #define atomic_try_cmpxchg atomic_try_cmpxchg -static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) -{ - kasan_check_write(v, sizeof(*v)); - kasan_check_read(old, sizeof(*old)); - return arch_atomic_try_cmpxchg(v, old, new); -} #endif -#ifdef arch_atomic64_try_cmpxchg -#define atomic64_try_cmpxchg atomic64_try_cmpxchg -static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) -{ - kasan_check_write(v, sizeof(*v)); - kasan_check_read(old, sizeof(*old)); - return arch_atomic64_try_cmpxchg(v, old, new); +#ifdef arch_atomic_try_cmpxchg_relaxed +INSTR_ATOMIC_TRY_CMPXCHG(_relaxed) +#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed +#endif + +#ifdef arch_atomic_try_cmpxchg_acquire +INSTR_ATOMIC_TRY_CMPXCHG(_acquire) +#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire +#endif + +#ifdef arch_atomic_try_cmpxchg_release +INSTR_ATOMIC_TRY_CMPXCHG(_release) +#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release +#endif + +#define INSTR_ATOMIC64_TRY_CMPXCHG(order) \ +static __always_inline bool \ +atomic64_try_cmpxchg##order(atomic64_t *v, s64 *old, s64 new) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + kasan_check_read(old, sizeof(*old)); \ + return arch_atomic64_try_cmpxchg##order(v, old, new); \ } + +#ifdef arch_atomic64_try_cmpxchg +INSTR_ATOMIC64_TRY_CMPXCHG() +#define atomic_try_cmpxchg atomic_try_cmpxchg #endif -static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u) -{ - kasan_check_write(v, sizeof(*v)); - return __arch_atomic_add_unless(v, a, u); +#ifdef arch_atomic64_try_cmpxchg_relaxed +INSTR_ATOMIC64_TRY_CMPXCHG(_relaxed) +#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed +#endif + +#ifdef arch_atomic64_try_cmpxchg_acquire +INSTR_ATOMIC64_TRY_CMPXCHG(_acquire) +#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire +#endif + +#ifdef arch_atomic64_try_cmpxchg_release +INSTR_ATOMIC64_TRY_CMPXCHG(_release) +#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release +#endif + +#define __INSTR_ATOMIC_ADD_UNLESS(order) \ +static __always_inline int \ +__atomic_add_unless##order(atomic_t *v, int a, int u) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return __arch_atomic_add_unless##order(v, a, u); \ } +__INSTR_ATOMIC_ADD_UNLESS() -static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_add_unless(v, a, u); +#ifdef __arch_atomic_add_unless_relaxed +__INSTR_ATOMIC_ADD_UNLESS(_relaxed) +#define __atomic_add_unless_relaxed __atomic_add_unless_relaxed +#endif + +#ifdef __arch_atomic_add_unless_acquire +__INSTR_ATOMIC_ADD_UNLESS(_acquire) +#define __atomic_add_unless_acquire __atomic_add_unless_acquire +#endif + +#ifdef __arch_atomic_add_unless_release +__INSTR_ATOMIC_ADD_UNLESS(_release) +#define __atomic_add_unless_release __atomic_add_unless_release +#endif + +#define INSTR_ATOMIC64_ADD_UNLESS(order) \ +static __always_inline 
bool \ +atomic64_add_unless##order(atomic64_t *v, s64 a, s64 u) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_add_unless##order(v, a, u); \ } -static __always_inline void atomic_inc(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_inc(v); +INSTR_ATOMIC64_ADD_UNLESS() + +#ifdef arch_atomic64_add_unless_relaxed +INSTR_ATOMIC64_ADD_UNLESS(_relaxed) +#define atomic64_add_unless_relaxed atomic64_add_unless_relaxed +#endif + +#ifdef arch_atomic64_add_unless_acquire +INSTR_ATOMIC64_ADD_UNLESS(_acquire) +#define atomic64_add_unless_acquire atomic64_add_unless_acquire +#endif + +#ifdef arch_atomic64_add_unless_release +INSTR_ATOMIC64_ADD_UNLESS(_release) +#define atomic64_add_unless_release atomic64_add_unless_release +#endif + +#define INSTR_ATOMIC_INC(order) \ +static __always_inline void \ +atomic_inc##order(atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_inc##order(v); \ } -static __always_inline void atomic64_inc(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_inc(v); +INSTR_ATOMIC_INC() + +#ifdef arch_atomic_inc_relaxed +INSTR_ATOMIC_INC(_relaxed) +#define atomic_inc_relaxed atomic_inc_relaxed +#endif + +#ifdef arch_atomic_inc_acquire +INSTR_ATOMIC_INC(_acquire) +#define atomic_inc_acquire atomic_inc_acquire +#endif + +#ifdef arch_atomic_inc_release +INSTR_ATOMIC_INC(_release) +#define atomic_inc_release atomic_inc_release +#endif + +#define INSTR_ATOMIC64_INC(order) \ +static __always_inline void \ +atomic64_inc##order(atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_inc##order(v); \ } -static __always_inline void atomic_dec(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_dec(v); +INSTR_ATOMIC64_INC() + +#ifdef arch_atomic64_inc_relaxed +INSTR_ATOMIC64_INC(_relaxed) +#define atomic64_inc_relaxed atomic64_inc_relaxed +#endif + +#ifdef arch_atomic64_inc_acquire +INSTR_ATOMIC64_INC(_acquire) +#define atomic64_inc_acquire atomic64_inc_acquire +#endif + +#ifdef arch_atomic64_inc_release +INSTR_ATOMIC64_INC(_release) +#define atomic64_inc_release atomic64_inc_release +#endif + +#define INSTR_ATOMIC_DEC(order) \ +static __always_inline void \ +atomic_dec##order(atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_dec##order(v); \ } -static __always_inline void atomic64_dec(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_dec(v); +INSTR_ATOMIC_DEC() + +#ifdef arch_atomic_dec_relaxed +INSTR_ATOMIC_DEC(_relaxed) +#define atomic_dec_relaxed atomic_dec_relaxed +#endif + +#ifdef arch_atomic_dec_acquire +INSTR_ATOMIC_DEC(_acquire) +#define atomic_dec_acquire atomic_dec_acquire +#endif + +#ifdef arch_atomic_dec_release +INSTR_ATOMIC_DEC(_release) +#define atomic_dec_release atomic_dec_release +#endif + +#define INSTR_ATOMIC64_DEC(order) \ +static __always_inline void \ +atomic64_dec##order(atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_dec##order(v); \ } -static __always_inline void atomic_add(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_add(i, v); +INSTR_ATOMIC64_DEC() + +#ifdef arch_atomic64_dec_relaxed +INSTR_ATOMIC64_DEC(_relaxed) +#define atomic64_dec_relaxed atomic64_dec_relaxed +#endif + +#ifdef arch_atomic64_dec_acquire +INSTR_ATOMIC64_DEC(_acquire) +#define atomic64_dec_acquire atomic64_dec_acquire +#endif + +#ifdef arch_atomic64_dec_release +INSTR_ATOMIC64_DEC(_release) +#define atomic64_dec_release atomic64_dec_release +#endif + +#define INSTR_ATOMIC_ADD(order) \ +static 
__always_inline void \ +atomic_add##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_add##order(i, v); \ } -static __always_inline void atomic64_add(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_add(i, v); +INSTR_ATOMIC_ADD() + +#ifdef arch_atomic_add_relaxed +INSTR_ATOMIC_ADD(_relaxed) +#define atomic_add_relaxed atomic_add_relaxed +#endif + +#ifdef arch_atomic_add_acquire +INSTR_ATOMIC_ADD(_acquire) +#define atomic_add_acquire atomic_add_acquire +#endif + +#ifdef arch_atomic_add_release +INSTR_ATOMIC_ADD(_release) +#define atomic_add_release atomic_add_release +#endif + +#define INSTR_ATOMIC64_ADD(order) \ +static __always_inline void \ +atomic64_add##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_add##order(i, v); \ } -static __always_inline void atomic_sub(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_sub(i, v); +INSTR_ATOMIC64_ADD() + +#ifdef arch_atomic64_add_relaxed +INSTR_ATOMIC64_ADD(_relaxed) +#define atomic64_add_relaxed atomic64_add_relaxed +#endif + +#ifdef arch_atomic64_add_acquire +INSTR_ATOMIC64_ADD(_acquire) +#define atomic64_add_acquire atomic64_add_acquire +#endif + +#ifdef arch_atomic64_add_release +INSTR_ATOMIC64_ADD(_release) +#define atomic64_add_release atomic64_add_release +#endif + +#define INSTR_ATOMIC_SUB(order) \ +static __always_inline void \ +atomic_sub##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_sub##order(i, v); \ } -static __always_inline void atomic64_sub(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_sub(i, v); +INSTR_ATOMIC_SUB() + +#ifdef arch_atomic_sub_relaxed +INSTR_ATOMIC_SUB(_relaxed) +#define atomic_sub_relaxed atomic_sub_relaxed +#endif + +#ifdef arch_atomic_sub_acquire +INSTR_ATOMIC_SUB(_acquire) +#define atomic_sub_acquire atomic_sub_acquire +#endif + +#ifdef arch_atomic_sub_release +INSTR_ATOMIC_SUB(_release) +#define atomic_sub_release atomic_sub_release +#endif + +#define INSTR_ATOMIC64_SUB(order) \ +static __always_inline void \ +atomic64_sub##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_sub##order(i, v); \ } -static __always_inline void atomic_and(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_and(i, v); +INSTR_ATOMIC64_SUB() + +#ifdef arch_atomic64_sub_relaxed +INSTR_ATOMIC64_SUB(_relaxed) +#define atomic64_sub_relaxed atomic64_sub_relaxed +#endif + +#ifdef arch_atomic64_sub_acquire +INSTR_ATOMIC64_SUB(_acquire) +#define atomic64_sub_acquire atomic64_sub_acquire +#endif + +#ifdef arch_atomic64_sub_release +INSTR_ATOMIC64_SUB(_release) +#define atomic64_sub_release atomic64_sub_release +#endif + +#define INSTR_ATOMIC_AND(order) \ +static __always_inline void \ +atomic_and##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_and##order(i, v); \ } -static __always_inline void atomic64_and(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_and(i, v); +INSTR_ATOMIC_AND() + +#ifdef arch_atomic_and_relaxed +INSTR_ATOMIC_AND(_relaxed) +#define atomic_and_relaxed atomic_and_relaxed +#endif + +#ifdef arch_atomic_and_acquire +INSTR_ATOMIC_AND(_acquire) +#define atomic_and_acquire atomic_and_acquire +#endif + +#ifdef arch_atomic_and_release +INSTR_ATOMIC_AND(_release) +#define atomic_and_release atomic_and_release +#endif + +#define INSTR_ATOMIC64_AND(order) \ +static __always_inline void \ +atomic64_and##order(s64 i, 
atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_and##order(i, v); \ } -static __always_inline void atomic_or(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_or(i, v); +INSTR_ATOMIC64_AND() + +#ifdef arch_atomic64_and_relaxed +INSTR_ATOMIC64_AND(_relaxed) +#define atomic64_and_relaxed atomic64_and_relaxed +#endif + +#ifdef arch_atomic64_and_acquire +INSTR_ATOMIC64_AND(_acquire) +#define atomic64_and_acquire atomic64_and_acquire +#endif + +#ifdef arch_atomic64_and_release +INSTR_ATOMIC64_AND(_release) +#define atomic64_and_release atomic64_and_release +#endif + +#define INSTR_ATOMIC_OR(order) \ +static __always_inline void \ +atomic_or##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_or##order(i, v); \ } -static __always_inline void atomic64_or(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_or(i, v); +INSTR_ATOMIC_OR() + +#ifdef arch_atomic_or_relaxed +INSTR_ATOMIC_OR(_relaxed) +#define atomic_or_relaxed atomic_or_relaxed +#endif + +#ifdef arch_atomic_or_acquire +INSTR_ATOMIC_OR(_acquire) +#define atomic_or_acquire atomic_or_acquire +#endif + +#ifdef arch_atomic_or_release +INSTR_ATOMIC_OR(_release) +#define atomic_or_release atomic_or_release +#endif + +#define INSTR_ATOMIC64_OR(order) \ +static __always_inline void \ +atomic64_or##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_or##order(i, v); \ } -static __always_inline void atomic_xor(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic_xor(i, v); +INSTR_ATOMIC64_OR() + +#ifdef arch_atomic64_or_relaxed +INSTR_ATOMIC64_OR(_relaxed) +#define atomic64_or_relaxed atomic64_or_relaxed +#endif + +#ifdef arch_atomic64_or_acquire +INSTR_ATOMIC64_OR(_acquire) +#define atomic64_or_acquire atomic64_or_acquire +#endif + +#ifdef arch_atomic64_or_release +INSTR_ATOMIC64_OR(_release) +#define atomic64_or_release atomic64_or_release +#endif + +#define INSTR_ATOMIC_XOR(order) \ +static __always_inline void \ +atomic_xor##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_xor##order(i, v); \ } -static __always_inline void atomic64_xor(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - arch_atomic64_xor(i, v); +INSTR_ATOMIC_XOR() + +#ifdef arch_atomic_xor_relaxed +INSTR_ATOMIC_XOR(_relaxed) +#define atomic_xor_relaxed atomic_xor_relaxed +#endif + +#ifdef arch_atomic_xor_acquire +INSTR_ATOMIC_XOR(_acquire) +#define atomic_xor_acquire atomic_xor_acquire +#endif + +#ifdef arch_atomic_xor_release +INSTR_ATOMIC_XOR(_release) +#define atomic_xor_release atomic_xor_release +#endif + +#define INSTR_ATOMIC64_XOR(order) \ +static __always_inline void \ +atomic64_xor##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_xor##order(i, v); \ } -static __always_inline int atomic_inc_return(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_inc_return(v); +INSTR_ATOMIC64_XOR() + +#ifdef arch_atomic64_xor_relaxed +INSTR_ATOMIC64_XOR(_relaxed) +#define atomic64_xor_relaxed atomic64_xor_relaxed +#endif + +#ifdef arch_atomic64_xor_acquire +INSTR_ATOMIC64_XOR(_acquire) +#define atomic64_xor_acquire atomic64_xor_acquire +#endif + +#ifdef arch_atomic64_xor_release +INSTR_ATOMIC64_XOR(_release) +#define atomic64_xor_release atomic64_xor_release +#endif + +#define INSTR_ATOMIC_INC_RETURN(order) \ +static __always_inline int \ +atomic_inc_return##order(atomic_t *v) \ +{ \ + kasan_check_write(v, 
sizeof(*v)); \ + return arch_atomic_inc_return##order(v); \ } -static __always_inline s64 atomic64_inc_return(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_inc_return(v); +INSTR_ATOMIC_INC_RETURN() + +#ifdef arch_atomic_inc_return_relaxed +INSTR_ATOMIC_INC_RETURN(_relaxed) +#define atomic_inc_return_relaxed atomic_inc_return_relaxed +#endif + +#ifdef arch_atomic_inc_return_acquire +INSTR_ATOMIC_INC_RETURN(_acquire) +#define atomic_inc_return_acquire atomic_inc_return_acquire +#endif + +#ifdef arch_atomic_inc_return_release +INSTR_ATOMIC_INC_RETURN(_release) +#define atomic_inc_return_release atomic_inc_return_release +#endif + +#define INSTR_ATOMIC64_INC_RETURN(order) \ +static __always_inline s64 \ +atomic64_inc_return##order(atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_inc_return##order(v); \ } -static __always_inline int atomic_dec_return(atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_dec_return(v); +INSTR_ATOMIC64_INC_RETURN() + +#ifdef arch_atomic64_inc_return_relaxed +INSTR_ATOMIC64_INC_RETURN(_relaxed) +#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed +#endif + +#ifdef arch_atomic64_inc_return_acquire +INSTR_ATOMIC64_INC_RETURN(_acquire) +#define atomic64_inc_return_acquire atomic64_inc_return_acquire +#endif + +#ifdef arch_atomic64_inc_return_release +INSTR_ATOMIC64_INC_RETURN(_release) +#define atomic64_inc_return_release atomic64_inc_return_release +#endif + +#define INSTR_ATOMIC_DEC_RETURN(order) \ +static __always_inline int \ +atomic_dec_return##order(atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_dec_return##order(v); \ } -static __always_inline s64 atomic64_dec_return(atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_dec_return(v); +INSTR_ATOMIC_DEC_RETURN() + +#ifdef arch_atomic_dec_return_relaxed +INSTR_ATOMIC_DEC_RETURN(_relaxed) +#define atomic_dec_return_relaxed atomic_dec_return_relaxed +#endif + +#ifdef arch_atomic_dec_return_acquire +INSTR_ATOMIC_DEC_RETURN(_acquire) +#define atomic_dec_return_acquire atomic_dec_return_acquire +#endif + +#ifdef arch_atomic_dec_return_release +INSTR_ATOMIC_DEC_RETURN(_release) +#define atomic_dec_return_release atomic_dec_return_release +#endif + +#define INSTR_ATOMIC64_DEC_RETURN(order) \ +static __always_inline s64 \ +atomic64_dec_return##order(atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_dec_return##order(v); \ } +INSTR_ATOMIC64_DEC_RETURN() + +#ifdef arch_atomic64_dec_return_relaxed +INSTR_ATOMIC64_DEC_RETURN(_relaxed) +#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed +#endif + +#ifdef arch_atomic64_dec_return_acquire +INSTR_ATOMIC64_DEC_RETURN(_acquire) +#define atomic64_dec_return_acquire atomic64_dec_return_acquire +#endif + +#ifdef arch_atomic64_dec_return_release +INSTR_ATOMIC64_DEC_RETURN(_release) +#define atomic64_dec_return_release atomic64_dec_return_release +#endif + static __always_inline s64 atomic64_inc_not_zero(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); @@ -241,90 +734,356 @@ static __always_inline bool atomic64_inc_and_test(atomic64_t *v) return arch_atomic64_inc_and_test(v); } -static __always_inline int atomic_add_return(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_add_return(i, v); +#define INSTR_ATOMIC_ADD_RETURN(order) \ +static __always_inline int \ +atomic_add_return##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return 
arch_atomic_add_return##order(i, v); \ } -static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_add_return(i, v); +INSTR_ATOMIC_ADD_RETURN() + +#ifdef arch_atomic_add_return_relaxed +INSTR_ATOMIC_ADD_RETURN(_relaxed) +#define atomic_add_return_relaxed atomic_add_return_relaxed +#endif + +#ifdef arch_atomic_add_return_acquire +INSTR_ATOMIC_ADD_RETURN(_acquire) +#define atomic_add_return_acquire atomic_add_return_acquire +#endif + +#ifdef arch_atomic_add_return_release +INSTR_ATOMIC_ADD_RETURN(_release) +#define atomic_add_return_release atomic_add_return_release +#endif + +#define INSTR_ATOMIC64_ADD_RETURN(order) \ +static __always_inline s64 \ +atomic64_add_return##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_add_return##order(i, v); \ } -static __always_inline int atomic_sub_return(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_sub_return(i, v); +INSTR_ATOMIC64_ADD_RETURN() + +#ifdef arch_atomic64_add_return_relaxed +INSTR_ATOMIC64_ADD_RETURN(_relaxed) +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#endif + +#ifdef arch_atomic64_add_return_acquire +INSTR_ATOMIC64_ADD_RETURN(_acquire) +#define atomic64_add_return_acquire atomic64_add_return_acquire +#endif + +#ifdef arch_atomic64_add_return_release +INSTR_ATOMIC64_ADD_RETURN(_release) +#define atomic64_add_return_release atomic64_add_return_release +#endif + +#define INSTR_ATOMIC_SUB_RETURN(order) \ +static __always_inline int \ +atomic_sub_return##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_sub_return##order(i, v); \ } -static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_sub_return(i, v); +INSTR_ATOMIC_SUB_RETURN() + +#ifdef arch_atomic_sub_return_relaxed +INSTR_ATOMIC_SUB_RETURN(_relaxed) +#define atomic_sub_return_relaxed atomic_sub_return_relaxed +#endif + +#ifdef arch_atomic_sub_return_acquire +INSTR_ATOMIC_SUB_RETURN(_acquire) +#define atomic_sub_return_acquire atomic_sub_return_acquire +#endif + +#ifdef arch_atomic_sub_return_release +INSTR_ATOMIC_SUB_RETURN(_release) +#define atomic_sub_return_release atomic_sub_return_release +#endif + +#define INSTR_ATOMIC64_SUB_RETURN(order) \ +static __always_inline s64 \ +atomic64_sub_return##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_sub_return##order(i, v); \ } -static __always_inline int atomic_fetch_add(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_add(i, v); +INSTR_ATOMIC64_SUB_RETURN() + +#ifdef arch_atomic64_sub_return_relaxed +INSTR_ATOMIC64_SUB_RETURN(_relaxed) +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed +#endif + +#ifdef arch_atomic64_sub_return_acquire +INSTR_ATOMIC64_SUB_RETURN(_acquire) +#define atomic64_sub_return_acquire atomic64_sub_return_acquire +#endif + +#ifdef arch_atomic64_sub_return_release +INSTR_ATOMIC64_SUB_RETURN(_release) +#define atomic64_sub_return_release atomic64_sub_return_release +#endif + +#define INSTR_ATOMIC_FETCH_ADD(order) \ +static __always_inline int \ +atomic_fetch_add##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_fetch_add##order(i, v); \ } -static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_add(i, v); 
+INSTR_ATOMIC_FETCH_ADD() + +#ifdef arch_atomic_fetch_add_relaxed +INSTR_ATOMIC_FETCH_ADD(_relaxed) +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#endif + +#ifdef arch_atomic_fetch_add_acquire +INSTR_ATOMIC_FETCH_ADD(_acquire) +#define atomic_fetch_add_acquire atomic_fetch_add_acquire +#endif + +#ifdef arch_atomic_fetch_add_release +INSTR_ATOMIC_FETCH_ADD(_release) +#define atomic_fetch_add_release atomic_fetch_add_release +#endif + +#define INSTR_ATOMIC64_FETCH_ADD(order) \ +static __always_inline s64 \ +atomic64_fetch_add##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_fetch_add##order(i, v); \ } -static __always_inline int atomic_fetch_sub(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_sub(i, v); +INSTR_ATOMIC64_FETCH_ADD() + +#ifdef arch_atomic64_fetch_add_relaxed +INSTR_ATOMIC64_FETCH_ADD(_relaxed) +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#endif + +#ifdef arch_atomic64_fetch_add_acquire +INSTR_ATOMIC64_FETCH_ADD(_acquire) +#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire +#endif + +#ifdef arch_atomic64_fetch_add_release +INSTR_ATOMIC64_FETCH_ADD(_release) +#define atomic64_fetch_add_release atomic64_fetch_add_release +#endif + +#define INSTR_ATOMIC_FETCH_SUB(order) \ +static __always_inline int \ +atomic_fetch_sub##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_fetch_sub##order(i, v); \ } -static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_sub(i, v); +INSTR_ATOMIC_FETCH_SUB() + +#ifdef arch_atomic_fetch_sub_relaxed +INSTR_ATOMIC_FETCH_SUB(_relaxed) +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed +#endif + +#ifdef arch_atomic_fetch_sub_acquire +INSTR_ATOMIC_FETCH_SUB(_acquire) +#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire +#endif + +#ifdef arch_atomic_fetch_sub_release +INSTR_ATOMIC_FETCH_SUB(_release) +#define atomic_fetch_sub_release atomic_fetch_sub_release +#endif + +#define INSTR_ATOMIC64_FETCH_SUB(order) \ +static __always_inline s64 \ +atomic64_fetch_sub##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_fetch_sub##order(i, v); \ } -static __always_inline int atomic_fetch_and(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_and(i, v); +INSTR_ATOMIC64_FETCH_SUB() + +#ifdef arch_atomic64_fetch_sub_relaxed +INSTR_ATOMIC64_FETCH_SUB(_relaxed) +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed +#endif + +#ifdef arch_atomic64_fetch_sub_acquire +INSTR_ATOMIC64_FETCH_SUB(_acquire) +#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire +#endif + +#ifdef arch_atomic64_fetch_sub_release +INSTR_ATOMIC64_FETCH_SUB(_release) +#define atomic64_fetch_sub_release atomic64_fetch_sub_release +#endif + +#define INSTR_ATOMIC_FETCH_AND(order) \ +static __always_inline int \ +atomic_fetch_and##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_fetch_and##order(i, v); \ } -static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_and(i, v); +INSTR_ATOMIC_FETCH_AND() + +#ifdef arch_atomic_fetch_and_relaxed +INSTR_ATOMIC_FETCH_AND(_relaxed) +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#endif + +#ifdef arch_atomic_fetch_and_acquire +INSTR_ATOMIC_FETCH_AND(_acquire) +#define 
atomic_fetch_and_acquire atomic_fetch_and_acquire +#endif + +#ifdef arch_atomic_fetch_and_release +INSTR_ATOMIC_FETCH_AND(_release) +#define atomic_fetch_and_release atomic_fetch_and_release +#endif + +#define INSTR_ATOMIC64_FETCH_AND(order) \ +static __always_inline s64 \ +atomic64_fetch_and##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_fetch_and##order(i, v); \ } -static __always_inline int atomic_fetch_or(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_or(i, v); +INSTR_ATOMIC64_FETCH_AND() + +#ifdef arch_atomic64_fetch_and_relaxed +INSTR_ATOMIC64_FETCH_AND(_relaxed) +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#endif + +#ifdef arch_atomic64_fetch_and_acquire +INSTR_ATOMIC64_FETCH_AND(_acquire) +#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire +#endif + +#ifdef arch_atomic64_fetch_and_release +INSTR_ATOMIC64_FETCH_AND(_release) +#define atomic64_fetch_and_release atomic64_fetch_and_release +#endif + +#define INSTR_ATOMIC_FETCH_OR(order) \ +static __always_inline int \ +atomic_fetch_or##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_fetch_or##order(i, v); \ } -static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_or(i, v); +INSTR_ATOMIC_FETCH_OR() + +#ifdef arch_atomic_fetch_or_relaxed +INSTR_ATOMIC_FETCH_OR(_relaxed) +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#endif + +#ifdef arch_atomic_fetch_or_acquire +INSTR_ATOMIC_FETCH_OR(_acquire) +#define atomic_fetch_or_acquire atomic_fetch_or_acquire +#endif + +#ifdef arch_atomic_fetch_or_release +INSTR_ATOMIC_FETCH_OR(_release) +#define atomic_fetch_or_release atomic_fetch_or_release +#endif + +#define INSTR_ATOMIC64_FETCH_OR(order) \ +static __always_inline s64 \ +atomic64_fetch_or##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_fetch_or##order(i, v); \ } -static __always_inline int atomic_fetch_xor(int i, atomic_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic_fetch_xor(i, v); +INSTR_ATOMIC64_FETCH_OR() + +#ifdef arch_atomic64_fetch_or_relaxed +INSTR_ATOMIC64_FETCH_OR(_relaxed) +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#endif + +#ifdef arch_atomic64_fetch_or_acquire +INSTR_ATOMIC64_FETCH_OR(_acquire) +#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire +#endif + +#ifdef arch_atomic64_fetch_or_release +INSTR_ATOMIC64_FETCH_OR(_release) +#define atomic64_fetch_or_release atomic64_fetch_or_release +#endif + +#define INSTR_ATOMIC_FETCH_XOR(order) \ +static __always_inline int \ +atomic_fetch_xor##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_fetch_xor##order(i, v); \ } -static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) -{ - kasan_check_write(v, sizeof(*v)); - return arch_atomic64_fetch_xor(i, v); +INSTR_ATOMIC_FETCH_XOR() + +#ifdef arch_atomic_fetch_xor_relaxed +INSTR_ATOMIC_FETCH_XOR(_relaxed) +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed +#endif + +#ifdef arch_atomic_fetch_xor_acquire +INSTR_ATOMIC_FETCH_XOR(_acquire) +#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire +#endif + +#ifdef arch_atomic_fetch_xor_release +INSTR_ATOMIC_FETCH_XOR(_release) +#define atomic_fetch_xor_release atomic_fetch_xor_release +#endif + +#define INSTR_ATOMIC64_FETCH_XOR(xorder) \ +static __always_inline s64 \ 
+atomic64_fetch_xor##xorder(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_fetch_xor##xorder(i, v); \ } +INSTR_ATOMIC64_FETCH_XOR() + +#ifdef arch_atomic64_fetch_xor_relaxed +INSTR_ATOMIC64_FETCH_XOR(_relaxed) +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed +#endif + +#ifdef arch_atomic64_fetch_xor_acquire +INSTR_ATOMIC64_FETCH_XOR(_acquire) +#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire +#endif + +#ifdef arch_atomic64_fetch_xor_release +INSTR_ATOMIC64_FETCH_XOR(_release) +#define atomic64_fetch_xor_release atomic64_fetch_xor_release +#endif + static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); @@ -349,31 +1108,64 @@ static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) return arch_atomic64_add_negative(i, v); } -static __always_inline unsigned long -cmpxchg_size(volatile void *ptr, unsigned long old, unsigned long new, int size) -{ - kasan_check_write(ptr, size); - switch (size) { - case 1: - return arch_cmpxchg((u8 *)ptr, (u8)old, (u8)new); - case 2: - return arch_cmpxchg((u16 *)ptr, (u16)old, (u16)new); - case 4: - return arch_cmpxchg((u32 *)ptr, (u32)old, (u32)new); - case 8: - BUILD_BUG_ON(sizeof(unsigned long) != 8); - return arch_cmpxchg((u64 *)ptr, (u64)old, (u64)new); - } - BUILD_BUG(); - return 0; +#define INSTR_CMPXCHG(order) \ +static __always_inline unsigned long \ +cmpxchg##order##_size(volatile void *ptr, unsigned long old, \ + unsigned long new, int size) \ +{ \ + kasan_check_write(ptr, size); \ + switch (size) { \ + case 1: \ + return arch_cmpxchg##order((u8 *)ptr, (u8)old, (u8)new); \ + case 2: \ + return arch_cmpxchg##order((u16 *)ptr, (u16)old, (u16)new); \ + case 4: \ + return arch_cmpxchg##order((u32 *)ptr, (u32)old, (u32)new); \ + case 8: \ + BUILD_BUG_ON(sizeof(unsigned long) != 8); \ + return arch_cmpxchg##order((u64 *)ptr, (u64)old, (u64)new); \ + } \ + BUILD_BUG(); \ + return 0; \ } +INSTR_CMPXCHG() #define cmpxchg(ptr, old, new) \ ({ \ ((__typeof__(*(ptr)))cmpxchg_size((ptr), (unsigned long)(old), \ (unsigned long)(new), sizeof(*(ptr)))); \ }) +#ifdef arch_cmpxchg_relaxed +INSTR_CMPXCHG(_relaxed) +#define cmpxchg_relaxed(ptr, old, new) \ +({ \ + ((__typeof__(*(ptr)))cmpxchg_relaxed_size((ptr), \ + (unsigned long)(old), (unsigned long)(new), \ + sizeof(*(ptr)))); \ +}) +#endif + +#ifdef arch_cmpxchg_acquire +INSTR_CMPXCHG(_acquire) +#define cmpxchg_acquire(ptr, old, new) \ +({ \ + ((__typeof__(*(ptr)))cmpxchg_acquire_size((ptr), \ + (unsigned long)(old), (unsigned long)(new), \ + sizeof(*(ptr)))); \ +}) +#endif + +#ifdef arch_cmpxchg_release +INSTR_CMPXCHG(_release) +#define cmpxchg_release(ptr, old, new) \ +({ \ + ((__typeof__(*(ptr)))cmpxchg_release_size((ptr), \ + (unsigned long)(old), (unsigned long)(new), \ + sizeof(*(ptr)))); \ +}) +#endif + static __always_inline unsigned long sync_cmpxchg_size(volatile void *ptr, unsigned long old, unsigned long new, int size) @@ -428,19 +1220,48 @@ cmpxchg_local_size(volatile void *ptr, unsigned long old, unsigned long new, sizeof(*(ptr)))); \ }) -static __always_inline u64 -cmpxchg64_size(volatile u64 *ptr, u64 old, u64 new) -{ - kasan_check_write(ptr, sizeof(*ptr)); - return arch_cmpxchg64(ptr, old, new); +#define INSTR_CMPXCHG64(order) \ +static __always_inline u64 \ +cmpxchg64##order##_size(volatile u64 *ptr, u64 old, u64 new) \ +{ \ + kasan_check_write(ptr, sizeof(*ptr)); \ + return arch_cmpxchg64##order(ptr, old, new); \ } +INSTR_CMPXCHG64() #define cmpxchg64(ptr, old, new) 
\ ({ \ ((__typeof__(*(ptr)))cmpxchg64_size((ptr), (u64)(old), \ (u64)(new))); \ }) +#ifdef arch_cmpxchg64_relaxed +INSTR_CMPXCHG64(_relaxed) +#define cmpxchg64_relaxed(ptr, old, new) \ +({ \ + ((__typeof__(*(ptr)))cmpxchg64_relaxed_size((ptr), (u64)(old), \ + (u64)(new))); \ +}) +#endif + +#ifdef arch_cmpxchg64_acquire +INSTR_CMPXCHG64(_acquire) +#define cmpxchg64_acquire(ptr, old, new) \ +({ \ + ((__typeof__(*(ptr)))cmpxchg64_acquire_size((ptr), (u64)(old), \ + (u64)(new))); \ +}) +#endif + +#ifdef arch_cmpxchg64_release +INSTR_CMPXCHG64(_release) +#define cmpxchg64_release(ptr, old, new) \ +({ \ + ((__typeof__(*(ptr)))cmpxchg64_release_size((ptr), (u64)(old), \ + (u64)(new))); \ +}) +#endif + static __always_inline u64 cmpxchg64_local_size(volatile u64 *ptr, u64 old, u64 new) {

From patchwork Fri May 4 17:39:33 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 135013
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com,
    catalin.marinas@arm.com, dvyukov@google.com, mark.rutland@arm.com,
    mingo@kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 2/6] locking/atomic, asm-generic: instrument atomic*andnot*()
Date: Fri, 4 May 2018 18:39:33 +0100
Message-Id: <20180504173937.25300-3-mark.rutland@arm.com>
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>

We don't currently define instrumentation wrappers for the various forms of
atomic*andnot*(), as these aren't implemented directly by x86.

So that we can instrument architectures which provide these, let's define
wrappers for all the variants of these atomics.
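For illustration (a sketch of the macro expansion, not text from the patch),
an architecture that provides arch_atomic_andnot_relaxed() opts in simply by
defining that symbol, and the generated wrapper is then equivalent to:

static __always_inline void
atomic_andnot_relaxed(int i, atomic_t *v)
{
	/* Check the whole atomic_t under KASAN, then defer to the arch op. */
	kasan_check_write(v, sizeof(*v));
	arch_atomic_andnot_relaxed(i, v);
}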
Signed-off-by: Mark Rutland Cc: Andrey Ryabinin Cc: Boqun Feng Cc: Dmitry Vyukov Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Will Deacon --- include/asm-generic/atomic-instrumented.h | 112 ++++++++++++++++++++++++++++++ 1 file changed, 112 insertions(+) -- 2.11.0 diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index 26f0e3098442..b1920f0f64ab 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -498,6 +498,62 @@ INSTR_ATOMIC64_AND(_release) #define atomic64_and_release atomic64_and_release #endif +#define INSTR_ATOMIC_ANDNOT(order) \ +static __always_inline void \ +atomic_andnot##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic_andnot##order(i, v); \ +} + +#ifdef arch_atomic_andnot +INSTR_ATOMIC_ANDNOT() +#define atomic_andnot atomic_andnot +#endif + +#ifdef arch_atomic_andnot_relaxed +INSTR_ATOMIC_ANDNOT(_relaxed) +#define atomic_andnot_relaxed atomic_andnot_relaxed +#endif + +#ifdef arch_atomic_andnot_acquire +INSTR_ATOMIC_ANDNOT(_acquire) +#define atomic_andnot_acquire atomic_andnot_acquire +#endif + +#ifdef arch_atomic_andnot_release +INSTR_ATOMIC_ANDNOT(_release) +#define atomic_andnot_release atomic_andnot_release +#endif + +#define INSTR_ATOMIC64_ANDNOT(order) \ +static __always_inline void \ +atomic64_andnot##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + arch_atomic64_andnot##order(i, v); \ +} + +#ifdef arch_atomic64_andnot +INSTR_ATOMIC64_ANDNOT() +#define atomic64_andnot atomic64_andnot +#endif + +#ifdef arch_atomic64_andnot_relaxed +INSTR_ATOMIC64_ANDNOT(_relaxed) +#define atomic64_andnot_relaxed atomic64_andnot_relaxed +#endif + +#ifdef arch_atomic64_andnot_acquire +INSTR_ATOMIC64_ANDNOT(_acquire) +#define atomic64_andnot_acquire atomic64_andnot_acquire +#endif + +#ifdef arch_atomic64_andnot_release +INSTR_ATOMIC64_ANDNOT(_release) +#define atomic64_andnot_release atomic64_andnot_release +#endif + #define INSTR_ATOMIC_OR(order) \ static __always_inline void \ atomic_or##order(int i, atomic_t *v) \ @@ -984,6 +1040,62 @@ INSTR_ATOMIC64_FETCH_AND(_release) #define atomic64_fetch_and_release atomic64_fetch_and_release #endif +#define INSTR_ATOMIC_FETCH_ANDNOT(order) \ +static __always_inline int \ +atomic_fetch_andnot##order(int i, atomic_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic_fetch_andnot##order(i, v); \ +} + +#ifdef arch_atomic_fetch_andnot +INSTR_ATOMIC_FETCH_ANDNOT() +#define atomic_fetch_andnot atomic_fetch_andnot +#endif + +#ifdef arch_atomic_fetch_andnot_relaxed +INSTR_ATOMIC_FETCH_ANDNOT(_relaxed) +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#endif + +#ifdef arch_atomic_fetch_andnot_acquire +INSTR_ATOMIC_FETCH_ANDNOT(_acquire) +#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire +#endif + +#ifdef arch_atomic_fetch_andnot_release +INSTR_ATOMIC_FETCH_ANDNOT(_release) +#define atomic_fetch_andnot_release atomic_fetch_andnot_release +#endif + +#define INSTR_ATOMIC64_FETCH_ANDNOT(order) \ +static __always_inline s64 \ +atomic64_fetch_andnot##order(s64 i, atomic64_t *v) \ +{ \ + kasan_check_write(v, sizeof(*v)); \ + return arch_atomic64_fetch_andnot##order(i, v); \ +} + +#ifdef arch_atomic64_fetch_andnot +INSTR_ATOMIC64_FETCH_ANDNOT() +#define atomic64_fetch_andnot atomic64_fetch_andnot +#endif + +#ifdef arch_atomic64_fetch_andnot_relaxed +INSTR_ATOMIC64_FETCH_ANDNOT(_relaxed) +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#endif + 
+#ifdef arch_atomic64_fetch_andnot_acquire +INSTR_ATOMIC64_FETCH_ANDNOT(_acquire) +#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire +#endif + +#ifdef arch_atomic64_fetch_andnot_release +INSTR_ATOMIC64_FETCH_ANDNOT(_release) +#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release +#endif + #define INSTR_ATOMIC_FETCH_OR(order) \ static __always_inline int \ atomic_fetch_or##order(int i, atomic_t *v) \

From patchwork Fri May 4 17:39:34 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 135017
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com,
    catalin.marinas@arm.com, dvyukov@google.com, mark.rutland@arm.com,
    mingo@kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 3/6] arm64: use <linux/atomic.h> for cmpxchg
Date: Fri, 4 May 2018 18:39:34 +0100
Message-Id: <20180504173937.25300-4-mark.rutland@arm.com>
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>

Currently a number of arm64-specific files include <asm/cmpxchg.h> for the
definition of the cmpxchg helpers. This works fine today, but won't when we
switch over to instrumented atomics, and as noted in
Documentation/core-api/atomic_ops.rst:

  If someone wants to use xchg(), cmpxchg() and their variants,
  linux/atomic.h should be included rather than asm/cmpxchg.h, unless the
  code is in arch/* and can take care of itself.

... so let's switch to <linux/atomic.h> for these definitions.

Signed-off-by: Mark Rutland Cc: Catalin Marinas Cc: Will Deacon --- arch/arm64/include/asm/pgtable.h | 2 +- arch/arm64/include/asm/sync_bitops.h | 2 +- arch/arm64/mm/fault.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) -- 2.11.0 diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 7c4c8f318ba9..c797c0fbbce2 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -39,8 +39,8 @@ #ifndef __ASSEMBLY__ -#include #include +#include #include #include #include diff --git a/arch/arm64/include/asm/sync_bitops.h b/arch/arm64/include/asm/sync_bitops.h index eee31a9f72a5..24ed8f445b8b 100644 --- a/arch/arm64/include/asm/sync_bitops.h +++ b/arch/arm64/include/asm/sync_bitops.h @@ -3,7 +3,7 @@ #define __ASM_SYNC_BITOPS_H__ #include -#include +#include /* sync_bitops functions are equivalent to the SMP implementation of the * original functions, independently from CONFIG_SMP being defined. diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 4165485e8b6e..bfbc695e2ea2 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -18,6 +18,7 @@ * along with this program.
If not, see . */ +#include #include #include #include @@ -34,7 +35,6 @@ #include #include -#include #include #include #include

From patchwork Fri May 4 17:39:35 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 135014
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com,
    catalin.marinas@arm.com, dvyukov@google.com, mark.rutland@arm.com,
    mingo@kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 4/6] arm64: fix assembly constraints for cmpxchg
Date: Fri, 4 May 2018 18:39:35 +0100
Message-Id: <20180504173937.25300-5-mark.rutland@arm.com>
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>

Our LL/SC cmpxchg assembly uses "Lr" as the constraint for old, which allows
either an integer constant suitable for a 64-bit logical operation, or a
register. However, this assembly is also used for 32-bit cases (where we
explicitly add a 'w' prefix to the output format), where the set of valid
immediates differs, and we should use a 'Kr' constraint.

In some cases, this can result in build failures, when GCC selects an
immediate which is valid for a 64-bit logical operation, but we try to
assemble a 32-bit logical operation:

[mark@lakrids:~/src/linux]% uselinaro 17.05 make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- net/sunrpc/auth_gss/svcauth_gss.o
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHK     include/generated/bounds.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK     scripts/mod/devicetable-offsets.h
  CC      net/sunrpc/auth_gss/svcauth_gss.o
/tmp/ccj04KVh.s: Assembler messages:
/tmp/ccj04KVh.s:325: Error: immediate out of range at operand 3 -- `eor w2,w1,4294967295'
scripts/Makefile.build:324: recipe for target 'net/sunrpc/auth_gss/svcauth_gss.o' failed
make[1]: *** [net/sunrpc/auth_gss/svcauth_gss.o] Error 1
Makefile:1704: recipe for target 'net/sunrpc/auth_gss/svcauth_gss.o' failed
make: *** [net/sunrpc/auth_gss/svcauth_gss.o] Error 2

Note that today we largely avoid the specific failure above because GCC
happens to already have the value in a register, and in most cases uses that
rather than generating the immediate.
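As an aside, a minimal standalone sketch of the constraint mismatch
(hypothetical code, not from the patch; it assumes an AArch64 GCC toolchain,
where "L" accepts immediates encodable by 64-bit logical instructions and "K"
those encodable by 32-bit ones, and it may fail to assemble exactly as in the
log above if GCC picks the immediate alternative rather than a register):

static inline unsigned int eor_inverse(unsigned int x)
{
	unsigned int res;

	/*
	 * A 32-bit EOR (note the %w operands) paired with an "Lr" constraint:
	 * GCC may pass ~0U (0xffffffff) through as an immediate, since it is
	 * a valid 64-bit logical immediate, but the assembler then rejects
	 * "eor wN, wN, 4294967295", which has no 32-bit encoding.
	 */
	asm("eor %w0, %w1, %w2" : "=r" (res) : "r" (x), "Lr" (~0U));
	return res;
}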
The following code added to an arbitrary file will cause the same failure:

unsigned int test_cmpxchg(unsigned int *l)
{
	return cmpxchg(l, -1, 0);
}

While it would seem that we could conditionally use the 'K' constraint, this seems to be handled erroneously by GCC (at least versions 6.3 and 7.1), with the same immediates being used, despite not being permitted for 32-bit logical operations. Thus we must avoid the use of an immediate in order to prevent failures as above.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm64/include/asm/atomic_ll_sc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.11.0

diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h index f5a2d09afb38..3175f4982682 100644 --- a/arch/arm64/include/asm/atomic_ll_sc.h +++ b/arch/arm64/include/asm/atomic_ll_sc.h @@ -267,7 +267,7 @@ __LL_SC_PREFIX(__cmpxchg_case_##name(volatile void *ptr, \ "2:" \ : [tmp] "=&r" (tmp), [oldval] "=&r" (oldval), \ [v] "+Q" (*(unsigned long *)ptr) \ - : [old] "Lr" (old), [new] "r" (new) \ + : [old] "r" (old), [new] "r" (new) \ : cl); \ \ return oldval; \

From patchwork Fri May 4 17:39:36 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 135016
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com, catalin.marinas@arm.com, dvyukov@google.com, mark.rutland@arm.com, mingo@kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 5/6] arm64: use instrumented atomics
Date: Fri, 4 May 2018 18:39:36 +0100
Message-Id: <20180504173937.25300-6-mark.rutland@arm.com>
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>

As our atomics are written in inline assembly, they don't get instrumented when we enable KASAN, and thus we can miss when they are used on erroneous memory locations.

As with x86, let's use atomic-instrumented.h to give arm64 instrumented atomics. This requires that we add an arch_ prefix to our atomic names, but other than naming, no changes are made to the atomics themselves.

Due to include dependencies, we must move our definition of sync_cmpxchg into <asm/cmpxchg.h>, but this is not harmful.

There should be no functional change as a result of this patch when CONFIG_KASAN is not selected.
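As a rough illustration of what the generic header provides once the arch_ prefix is in place (a simplified sketch, not the actual asm-generic/atomic-instrumented.h), each unprefixed atomic_*() wrapper checks the target memory with KASAN before deferring to the architecture implementation:

/*
 * Simplified sketch of the wrapping done by atomic-instrumented.h; the
 * real header generates one such wrapper per operation and ordering
 * variant. Assumes kernel context with <linux/kasan-checks.h> and the
 * arch_atomic_*() definitions visible.
 */
static __always_inline int atomic_read(const atomic_t *v)
{
	kasan_check_read(v, sizeof(*v));
	return arch_atomic_read(v);
}

static __always_inline int atomic_add_return(int i, atomic_t *v)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_add_return(i, v);
}

The arch_ prefix added by this patch is what allows the generic header to supply these wrappers under the unprefixed names.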
Signed-off-by: Mark Rutland Cc: Catalin Marinas Cc: Will Deacon --- arch/arm64/include/asm/atomic.h | 299 +++++++++++++++++----------------- arch/arm64/include/asm/atomic_ll_sc.h | 28 ++-- arch/arm64/include/asm/atomic_lse.h | 43 ++--- arch/arm64/include/asm/cmpxchg.h | 25 +-- arch/arm64/include/asm/sync_bitops.h | 1 - 5 files changed, 202 insertions(+), 194 deletions(-) -- 2.11.0 diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index c0235e0ff849..aefdce33f81a 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -53,158 +53,161 @@ #define ATOMIC_INIT(i) { (i) } -#define atomic_read(v) READ_ONCE((v)->counter) -#define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i)) - -#define atomic_add_return_relaxed atomic_add_return_relaxed -#define atomic_add_return_acquire atomic_add_return_acquire -#define atomic_add_return_release atomic_add_return_release -#define atomic_add_return atomic_add_return - -#define atomic_inc_return_relaxed(v) atomic_add_return_relaxed(1, (v)) -#define atomic_inc_return_acquire(v) atomic_add_return_acquire(1, (v)) -#define atomic_inc_return_release(v) atomic_add_return_release(1, (v)) -#define atomic_inc_return(v) atomic_add_return(1, (v)) - -#define atomic_sub_return_relaxed atomic_sub_return_relaxed -#define atomic_sub_return_acquire atomic_sub_return_acquire -#define atomic_sub_return_release atomic_sub_return_release -#define atomic_sub_return atomic_sub_return - -#define atomic_dec_return_relaxed(v) atomic_sub_return_relaxed(1, (v)) -#define atomic_dec_return_acquire(v) atomic_sub_return_acquire(1, (v)) -#define atomic_dec_return_release(v) atomic_sub_return_release(1, (v)) -#define atomic_dec_return(v) atomic_sub_return(1, (v)) - -#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed -#define atomic_fetch_add_acquire atomic_fetch_add_acquire -#define atomic_fetch_add_release atomic_fetch_add_release -#define atomic_fetch_add atomic_fetch_add - -#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed -#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire -#define atomic_fetch_sub_release atomic_fetch_sub_release -#define atomic_fetch_sub atomic_fetch_sub - -#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed -#define atomic_fetch_and_acquire atomic_fetch_and_acquire -#define atomic_fetch_and_release atomic_fetch_and_release -#define atomic_fetch_and atomic_fetch_and - -#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed -#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire -#define atomic_fetch_andnot_release atomic_fetch_andnot_release -#define atomic_fetch_andnot atomic_fetch_andnot - -#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed -#define atomic_fetch_or_acquire atomic_fetch_or_acquire -#define atomic_fetch_or_release atomic_fetch_or_release -#define atomic_fetch_or atomic_fetch_or - -#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed -#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire -#define atomic_fetch_xor_release atomic_fetch_xor_release -#define atomic_fetch_xor atomic_fetch_xor - -#define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new)) -#define atomic_xchg_acquire(v, new) xchg_acquire(&((v)->counter), (new)) -#define atomic_xchg_release(v, new) xchg_release(&((v)->counter), (new)) -#define atomic_xchg(v, new) xchg(&((v)->counter), (new)) - -#define atomic_cmpxchg_relaxed(v, old, new) \ - cmpxchg_relaxed(&((v)->counter), (old), (new)) -#define atomic_cmpxchg_acquire(v, old, new) \ - 
cmpxchg_acquire(&((v)->counter), (old), (new)) -#define atomic_cmpxchg_release(v, old, new) \ - cmpxchg_release(&((v)->counter), (old), (new)) -#define atomic_cmpxchg(v, old, new) cmpxchg(&((v)->counter), (old), (new)) - -#define atomic_inc(v) atomic_add(1, (v)) -#define atomic_dec(v) atomic_sub(1, (v)) -#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0) -#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0) -#define atomic_sub_and_test(i, v) (atomic_sub_return((i), (v)) == 0) -#define atomic_add_negative(i, v) (atomic_add_return((i), (v)) < 0) -#define __atomic_add_unless(v, a, u) ___atomic_add_unless(v, a, u,) -#define atomic_andnot atomic_andnot +#define arch_atomic_read(v) READ_ONCE((v)->counter) +#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i)) + +#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed +#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire +#define arch_atomic_add_return_release arch_atomic_add_return_release +#define arch_atomic_add_return arch_atomic_add_return + +#define arch_atomic_inc_return_relaxed(v) arch_atomic_add_return_relaxed(1, (v)) +#define arch_atomic_inc_return_acquire(v) arch_atomic_add_return_acquire(1, (v)) +#define arch_atomic_inc_return_release(v) arch_atomic_add_return_release(1, (v)) +#define arch_atomic_inc_return(v) arch_atomic_add_return(1, (v)) + +#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed +#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire +#define arch_atomic_sub_return_release arch_atomic_sub_return_release +#define arch_atomic_sub_return arch_atomic_sub_return + +#define arch_atomic_dec_return_relaxed(v) arch_atomic_sub_return_relaxed(1, (v)) +#define arch_atomic_dec_return_acquire(v) arch_atomic_sub_return_acquire(1, (v)) +#define arch_atomic_dec_return_release(v) arch_atomic_sub_return_release(1, (v)) +#define arch_atomic_dec_return(v) arch_atomic_sub_return(1, (v)) + +#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed +#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire +#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release +#define arch_atomic_fetch_add arch_atomic_fetch_add + +#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed +#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire +#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release +#define arch_atomic_fetch_sub arch_atomic_fetch_sub + +#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed +#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire +#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release +#define arch_atomic_fetch_and arch_atomic_fetch_and + +#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed +#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire +#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release +#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot + +#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed +#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire +#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release +#define arch_atomic_fetch_or arch_atomic_fetch_or + +#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed +#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire +#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release +#define arch_atomic_fetch_xor arch_atomic_fetch_xor + +#define 
arch_atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new)) +#define arch_atomic_xchg_acquire(v, new) xchg_acquire(&((v)->counter), (new)) +#define arch_atomic_xchg_release(v, new) xchg_release(&((v)->counter), (new)) +#define arch_atomic_xchg(v, new) xchg(&((v)->counter), (new)) + +#define arch_atomic_cmpxchg_relaxed(v, old, new) \ + arch_cmpxchg_relaxed(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg_acquire(v, old, new) \ + arch_cmpxchg_acquire(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg_release(v, old, new) \ + arch_cmpxchg_release(&((v)->counter), (old), (new)) +#define arch_atomic_cmpxchg(v, old, new) \ + arch_cmpxchg(&((v)->counter), (old), (new)) + +#define arch_atomic_inc(v) arch_atomic_add(1, (v)) +#define arch_atomic_dec(v) arch_atomic_sub(1, (v)) +#define arch_atomic_inc_and_test(v) (arch_atomic_inc_return(v) == 0) +#define arch_atomic_dec_and_test(v) (arch_atomic_dec_return(v) == 0) +#define arch_atomic_sub_and_test(i, v) (arch_atomic_sub_return((i), (v)) == 0) +#define arch_atomic_add_negative(i, v) (arch_atomic_add_return((i), (v)) < 0) +#define __arch_atomic_add_unless(v, a, u) ___atomic_add_unless(v, a, u,) +#define arch_atomic_andnot arch_atomic_andnot /* * 64-bit atomic operations. */ -#define ATOMIC64_INIT ATOMIC_INIT -#define atomic64_read atomic_read -#define atomic64_set atomic_set - -#define atomic64_add_return_relaxed atomic64_add_return_relaxed -#define atomic64_add_return_acquire atomic64_add_return_acquire -#define atomic64_add_return_release atomic64_add_return_release -#define atomic64_add_return atomic64_add_return - -#define atomic64_inc_return_relaxed(v) atomic64_add_return_relaxed(1, (v)) -#define atomic64_inc_return_acquire(v) atomic64_add_return_acquire(1, (v)) -#define atomic64_inc_return_release(v) atomic64_add_return_release(1, (v)) -#define atomic64_inc_return(v) atomic64_add_return(1, (v)) - -#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed -#define atomic64_sub_return_acquire atomic64_sub_return_acquire -#define atomic64_sub_return_release atomic64_sub_return_release -#define atomic64_sub_return atomic64_sub_return - -#define atomic64_dec_return_relaxed(v) atomic64_sub_return_relaxed(1, (v)) -#define atomic64_dec_return_acquire(v) atomic64_sub_return_acquire(1, (v)) -#define atomic64_dec_return_release(v) atomic64_sub_return_release(1, (v)) -#define atomic64_dec_return(v) atomic64_sub_return(1, (v)) - -#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed -#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire -#define atomic64_fetch_add_release atomic64_fetch_add_release -#define atomic64_fetch_add atomic64_fetch_add - -#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed -#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire -#define atomic64_fetch_sub_release atomic64_fetch_sub_release -#define atomic64_fetch_sub atomic64_fetch_sub - -#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed -#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire -#define atomic64_fetch_and_release atomic64_fetch_and_release -#define atomic64_fetch_and atomic64_fetch_and - -#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed -#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire -#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release -#define atomic64_fetch_andnot atomic64_fetch_andnot - -#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed -#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire -#define 
atomic64_fetch_or_release atomic64_fetch_or_release -#define atomic64_fetch_or atomic64_fetch_or - -#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed -#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire -#define atomic64_fetch_xor_release atomic64_fetch_xor_release -#define atomic64_fetch_xor atomic64_fetch_xor - -#define atomic64_xchg_relaxed atomic_xchg_relaxed -#define atomic64_xchg_acquire atomic_xchg_acquire -#define atomic64_xchg_release atomic_xchg_release -#define atomic64_xchg atomic_xchg - -#define atomic64_cmpxchg_relaxed atomic_cmpxchg_relaxed -#define atomic64_cmpxchg_acquire atomic_cmpxchg_acquire -#define atomic64_cmpxchg_release atomic_cmpxchg_release -#define atomic64_cmpxchg atomic_cmpxchg - -#define atomic64_inc(v) atomic64_add(1, (v)) -#define atomic64_dec(v) atomic64_sub(1, (v)) -#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0) -#define atomic64_dec_and_test(v) (atomic64_dec_return(v) == 0) -#define atomic64_sub_and_test(i, v) (atomic64_sub_return((i), (v)) == 0) -#define atomic64_add_negative(i, v) (atomic64_add_return((i), (v)) < 0) -#define atomic64_add_unless(v, a, u) (___atomic_add_unless(v, a, u, 64) != u) -#define atomic64_andnot atomic64_andnot - -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) +#define ATOMIC64_INIT ATOMIC_INIT +#define arch_atomic64_read arch_atomic_read +#define arch_atomic64_set arch_atomic_set + +#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed +#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire +#define arch_atomic64_add_return_release arch_atomic64_add_return_release +#define arch_atomic64_add_return arch_atomic64_add_return + +#define arch_atomic64_inc_return_relaxed(v) arch_atomic64_add_return_relaxed(1, (v)) +#define arch_atomic64_inc_return_acquire(v) arch_atomic64_add_return_acquire(1, (v)) +#define arch_atomic64_inc_return_release(v) arch_atomic64_add_return_release(1, (v)) +#define arch_atomic64_inc_return(v) arch_atomic64_add_return(1, (v)) + +#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed +#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire +#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release +#define arch_atomic64_sub_return arch_atomic64_sub_return + +#define arch_atomic64_dec_return_relaxed(v) arch_atomic64_sub_return_relaxed(1, (v)) +#define arch_atomic64_dec_return_acquire(v) arch_atomic64_sub_return_acquire(1, (v)) +#define arch_atomic64_dec_return_release(v) arch_atomic64_sub_return_release(1, (v)) +#define arch_atomic64_dec_return(v) arch_atomic64_sub_return(1, (v)) + +#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed +#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire +#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release +#define arch_atomic64_fetch_add arch_atomic64_fetch_add + +#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed +#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire +#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release +#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub + +#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed +#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire +#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release +#define arch_atomic64_fetch_and arch_atomic64_fetch_and + +#define arch_atomic64_fetch_andnot_relaxed 
arch_atomic64_fetch_andnot_relaxed +#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire +#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release +#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot + +#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed +#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire +#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release +#define arch_atomic64_fetch_or arch_atomic64_fetch_or + +#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed +#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire +#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release +#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor + +#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed +#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire +#define arch_atomic64_xchg_release arch_atomic_xchg_release +#define arch_atomic64_xchg arch_atomic_xchg + +#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed +#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire +#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release +#define arch_atomic64_cmpxchg arch_atomic_cmpxchg + +#define arch_atomic64_inc(v) arch_atomic64_add(1, (v)) +#define arch_atomic64_dec(v) arch_atomic64_sub(1, (v)) +#define arch_atomic64_inc_and_test(v) (arch_atomic64_inc_return(v) == 0) +#define arch_atomic64_dec_and_test(v) (arch_atomic64_dec_return(v) == 0) +#define arch_atomic64_sub_and_test(i, v) (arch_atomic64_sub_return((i), (v)) == 0) +#define arch_atomic64_add_negative(i, v) (arch_atomic64_add_return((i), (v)) < 0) +#define arch_atomic64_add_unless(v, a, u) (___atomic_add_unless(v, a, u, 64) != u) +#define arch_atomic64_andnot arch_atomic64_andnot + +#define arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0) + +#include #endif #endif diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h index 3175f4982682..c28d5a824104 100644 --- a/arch/arm64/include/asm/atomic_ll_sc.h +++ b/arch/arm64/include/asm/atomic_ll_sc.h @@ -39,7 +39,7 @@ #define ATOMIC_OP(op, asm_op) \ __LL_SC_INLINE void \ -__LL_SC_PREFIX(atomic_##op(int i, atomic_t *v)) \ +__LL_SC_PREFIX(arch_atomic_##op(int i, atomic_t *v)) \ { \ unsigned long tmp; \ int result; \ @@ -53,11 +53,11 @@ __LL_SC_PREFIX(atomic_##op(int i, atomic_t *v)) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "Ir" (i)); \ } \ -__LL_SC_EXPORT(atomic_##op); +__LL_SC_EXPORT(arch_atomic_##op); #define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \ __LL_SC_INLINE int \ -__LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v)) \ +__LL_SC_PREFIX(arch_atomic_##op##_return##name(int i, atomic_t *v)) \ { \ unsigned long tmp; \ int result; \ @@ -75,11 +75,11 @@ __LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v)) \ \ return result; \ } \ -__LL_SC_EXPORT(atomic_##op##_return##name); +__LL_SC_EXPORT(arch_atomic_##op##_return##name); #define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \ __LL_SC_INLINE int \ -__LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v)) \ +__LL_SC_PREFIX(arch_atomic_fetch_##op##name(int i, atomic_t *v)) \ { \ unsigned long tmp; \ int val, result; \ @@ -97,7 +97,7 @@ __LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v)) \ \ return result; \ } \ -__LL_SC_EXPORT(atomic_fetch_##op##name); +__LL_SC_EXPORT(arch_atomic_fetch_##op##name); #define ATOMIC_OPS(...) 
\ ATOMIC_OP(__VA_ARGS__) \ @@ -133,7 +133,7 @@ ATOMIC_OPS(xor, eor) #define ATOMIC64_OP(op, asm_op) \ __LL_SC_INLINE void \ -__LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v)) \ +__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \ { \ long result; \ unsigned long tmp; \ @@ -147,11 +147,11 @@ __LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v)) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "Ir" (i)); \ } \ -__LL_SC_EXPORT(atomic64_##op); +__LL_SC_EXPORT(arch_atomic64_##op); #define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \ __LL_SC_INLINE long \ -__LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v)) \ +__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\ { \ long result; \ unsigned long tmp; \ @@ -169,11 +169,11 @@ __LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v)) \ \ return result; \ } \ -__LL_SC_EXPORT(atomic64_##op##_return##name); +__LL_SC_EXPORT(arch_atomic64_##op##_return##name); #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \ __LL_SC_INLINE long \ -__LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v)) \ +__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v)) \ { \ long result, val; \ unsigned long tmp; \ @@ -191,7 +191,7 @@ __LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v)) \ \ return result; \ } \ -__LL_SC_EXPORT(atomic64_fetch_##op##name); +__LL_SC_EXPORT(arch_atomic64_fetch_##op##name); #define ATOMIC64_OPS(...) \ ATOMIC64_OP(__VA_ARGS__) \ @@ -226,7 +226,7 @@ ATOMIC64_OPS(xor, eor) #undef ATOMIC64_OP __LL_SC_INLINE long -__LL_SC_PREFIX(atomic64_dec_if_positive(atomic64_t *v)) +__LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v)) { long result; unsigned long tmp; @@ -246,7 +246,7 @@ __LL_SC_PREFIX(atomic64_dec_if_positive(atomic64_t *v)) return result; } -__LL_SC_EXPORT(atomic64_dec_if_positive); +__LL_SC_EXPORT(arch_atomic64_dec_if_positive); #define __CMPXCHG_CASE(w, sz, name, mb, acq, rel, cl) \ __LL_SC_INLINE unsigned long \ diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h index 9ef0797380cb..9a071f71c521 100644 --- a/arch/arm64/include/asm/atomic_lse.h +++ b/arch/arm64/include/asm/atomic_lse.h @@ -25,9 +25,9 @@ #error "please don't include this file directly" #endif -#define __LL_SC_ATOMIC(op) __LL_SC_CALL(atomic_##op) +#define __LL_SC_ATOMIC(op) __LL_SC_CALL(arch_atomic_##op) #define ATOMIC_OP(op, asm_op) \ -static inline void atomic_##op(int i, atomic_t *v) \ +static inline void arch_atomic_##op(int i, atomic_t *v) \ { \ register int w0 asm ("w0") = i; \ register atomic_t *x1 asm ("x1") = v; \ @@ -47,7 +47,7 @@ ATOMIC_OP(add, stadd) #undef ATOMIC_OP #define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \ -static inline int atomic_fetch_##op##name(int i, atomic_t *v) \ +static inline int arch_atomic_fetch_##op##name(int i, atomic_t *v) \ { \ register int w0 asm ("w0") = i; \ register atomic_t *x1 asm ("x1") = v; \ @@ -79,7 +79,7 @@ ATOMIC_FETCH_OPS(add, ldadd) #undef ATOMIC_FETCH_OPS #define ATOMIC_OP_ADD_RETURN(name, mb, cl...) 
\ -static inline int atomic_add_return##name(int i, atomic_t *v) \ +static inline int arch_atomic_add_return##name(int i, atomic_t *v) \ { \ register int w0 asm ("w0") = i; \ register atomic_t *x1 asm ("x1") = v; \ @@ -105,7 +105,7 @@ ATOMIC_OP_ADD_RETURN( , al, "memory") #undef ATOMIC_OP_ADD_RETURN -static inline void atomic_and(int i, atomic_t *v) +static inline void arch_atomic_and(int i, atomic_t *v) { register int w0 asm ("w0") = i; register atomic_t *x1 asm ("x1") = v; @@ -123,7 +123,7 @@ static inline void atomic_and(int i, atomic_t *v) } #define ATOMIC_FETCH_OP_AND(name, mb, cl...) \ -static inline int atomic_fetch_and##name(int i, atomic_t *v) \ +static inline int arch_atomic_fetch_and##name(int i, atomic_t *v) \ { \ register int w0 asm ("w0") = i; \ register atomic_t *x1 asm ("x1") = v; \ @@ -149,7 +149,7 @@ ATOMIC_FETCH_OP_AND( , al, "memory") #undef ATOMIC_FETCH_OP_AND -static inline void atomic_sub(int i, atomic_t *v) +static inline void arch_atomic_sub(int i, atomic_t *v) { register int w0 asm ("w0") = i; register atomic_t *x1 asm ("x1") = v; @@ -167,7 +167,7 @@ static inline void atomic_sub(int i, atomic_t *v) } #define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \ -static inline int atomic_sub_return##name(int i, atomic_t *v) \ +static inline int arch_atomic_sub_return##name(int i, atomic_t *v) \ { \ register int w0 asm ("w0") = i; \ register atomic_t *x1 asm ("x1") = v; \ @@ -195,7 +195,7 @@ ATOMIC_OP_SUB_RETURN( , al, "memory") #undef ATOMIC_OP_SUB_RETURN #define ATOMIC_FETCH_OP_SUB(name, mb, cl...) \ -static inline int atomic_fetch_sub##name(int i, atomic_t *v) \ +static inline int arch_atomic_fetch_sub##name(int i, atomic_t *v) \ { \ register int w0 asm ("w0") = i; \ register atomic_t *x1 asm ("x1") = v; \ @@ -222,9 +222,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory") #undef ATOMIC_FETCH_OP_SUB #undef __LL_SC_ATOMIC -#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(atomic64_##op) +#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(arch_atomic64_##op) #define ATOMIC64_OP(op, asm_op) \ -static inline void atomic64_##op(long i, atomic64_t *v) \ +static inline void arch_atomic64_##op(long i, atomic64_t *v) \ { \ register long x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ @@ -244,7 +244,8 @@ ATOMIC64_OP(add, stadd) #undef ATOMIC64_OP #define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \ -static inline long atomic64_fetch_##op##name(long i, atomic64_t *v) \ +static inline long \ +arch_atomic64_fetch_##op##name(long i, atomic64_t *v) \ { \ register long x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ @@ -276,7 +277,8 @@ ATOMIC64_FETCH_OPS(add, ldadd) #undef ATOMIC64_FETCH_OPS #define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \ -static inline long atomic64_add_return##name(long i, atomic64_t *v) \ +static inline long \ +arch_atomic64_add_return##name(long i, atomic64_t *v) \ { \ register long x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ @@ -302,7 +304,7 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory") #undef ATOMIC64_OP_ADD_RETURN -static inline void atomic64_and(long i, atomic64_t *v) +static inline void arch_atomic64_and(long i, atomic64_t *v) { register long x0 asm ("x0") = i; register atomic64_t *x1 asm ("x1") = v; @@ -320,7 +322,8 @@ static inline void atomic64_and(long i, atomic64_t *v) } #define ATOMIC64_FETCH_OP_AND(name, mb, cl...) 
\ -static inline long atomic64_fetch_and##name(long i, atomic64_t *v) \ +static inline long \ +arch_atomic64_fetch_and##name(long i, atomic64_t *v) \ { \ register long x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ @@ -346,7 +349,7 @@ ATOMIC64_FETCH_OP_AND( , al, "memory") #undef ATOMIC64_FETCH_OP_AND -static inline void atomic64_sub(long i, atomic64_t *v) +static inline void arch_atomic64_sub(long i, atomic64_t *v) { register long x0 asm ("x0") = i; register atomic64_t *x1 asm ("x1") = v; @@ -364,7 +367,8 @@ static inline void atomic64_sub(long i, atomic64_t *v) } #define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \ -static inline long atomic64_sub_return##name(long i, atomic64_t *v) \ +static inline long \ +arch_atomic64_sub_return##name(long i, atomic64_t *v) \ { \ register long x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ @@ -392,7 +396,8 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory") #undef ATOMIC64_OP_SUB_RETURN #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \ -static inline long atomic64_fetch_sub##name(long i, atomic64_t *v) \ +static inline long \ +arch_atomic64_fetch_sub##name(long i, atomic64_t *v) \ { \ register long x0 asm ("x0") = i; \ register atomic64_t *x1 asm ("x1") = v; \ @@ -418,7 +423,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory") #undef ATOMIC64_FETCH_OP_SUB -static inline long atomic64_dec_if_positive(atomic64_t *v) +static inline long arch_atomic64_dec_if_positive(atomic64_t *v) { register long x0 asm ("x0") = (long)v; diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h index 4f5fd2a36e6e..0f470ffd2d59 100644 --- a/arch/arm64/include/asm/cmpxchg.h +++ b/arch/arm64/include/asm/cmpxchg.h @@ -154,18 +154,19 @@ __CMPXCHG_GEN(_mb) }) /* cmpxchg */ -#define cmpxchg_relaxed(...) __cmpxchg_wrapper( , __VA_ARGS__) -#define cmpxchg_acquire(...) __cmpxchg_wrapper(_acq, __VA_ARGS__) -#define cmpxchg_release(...) __cmpxchg_wrapper(_rel, __VA_ARGS__) -#define cmpxchg(...) __cmpxchg_wrapper( _mb, __VA_ARGS__) -#define cmpxchg_local cmpxchg_relaxed +#define arch_cmpxchg_relaxed(...) __cmpxchg_wrapper( , __VA_ARGS__) +#define arch_cmpxchg_acquire(...) __cmpxchg_wrapper(_acq, __VA_ARGS__) +#define arch_cmpxchg_release(...) __cmpxchg_wrapper(_rel, __VA_ARGS__) +#define arch_cmpxchg(...) 
__cmpxchg_wrapper( _mb, __VA_ARGS__) +#define arch_cmpxchg_local arch_cmpxchg_relaxed +#define arch_sync_cmpxchg arch_cmpxchg /* cmpxchg64 */ -#define cmpxchg64_relaxed cmpxchg_relaxed -#define cmpxchg64_acquire cmpxchg_acquire -#define cmpxchg64_release cmpxchg_release -#define cmpxchg64 cmpxchg -#define cmpxchg64_local cmpxchg_local +#define arch_cmpxchg64_relaxed arch_cmpxchg_relaxed +#define arch_cmpxchg64_acquire arch_cmpxchg_acquire +#define arch_cmpxchg64_release arch_cmpxchg_release +#define arch_cmpxchg64 arch_cmpxchg +#define arch_cmpxchg64_local arch_cmpxchg_local /* cmpxchg_double */ #define system_has_cmpxchg_double() 1 @@ -177,7 +178,7 @@ __CMPXCHG_GEN(_mb) VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1); \ }) -#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \ +#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \ ({\ int __ret;\ __cmpxchg_double_check(ptr1, ptr2); \ @@ -187,7 +188,7 @@ __CMPXCHG_GEN(_mb) __ret; \ }) -#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \ +#define arch_cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \ ({\ int __ret;\ __cmpxchg_double_check(ptr1, ptr2); \ diff --git a/arch/arm64/include/asm/sync_bitops.h b/arch/arm64/include/asm/sync_bitops.h index 24ed8f445b8b..e42de14627f2 100644 --- a/arch/arm64/include/asm/sync_bitops.h +++ b/arch/arm64/include/asm/sync_bitops.h @@ -22,6 +22,5 @@ #define sync_test_and_clear_bit(nr, p) test_and_clear_bit(nr, p) #define sync_test_and_change_bit(nr, p) test_and_change_bit(nr, p) #define sync_test_bit(nr, addr) test_bit(nr, addr) -#define sync_cmpxchg cmpxchg #endif
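To illustrate what the change above enables (a hypothetical example, not part of the patch; kasan_atomic_uaf is a made-up name): with the atomics routed through the instrumented wrappers, KASAN can now flag an atomic operation on freed memory, which the pure inline-assembly implementation could not report:

/* Hypothetical test case showing a report the old code would miss. */
#include <linux/atomic.h>
#include <linux/slab.h>

static void kasan_atomic_uaf(void)
{
	atomic_t *a = kmalloc(sizeof(*a), GFP_KERNEL);

	if (!a)
		return;

	atomic_set(a, 0);
	kfree(a);

	/* With instrumented atomics, KASAN reports a use-after-free here. */
	atomic_inc(a);
}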
From patchwork Fri May 4 17:39:37 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 135015
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com, catalin.marinas@arm.com, dvyukov@google.com, mark.rutland@arm.com, mingo@kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 6/6] arm64: instrument smp_{load_acquire,store_release}
Date: Fri, 4 May 2018 18:39:37 +0100
Message-Id: <20180504173937.25300-7-mark.rutland@arm.com>
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>

Our __smp_store_release() and __smp_load_acquire() macros use inline assembly, which is opaque to kasan. This means that kasan can't catch erroneous use of these.

This patch adds kasan instrumentation to both.

It might be better to turn these into __arch_* variants, as we do for the atomics, but this works for the time being.
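For context, a typical producer/consumer use of these barriers is sketched below (illustrative only; the variable and function names are made up, and kernel context with the barrier/compiler headers is assumed). With this patch, both the store-release and the load-acquire sides are subject to the same KASAN checks as ordinary memory accesses:

/* Illustrative publish/consume pattern using the instrumented barriers. */
static int data;
static int ready;

static void producer(void)
{
	data = 42;
	/* Publish: the data store is ordered before the flag store. */
	smp_store_release(&ready, 1);
}

static int consumer(void)
{
	/* Pairs with the store-release; KASAN also checks &ready here. */
	if (smp_load_acquire(&ready))
		return data;

	return -1;
}

With the kasan_check_read()/kasan_check_write() calls added in the diff below, a load-acquire or store-release on freed or out-of-bounds memory now produces a KASAN report, just like an ordinary access.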
Signed-off-by: Mark Rutland Cc: Catalin Marinas Cc: Will Deacon --- arch/arm64/include/asm/barrier.h | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-) -- 2.11.0 diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h index f11518af96a9..1a9c601619e5 100644 --- a/arch/arm64/include/asm/barrier.h +++ b/arch/arm64/include/asm/barrier.h @@ -20,6 +20,8 @@ #ifndef __ASSEMBLY__ +#include + #define __nops(n) ".rept " #n "\nnop\n.endr\n" #define nops(n) asm volatile(__nops(n)) @@ -68,31 +70,33 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx, #define __smp_store_release(p, v) \ do { \ + typeof(p) __p = (p); \ union { typeof(*p) __val; char __c[1]; } __u = \ { .__val = (__force typeof(*p)) (v) }; \ compiletime_assert_atomic_type(*p); \ + kasan_check_write(__p, sizeof(*__p)); \ switch (sizeof(*p)) { \ case 1: \ asm volatile ("stlrb %w1, %0" \ - : "=Q" (*p) \ + : "=Q" (*__p) \ : "r" (*(__u8 *)__u.__c) \ : "memory"); \ break; \ case 2: \ asm volatile ("stlrh %w1, %0" \ - : "=Q" (*p) \ + : "=Q" (*__p) \ : "r" (*(__u16 *)__u.__c) \ : "memory"); \ break; \ case 4: \ asm volatile ("stlr %w1, %0" \ - : "=Q" (*p) \ + : "=Q" (*__p) \ : "r" (*(__u32 *)__u.__c) \ : "memory"); \ break; \ case 8: \ asm volatile ("stlr %1, %0" \ - : "=Q" (*p) \ + : "=Q" (*__p) \ : "r" (*(__u64 *)__u.__c) \ : "memory"); \ break; \ @@ -102,27 +106,29 @@ do { \ #define __smp_load_acquire(p) \ ({ \ union { typeof(*p) __val; char __c[1]; } __u; \ + typeof(p) __p = (p); \ compiletime_assert_atomic_type(*p); \ + kasan_check_read(__p, sizeof(*__p)); \ switch (sizeof(*p)) { \ case 1: \ asm volatile ("ldarb %w0, %1" \ : "=r" (*(__u8 *)__u.__c) \ - : "Q" (*p) : "memory"); \ + : "Q" (*__p) : "memory"); \ break; \ case 2: \ asm volatile ("ldarh %w0, %1" \ : "=r" (*(__u16 *)__u.__c) \ - : "Q" (*p) : "memory"); \ + : "Q" (*__p) : "memory"); \ break; \ case 4: \ asm volatile ("ldar %w0, %1" \ : "=r" (*(__u32 *)__u.__c) \ - : "Q" (*p) : "memory"); \ + : "Q" (*__p) : "memory"); \ break; \ case 8: \ asm volatile ("ldar %0, %1" \ : "=r" (*(__u64 *)__u.__c) \ - : "Q" (*p) : "memory"); \ + : "Q" (*__p) : "memory"); \ break; \ } \ __u.__val; \