From patchwork Wed May 22 13:22:33 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164818
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion
Date: Wed, 22 May 2019 14:22:33 +0100
Message-Id: <20190522132250.26499-2-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the nx-842 code to treat the
return value of atomic64_read() as s64, using explicit casts. These
casts will be removed once the s64 conversion is complete.
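For illustration, a minimal sketch (hypothetical driver code, not part of
this patch; 'counter' and the pr_info() calls are invented for the example)
of the format-specifier problem the casts avoid:

	static atomic64_t counter = ATOMIC64_INIT(0);

	/*
	 * On an arch where atomic64_read() returns long long (e.g. 32-bit
	 * arm), the first call passes a 64-bit value to a "%ld" specifier
	 * and triggers a -Wformat warning; casting to s64 and printing
	 * with "%lld" is correct regardless of the per-arch return type.
	 */
	pr_info("count: %ld\n", atomic64_read(&counter));       /* fragile */
	pr_info("count: %lld\n", (s64)atomic64_read(&counter)); /* portable */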
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Herbert Xu
Cc: Peter Zijlstra
Cc: Will Deacon
---
 drivers/crypto/nx/nx-842-pseries.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 57932848361b..9432e9e42afe 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -869,8 +869,8 @@ static ssize_t nx842_##_name##_show(struct device *dev, \
 	rcu_read_lock(); \
 	local_devdata = rcu_dereference(devdata); \
 	if (local_devdata) \
-		p = snprintf(buf, PAGE_SIZE, "%ld\n", \
-			atomic64_read(&local_devdata->counters->_name)); \
+		p = snprintf(buf, PAGE_SIZE, "%lld\n", \
+			(s64)atomic64_read(&local_devdata->counters->_name)); \
 	rcu_read_unlock(); \
 	return p; \
 }
@@ -922,17 +922,17 @@ static ssize_t nx842_timehist_show(struct device *dev,
 	}
 
 	for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
-		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%ld\n",
+		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
 			i ? (2<<(i-1)) : 0, (2<<i)-1,

From patchwork Wed May 22 13:22:34 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164819
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 02/18] locking/atomic: s390/pci: prepare for atomic64_read() conversion
Date: Wed, 22 May 2019 14:22:34 +0100
Message-Id: <20190522132250.26499-3-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the s390 pci debug code to
treat the return value of atomic64_read() as s64, using an explicit
cast. This cast will be removed once the s64 conversion is complete.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Heiko Carstens
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/s390/pci/pci_debug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 6b48ca7760a7..45eccf79e990 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -74,8 +74,8 @@ static void pci_sw_counter_show(struct seq_file *m)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
-		seq_printf(m, "%26s:\t%lu\n", pci_sw_names[i],
-			   atomic64_read(counter));
+		seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+			   (s64)atomic64_read(counter));
 }
 
 static int pci_perf_show(struct seq_file *m, void *v)

From patchwork Wed May 22 13:22:35 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164820
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 03/18] locking/atomic: generic: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:35 +0100
Message-Id: <20190522132250.26499-4-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide,
let's have the generic atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.
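As a cross-check of the "no functional change" claim, a minimal sketch
(hypothetical, not part of the patch; atomic64_type_check() is an invented
name) confirming that the old and new counter types have identical size on
kernel targets:

	#include <linux/build_bug.h>
	#include <linux/compiler.h>
	#include <linux/types.h>

	static void __maybe_unused atomic64_type_check(void)
	{
		/* s64 and long long are both 64 bits on every kernel
		 * target, so swapping one for the other changes no
		 * layout, values, or ABI. */
		BUILD_BUG_ON(sizeof(s64) != sizeof(long long));
		BUILD_BUG_ON(sizeof(s64) != 8);
	}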
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann
Cc: Peter Zijlstra
Cc: Will Deacon
---
 include/asm-generic/atomic64.h | 20 ++++++++++----------
 lib/atomic64.c                 | 32 ++++++++++++++++----------------
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index 97b28b7f1f29..fc7b831ed632 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -14,24 +14,24 @@
 #include <linux/types.h>
 
 typedef struct {
-	long long counter;
+	s64 counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-extern long long atomic64_read(const atomic64_t *v);
-extern void atomic64_set(atomic64_t *v, long long i);
+extern s64 atomic64_read(const atomic64_t *v);
+extern void atomic64_set(atomic64_t *v, s64 i);
 
 #define atomic64_set_release(v, i)	atomic64_set((v), (i))
 
 #define ATOMIC64_OP(op) \
-extern void atomic64_##op(long long a, atomic64_t *v);
+extern void atomic64_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OP_RETURN(op) \
-extern long long atomic64_##op##_return(long long a, atomic64_t *v);
+extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);
 
 #define ATOMIC64_FETCH_OP(op) \
-extern long long atomic64_fetch_##op(long long a, atomic64_t *v);
+extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OPS(op)	ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
 
@@ -50,11 +50,11 @@ ATOMIC64_OPS(xor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-extern long long atomic64_dec_if_positive(atomic64_t *v);
+extern s64 atomic64_dec_if_positive(atomic64_t *v);
 #define atomic64_dec_if_positive atomic64_dec_if_positive
-extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n);
-extern long long atomic64_xchg(atomic64_t *v, long long new);
-extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u);
+extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
+extern s64 atomic64_xchg(atomic64_t *v, s64 new);
+extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
 #define atomic64_fetch_add_unless atomic64_fetch_add_unless
 
 #endif  /*  _ASM_GENERIC_ATOMIC64_H  */
diff --git a/lib/atomic64.c b/lib/atomic64.c
index 1d91e31eceec..62f218bf50a0 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -46,11 +46,11 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 	return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
 }
 
-long long atomic64_read(const atomic64_t *v)
+s64 atomic64_read(const atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -59,7 +59,7 @@ long long atomic64_read(const atomic64_t *v)
 }
 EXPORT_SYMBOL(atomic64_read);
 
-void atomic64_set(atomic64_t *v, long long i)
+void atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -71,7 +71,7 @@ void atomic64_set(atomic64_t *v, long long i)
 EXPORT_SYMBOL(atomic64_set);
 
 #define ATOMIC64_OP(op, c_op) \
-void atomic64_##op(long long a, atomic64_t *v) \
+void atomic64_##op(s64 a, atomic64_t *v) \
 { \
 	unsigned long flags; \
 	raw_spinlock_t *lock = lock_addr(v); \
@@ -83,11 +83,11 @@ void atomic64_##op(long long a, atomic64_t *v) \
 EXPORT_SYMBOL(atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(op, c_op) \
-long long atomic64_##op##_return(long long a, atomic64_t *v) \
+s64 atomic64_##op##_return(s64 a, atomic64_t *v) \
 { \
 	unsigned long flags; \
 	raw_spinlock_t *lock = lock_addr(v); \
-	long long val; \
+	s64 val; \
 \
 	raw_spin_lock_irqsave(lock, flags); \
 	val = (v->counter c_op a); \
@@ -97,11 +97,11 @@ long long atomic64_##op##_return(long long a, atomic64_t *v) \
 EXPORT_SYMBOL(atomic64_##op##_return);
 
 #define ATOMIC64_FETCH_OP(op, c_op) \
-long long atomic64_fetch_##op(long long a, atomic64_t *v) \
+s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \
 { \
 	unsigned long flags; \
 	raw_spinlock_t *lock = lock_addr(v); \
-	long long val; \
+	s64 val; \
 \
 	raw_spin_lock_irqsave(lock, flags); \
 	val = v->counter; \
@@ -134,11 +134,11 @@ ATOMIC64_OPS(xor, ^=)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-long long atomic64_dec_if_positive(atomic64_t *v)
+s64 atomic64_dec_if_positive(atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter - 1;
@@ -149,11 +149,11 @@ long long atomic64_dec_if_positive(atomic64_t *v)
 }
 EXPORT_SYMBOL(atomic64_dec_if_positive);
 
-long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -164,11 +164,11 @@ long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
 }
 EXPORT_SYMBOL(atomic64_cmpxchg);
 
-long long atomic64_xchg(atomic64_t *v, long long new)
+s64 atomic64_xchg(atomic64_t *v, s64 new)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -178,11 +178,11 @@ long long atomic64_xchg(atomic64_t *v, long long new)
 }
 EXPORT_SYMBOL(atomic64_xchg);
 
-long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u)
+s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;

From patchwork Wed May 22 13:22:36 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164821
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 04/18] locking/atomic: alpha: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:36 +0100
Message-Id: <20190522132250.26499-5-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide,
let's have the alpha atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.
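For reference, a sketch of what the updated ATOMIC64_OP() macro expands to
for atomic64_add() after this change (hand-expanded from the patch below,
so treat it as illustrative rather than authoritative):

	static __inline__ void atomic64_add(s64 i, atomic64_t *v)
	{
		s64 temp;			/* previously 'unsigned long' */
		__asm__ __volatile__(
		"1:	ldq_l %0,%1\n"		/* load-locked 64-bit counter */
		"	addq %0,%2,%0\n"	/* temp = counter + i */
		"	stq_c %0,%1\n"		/* store-conditional */
		"	beq %0,2f\n"		/* retry if the store failed */
		".subsection 2\n"
		"2:	br 1b\n"
		".previous"
		:"=&r" (temp), "=m" (v->counter)
		:"Ir" (i), "m" (v->counter));
	}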
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Peter Zijlstra
Cc: Richard Henderson
Cc: Will Deacon
---
 arch/alpha/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 150a1c5d6a2c..2144530d1428 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -93,9 +93,9 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
 }
 
 #define ATOMIC64_OP(op, asm_op) \
-static __inline__ void atomic64_##op(long i, atomic64_t * v) \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \
 { \
-	unsigned long temp; \
+	s64 temp; \
 	__asm__ __volatile__( \
 	"1:	ldq_l %0,%1\n" \
 	"	" #asm_op " %0,%2,%0\n" \
@@ -109,9 +109,9 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \
 } \
 
 #define ATOMIC64_OP_RETURN(op, asm_op) \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \
 { \
-	long temp, result; \
+	s64 temp, result; \
 	__asm__ __volatile__( \
 	"1:	ldq_l %0,%1\n" \
 	"	" #asm_op " %0,%3,%2\n" \
@@ -128,9 +128,9 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
 }
 
 #define ATOMIC64_FETCH_OP(op, asm_op) \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \
 { \
-	long temp, result; \
+	s64 temp, result; \
 	__asm__ __volatile__( \
 	"1:	ldq_l %2,%1\n" \
 	"	" #asm_op " %2,%3,%0\n" \
@@ -246,9 +246,9 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long c, new, old;
+	s64 c, new, old;
 	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l %[old],%[mem]\n"
@@ -276,9 +276,9 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static inline long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long old, tmp;
+	s64 old, tmp;
 	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l %[old],%[mem]\n"

From patchwork Wed May 22 13:22:38 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164823
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 06/18] locking/atomic: arm: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:38 +0100
Message-Id: <20190522132250.26499-7-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.
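From a caller's perspective only the type name changes; a minimal usage
sketch (hypothetical code, not part of the patch; total_bytes, account()
and snapshot() are invented names):

	static atomic64_t total_bytes = ATOMIC64_INIT(0);

	void account(u32 len)
	{
		atomic64_add(len, &total_bytes);
	}

	s64 snapshot(void)
	{
		/*
		 * Compiles to a single ldrd (LPAE) or ldrexd, per the
		 * implementations below, so the 64-bit read is atomic
		 * even on 32-bit arm.
		 */
		return atomic64_read(&total_bytes);
	}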
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra
Cc: Russell King
Cc: Will Deacon
---
 arch/arm/include/asm/atomic.h | 50 +++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index f74756641410..d45c41f6f69c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -249,15 +249,15 @@ ATOMIC_OPS(xor, ^=, eor)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 typedef struct {
-	long long counter;
+	s64 counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i) { (i) }
 
 #ifdef CONFIG_ARM_LPAE
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long long result;
+	s64 result;
 
 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrd	%0, %H0, [%1]"
@@ -268,7 +268,7 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
 	__asm__ __volatile__("@ atomic64_set\n"
 "	strd	%2, %H2, [%1]"
@@ -277,9 +277,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 	);
 }
 #else
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long long result;
+	s64 result;
 
 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrexd	%0, %H0, [%1]"
@@ -290,9 +290,9 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
-	long long tmp;
+	s64 tmp;
 
 	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic64_set\n"
@@ -307,9 +307,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 #endif
 
 #define ATOMIC64_OP(op, op1, op2) \
-static inline void atomic64_##op(long long i, atomic64_t *v) \
+static inline void atomic64_##op(s64 i, atomic64_t *v) \
 { \
-	long long result; \
+	s64 result; \
 	unsigned long tmp; \
 \
 	prefetchw(&v->counter); \
@@ -326,10 +326,10 @@ static inline void atomic64_##op(long long i, atomic64_t *v) \
 } \
 
 #define ATOMIC64_OP_RETURN(op, op1, op2) \
-static inline long long \
-atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \
+static inline s64 \
+atomic64_##op##_return_relaxed(s64 i, atomic64_t *v) \
 { \
-	long long result; \
+	s64 result; \
 	unsigned long tmp; \
 \
 	prefetchw(&v->counter); \
@@ -349,10 +349,10 @@ atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \
 }
 
 #define ATOMIC64_FETCH_OP(op, op1, op2) \
-static inline long long \
-atomic64_fetch_##op##_relaxed(long long i, atomic64_t *v) \
+static inline s64 \
+atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v) \
 { \
-	long long result, val; \
+	s64 result, val; \
 	unsigned long tmp; \
 \
 	prefetchw(&v->counter); \
@@ -406,10 +406,9 @@ ATOMIC64_OPS(xor, eor, eor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-static inline long long
-atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
+static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
 {
-	long long oldval;
+	s64 oldval;
 	unsigned long res;
 
 	prefetchw(&ptr->counter);
@@ -430,9 +429,9 @@ atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
 }
 #define atomic64_cmpxchg_relaxed	atomic64_cmpxchg_relaxed
 
-static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
 {
-	long long result;
+	s64 result;
 	unsigned long tmp;
 
 	prefetchw(&ptr->counter);
@@ -450,9 +449,9 @@ static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
 }
 #define atomic64_xchg_relaxed		atomic64_xchg_relaxed
 
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long result;
+	s64 result;
 	unsigned long tmp;
 
 	smp_mb();
@@ -478,10 +477,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 }
 #define atomic64_dec_if_positive atomic64_dec_if_positive
 
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
-						  long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long long oldval, newval;
+	s64 oldval, newval;
 	unsigned long tmp;
 
 	smp_mb();

From patchwork Wed May 22 13:22:39 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164824
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 07/18] locking/atomic: arm64: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:39 +0100
Message-Id: <20190522132250.26499-8-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Note that in arch_atomic64_dec_if_positive(), the x0 variable is left as
long, as this variable is also used to hold the pointer to the
atomic64_t.

Otherwise, there should be no functional change as a result of this
patch.
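To make the x0 note concrete, a condensed sketch (illustrative only;
sketch_dec_if_positive() is an invented name, and the LSE asm body is
elided) of the one function whose register variable stays long:

	static inline s64 sketch_dec_if_positive(atomic64_t *v)
	{
		/*
		 * x0 carries the atomic64_t pointer in and the s64 result
		 * out, so it is declared long (pointer-sized) rather than
		 * s64, unlike the value-only operands elsewhere.
		 */
		register long x0 asm ("x0") = (long)v;

		/* ... LSE asm overwrites x0 with the result ... */

		return x0;
	}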
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/arm64/include/asm/atomic_ll_sc.h | 20 ++++++++++----------
 arch/arm64/include/asm/atomic_lse.h   | 34 +++++++++++++++++-----------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
index e321293e0c89..f3b12d7f431f 100644
--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -133,9 +133,9 @@ ATOMIC_OPS(xor, eor)
 
 #define ATOMIC64_OP(op, asm_op) \
 __LL_SC_INLINE void \
-__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
+__LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v)) \
 { \
-	long result; \
+	s64 result; \
 	unsigned long tmp; \
 \
 	asm volatile("// atomic64_" #op "\n" \
@@ -150,10 +150,10 @@ __LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
 __LL_SC_EXPORT(arch_atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
-__LL_SC_INLINE long \
-__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
+__LL_SC_INLINE s64 \
+__LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\
 { \
-	long result; \
+	s64 result; \
 	unsigned long tmp; \
 \
 	asm volatile("// atomic64_" #op "_return" #name "\n" \
@@ -172,10 +172,10 @@ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
 __LL_SC_EXPORT(arch_atomic64_##op##_return##name);
 
 #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
-__LL_SC_INLINE long \
-__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v)) \
+__LL_SC_INLINE s64 \
+__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)) \
 { \
-	long result, val; \
+	s64 result, val; \
 	unsigned long tmp; \
 \
 	asm volatile("// atomic64_fetch_" #op #name "\n" \
@@ -225,10 +225,10 @@ ATOMIC64_OPS(xor, eor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-__LL_SC_INLINE long
+__LL_SC_INLINE s64
 __LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
 {
-	long result;
+	s64 result;
 	unsigned long tmp;
 
 	asm volatile("// atomic64_dec_if_positive\n"
diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 9256a3921e4b..c53832b08af7 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -224,9 +224,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory")
 #define __LL_SC_ATOMIC64(op)	__LL_SC_CALL(arch_atomic64_##op)
 
 #define ATOMIC64_OP(op, asm_op) \
-static inline void arch_atomic64_##op(long i, atomic64_t *v) \
+static inline void arch_atomic64_##op(s64 i, atomic64_t *v) \
 { \
-	register long x0 asm ("x0") = i; \
+	register s64 x0 asm ("x0") = i; \
 	register atomic64_t *x1 asm ("x1") = v; \
 \
 	asm volatile(ARM64_LSE_ATOMIC_INSN(__LL_SC_ATOMIC64(op), \
@@ -244,9 +244,9 @@ ATOMIC64_OP(add, stadd)
 #undef ATOMIC64_OP
 
 #define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
-static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
 { \
-	register long x0 asm ("x0") = i; \
+	register s64 x0 asm ("x0") = i; \
 	register atomic64_t *x1 asm ("x1") = v; \
 \
 	asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -276,9 +276,9 @@ ATOMIC64_FETCH_OPS(add, ldadd)
 #undef ATOMIC64_FETCH_OPS
 
 #define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \
-static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_add_return##name(s64 i, atomic64_t *v) \
 { \
-	register long x0 asm ("x0") = i; \
+	register s64 x0 asm ("x0") = i; \
 	register atomic64_t *x1 asm ("x1") = v; \
 \
 	asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -302,9 +302,9 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory")
 #undef ATOMIC64_OP_ADD_RETURN
 
-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
 {
-	register long x0 asm ("x0") = i;
+	register s64 x0 asm ("x0") = i;
 	register atomic64_t *x1 asm ("x1") = v;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -320,9 +320,9 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
 }
 
 #define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
-static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v) \
+static inline s64 arch_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
 { \
-	register long x0 asm ("x0") = i; \
+	register s64 x0 asm ("x0") = i; \
 	register atomic64_t *x1 asm ("x1") = v; \
 \
 	asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -346,9 +346,9 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")
 #undef ATOMIC64_FETCH_OP_AND
 
-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
 {
-	register long x0 asm ("x0") = i;
+	register s64 x0 asm ("x0") = i;
 	register atomic64_t *x1 asm ("x1") = v;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -364,9 +364,9 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
 }
 
 #define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
-static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_sub_return##name(s64 i, atomic64_t *v) \
 { \
-	register long x0 asm ("x0") = i; \
+	register s64 x0 asm ("x0") = i; \
 	register atomic64_t *x1 asm ("x1") = v; \
 \
 	asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -392,9 +392,9 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory")
 #undef ATOMIC64_OP_SUB_RETURN
 
 #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
-static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v) \
+static inline s64 arch_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
 { \
-	register long x0 asm ("x0") = i; \
+	register s64 x0 asm ("x0") = i; \
 	register atomic64_t *x1 asm ("x1") = v; \
 \
 	asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory")
 #undef ATOMIC64_FETCH_OP_SUB
 
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	register long x0 asm ("x0") = (long)v;

From patchwork Wed May 22 13:22:40 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164825
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 08/18] locking/atomic: ia64: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:40 +0100
Message-Id: <20190522132250.26499-9-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide,
let's have the ia64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Fenghua Yu
Cc: Peter Zijlstra
Cc: Tony Luck
Cc: Will Deacon
---
 arch/ia64/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 206530d0751b..50440f3ddc43 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -124,10 +124,10 @@ ATOMIC_FETCH_OP(xor, ^)
 #undef ATOMIC_OP
 
 #define ATOMIC64_OP(op, c_op) \
-static __inline__ long \
-ia64_atomic64_##op (__s64 i, atomic64_t *v) \
+static __inline__ s64 \
+ia64_atomic64_##op (s64 i, atomic64_t *v) \
 { \
-	__s64 old, new; \
+	s64 old, new; \
 	CMPXCHG_BUGCHECK_DECL \
 \
 	do { \
@@ -139,10 +139,10 @@ ia64_atomic64_##op (__s64 i, atomic64_t *v) \
 }
 
 #define ATOMIC64_FETCH_OP(op, c_op) \
-static __inline__ long \
-ia64_atomic64_fetch_##op (__s64 i, atomic64_t *v) \
+static __inline__ s64 \
+ia64_atomic64_fetch_##op (s64 i, atomic64_t *v) \
 { \
-	__s64 old, new; \
+	s64 old, new; \
 	CMPXCHG_BUGCHECK_DECL \
 \
 	do { \
@@ -162,7 +162,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_add_return(i,v) \
 ({ \
-	long __ia64_aar_i = (i); \
+	s64 __ia64_aar_i = (i); \
 	__ia64_atomic_const(i) \
 		? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter) \
 		: ia64_atomic64_add(__ia64_aar_i, v); \
@@ -170,7 +170,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_sub_return(i,v) \
 ({ \
-	long __ia64_asr_i = (i); \
+	s64 __ia64_asr_i = (i); \
 	__ia64_atomic_const(i) \
 		? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter) \
 		: ia64_atomic64_sub(__ia64_asr_i, v); \
@@ -178,7 +178,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_fetch_add(i,v) \
 ({ \
-	long __ia64_aar_i = (i); \
+	s64 __ia64_aar_i = (i); \
 	__ia64_atomic_const(i) \
 		? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq) \
 		: ia64_atomic64_fetch_add(__ia64_aar_i, v); \
@@ -186,7 +186,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_fetch_sub(i,v) \
 ({ \
-	long __ia64_asr_i = (i); \
+	s64 __ia64_asr_i = (i); \
 	__ia64_atomic_const(i) \
 		? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq) \
 		: ia64_atomic64_fetch_sub(__ia64_asr_i, v); \

From patchwork Wed May 22 13:22:41 2019
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 164826
From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 09/18] locking/atomic: mips: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:41 +0100
Message-Id: <20190522132250.26499-10-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide,
let's have the mips atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.
Signed-off-by: Mark Rutland Cc: James Hogan Cc: Paul Burton Cc: Peter Zijlstra Cc: Ralf Baechle Cc: Will Deacon --- arch/mips/include/asm/atomic.h | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) -- 2.11.0 diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h index 94096299fc56..9a82dd11c0e9 100644 --- a/arch/mips/include/asm/atomic.h +++ b/arch/mips/include/asm/atomic.h @@ -254,10 +254,10 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v) #define atomic64_set(v, i) WRITE_ONCE((v)->counter, (i)) #define ATOMIC64_OP(op, c_op, asm_op) \ -static __inline__ void atomic64_##op(long i, atomic64_t * v) \ +static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \ { \ if (kernel_uses_llsc) { \ - long temp; \ + s64 temp; \ \ loongson_llsc_mb(); \ __asm__ __volatile__( \ @@ -280,12 +280,12 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \ } #define ATOMIC64_OP_RETURN(op, c_op, asm_op) \ -static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \ +static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \ { \ - long result; \ + s64 result; \ \ if (kernel_uses_llsc) { \ - long temp; \ + s64 temp; \ \ loongson_llsc_mb(); \ __asm__ __volatile__( \ @@ -314,12 +314,12 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \ } #define ATOMIC64_FETCH_OP(op, c_op, asm_op) \ -static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \ +static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \ { \ - long result; \ + s64 result; \ \ if (kernel_uses_llsc) { \ - long temp; \ + s64 temp; \ \ loongson_llsc_mb(); \ __asm__ __volatile__( \ @@ -386,14 +386,14 @@ ATOMIC64_OPS(xor, ^=, xor) * Atomically test @v and subtract @i if @v is greater or equal than @i. * The function returns the old value of @v minus @i. 
*/ -static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v) +static __inline__ s64 atomic64_sub_if_positive(s64 i, atomic64_t * v) { - long result; + s64 result; smp_mb__before_llsc(); if (kernel_uses_llsc) { - long temp; + s64 temp; __asm__ __volatile__( " .set push \n"
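As an aside for readers without the MIPS assembler context: the following is a minimal, portable user-space sketch (not the posted kernel code) of the retry loop that the ATOMIC64_OP() macro above generates. The typedefs are stdint-based stand-ins for the kernel's types, the _sketch name is invented for illustration, and the lld/scd load-linked/store-conditional pair is approximated with a compare-and-swap builtin.

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s64;				/* stand-in for the kernel's s64 */
typedef struct { s64 counter; } atomic64_t;

/*
 * Approximates the generated atomic64_add(): load the counter, compute
 * the new value, and retry until the update lands without interference,
 * mirroring the lld/daddu/scd/beqz structure of the asm above. Plain
 * (non-value-returning) atomic ops carry no ordering guarantees, hence
 * the relaxed memory order.
 */
static void atomic64_add_sketch(s64 i, atomic64_t *v)
{
	s64 old = __atomic_load_n(&v->counter, __ATOMIC_RELAXED);

	while (!__atomic_compare_exchange_n(&v->counter, &old, old + i,
					    false, __ATOMIC_RELAXED,
					    __ATOMIC_RELAXED))
		;	/* 'old' was refreshed with the observed value; retry */
}

With s64 used throughout, the parameter and the temporaries ('temp', 'result' in the hunks above) agree on width whether the kernel is 32-bit or 64-bit.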
From patchwork Wed May 22 13:22:42 2019
From: Mark Rutland
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 10/18] locking/atomic: powerpc: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:42 +0100
Message-Id: <20190522132250.26499-11-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide, let's have the powerpc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland Cc: Michael Ellerman Cc: Paul Mackerras Cc: Peter Zijlstra Cc: Will Deacon --- arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++-------------------- 1 file changed, 22 insertions(+), 22 deletions(-) -- 2.11.0 diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h index 52eafaf74054..31c231ea56b7 100644 --- a/arch/powerpc/include/asm/atomic.h +++ b/arch/powerpc/include/asm/atomic.h @@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v) #define ATOMIC64_INIT(i) { (i) } -static __inline__ long atomic64_read(const atomic64_t *v) +static __inline__ s64 atomic64_read(const atomic64_t *v) { - long t; + s64 t; __asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter)); return t; } -static __inline__ void atomic64_set(atomic64_t *v, long i) +static __inline__ void atomic64_set(atomic64_t *v, s64 i) { __asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i)); } #define ATOMIC64_OP(op, asm_op) \ -static __inline__ void atomic64_##op(long a, atomic64_t *v) \ +static __inline__ void atomic64_##op(s64 a, atomic64_t *v) \ { \ - long t; \ + s64 t; \ \ __asm__ __volatile__( \ "1: ldarx %0,0,%3 # atomic64_" #op "\n" \ @@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v) \ } #define ATOMIC64_OP_RETURN_RELAXED(op, asm_op) \ -static inline long \ -atomic64_##op##_return_relaxed(long a, atomic64_t *v) \ +static inline s64 \ +atomic64_##op##_return_relaxed(s64 a, atomic64_t *v) \ { \ - long t; \ + s64 t; \ \ __asm__ __volatile__( \ "1: ldarx %0,0,%3 # atomic64_" #op "_return_relaxed\n" \ @@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v) \ } #define ATOMIC64_FETCH_OP_RELAXED(op, asm_op) \ -static inline long \ -atomic64_fetch_##op##_relaxed(long a, atomic64_t *v) \ +static inline s64 \ +atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v) \ { \ - long res, t; \ + s64 res, t; \ \ __asm__ __volatile__( \ "1: ldarx %0,0,%4 # atomic64_fetch_" #op "_relaxed\n" \ @@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor) static __inline__ void atomic64_inc(atomic64_t *v) { - long t; + s64 t; __asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_inc\n\ @@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v) } #define atomic64_inc atomic64_inc -static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) +static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v) { - long t; + s64 t; __asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_inc_return_relaxed\n" @@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v) static __inline__ void atomic64_dec(atomic64_t *v) { - long t; + s64 t; __asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_dec\n\ @@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v) } #define atomic64_dec atomic64_dec -static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) +static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v) { - long t; + s64 t; __asm__ __volatile__( "1: ldarx %0,0,%2 # atomic64_dec_return_relaxed\n" @@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v) * Atomically test *v and decrement if it is greater than 0. * The function returns the old value of *v minus 1. 
*/ -static __inline__ long atomic64_dec_if_positive(atomic64_t *v) +static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v) { - long t; + s64 t; __asm__ __volatile__( PPC_ATOMIC_ENTRY_BARRIER @@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns the old value of @v. */ -static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) +static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - long t; + s64 t; __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER @@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) */ static __inline__ int atomic64_inc_not_zero(atomic64_t *v) { - long t1, t2; + s64 t1, t2; __asm__ __volatile__ ( PPC_ATOMIC_ENTRY_BARRIER
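The powerpc atomic64_fetch_add_unless() converted above is built on an ldarx/stdcx. loop with explicit entry/exit barriers. As a rough, portable illustration of the same semantics (user-space C with GCC builtins; the typedefs and the _sketch name are illustrative stand-ins, not the kernel's implementation):

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s64;
typedef struct { s64 counter; } atomic64_t;

/*
 * CAS-based sketch of the atomic64_fetch_add_unless() semantics in the
 * hunk above: add @a to @v unless the current value is @u, and return
 * the old value either way.
 */
static s64 fetch_add_unless_sketch(atomic64_t *v, s64 a, s64 u)
{
	s64 old = __atomic_load_n(&v->counter, __ATOMIC_RELAXED);

	do {
		if (old == u)
			break;			/* no store; return old */
	} while (!__atomic_compare_exchange_n(&v->counter, &old, old + a,
					      false, __ATOMIC_SEQ_CST,
					      __ATOMIC_RELAXED));
	return old;
}

On a failed compare-and-swap, 'old' is refreshed with the observed value, so the @u check is re-evaluated on every retry, just as the ldarx reload does in the asm.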
From patchwork Wed May 22 13:22:44 2019
From: Mark Rutland
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:44 +0100
Message-Id: <20190522132250.26499-13-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide, let's have the riscv atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland Cc: Albert Ou Cc: Palmer Dabbelt Cc: Peter Zijlstra Cc: Will Deacon --- arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++-------------------- 1 file changed, 23 insertions(+), 21 deletions(-) -- 2.11.0 diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h index c9e18289d65c..bffebc57357d 100644 --- a/arch/riscv/include/asm/atomic.h +++ b/arch/riscv/include/asm/atomic.h @@ -42,11 +42,11 @@ static __always_inline void atomic_set(atomic_t *v, int i) #ifndef CONFIG_GENERIC_ATOMIC64 #define ATOMIC64_INIT(i) { (i) } -static __always_inline long atomic64_read(const atomic64_t *v) +static __always_inline s64 atomic64_read(const atomic64_t *v) { return READ_ONCE(v->counter); } -static __always_inline void atomic64_set(atomic64_t *v, long i) +static __always_inline void atomic64_set(atomic64_t *v, s64 i) { WRITE_ONCE(v->counter, i); } @@ -70,11 +70,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \ #ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_OP (op, asm_op, I, w, int, ) + ATOMIC_OP (op, asm_op, I, w, int, ) #else #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_OP (op, asm_op, I, w, int, ) \ - ATOMIC_OP (op, asm_op, I, d, long, 64) + ATOMIC_OP (op, asm_op, I, w, int, ) \ + ATOMIC_OP (op, asm_op, I, d, s64, 64) #endif ATOMIC_OPS(add, add, i) @@ -131,14 +131,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \ #ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) + ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ + ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) #else #define ATOMIC_OPS(op, asm_op, c_op, I) \ - ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \ - ATOMIC_FETCH_OP( op, asm_op, I, d, long, 64) \ - ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64) + ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \ + ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \ + ATOMIC_FETCH_OP( op, asm_op, I, d, s64, 64) \ + ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64) #endif ATOMIC_OPS(add, add, +, i) @@ -170,11 +170,11 @@ ATOMIC_OPS(sub, add, +, -i) #ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) + ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) #else #define ATOMIC_OPS(op, asm_op, I) \ - ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \ - ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64) + ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \ + ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64) #endif ATOMIC_OPS(and, and, i) @@ -223,9 +223,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) #define atomic_fetch_add_unless atomic_fetch_add_unless #ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u) +static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - long prev, rc; + s64 prev; + long rc; __asm__ __volatile__ ( "0: lr.d %[p], %[c]\n" @@ -294,11 +295,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \ #ifdef CONFIG_GENERIC_ATOMIC64 #define ATOMIC_OPS() \ - ATOMIC_OP( int, , 4) + ATOMIC_OP(int, , 4) #else #define ATOMIC_OPS() \ - ATOMIC_OP( int, , 4) \ - ATOMIC_OP(long, 64, 8) + ATOMIC_OP(int, , 4) \ + ATOMIC_OP(s64, 64, 8) #endif ATOMIC_OPS() @@ -336,9 +337,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset) #define 
atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1) #ifndef CONFIG_GENERIC_ATOMIC64 -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset) +static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset) { - long prev, rc; + s64 prev; + long rc; __asm__ __volatile__ ( "0: lr.d %[p], %[c]\n"
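One detail worth noting in the riscv hunk above: 'prev' becomes s64 because it carries the 64-bit counter value, while 'rc' stays long because it only ever latches the lr.d/sc.d success flag, not the counter. A portable sketch of the sub-if-positive semantics (user-space C; the typedefs and _sketch name are illustrative stand-ins, and a compare-and-swap builtin stands in for lr.d/sc.d):

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s64;
typedef struct { s64 counter; } atomic64_t;

/*
 * Subtract @offset only when the result stays non-negative, and return
 * the (possibly unstored) result -- the behaviour the bltz branch in
 * the asm above implements.
 */
static s64 sub_if_positive_sketch(atomic64_t *v, s64 offset)
{
	s64 prev = __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
	s64 res;

	do {
		res = prev - offset;
		if (res < 0)
			break;			/* leave the counter alone */
	} while (!__atomic_compare_exchange_n(&v->counter, &prev, res,
					      false, __ATOMIC_SEQ_CST,
					      __ATOMIC_RELAXED));
	return res;
}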
From patchwork Wed May 22 13:22:45 2019
From: Mark Rutland
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 13/18] locking/atomic: s390: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:45 +0100
Message-Id: <20190522132250.26499-14-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide, let's have the s390 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch.

The s390-internal __atomic64_*() ops are also used by the s390 bitops, and expect pointers to long. Since atomic64_t::counter will be converted to s64 in a subsequent patch, pointers to this are explicitly cast to pointers to long when passed to __atomic64_*() ops.

Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland Cc: Heiko Carstens Cc: Peter Zijlstra Cc: Will Deacon --- arch/s390/include/asm/atomic.h | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) -- 2.11.0 diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h index fd20ab5d4cf7..491ad53a0d4e 100644 --- a/arch/s390/include/asm/atomic.h +++ b/arch/s390/include/asm/atomic.h @@ -84,9 +84,9 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) #define ATOMIC64_INIT(i) { (i) } -static inline long atomic64_read(const atomic64_t *v) +static inline s64 atomic64_read(const atomic64_t *v) { - long c; + s64 c; asm volatile( " lg %0,%1\n" @@ -94,49 +94,49 @@ static inline long atomic64_read(const atomic64_t *v) return c; } -static inline void atomic64_set(atomic64_t *v, long i) +static inline void atomic64_set(atomic64_t *v, s64 i) { asm volatile( " stg %1,%0\n" : "=Q" (v->counter) : "d" (i)); } -static inline long atomic64_add_return(long i, atomic64_t *v) +static inline s64 atomic64_add_return(s64 i, atomic64_t *v) { - return __atomic64_add_barrier(i, &v->counter) + i; + return __atomic64_add_barrier(i, (long *)&v->counter) + i; } -static inline long atomic64_fetch_add(long i, atomic64_t *v) +static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { - return __atomic64_add_barrier(i, &v->counter); + return __atomic64_add_barrier(i, (long *)&v->counter); } -static inline void atomic64_add(long i, atomic64_t *v) +static inline void atomic64_add(s64 i, atomic64_t *v) { #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES if (__builtin_constant_p(i) && (i > -129) && (i < 128)) { - __atomic64_add_const(i, &v->counter); + __atomic64_add_const(i, (long *)&v->counter); return; } #endif - __atomic64_add(i, &v->counter); + __atomic64_add(i, (long *)&v->counter); } #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new) +static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { - return __atomic64_cmpxchg(&v->counter, old, new); + return __atomic64_cmpxchg((long *)&v->counter, old, new); } #define ATOMIC64_OPS(op) \ -static inline void atomic64_##op(long i, atomic64_t *v) \ +static inline void atomic64_##op(s64 i, atomic64_t *v) \ { \ - __atomic64_##op(i, &v->counter); \ + __atomic64_##op(i, (long *)&v->counter); \ } \ -static inline long atomic64_fetch_##op(long i, atomic64_t *v) \ +static inline s64 atomic64_fetch_##op(s64 i, atomic64_t *v) \ { \ - return __atomic64_##op##_barrier(i, &v->counter); \ + return __atomic64_##op##_barrier(i, (long *)&v->counter); \ } ATOMIC64_OPS(and) @@ -145,8 +145,8 @@ ATOMIC64_OPS(xor) #undef ATOMIC64_OPS -#define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v) -#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v) -#define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v) +#define atomic64_sub_return(_i, _v) atomic64_add_return(-(s64)(_i), _v) +#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(s64)(_i), _v) +#define atomic64_sub(_i, _v) atomic64_add(-(s64)(_i), _v) #endif /* __ARCH_S390_ATOMIC__ */
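The call-site casts introduced above can be seen in isolation in the following user-space sketch. This is illustrative only: __atomic64_add_stub() is a hypothetical stand-in for the s390-internal __atomic64_add(), which, like the real helper, keeps taking a long *.

#include <stdint.h>

typedef int64_t s64;
typedef struct { s64 counter; } atomic64_t;

/* Hypothetical stand-in for the s390-internal __atomic64_add(). */
static void __atomic64_add_stub(long i, long *ptr)
{
	__atomic_fetch_add(ptr, i, __ATOMIC_SEQ_CST);
}

/*
 * The cast pattern from the patch above: once counter becomes s64,
 * pointers to it are cast back to long * for the shared helpers.
 * On s390 both types are 64 bits wide, so the cast does not change
 * the representation.
 */
static void atomic64_add_sketch(s64 i, atomic64_t *v)
{
	__atomic64_add_stub(i, (long *)&v->counter);
}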
From patchwork Wed May 22 13:22:46 2019
From: Mark Rutland
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 14/18] locking/atomic: sparc: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:46 +0100
Message-Id: <20190522132250.26499-15-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>
As a step towards making the atomic64 API use consistent types treewide, let's have the sparc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland Cc: David S. Miller Cc: Peter Zijlstra Cc: Will Deacon --- arch/sparc/include/asm/atomic_64.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) -- 2.11.0 diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h index 6963482c81d8..b60448397d4f 100644 --- a/arch/sparc/include/asm/atomic_64.h +++ b/arch/sparc/include/asm/atomic_64.h @@ -23,15 +23,15 @@ #define ATOMIC_OP(op) \ void atomic_##op(int, atomic_t *); \ -void atomic64_##op(long, atomic64_t *); +void atomic64_##op(s64, atomic64_t *); #define ATOMIC_OP_RETURN(op) \ int atomic_##op##_return(int, atomic_t *); \ -long atomic64_##op##_return(long, atomic64_t *); +s64 atomic64_##op##_return(s64, atomic64_t *); #define ATOMIC_FETCH_OP(op) \ int atomic_fetch_##op(int, atomic_t *); \ -long atomic64_fetch_##op(long, atomic64_t *); +s64 atomic64_fetch_##op(s64, atomic64_t *); #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op) @@ -61,7 +61,7 @@ static inline int atomic_xchg(atomic_t *v, int new) ((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n))) #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) -long atomic64_dec_if_positive(atomic64_t *v); +s64 atomic64_dec_if_positive(atomic64_t *v); #define atomic64_dec_if_positive atomic64_dec_if_positive #endif /* !(__ARCH_SPARC64_ATOMIC__) */
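For illustration, here is what ATOMIC_OPS(add) expands to once the macros above use s64. These are declarations only, since sparc implements the ops out of line; the typedefs are simplified stand-ins for the kernel's atomic_t and atomic64_t.

#include <stdint.h>

typedef int64_t s64;
typedef struct { int counter; } atomic_t;	/* simplified stand-ins */
typedef struct { s64 counter; } atomic64_t;

/* ATOMIC_OPS(add) = ATOMIC_OP(add) ATOMIC_OP_RETURN(add)
 * ATOMIC_FETCH_OP(add), per the #define in the hunk above: */
void atomic_add(int, atomic_t *);
void atomic64_add(s64, atomic64_t *);

int atomic_add_return(int, atomic_t *);
s64 atomic64_add_return(s64, atomic64_t *);

int atomic_fetch_add(int, atomic_t *);
s64 atomic64_fetch_add(s64, atomic64_t *);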
From patchwork Wed May 22 13:22:47 2019
From: Mark Rutland
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 15/18] locking/atomic: x86: use s64 for atomic64
Date: Wed, 22 May 2019 14:22:47 +0100
Message-Id: <20190522132250.26499-16-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

As a step towards making the atomic64 API use consistent types treewide, let's have the x86 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long or long long, matching the generated headers.

Note that the x86 arch_atomic64 implementation is already wrapped by the generic instrumented atomic64 implementation, which uses s64 consistently.

Otherwise, there should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland Cc: Borislav Petkov Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Russell King Cc: Thomas Gleixner Cc: Will Deacon --- arch/x86/include/asm/atomic64_32.h | 66 ++++++++++++++++++-------------------- arch/x86/include/asm/atomic64_64.h | 38 +++++++++++----------- 2 files changed, 51 insertions(+), 53 deletions(-) -- 2.11.0 diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h index 6a5b0ec460da..52cfaecb13f9 100644 --- a/arch/x86/include/asm/atomic64_32.h +++ b/arch/x86/include/asm/atomic64_32.h @@ -9,7 +9,7 @@ /* An 64bit atomic type */ typedef struct { - u64 __aligned(8) counter; + s64 __aligned(8) counter; } atomic64_t; #define ATOMIC64_INIT(val) { (val) } @@ -71,8 +71,7 @@ ATOMIC64_DECL(add_unless); * the old value. */ -static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o, - long long n) +static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n) { return arch_cmpxchg64(&v->counter, o, n); } @@ -85,9 +84,9 @@ static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o, * Atomically xchgs the value of @v to @n and returns * the old value. */ -static inline long long arch_atomic64_xchg(atomic64_t *v, long long n) +static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n) { - long long o; + s64 o; unsigned high = (unsigned)(n >> 32); unsigned low = (unsigned)n; alternative_atomic64(xchg, "=&A" (o), @@ -103,7 +102,7 @@ static inline long long arch_atomic64_xchg(atomic64_t *v, long long n) * * Atomically sets the value of @v to @n. */ -static inline void arch_atomic64_set(atomic64_t *v, long long i) +static inline void arch_atomic64_set(atomic64_t *v, s64 i) { unsigned high = (unsigned)(i >> 32); unsigned low = (unsigned)i; @@ -118,9 +117,9 @@ static inline void arch_atomic64_set(atomic64_t *v, long long i) * * Atomically reads the value of @v and returns it. 
*/ -static inline long long arch_atomic64_read(const atomic64_t *v) +static inline s64 arch_atomic64_read(const atomic64_t *v) { - long long r; + s64 r; alternative_atomic64(read, "=&A" (r), "c" (v) : "memory"); return r; } @@ -132,7 +131,7 @@ static inline long long arch_atomic64_read(const atomic64_t *v) * * Atomically adds @i to @v and returns @i + *@v */ -static inline long long arch_atomic64_add_return(long long i, atomic64_t *v) +static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { alternative_atomic64(add_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -143,7 +142,7 @@ static inline long long arch_atomic64_add_return(long long i, atomic64_t *v) /* * Other variants with different arithmetic operators: */ -static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v) +static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v) { alternative_atomic64(sub_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -151,18 +150,18 @@ static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v) return i; } -static inline long long arch_atomic64_inc_return(atomic64_t *v) +static inline s64 arch_atomic64_inc_return(atomic64_t *v) { - long long a; + s64 a; alternative_atomic64(inc_return, "=&A" (a), "S" (v) : "memory", "ecx"); return a; } #define arch_atomic64_inc_return arch_atomic64_inc_return -static inline long long arch_atomic64_dec_return(atomic64_t *v) +static inline s64 arch_atomic64_dec_return(atomic64_t *v) { - long long a; + s64 a; alternative_atomic64(dec_return, "=&A" (a), "S" (v) : "memory", "ecx"); return a; @@ -176,7 +175,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v) * * Atomically adds @i to @v. */ -static inline long long arch_atomic64_add(long long i, atomic64_t *v) +static inline s64 arch_atomic64_add(s64 i, atomic64_t *v) { __alternative_atomic64(add, add_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -191,7 +190,7 @@ static inline long long arch_atomic64_add(long long i, atomic64_t *v) * * Atomically subtracts @i from @v. */ -static inline long long arch_atomic64_sub(long long i, atomic64_t *v) +static inline s64 arch_atomic64_sub(s64 i, atomic64_t *v) { __alternative_atomic64(sub, sub_return, ASM_OUTPUT2("+A" (i), "+c" (v)), @@ -234,8 +233,7 @@ static inline void arch_atomic64_dec(atomic64_t *v) * Atomically adds @a to @v, so long as it was not @u. * Returns non-zero if the add was done, zero otherwise. 
*/ -static inline int arch_atomic64_add_unless(atomic64_t *v, long long a, - long long u) +static inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { unsigned low = (unsigned)u; unsigned high = (unsigned)(u >> 32); @@ -254,9 +252,9 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v) } #define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero -static inline long long arch_atomic64_dec_if_positive(atomic64_t *v) +static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) { - long long r; + s64 r; alternative_atomic64(dec_if_positive, "=&A" (r), "S" (v) : "ecx", "memory"); return r; @@ -266,17 +264,17 @@ static inline long long arch_atomic64_dec_if_positive(atomic64_t *v) #undef alternative_atomic64 #undef __alternative_atomic64 -static inline void arch_atomic64_and(long long i, atomic64_t *v) +static inline void arch_atomic64_and(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0; while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c) c = old; } -static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0; while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c) c = old; @@ -284,17 +282,17 @@ static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v) return old; } -static inline void arch_atomic64_or(long long i, atomic64_t *v) +static inline void arch_atomic64_or(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0; while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c) c = old; } -static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0; while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c) c = old; @@ -302,17 +300,17 @@ static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v) return old; } -static inline void arch_atomic64_xor(long long i, atomic64_t *v) +static inline void arch_atomic64_xor(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0; while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c) c = old; } -static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0; while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c) c = old; @@ -320,9 +318,9 @@ static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v) return old; } -static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v) { - long long old, c = 0; + s64 old, c = 0; while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c) c = old; diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h index dadc20adba21..703b7dfd45e0 100644 --- a/arch/x86/include/asm/atomic64_64.h +++ b/arch/x86/include/asm/atomic64_64.h @@ -17,7 +17,7 @@ * Atomically reads the value of @v. * Doesn't imply a read memory barrier. */ -static inline long arch_atomic64_read(const atomic64_t *v) +static inline s64 arch_atomic64_read(const atomic64_t *v) { return READ_ONCE((v)->counter); } @@ -29,7 +29,7 @@ static inline long arch_atomic64_read(const atomic64_t *v) * * Atomically sets the value of @v to @i. 
*/ -static inline void arch_atomic64_set(atomic64_t *v, long i) +static inline void arch_atomic64_set(atomic64_t *v, s64 i) { WRITE_ONCE(v->counter, i); } @@ -41,7 +41,7 @@ static inline void arch_atomic64_set(atomic64_t *v, long i) * * Atomically adds @i to @v. */ -static __always_inline void arch_atomic64_add(long i, atomic64_t *v) +static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "addq %1,%0" : "=m" (v->counter) @@ -55,7 +55,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v) * * Atomically subtracts @i from @v. */ -static inline void arch_atomic64_sub(long i, atomic64_t *v) +static inline void arch_atomic64_sub(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "subq %1,%0" : "=m" (v->counter) @@ -71,7 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v) * true if the result is zero, or false for all * other cases. */ -static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v) +static inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i); } @@ -142,7 +142,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v) * if the result is negative, or false when * result is greater than or equal to zero. */ -static inline bool arch_atomic64_add_negative(long i, atomic64_t *v) +static inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v) { return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i); } @@ -155,43 +155,43 @@ static inline bool arch_atomic64_add_negative(long i, atomic64_t *v) * * Atomically adds @i to @v and returns @i + @v */ -static __always_inline long arch_atomic64_add_return(long i, atomic64_t *v) +static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v) { return i + xadd(&v->counter, i); } -static inline long arch_atomic64_sub_return(long i, atomic64_t *v) +static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v) { return arch_atomic64_add_return(-i, v); } -static inline long arch_atomic64_fetch_add(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v) { return xadd(&v->counter, i); } -static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_sub(s64 i, atomic64_t *v) { return xadd(&v->counter, -i); } -static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new) +static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { return arch_cmpxchg(&v->counter, old, new); } #define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg -static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new) +static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { return try_cmpxchg(&v->counter, old, new); } -static inline long arch_atomic64_xchg(atomic64_t *v, long new) +static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 new) { return arch_xchg(&v->counter, new); } -static inline void arch_atomic64_and(long i, atomic64_t *v) +static inline void arch_atomic64_and(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "andq %1,%0" : "+m" (v->counter) @@ -199,7 +199,7 @@ static inline void arch_atomic64_and(long i, atomic64_t *v) : "memory"); } -static inline long arch_atomic64_fetch_and(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v) { s64 val = arch_atomic64_read(v); @@ -208,7 +208,7 @@ static inline long arch_atomic64_fetch_and(long i, atomic64_t *v) return val; } -static 
inline void arch_atomic64_or(long i, atomic64_t *v) +static inline void arch_atomic64_or(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "orq %1,%0" : "+m" (v->counter) @@ -216,7 +216,7 @@ static inline void arch_atomic64_or(long i, atomic64_t *v) : "memory"); } -static inline long arch_atomic64_fetch_or(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v) { s64 val = arch_atomic64_read(v); @@ -225,7 +225,7 @@ static inline long arch_atomic64_fetch_or(long i, atomic64_t *v) return val; } -static inline void arch_atomic64_xor(long i, atomic64_t *v) +static inline void arch_atomic64_xor(s64 i, atomic64_t *v) { asm volatile(LOCK_PREFIX "xorq %1,%0" : "+m" (v->counter) @@ -233,7 +233,7 @@ static inline void arch_atomic64_xor(long i, atomic64_t *v) : "memory"); } -static inline long arch_atomic64_fetch_xor(long i, atomic64_t *v) +static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v) { s64 val = arch_atomic64_read(v);
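The 32-bit x86 fetch_and/fetch_or/fetch_xor routines converted above all share one cmpxchg polling pattern. A portable sketch of that pattern follows (user-space C; cmpxchg64_sketch() and the typedefs are illustrative stand-ins for arch_atomic64_cmpxchg() and the kernel types):

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s64;
typedef struct { s64 counter; } atomic64_t;

/* Stand-in for arch_atomic64_cmpxchg(): returns the value observed in
 * the counter, which equals 'old' exactly when the exchange succeeded. */
static s64 cmpxchg64_sketch(atomic64_t *v, s64 old, s64 new)
{
	__atomic_compare_exchange_n(&v->counter, &old, new, false,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return old;
}

/*
 * The fetch_and pattern from the hunks above: guess a current value,
 * try to install the combined result, and adopt the observed value on
 * failure until the cmpxchg sticks. Returns the pre-update value.
 */
static s64 fetch_and_sketch(s64 i, atomic64_t *v)
{
	s64 old, c = 0;

	while ((old = cmpxchg64_sketch(v, c, c & i)) != c)
		c = old;
	return old;
}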
From patchwork Wed May 22 13:22:48 2019
From: Mark Rutland
To: linux-kernel@vger.kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 16/18] locking/atomic: use s64 for atomic64_t on 64-bit
Date: Wed, 22 May 2019 14:22:48 +0100
Message-Id: <20190522132250.26499-17-mark.rutland@arm.com>
In-Reply-To: <20190522132250.26499-1-mark.rutland@arm.com>
References: <20190522132250.26499-1-mark.rutland@arm.com>

Now that all architectures use s64 consistently as the base type for the atomic64 API, let's have the CONFIG_64BIT definition of atomic64_t use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers.

On architectures where atomic64_read(v) is READ_ONCE(v->counter), this patch will cause the return type of atomic64_read() to be s64.

As of this patch, the atomic64 API can be relied upon to consistently return s64 where a value rather than boolean condition is returned. This should make code more robust, and simpler, allowing for the removal of casts previously required to ensure consistent types.

Signed-off-by: Mark Rutland Cc: Peter Zijlstra Cc: Will Deacon --- include/linux/types.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- 2.11.0 diff --git a/include/linux/types.h b/include/linux/types.h index 231114ae38f4..05030f608be3 100644 --- a/include/linux/types.h +++ b/include/linux/types.h @@ -174,7 +174,7 @@ typedef struct { #ifdef CONFIG_64BIT typedef struct { - long counter; + s64 counter; } atomic64_t; #endif
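The end state, reduced to a self-contained user-space sketch (the kernel's s64 and READ_ONCE() are approximated here with stdint.h and a volatile cast): atomic64_read() now returns s64 on 32-bit and 64-bit alike, so callers can print or compare the value without the per-architecture casts mentioned above.

#include <inttypes.h>
#include <stdio.h>

typedef int64_t s64;				/* kernel s64 approximated */
typedef struct { s64 counter; } atomic64_t;

#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

/* With the types.h change above, this is the shape of the generic
 * 64-bit read: the return type no longer depends on the architecture. */
static inline s64 atomic64_read(const atomic64_t *v)
{
	return READ_ONCE(v->counter);
}

int main(void)
{
	atomic64_t stat = { .counter = 1 };

	/* A single, cast-free format specifier works everywhere. */
	printf("counter = %" PRId64 "\n", atomic64_read(&stat));
	return 0;
}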