From patchwork Fri Mar  1 14:03:37 2019
X-Patchwork-Submitter: Will Deacon <will.deacon@arm.com>
X-Patchwork-Id: 159488
From: Will Deacon <will.deacon@arm.com>
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann,
    Peter Zijlstra, Andrea Parri, Palmer Dabbelt, Daniel Lustig,
    David Howells, Alan Stern, Linus Torvalds, "Maciej W. Rozycki",
    Paul Burton, Ingo Molnar, Yoshinori Sato, Rich Felker, Tony Luck
Subject: [PATCH 09/20] sh/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
Date: Fri, 1 Mar 2019 14:03:37 +0000
Message-Id: <20190301140348.25175-10-will.deacon@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190301140348.25175-1-will.deacon@arm.com>
References: <20190301140348.25175-1-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The mmiowb() macro is horribly difficult to use and drivers will continue
to work most of the time if they omit a call when it is required.

Rather than rely on driver authors getting this right, push mmiowb() into
arch_spin_unlock() for sh. If this is deemed to be a performance issue,
a subsequent optimisation could make use of ARCH_HAS_MMIOWB to elide the
barrier in cases where no I/O writes were performed inside the critical
section.

Cc: Yoshinori Sato
Cc: Rich Felker
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/sh/include/asm/Kbuild          |  1 -
 arch/sh/include/asm/io.h            |  3 ---
 arch/sh/include/asm/mmiowb.h        | 12 ++++++++++++
 arch/sh/include/asm/spinlock-llsc.h |  2 ++
 4 files changed, 14 insertions(+), 4 deletions(-)
 create mode 100644 arch/sh/include/asm/mmiowb.h

-- 
2.11.0

diff --git a/arch/sh/include/asm/Kbuild b/arch/sh/include/asm/Kbuild
index e3f6926c4b86..a6ef3fee5f85 100644
--- a/arch/sh/include/asm/Kbuild
+++ b/arch/sh/include/asm/Kbuild
@@ -13,7 +13,6 @@ generic-y += local.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
-generic-y += mmiowb.h
 generic-y += parport.h
 generic-y += percpu.h
 generic-y += preempt.h
diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h
index 4f7f235f15f8..c28e37a344ad 100644
--- a/arch/sh/include/asm/io.h
+++ b/arch/sh/include/asm/io.h
@@ -229,9 +229,6 @@ __BUILD_IOPORT_STRING(q, u64)
 
 #define IO_SPACE_LIMIT 0xffffffff
 
-/* synco on SH-4A, otherwise a nop */
-#define mmiowb()			wmb()
-
 /* We really want to try and get these to memcpy etc */
 void memcpy_fromio(void *, const volatile void __iomem *, unsigned long);
 void memcpy_toio(volatile void __iomem *, const void *, unsigned long);
diff --git a/arch/sh/include/asm/mmiowb.h b/arch/sh/include/asm/mmiowb.h
new file mode 100644
index 000000000000..535d59735f1d
--- /dev/null
+++ b/arch/sh/include/asm/mmiowb.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_SH_MMIOWB_H
+#define __ASM_SH_MMIOWB_H
+
+#include <asm/barrier.h>
+
+/* synco on SH-4A, otherwise a nop */
+#define mmiowb()			wmb()
+
+#include <asm-generic/mmiowb.h>
+
+#endif /* __ASM_SH_MMIOWB_H */
diff --git a/arch/sh/include/asm/spinlock-llsc.h b/arch/sh/include/asm/spinlock-llsc.h
index 786ee0fde3b0..7fd929cd2e7a 100644
--- a/arch/sh/include/asm/spinlock-llsc.h
+++ b/arch/sh/include/asm/spinlock-llsc.h
@@ -47,6 +47,8 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	unsigned long tmp;
 
+	/* This could be optimised with ARCH_HAS_MMIOWB */
+	mmiowb();
 	__asm__ __volatile__ (
 		"mov		#1, %0	! arch_spin_unlock	\n\t"
 		"mov.l		%0, @%1				\n\t"