From patchwork Fri Feb 22 18:50:09 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 159053
From: Will Deacon <will.deacon@arm.com>
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann, Peter Zijlstra,
    Andrea Parri, Palmer Dabbelt, Daniel Lustig, David Howells, Alan Stern,
    Linus Torvalds, "Maciej W.
    Rozycki", Paul Burton, Ingo Molnar, Yoshinori Sato, Rich Felker,
    Tony Luck
Subject: [RFC PATCH 03/20] mmiowb: Hook up mmiowb helpers to spinlocks and
 generic I/O accessors
Date: Fri, 22 Feb 2019 18:50:09 +0000
Message-Id: <20190222185026.10973-4-will.deacon@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190222185026.10973-1-will.deacon@arm.com>
References: <20190222185026.10973-1-will.deacon@arm.com>

Removing explicit calls to mmiowb() from driver code means that we must
now call into the generic mmiowb_spin_{lock,unlock}() functions from the
core spinlock code. In order to elide barriers following critical
sections without any I/O writes, we also hook into the asm-generic I/O
routines.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/io.h        |  3 ++-
 include/linux/spinlock.h        | 11 ++++++++++-
 kernel/locking/spinlock_debug.c |  6 +++++-
 3 files changed, 17 insertions(+), 3 deletions(-)

-- 
2.11.0

diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 303871651f8a..bc490a746602 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -19,6 +19,7 @@
 #include <asm-generic/iomap.h>
 #endif
 
+#include <asm/mmiowb.h>
 #include <asm-generic/pci_iomap.h>
 
 #ifndef mmiowb
@@ -49,7 +50,7 @@
 
 /* serialize device access against a spin_unlock, usually handled there. */
 #ifndef __io_aw
-#define __io_aw()	barrier()
+#define __io_aw()	mmiowb_set_pending()
 #endif
 
 #ifndef __io_pbw
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index e089157dcf97..4298b1b31d9b 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -57,6 +57,7 @@
 #include <linux/stringify.h>
 #include <linux/bottom_half.h>
 #include <asm/barrier.h>
+#include <asm/mmiowb.h>
 
 /*
@@ -177,6 +178,7 @@ do { \
 static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
 {
 	__acquire(lock);
+	mmiowb_spin_lock();
 	arch_spin_lock(&lock->raw_lock);
 }
 
@@ -188,16 +190,23 @@ static inline void
 do_raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long *flags)
 	__acquires(lock)
 {
 	__acquire(lock);
+	mmiowb_spin_lock();
 	arch_spin_lock_flags(&lock->raw_lock, *flags);
 }
 
 static inline int do_raw_spin_trylock(raw_spinlock_t *lock)
 {
-	return arch_spin_trylock(&(lock)->raw_lock);
+	int ret = arch_spin_trylock(&(lock)->raw_lock);
+
+	if (ret)
+		mmiowb_spin_lock();
+
+	return ret;
 }
 
 static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 {
+	mmiowb_spin_unlock();
 	arch_spin_unlock(&lock->raw_lock);
 	__release(lock);
 }
diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 9aa0fccd5d43..654484b6e70c 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -109,6 +109,7 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
  */
 void do_raw_spin_lock(raw_spinlock_t *lock)
 {
+	mmiowb_spin_lock();
 	debug_spin_lock_before(lock);
 	arch_spin_lock(&lock->raw_lock);
 	debug_spin_lock_after(lock);
@@ -118,8 +119,10 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 {
 	int ret = arch_spin_trylock(&lock->raw_lock);
 
-	if (ret)
+	if (ret) {
+		mmiowb_spin_lock();
 		debug_spin_lock_after(lock);
+	}
 #ifndef CONFIG_SMP
 	/*
 	 * Must not happen on UP:
@@ -131,6 +134,7 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 
 void do_raw_spin_unlock(raw_spinlock_t *lock)
 {
+	mmiowb_spin_unlock();
 	debug_spin_unlock(lock);
 	arch_spin_unlock(&lock->raw_lock);
 }