From patchwork Fri Mar  1 14:03:31 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 159483
From: Will Deacon <will.deacon@arm.com>
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann,
    Peter Zijlstra, Andrea Parri, Palmer Dabbelt, Daniel Lustig,
    David Howells, Alan Stern, Linus Torvalds, "Maciej W. Rozycki",
    Paul Burton, Ingo Molnar, Yoshinori Sato, Rich Felker, Tony Luck
Subject: [PATCH 03/20] mmiowb: Hook up mmiowb helpers to spinlocks and
 generic I/O accessors
Date: Fri, 1 Mar 2019 14:03:31 +0000
Message-Id: <20190301140348.25175-4-will.deacon@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190301140348.25175-1-will.deacon@arm.com>
References: <20190301140348.25175-1-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Removing explicit calls to mmiowb() from driver code means that we must
now call into the generic mmiowb_spin_{lock,unlock}() functions from the
core spinlock code. In order to elide barriers following critical
sections without any I/O writes, we also hook into the asm-generic I/O
routines.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/io.h        |  3 ++-
 include/linux/spinlock.h        | 11 ++++++++++-
 kernel/locking/spinlock_debug.c |  6 +++++-
 3 files changed, 17 insertions(+), 3 deletions(-)

-- 
2.11.0

diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 303871651f8a..bc490a746602 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -19,6 +19,7 @@
 #include <asm-generic/iomap.h>
 #endif
 
+#include <asm/mmiowb.h>
 #include <asm-generic/pci_iospace.h>
 
 #ifndef mmiowb
@@ -49,7 +50,7 @@
 
 /* serialize device access against a spin_unlock, usually handled there. */
 #ifndef __io_aw
-#define __io_aw()	barrier()
+#define __io_aw()	mmiowb_set_pending()
 #endif
 
 #ifndef __io_pbw
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index e089157dcf97..4298b1b31d9b 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -57,6 +57,7 @@
 #include <linux/stringify.h>
 #include <linux/bottom_half.h>
 #include <asm/barrier.h>
+#include <asm/mmiowb.h>
 
 /*
@@ -177,6 +178,7 @@ do {								\
 static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
 {
 	__acquire(lock);
+	mmiowb_spin_lock();
 	arch_spin_lock(&lock->raw_lock);
 }
 
@@ -188,16 +190,23 @@ static inline void
 do_raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long *flags)
 __acquires(lock)
 {
 	__acquire(lock);
+	mmiowb_spin_lock();
 	arch_spin_lock_flags(&lock->raw_lock, *flags);
 }
 
 static inline int do_raw_spin_trylock(raw_spinlock_t *lock)
 {
-	return arch_spin_trylock(&(lock)->raw_lock);
+	int ret = arch_spin_trylock(&(lock)->raw_lock);
+
+	if (ret)
+		mmiowb_spin_lock();
+
+	return ret;
 }
 
 static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 {
+	mmiowb_spin_unlock();
 	arch_spin_unlock(&lock->raw_lock);
 	__release(lock);
 }
diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 9aa0fccd5d43..654484b6e70c 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -109,6 +109,7 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
  */
 void do_raw_spin_lock(raw_spinlock_t *lock)
 {
+	mmiowb_spin_lock();
 	debug_spin_lock_before(lock);
 	arch_spin_lock(&lock->raw_lock);
 	debug_spin_lock_after(lock);
@@ -118,8 +119,10 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 {
 	int ret = arch_spin_trylock(&lock->raw_lock);
 
-	if (ret)
+	if (ret) {
+		mmiowb_spin_lock();
 		debug_spin_lock_after(lock);
+	}
 #ifndef CONFIG_SMP
 	/*
 	 * Must not happen on UP:
@@ -131,6 +134,7 @@ int do_raw_spin_trylock(raw_spinlock_t *lock)
 
 void do_raw_spin_unlock(raw_spinlock_t *lock)
 {
+	mmiowb_spin_unlock();
 	debug_spin_unlock(lock);
 	arch_spin_unlock(&lock->raw_lock);
 }
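The barrier elision these hooks buy works through per-CPU state introduced
earlier in the series (asm-generic/mmiowb.h, not shown in this patch):
mmiowb_set_pending() records that an MMIO write happened inside a spinlock
critical section, and mmiowb_spin_unlock() only pays for the expensive
mmiowb() barrier when that flag is actually set. A simplified, single-CPU,
user-space model of that logic (the kernel keeps this state per CPU and
the barriers_issued counter here is purely for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model of the mmiowb tracking state; the real kernel
 * keeps one of these per CPU.
 */
struct mmiowb_state {
	unsigned int nesting_count;	/* spinlock critical-section depth */
	bool mmiowb_pending;		/* MMIO write seen since last barrier? */
};

static struct mmiowb_state state;
static int barriers_issued;		/* demo-only counter */

/* Stand-in for the architecture's I/O write barrier. */
static void mmiowb(void)
{
	barriers_issued++;
}

/* Hooked into the I/O accessors via __io_aw() after each MMIO write. */
static void mmiowb_set_pending(void)
{
	if (state.nesting_count)
		state.mmiowb_pending = true;
}

/* Hooked into do_raw_spin_lock() and friends on every acquisition. */
static void mmiowb_spin_lock(void)
{
	state.nesting_count++;
}

/*
 * Hooked into do_raw_spin_unlock(): the barrier is issued only if an
 * MMIO write actually happened since the last barrier.
 */
static void mmiowb_spin_unlock(void)
{
	if (state.mmiowb_pending) {
		state.mmiowb_pending = false;
		mmiowb();
	}
	state.nesting_count--;
}
```

The effect is that critical sections which never touch a device pay nothing
at unlock, while architectures that do not need mmiowb() at all can define
it away so the whole mechanism compiles to nothing.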