From patchwork Fri Mar 1 14:03:38 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 159499
From: Will Deacon
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann,
    Peter Zijlstra, Andrea Parri, Palmer Dabbelt, Daniel Lustig,
    David Howells, Alan Stern, Linus Torvalds, "Maciej W.
    Rozycki", Paul Burton, Ingo Molnar, Yoshinori Sato, Rich Felker,
    Tony Luck
Subject: [PATCH 10/20] mips/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
Date: Fri, 1 Mar 2019 14:03:38 +0000
Message-Id: <20190301140348.25175-11-will.deacon@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190301140348.25175-1-will.deacon@arm.com>
References: <20190301140348.25175-1-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The mmiowb() macro is horribly difficult to use and drivers will continue
to work most of the time if they omit a call when it is required.

Rather than rely on driver authors getting this right, push mmiowb() into
arch_spin_unlock() for mips. If this is deemed to be a performance issue,
a subsequent optimisation could make use of ARCH_HAS_MMIOWB to elide the
barrier in cases where no I/O writes were performed inside the critical
section.

Signed-off-by: Will Deacon
---
 arch/mips/include/asm/Kbuild     |  1 -
 arch/mips/include/asm/io.h       |  3 ---
 arch/mips/include/asm/mmiowb.h   | 11 +++++++++++
 arch/mips/include/asm/spinlock.h | 15 +++++++++++++++
 4 files changed, 26 insertions(+), 4 deletions(-)
 create mode 100644 arch/mips/include/asm/mmiowb.h

-- 
2.11.0

diff --git a/arch/mips/include/asm/Kbuild b/arch/mips/include/asm/Kbuild
index 5653b1e47dd0..f15d5db5dd67 100644
--- a/arch/mips/include/asm/Kbuild
+++ b/arch/mips/include/asm/Kbuild
@@ -13,7 +13,6 @@ generic-y += irq_work.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
-generic-y += mmiowb.h
 generic-y += msi.h
 generic-y += parport.h
 generic-y += percpu.h
diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
index 845fbbc7a2e3..29997e42480e 100644
--- a/arch/mips/include/asm/io.h
+++ b/arch/mips/include/asm/io.h
@@ -102,9 +102,6 @@ static inline void set_io_port_base(unsigned long base)
 #define iobarrier_w() wmb()
 #define iobarrier_sync() iob()
 
-/* Some callers use this older API instead. */
-#define mmiowb() iobarrier_w()
-
 /*
  * virt_to_phys - map virtual addresses to physical
  * @address: address to remap
diff --git a/arch/mips/include/asm/mmiowb.h b/arch/mips/include/asm/mmiowb.h
new file mode 100644
index 000000000000..a40824e3ef8e
--- /dev/null
+++ b/arch/mips/include/asm/mmiowb.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_MMIOWB_H
+#define _ASM_MMIOWB_H
+
+#include <asm/io.h>
+
+#define mmiowb() iobarrier_w()
+
+#include <asm-generic/mmiowb.h>
+
+#endif /* _ASM_MMIOWB_H */
diff --git a/arch/mips/include/asm/spinlock.h b/arch/mips/include/asm/spinlock.h
index ee81297d9117..8a88eb265516 100644
--- a/arch/mips/include/asm/spinlock.h
+++ b/arch/mips/include/asm/spinlock.h
@@ -11,6 +11,21 @@
 
 #include <asm/processor.h>
 #include <asm/qrwlock.h>
+
+#include <asm-generic/qspinlock_types.h>
+
+#define queued_spin_unlock queued_spin_unlock
+/**
+ * queued_spin_unlock - release a queued spinlock
+ * @lock : Pointer to queued spinlock structure
+ */
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+	/* This could be optimised with ARCH_HAS_MMIOWB */
+	mmiowb();
+	smp_store_release(&lock->locked, 0);
+}
+
 #include <asm/qspinlock.h>
 
 #endif /* _ASM_SPINLOCK_H */
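
For readers who have not met the mmiowb() contract before, the bug this
series defends against looks roughly like the (entirely hypothetical)
driver below; the device, lock, register and function names are invented
for illustration, and only the lock/writel/unlock shape matters:

#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define FOO_DOORBELL	0x10	/* hypothetical register offset */

struct foo_dev {
	spinlock_t lock;
	void __iomem *regs;
};

static void foo_hw_kick(struct foo_dev *dev, u32 val)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	writel(val, dev->regs + FOO_DOORBELL);
	/*
	 * Without this patch, an explicit mmiowb() is needed here on
	 * mips: the spinlock only orders normal memory accesses, so
	 * the writel() above may reach the device after MMIO writes
	 * issued by the next CPU to take dev->lock.  With mmiowb()
	 * folded into arch_spin_unlock(), the barrier below becomes
	 * redundant.
	 */
	mmiowb();
	spin_unlock_irqrestore(&dev->lock, flags);
}

Getting every driver author to remember that barrier is exactly what the
series gives up on, hence the unconditional mmiowb() in
queued_spin_unlock() above.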
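
The ARCH_HAS_MMIOWB optimisation mentioned in the commit message would
track whether an MMIO write actually happened while the lock was held and
only then pay for the barrier. A rough sketch of that idea, modelled on
the generic mmiowb tracking added earlier in this series; the field and
helper names here are illustrative, not the exact asm-generic code:

#include <linux/percpu.h>
#include <linux/types.h>

struct mmiowb_state {
	u16 nesting_count;	/* spinlocks currently held by this CPU */
	u16 mmiowb_pending;	/* MMIO write seen inside a lock section */
};
static DEFINE_PER_CPU(struct mmiowb_state, mmiowb_state);

/* Called from the I/O accessors (writel() and friends). */
static inline void mmiowb_set_pending(void)
{
	struct mmiowb_state *ms = this_cpu_ptr(&mmiowb_state);

	if (ms->nesting_count)
		ms->mmiowb_pending = ms->nesting_count;
}

/* Called on lock acquisition. */
static inline void mmiowb_spin_lock(void)
{
	struct mmiowb_state *ms = this_cpu_ptr(&mmiowb_state);

	if (ms->nesting_count++ == 0)
		ms->mmiowb_pending = 0;
}

/* Called on unlock: only issue the barrier if an MMIO write occurred. */
static inline void mmiowb_spin_unlock(void)
{
	struct mmiowb_state *ms = this_cpu_ptr(&mmiowb_state);

	if (ms->mmiowb_pending) {
		ms->mmiowb_pending = 0;
		mmiowb();
	}
	ms->nesting_count--;
}

Critical sections that never touch MMIO would then skip the barrier
entirely, at the cost of a per-CPU flag update in the I/O accessors.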