From patchwork Fri Mar 1 14:03:41 2019
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 159491
From: Will Deacon
To: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Will Deacon, "Paul E. McKenney",
    Benjamin Herrenschmidt, Michael Ellerman, Arnd Bergmann, Peter Zijlstra,
    Andrea Parri, Palmer Dabbelt, Daniel Lustig, David Howells, Alan Stern,
    Linus Torvalds,
    "Maciej W. Rozycki", Paul Burton, Ingo Molnar, Yoshinori Sato,
    Rich Felker, Tony Luck
Subject: [PATCH 13/20] riscv/mmiowb: Hook up mmiowb() implementation to asm-generic code
Date: Fri, 1 Mar 2019 14:03:41 +0000
Message-Id: <20190301140348.25175-14-will.deacon@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190301140348.25175-1-will.deacon@arm.com>
References: <20190301140348.25175-1-will.deacon@arm.com>

In a bid to kill off explicit mmiowb() usage in driver code, hook up the
asm-generic mmiowb() tracking code for riscv, so that an mmiowb() is
automatically issued from spin_unlock() if an I/O write was performed in
the critical section.

Signed-off-by: Will Deacon
---
 arch/riscv/Kconfig              |  1 +
 arch/riscv/include/asm/Kbuild   |  1 -
 arch/riscv/include/asm/io.h     | 15 ++-------------
 arch/riscv/include/asm/mmiowb.h | 14 ++++++++++++++
 4 files changed, 17 insertions(+), 14 deletions(-)
 create mode 100644 arch/riscv/include/asm/mmiowb.h
-- 
2.11.0

Reviewed-by: Palmer Dabbelt

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 515fc3cc9687..08f4415203c5 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -49,6 +49,7 @@ config RISCV
 	select RISCV_TIMER
 	select GENERIC_IRQ_MULTI_HANDLER
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_MMIOWB
 
 config MMU
 	def_bool y
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 221cd2ec78a4..cccd12cf27d4 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -21,7 +21,6 @@ generic-y += kvm_para.h
 generic-y += local.h
 generic-y += local64.h
 generic-y += mm-arch-hooks.h
-generic-y += mmiowb.h
 generic-y += mutex.h
 generic-y += percpu.h
 generic-y += preempt.h
diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
index 1d9c1376dc64..744fd92e77bc 100644
--- a/arch/riscv/include/asm/io.h
+++ b/arch/riscv/include/asm/io.h
@@ -20,6 +20,7 @@
 #define _ASM_RISCV_IO_H
 
 #include <linux/types.h>
+#include <asm/mmiowb.h>
 
 extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
 
@@ -100,18 +101,6 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
 #endif
 
 /*
- * FIXME: I'm flip-flopping on whether or not we should keep this or enforce
- * the ordering with I/O on spinlocks like PowerPC does. The worry is that
- * drivers won't get this correct, but I also don't want to introduce a fence
- * into the lock code that otherwise only uses AMOs (and is essentially defined
- * by the ISA to be correct). For now I'm leaving this here: "o,w" is
- * sufficient to ensure that all writes to the device have completed before the
- * write to the spinlock is allowed to commit. I surmised this from reading
- * "ACQUIRES VS I/O ACCESSES" in memory-barriers.txt.
- */
-#define mmiowb() __asm__ __volatile__ ("fence o,w" : : : "memory");
-
-/*
  * Unordered I/O memory access primitives. These are even more relaxed than
  * the relaxed versions, as they don't even order accesses between successive
  * operations to the I/O regions.
@@ -165,7 +154,7 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
 #define __io_br()	do {} while (0)
 #define __io_ar(v)	__asm__ __volatile__ ("fence i,r" : : : "memory");
 #define __io_bw()	__asm__ __volatile__ ("fence w,o" : : : "memory");
-#define __io_aw()	do {} while (0)
+#define __io_aw()	mmiowb_set_pending()
 
 #define readb(c)	({ u8 __v; __io_br(); __v = readb_cpu(c); __io_ar(__v); __v; })
 #define readw(c)	({ u16 __v; __io_br(); __v = readw_cpu(c); __io_ar(__v); __v; })
diff --git a/arch/riscv/include/asm/mmiowb.h b/arch/riscv/include/asm/mmiowb.h
new file mode 100644
index 000000000000..5d7e3a2b4e3b
--- /dev/null
+++ b/arch/riscv/include/asm/mmiowb.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_MMIOWB_H
+#define _ASM_RISCV_MMIOWB_H
+
+/*
+ * "o,w" is sufficient to ensure that all writes to the device have completed
+ * before the write to the spinlock is allowed to commit.
+ */
+#define mmiowb() __asm__ __volatile__ ("fence o,w" : : : "memory");
+
+#include <asm-generic/mmiowb.h>
+
+#endif /* ASM_RISCV_MMIOWB_H */
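
Editor's note: to illustrate the mechanism being hooked up above, __io_aw()
runs after each MMIO write accessor (writeb/writew/writel) and now records
that an I/O write has happened; the generic tracking code can then issue
mmiowb() from spin_unlock() only when such a write is pending. The
stand-alone C program below is a minimal sketch of that idea, not the
kernel's asm-generic/mmiowb.h implementation: the *_model names and the
single global flag are illustrative only (the real code keeps per-CPU
state), and the printf stands in for the riscv "fence o,w" instruction.

/*
 * Simplified, user-space model of mmiowb() tracking.
 * NOT the kernel's asm-generic/mmiowb.h; names ending in "_model"
 * and the global flag are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdio.h>

static bool mmiowb_pending;

/* Stands in for the riscv "fence o,w" barrier. */
static void mmiowb(void)
{
	printf("mmiowb(): order device writes before the unlock store\n");
}

/*
 * Models writel(): perform the MMIO store, then mark a write as pending.
 * This is the role of __io_aw() -> mmiowb_set_pending() in the patch.
 */
static void writel_model(unsigned int val, volatile unsigned int *addr)
{
	*addr = val;
	mmiowb_pending = true;
}

/*
 * Models the unlock path: issue the barrier only if an I/O write
 * happened inside the critical section, then release the lock.
 */
static void spin_unlock_model(void)
{
	if (mmiowb_pending) {
		mmiowb_pending = false;
		mmiowb();
	}
	/* ...the actual release store of the lock would follow here... */
}

int main(void)
{
	volatile unsigned int fake_device_register = 0;

	/* spin_lock() would be here */
	writel_model(0x1, &fake_device_register);
	spin_unlock_model();	/* barrier issued automatically */
	return 0;
}

With this arrangement, drivers no longer need to call mmiowb() by hand
before releasing a lock, which is the "kill off explicit mmiowb() usage"
goal stated in the commit message.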