From patchwork Tue Feb 2 07:16:48 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 374603
From: Zhen Lei
To: Russell King, Greg Kroah-Hartman, Will Deacon, Haojian Zhuang,
    Arnd Bergmann, Rob Herring, Wei Xu, devicetree, linux-arm-kernel,
    linux-kernel
Cc: Zhen Lei
Subject: [PATCH v7 4/4] ARM: Add support for Hisilicon Kunpeng L3 cache controller
Date: Tue, 2 Feb 2021 15:16:48 +0800
Message-ID: <20210202071648.1776-5-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 2.26.0.windows.1
In-Reply-To: <20210202071648.1776-1-thunder.leizhen@huawei.com>
References: <20210202071648.1776-1-thunder.leizhen@huawei.com>
X-Mailing-List: devicetree@vger.kernel.org

Add support for the Hisilicon Kunpeng L3 cache controller as used with the
Kunpeng506 and Kunpeng509 SoCs.

These Hisilicon SoCs support LPAE, so the physical address can be wider than
32 bits, but its actual width does not exceed 36 bits. When a cache
maintenance operation is performed on an address range, the upper 30 bits of
the physical address are written to the L3_MAINT_START and L3_MAINT_END
registers, and the lower 6 bits (the cacheline offset) are ignored.

Signed-off-by: Zhen Lei
Reviewed-by: Arnd Bergmann
---
 arch/arm/mm/Kconfig            |  10 ++
 arch/arm/mm/Makefile           |   1 +
 arch/arm/mm/cache-kunpeng-l3.c | 178 +++++++++++++++++++++++++++++++++
 3 files changed, 189 insertions(+)
 create mode 100644 arch/arm/mm/cache-kunpeng-l3.c

-- 
2.26.0.106.g9fadedd

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 02692fbe2db5c59..d2082503de053d2 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -1070,6 +1070,16 @@ config CACHE_XSC3L2
 	help
 	  This option enables the L2 cache on XScale3.
 
+config CACHE_KUNPENG_L3
+	bool "Enable the Hisilicon Kunpeng L3 cache controller"
+	depends on ARCH_KUNPENG50X && OF
+	default y
+	select OUTER_CACHE
+	help
+	  This option enables the Kunpeng L3 cache controller on Hisilicon
+	  Kunpeng506 and Kunpeng509 SoCs. It supports a maximum of 36-bit
+	  physical addresses.
+
 config ARM_L1_CACHE_SHIFT_6
 	bool
 	default y if CPU_V7
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 3510503bc5e688b..ececc5489e353eb 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -112,6 +112,7 @@ obj-$(CONFIG_CACHE_L2X0_PMU)	+= cache-l2x0-pmu.o
 obj-$(CONFIG_CACHE_XSC3L2)	+= cache-xsc3l2.o
 obj-$(CONFIG_CACHE_TAUROS2)	+= cache-tauros2.o
 obj-$(CONFIG_CACHE_UNIPHIER)	+= cache-uniphier.o
+obj-$(CONFIG_CACHE_KUNPENG_L3)	+= cache-kunpeng-l3.o
 
 KASAN_SANITIZE_kasan_init.o	:= n
 obj-$(CONFIG_KASAN)		+= kasan_init.o
diff --git a/arch/arm/mm/cache-kunpeng-l3.c b/arch/arm/mm/cache-kunpeng-l3.c
new file mode 100644
index 000000000000000..64f892de9d68058
--- /dev/null
+++ b/arch/arm/mm/cache-kunpeng-l3.c
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 Hisilicon Limited.
+ */
+
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/of_address.h>
+#include <linux/spinlock.h>
+
+#include <asm/outercache.h>
+
+#define L3_CACHE_LINE_SHIFT	6
+
+#define L3_CTRL			0x0
+#define L3_CTRL_ENABLE		(1U << 0)
+#define L3_CTRL_DISABLE		(0U << 0)
+
+#define L3_AUCTRL		0x4
+#define L3_AUCTRL_EVENT_EN	BIT(23)
+#define L3_AUCTRL_ECC_EN	BIT(8)
+
+#define L3_MAINT_CTRL		0x20
+#define L3_MAINT_RANGE_MASK	GENMASK(3, 3)
+#define L3_MAINT_RANGE_ALL	(0U << 3)
+#define L3_MAINT_RANGE_ADDR	(1U << 3)
+#define L3_MAINT_TYPE_MASK	GENMASK(2, 1)
+#define L3_MAINT_TYPE_CLEAN	(1U << 1)
+#define L3_MAINT_TYPE_INV	(2U << 1)
+#define L3_MAINT_TYPE_FLUSH	(3U << 1)
+#define L3_MAINT_STATUS_MASK	GENMASK(0, 0)
+#define L3_MAINT_STATUS_START	(1U << 0)
+#define L3_MAINT_STATUS_END	(0U << 0)
+
+#define L3_MAINT_START		0x24
+#define L3_MAINT_END		0x28
+
+static DEFINE_RAW_SPINLOCK(l3cache_lock);
+static void __iomem *l3_ctrl_base;
+
+/*
+ * All read and write operations on the L3 cache registers are protected by
+ * the spinlock, except in l3cache_init(). Each time an L3 cache operation is
+ * performed, all related information is written into its registers, so there
+ * is no memory ordering problem when only the _relaxed() accessors are used.
+ * This gives a measurable performance improvement:
+ * 1) readl_relaxed() is about 20ns faster than readl().
+ * 2) writel_relaxed() is about 123ns faster than writel().
+ */
+static void l3cache_maint_common(u32 range, u32 op_type)
+{
+	u32 reg;
+
+	reg = readl_relaxed(l3_ctrl_base + L3_MAINT_CTRL);
+	reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
+	reg |= range | op_type;
+	reg |= L3_MAINT_STATUS_START;
+	writel_relaxed(reg, l3_ctrl_base + L3_MAINT_CTRL);
+
+	/* Wait until the hardware maintenance operation is complete. */
+	do {
+		cpu_relax();
+		reg = readl_relaxed(l3_ctrl_base + L3_MAINT_CTRL);
+	} while ((reg & L3_MAINT_STATUS_MASK) != L3_MAINT_STATUS_END);
+}
+
+static void l3cache_maint_range(phys_addr_t start, phys_addr_t end, u32 op_type)
+{
+	start = start >> L3_CACHE_LINE_SHIFT;
+	end = ((end - 1) >> L3_CACHE_LINE_SHIFT) + 1;
+
+	writel_relaxed(start, l3_ctrl_base + L3_MAINT_START);
+	writel_relaxed(end, l3_ctrl_base + L3_MAINT_END);
+
+	l3cache_maint_common(L3_MAINT_RANGE_ADDR, op_type);
+}
+
+static inline void l3cache_flush_all_nolock(void)
+{
+	l3cache_maint_common(L3_MAINT_RANGE_ALL, L3_MAINT_TYPE_FLUSH);
+}
+
+static void l3cache_flush_all(void)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&l3cache_lock, flags);
+	l3cache_flush_all_nolock();
+	raw_spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_inv_range(phys_addr_t start, phys_addr_t end)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&l3cache_lock, flags);
+	l3cache_maint_range(start, end, L3_MAINT_TYPE_INV);
+	raw_spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_clean_range(phys_addr_t start, phys_addr_t end)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&l3cache_lock, flags);
+	l3cache_maint_range(start, end, L3_MAINT_TYPE_CLEAN);
+	raw_spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_flush_range(phys_addr_t start, phys_addr_t end)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&l3cache_lock, flags);
+	l3cache_maint_range(start, end, L3_MAINT_TYPE_FLUSH);
+	raw_spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_disable(void)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&l3cache_lock, flags);
+	l3cache_flush_all_nolock();
+	writel_relaxed(L3_CTRL_DISABLE, l3_ctrl_base + L3_CTRL);
+	raw_spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static const struct of_device_id l3cache_ids[] __initconst = {
+	{.compatible = "hisilicon,kunpeng-l3cache", .data = NULL},
+	{}
+};
+
+static int __init l3cache_init(void)
+{
+	u32 reg;
+	struct device_node *node;
+
+	node = of_find_matching_node(NULL, l3cache_ids);
+	if (!node)
+		return -ENODEV;
+
+	l3_ctrl_base = of_iomap(node, 0);
+	if (!l3_ctrl_base) {
+		pr_err("failed to map Kunpeng L3 cache controller registers\n");
+		return -ENOMEM;
+	}
+
+	reg = readl_relaxed(l3_ctrl_base + L3_CTRL);
+	if (!(reg & L3_CTRL_ENABLE)) {
+		/*
+		 * Ensure that no L3 cache hardware maintenance operations are
+		 * being performed before enabling the L3 cache. Wait for any
+		 * outstanding operation to finish.
+		 */
+		do {
+			cpu_relax();
+			reg = readl_relaxed(l3_ctrl_base + L3_MAINT_CTRL);
+		} while ((reg & L3_MAINT_STATUS_MASK) != L3_MAINT_STATUS_END);
+
+		reg = readl_relaxed(l3_ctrl_base + L3_AUCTRL);
+		reg |= L3_AUCTRL_EVENT_EN | L3_AUCTRL_ECC_EN;
+		writel_relaxed(reg, l3_ctrl_base + L3_AUCTRL);
+
+		writel_relaxed(L3_CTRL_ENABLE, l3_ctrl_base + L3_CTRL);
+	}
+
+	outer_cache.inv_range = l3cache_inv_range;
+	outer_cache.clean_range = l3cache_clean_range;
+	outer_cache.flush_range = l3cache_flush_range;
+	outer_cache.flush_all = l3cache_flush_all;
+	outer_cache.disable = l3cache_disable;
+
+	pr_info("Hisilicon Kunpeng L3 cache controller enabled\n");
+
+	return 0;
+}
+arch_initcall(l3cache_init);
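
As a side note, the start/end encoding performed by l3cache_maint_range() can
be illustrated with a small stand-alone sketch. This is not part of the patch,
and the addresses used below are made up purely for illustration: dropping the
6-bit cacheline offset from a 36-bit LPAE physical address leaves at most 30
bits, which always fits in the 32-bit L3_MAINT_START and L3_MAINT_END
registers.

#include <stdint.h>
#include <stdio.h>

#define L3_CACHE_LINE_SHIFT	6

int main(void)
{
	/* Illustrative 36-bit physical address range (end is exclusive). */
	uint64_t start = 0xf00000040ULL;
	uint64_t end   = 0xf00001000ULL;

	/* Same conversion as l3cache_maint_range(): drop the cacheline offset. */
	uint32_t maint_start = (uint32_t)(start >> L3_CACHE_LINE_SHIFT);
	uint32_t maint_end   = (uint32_t)(((end - 1) >> L3_CACHE_LINE_SHIFT) + 1);

	printf("L3_MAINT_START = 0x%08x\n", maint_start);	/* 0x3c000001 */
	printf("L3_MAINT_END   = 0x%08x\n", maint_end);		/* 0x3c000040 */
	return 0;
}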