From patchwork Thu Nov 26 18:45:47 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: AngeloGioacchino Del Regno
X-Patchwork-Id: 333107
From: AngeloGioacchino Del Regno
To: linux-arm-msm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-pm@vger.kernel.org, ulf.hansson@linaro.org, jorge.ramirez-ortiz@linaro.org, broonie@kernel.org, lgirdwood@gmail.com, daniel.lezcano@linaro.org, nks@flawful.org, bjorn.andersson@linaro.org, agross@kernel.org, robh+dt@kernel.org, viresh.kumar@linaro.org, rjw@rjwysocki.net, konrad.dybcio@somainline.org, martin.botka@somainline.org, marijn.suijten@somainline.org, phone-devel@vger.kernel.org, AngeloGioacchino Del Regno
Subject: [PATCH 01/13] cpuidle: qcom_spm: Detach state machine from main SPM handling
Date: Thu, 26 Nov 2020 19:45:47 +0100
Message-Id: <20201126184559.3052375-2-angelogioacchino.delregno@somainline.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org>
References: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org>
Precedence: bulk
List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org

In commit a871be6b8eee the SPM driver was converted to a generic CPUidle driver: that was done mainly to simplify the driver, and it was a welcome simplification; however, it ignored the fact that the SPM hardware is not used only on the 32-bit ARM architecture. In preparation for enabling SPM features on AArch64/ARM64, split the cpuidle-qcom-spm driver in two: the CPUidle-related state machine (currently used only on ARM SoCs) stays where it is, while the SPM communication handling moves back into soc/qcom/spm.c, taking care not to discard the simplifications introduced in the aforementioned commit.
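For reference, the lookup path that ties the two halves back together after the split looks roughly like this (condensed from the spm_cpuidle_register() hunk below; the helper name spm_get_drv_for_cpu() is illustrative and not part of the patch):

#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <soc/qcom/spm.h>

/*
 * Resolve the per-CPU "qcom,saw" phandle to the SPM platform device that
 * soc/qcom/spm.c probed earlier, and grab its drvdata (struct spm_driver_data).
 */
static struct spm_driver_data *spm_get_drv_for_cpu(int cpu)
{
	struct device_node *cpu_node, *saw_node;
	struct platform_device *pdev;

	cpu_node = of_cpu_device_node_get(cpu);
	if (!cpu_node)
		return NULL;

	saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0);
	pdev = saw_node ? of_find_device_by_node(saw_node) : NULL;
	of_node_put(saw_node);
	of_node_put(cpu_node);

	return pdev ? dev_get_drvdata(&pdev->dev) : NULL;
}

The CPUidle half only ever consumes that drvdata; all register programming stays behind the interface exported by soc/qcom/spm.c.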
Since now the "two drivers" are split, the SCM dependency in the main SPM handling is gone and for this reason it was also possible to move the SPM initialization early: this will also make sure that whenever the SAW CPUIdle driver is getting initialized, the SPM driver will be ready to do the job. Please note that the anticipation of the SPM initialization was also done to optimize the boot times on platforms that have their CPU/L2 idle states managed by other means (such as PSCI), while needing SAW initialization for other purposes, like AVS control. Signed-off-by: AngeloGioacchino Del Regno --- drivers/cpuidle/Kconfig.arm | 1 + drivers/cpuidle/cpuidle-qcom-spm.c | 294 ++++++----------------------- drivers/soc/qcom/Kconfig | 9 + drivers/soc/qcom/Makefile | 1 + drivers/soc/qcom/spm.c | 198 +++++++++++++++++++ include/soc/qcom/spm.h | 45 +++++ 6 files changed, 312 insertions(+), 236 deletions(-) create mode 100644 drivers/soc/qcom/spm.c create mode 100644 include/soc/qcom/spm.h diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm index 0844fadc4be8..c5e2f4d0df04 100644 --- a/drivers/cpuidle/Kconfig.arm +++ b/drivers/cpuidle/Kconfig.arm @@ -112,6 +112,7 @@ config ARM_QCOM_SPM_CPUIDLE select CPU_IDLE_MULTIPLE_DRIVERS select DT_IDLE_STATES select QCOM_SCM + select QCOM_SPM help Select this to enable cpuidle for Qualcomm processors. The Subsystem Power Manager (SPM) controls low power modes for the diff --git a/drivers/cpuidle/cpuidle-qcom-spm.c b/drivers/cpuidle/cpuidle-qcom-spm.c index adf91a6e4d7d..3dd7bb10b82d 100644 --- a/drivers/cpuidle/cpuidle-qcom-spm.c +++ b/drivers/cpuidle/cpuidle-qcom-spm.c @@ -18,146 +18,13 @@ #include #include #include +#include #include #include #include "dt_idle_states.h" -#define MAX_PMIC_DATA 2 -#define MAX_SEQ_DATA 64 -#define SPM_CTL_INDEX 0x7f -#define SPM_CTL_INDEX_SHIFT 4 -#define SPM_CTL_EN BIT(0) - -enum pm_sleep_mode { - PM_SLEEP_MODE_STBY, - PM_SLEEP_MODE_RET, - PM_SLEEP_MODE_SPC, - PM_SLEEP_MODE_PC, - PM_SLEEP_MODE_NR, -}; - -enum spm_reg { - SPM_REG_CFG, - SPM_REG_SPM_CTL, - SPM_REG_DLY, - SPM_REG_PMIC_DLY, - SPM_REG_PMIC_DATA_0, - SPM_REG_PMIC_DATA_1, - SPM_REG_VCTL, - SPM_REG_SEQ_ENTRY, - SPM_REG_SPM_STS, - SPM_REG_PMIC_STS, - SPM_REG_NR, -}; - -struct spm_reg_data { - const u8 *reg_offset; - u32 spm_cfg; - u32 spm_dly; - u32 pmic_dly; - u32 pmic_data[MAX_PMIC_DATA]; - u8 seq[MAX_SEQ_DATA]; - u8 start_index[PM_SLEEP_MODE_NR]; -}; - -struct spm_driver_data { - struct cpuidle_driver cpuidle_driver; - void __iomem *reg_base; - const struct spm_reg_data *reg_data; -}; - -static const u8 spm_reg_offset_v2_1[SPM_REG_NR] = { - [SPM_REG_CFG] = 0x08, - [SPM_REG_SPM_CTL] = 0x30, - [SPM_REG_DLY] = 0x34, - [SPM_REG_SEQ_ENTRY] = 0x80, -}; - -/* SPM register data for 8974, 8084 */ -static const struct spm_reg_data spm_reg_8974_8084_cpu = { - .reg_offset = spm_reg_offset_v2_1, - .spm_cfg = 0x1, - .spm_dly = 0x3C102800, - .seq = { 0x03, 0x0B, 0x0F, 0x00, 0x20, 0x80, 0x10, 0xE8, 0x5B, 0x03, - 0x3B, 0xE8, 0x5B, 0x82, 0x10, 0x0B, 0x30, 0x06, 0x26, 0x30, - 0x0F }, - .start_index[PM_SLEEP_MODE_STBY] = 0, - .start_index[PM_SLEEP_MODE_SPC] = 3, -}; - -static const u8 spm_reg_offset_v1_1[SPM_REG_NR] = { - [SPM_REG_CFG] = 0x08, - [SPM_REG_SPM_CTL] = 0x20, - [SPM_REG_PMIC_DLY] = 0x24, - [SPM_REG_PMIC_DATA_0] = 0x28, - [SPM_REG_PMIC_DATA_1] = 0x2C, - [SPM_REG_SEQ_ENTRY] = 0x80, -}; - -/* SPM register data for 8064 */ -static const struct spm_reg_data spm_reg_8064_cpu = { - .reg_offset = spm_reg_offset_v1_1, - .spm_cfg = 0x1F, - .pmic_dly = 0x02020004, - 
.pmic_data[0] = 0x0084009C, - .pmic_data[1] = 0x00A4001C, - .seq = { 0x03, 0x0F, 0x00, 0x24, 0x54, 0x10, 0x09, 0x03, 0x01, - 0x10, 0x54, 0x30, 0x0C, 0x24, 0x30, 0x0F }, - .start_index[PM_SLEEP_MODE_STBY] = 0, - .start_index[PM_SLEEP_MODE_SPC] = 2, -}; - -static inline void spm_register_write(struct spm_driver_data *drv, - enum spm_reg reg, u32 val) -{ - if (drv->reg_data->reg_offset[reg]) - writel_relaxed(val, drv->reg_base + - drv->reg_data->reg_offset[reg]); -} - -/* Ensure a guaranteed write, before return */ -static inline void spm_register_write_sync(struct spm_driver_data *drv, - enum spm_reg reg, u32 val) -{ - u32 ret; - - if (!drv->reg_data->reg_offset[reg]) - return; - - do { - writel_relaxed(val, drv->reg_base + - drv->reg_data->reg_offset[reg]); - ret = readl_relaxed(drv->reg_base + - drv->reg_data->reg_offset[reg]); - if (ret == val) - break; - cpu_relax(); - } while (1); -} - -static inline u32 spm_register_read(struct spm_driver_data *drv, - enum spm_reg reg) -{ - return readl_relaxed(drv->reg_base + drv->reg_data->reg_offset[reg]); -} - -static void spm_set_low_power_mode(struct spm_driver_data *drv, - enum pm_sleep_mode mode) -{ - u32 start_index; - u32 ctl_val; - - start_index = drv->reg_data->start_index[mode]; - - ctl_val = spm_register_read(drv, SPM_REG_SPM_CTL); - ctl_val &= ~(SPM_CTL_INDEX << SPM_CTL_INDEX_SHIFT); - ctl_val |= start_index << SPM_CTL_INDEX_SHIFT; - ctl_val |= SPM_CTL_EN; - spm_register_write_sync(drv, SPM_REG_SPM_CTL, ctl_val); -} - static int qcom_pm_collapse(unsigned long int unused) { qcom_scm_cpu_power_down(QCOM_SCM_CPU_PWR_DOWN_L2_ON); @@ -213,132 +80,87 @@ static const struct of_device_id qcom_idle_state_match[] = { { }, }; -static int spm_cpuidle_init(struct cpuidle_driver *drv, int cpu) +static int spm_cpuidle_register(int cpu) { + struct platform_device *pdev = NULL; + struct spm_driver_data *spm = NULL; + struct device_node *cpu_node, *saw_node; int ret; - memcpy(drv, &qcom_spm_idle_driver, sizeof(*drv)); - drv->cpumask = (struct cpumask *)cpumask_of(cpu); + cpu_node = of_cpu_device_node_get(cpu); + if (!cpu_node) + return -ENODEV; - /* Parse idle states from device tree */ - ret = dt_init_idle_driver(drv, qcom_idle_state_match, 1); - if (ret <= 0) - return ret ? : -ENODEV; + saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0); + if (!saw_node) + return -ENODEV; - /* We have atleast one power down mode */ - return qcom_scm_set_warm_boot_addr(cpu_resume_arm, drv->cpumask); -} + pdev = of_find_device_by_node(saw_node); + of_node_put(saw_node); + of_node_put(cpu_node); + if (!pdev) + return -ENODEV; -static struct spm_driver_data *spm_get_drv(struct platform_device *pdev, - int *spm_cpu) -{ - struct spm_driver_data *drv = NULL; - struct device_node *cpu_node, *saw_node; - int cpu; - bool found = 0; + spm = dev_get_drvdata(&pdev->dev); + if (!spm) + return -EINVAL; - for_each_possible_cpu(cpu) { - cpu_node = of_cpu_device_node_get(cpu); - if (!cpu_node) - continue; - saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0); - found = (saw_node == pdev->dev.of_node); - of_node_put(saw_node); - of_node_put(cpu_node); - if (found) - break; - } + spm->cpuidle_driver = qcom_spm_idle_driver; - if (found) { - drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); - if (drv) - *spm_cpu = cpu; - } + ret = dt_init_idle_driver(&spm->cpuidle_driver, + qcom_idle_state_match, 1); + if (ret <= 0) + return ret ? 
: -ENODEV; - return drv; -} + ret = qcom_scm_set_warm_boot_addr(cpu_resume_arm, cpumask_of(cpu)); + if (ret) + return ret; -static const struct of_device_id spm_match_table[] = { - { .compatible = "qcom,msm8974-saw2-v2.1-cpu", - .data = &spm_reg_8974_8084_cpu }, - { .compatible = "qcom,apq8084-saw2-v2.1-cpu", - .data = &spm_reg_8974_8084_cpu }, - { .compatible = "qcom,apq8064-saw2-v1.1-cpu", - .data = &spm_reg_8064_cpu }, - { }, -}; + return cpuidle_register(&spm->cpuidle_driver, NULL); +} -static int spm_dev_probe(struct platform_device *pdev) +static int spm_cpuidle_drv_probe(struct platform_device *pdev) { - struct spm_driver_data *drv; - struct resource *res; - const struct of_device_id *match_id; - void __iomem *addr; int cpu, ret; if (!qcom_scm_is_available()) return -EPROBE_DEFER; - drv = spm_get_drv(pdev, &cpu); - if (!drv) - return -EINVAL; - platform_set_drvdata(pdev, drv); + for_each_possible_cpu(cpu) { + ret = spm_cpuidle_register(cpu); + if (ret != -ENODEV) { + dev_err(&pdev->dev, + "Cannot register for CPU%d: %d\n", cpu, ret); + break; + } + } - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - drv->reg_base = devm_ioremap_resource(&pdev->dev, res); - if (IS_ERR(drv->reg_base)) - return PTR_ERR(drv->reg_base); + return ret; +} - match_id = of_match_node(spm_match_table, pdev->dev.of_node); - if (!match_id) - return -ENODEV; +static struct platform_driver spm_cpuidle_driver = { + .probe = spm_cpuidle_drv_probe, + .driver = { + .name = "qcom-spm-cpuidle", + }, +}; - drv->reg_data = match_id->data; +static int __init qcom_spm_cpuidle_init(void) +{ + struct platform_device *pdev; + int ret; - ret = spm_cpuidle_init(&drv->cpuidle_driver, cpu); + ret = platform_driver_register(&spm_cpuidle_driver); if (ret) return ret; - /* Write the SPM sequences first.. */ - addr = drv->reg_base + drv->reg_data->reg_offset[SPM_REG_SEQ_ENTRY]; - __iowrite32_copy(addr, drv->reg_data->seq, - ARRAY_SIZE(drv->reg_data->seq) / 4); - - /* - * ..and then the control registers. - * On some SoC if the control registers are written first and if the - * CPU was held in reset, the reset signal could trigger the SPM state - * machine, before the sequences are completely written. 
- */ - spm_register_write(drv, SPM_REG_CFG, drv->reg_data->spm_cfg); - spm_register_write(drv, SPM_REG_DLY, drv->reg_data->spm_dly); - spm_register_write(drv, SPM_REG_PMIC_DLY, drv->reg_data->pmic_dly); - spm_register_write(drv, SPM_REG_PMIC_DATA_0, - drv->reg_data->pmic_data[0]); - spm_register_write(drv, SPM_REG_PMIC_DATA_1, - drv->reg_data->pmic_data[1]); - - /* Set up Standby as the default low power mode */ - spm_set_low_power_mode(drv, PM_SLEEP_MODE_STBY); - - return cpuidle_register(&drv->cpuidle_driver, NULL); -} - -static int spm_dev_remove(struct platform_device *pdev) -{ - struct spm_driver_data *drv = platform_get_drvdata(pdev); + pdev = platform_device_register_simple("qcom-spm-cpuidle", + -1, NULL, 0); + if (IS_ERR(pdev)) { + platform_driver_unregister(&spm_cpuidle_driver); + return PTR_ERR(pdev); + } - cpuidle_unregister(&drv->cpuidle_driver); return 0; } - -static struct platform_driver spm_driver = { - .probe = spm_dev_probe, - .remove = spm_dev_remove, - .driver = { - .name = "saw", - .of_match_table = spm_match_table, - }, -}; - -builtin_platform_driver(spm_driver); +device_initcall(qcom_spm_cpuidle_init); diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig index 6a3b69b43ad5..37f22dcd20af 100644 --- a/drivers/soc/qcom/Kconfig +++ b/drivers/soc/qcom/Kconfig @@ -189,6 +189,15 @@ config QCOM_SOCINFO Say yes here to support the Qualcomm socinfo driver, providing information about the SoC to user space. +config QCOM_SPM + tristate "Qualcomm Subsystem Power Manager (SPM)" + depends on ARCH_QCOM + select QCOM_SCM + help + Enable the support for the Qualcomm Subsystem Power Manager, used + to manage cores, L2 low power modes and to configure the internal + Adaptive Voltage Scaler parameters, where supported. + config QCOM_WCNSS_CTRL tristate "Qualcomm WCNSS control driver" depends on ARCH_QCOM || COMPILE_TEST diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile index ad675a6593d0..24514c722832 100644 --- a/drivers/soc/qcom/Makefile +++ b/drivers/soc/qcom/Makefile @@ -20,6 +20,7 @@ obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o obj-$(CONFIG_QCOM_SMP2P) += smp2p.o obj-$(CONFIG_QCOM_SMSM) += smsm.o obj-$(CONFIG_QCOM_SOCINFO) += socinfo.o +obj-$(CONFIG_QCOM_SPM) += spm.o obj-$(CONFIG_QCOM_WCNSS_CTRL) += wcnss_ctrl.o obj-$(CONFIG_QCOM_APR) += apr.o obj-$(CONFIG_QCOM_LLCC) += llcc-qcom.o diff --git a/drivers/soc/qcom/spm.c b/drivers/soc/qcom/spm.c new file mode 100644 index 000000000000..0c8aa9240c41 --- /dev/null +++ b/drivers/soc/qcom/spm.c @@ -0,0 +1,198 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved. + * Copyright (c) 2014,2015, Linaro Ltd. 
+ * + * SAW power controller driver + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define SPM_CTL_INDEX 0x7f +#define SPM_CTL_INDEX_SHIFT 4 +#define SPM_CTL_EN BIT(0) + +enum spm_reg { + SPM_REG_CFG, + SPM_REG_SPM_CTL, + SPM_REG_DLY, + SPM_REG_PMIC_DLY, + SPM_REG_PMIC_DATA_0, + SPM_REG_PMIC_DATA_1, + SPM_REG_VCTL, + SPM_REG_SEQ_ENTRY, + SPM_REG_SPM_STS, + SPM_REG_PMIC_STS, + SPM_REG_NR, +}; + +static const u16 spm_reg_offset_v2_1[SPM_REG_NR] = { + [SPM_REG_CFG] = 0x08, + [SPM_REG_SPM_CTL] = 0x30, + [SPM_REG_DLY] = 0x34, + [SPM_REG_SEQ_ENTRY] = 0x80, +}; + +/* SPM register data for 8974, 8084 */ +static const struct spm_reg_data spm_reg_8974_8084_cpu = { + .reg_offset = spm_reg_offset_v2_1, + .spm_cfg = 0x1, + .spm_dly = 0x3C102800, + .seq = { 0x03, 0x0B, 0x0F, 0x00, 0x20, 0x80, 0x10, 0xE8, 0x5B, 0x03, + 0x3B, 0xE8, 0x5B, 0x82, 0x10, 0x0B, 0x30, 0x06, 0x26, 0x30, + 0x0F }, + .start_index[PM_SLEEP_MODE_STBY] = 0, + .start_index[PM_SLEEP_MODE_SPC] = 3, +}; + +static const u16 spm_reg_offset_v1_1[SPM_REG_NR] = { + [SPM_REG_CFG] = 0x08, + [SPM_REG_SPM_CTL] = 0x20, + [SPM_REG_PMIC_DLY] = 0x24, + [SPM_REG_PMIC_DATA_0] = 0x28, + [SPM_REG_PMIC_DATA_1] = 0x2C, + [SPM_REG_SEQ_ENTRY] = 0x80, +}; + +/* SPM register data for 8064 */ +static const struct spm_reg_data spm_reg_8064_cpu = { + .reg_offset = spm_reg_offset_v1_1, + .spm_cfg = 0x1F, + .pmic_dly = 0x02020004, + .pmic_data[0] = 0x0084009C, + .pmic_data[1] = 0x00A4001C, + .seq = { 0x03, 0x0F, 0x00, 0x24, 0x54, 0x10, 0x09, 0x03, 0x01, + 0x10, 0x54, 0x30, 0x0C, 0x24, 0x30, 0x0F }, + .start_index[PM_SLEEP_MODE_STBY] = 0, + .start_index[PM_SLEEP_MODE_SPC] = 2, +}; + +static inline void spm_register_write(struct spm_driver_data *drv, + enum spm_reg reg, u32 val) +{ + if (drv->reg_data->reg_offset[reg]) + writel_relaxed(val, drv->reg_base + + drv->reg_data->reg_offset[reg]); +} + +/* Ensure a guaranteed write, before return */ +static inline void spm_register_write_sync(struct spm_driver_data *drv, + enum spm_reg reg, u32 val) +{ + u32 ret; + + if (!drv->reg_data->reg_offset[reg]) + return; + + do { + writel_relaxed(val, drv->reg_base + + drv->reg_data->reg_offset[reg]); + ret = readl_relaxed(drv->reg_base + + drv->reg_data->reg_offset[reg]); + if (ret == val) + break; + cpu_relax(); + } while (1); +} + +static inline u32 spm_register_read(struct spm_driver_data *drv, + enum spm_reg reg) +{ + return readl_relaxed(drv->reg_base + drv->reg_data->reg_offset[reg]); +} + +void spm_set_low_power_mode(struct spm_driver_data *drv, + enum pm_sleep_mode mode) +{ + u32 start_index; + u32 ctl_val; + + start_index = drv->reg_data->start_index[mode]; + + ctl_val = spm_register_read(drv, SPM_REG_SPM_CTL); + ctl_val &= ~(SPM_CTL_INDEX << SPM_CTL_INDEX_SHIFT); + ctl_val |= start_index << SPM_CTL_INDEX_SHIFT; + ctl_val |= SPM_CTL_EN; + spm_register_write_sync(drv, SPM_REG_SPM_CTL, ctl_val); +} + +static const struct of_device_id spm_match_table[] = { + { .compatible = "qcom,msm8974-saw2-v2.1-cpu", + .data = &spm_reg_8974_8084_cpu }, + { .compatible = "qcom,apq8084-saw2-v2.1-cpu", + .data = &spm_reg_8974_8084_cpu }, + { .compatible = "qcom,apq8064-saw2-v1.1-cpu", + .data = &spm_reg_8064_cpu }, + { }, +}; + +static int spm_dev_probe(struct platform_device *pdev) +{ + const struct of_device_id *match_id; + struct spm_driver_data *drv; + struct resource *res; + void __iomem *addr; + + drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); + if (!drv) + return -ENOMEM; + + res = platform_get_resource(pdev, 
IORESOURCE_MEM, 0); + drv->reg_base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(drv->reg_base)) + return PTR_ERR(drv->reg_base); + + match_id = of_match_node(spm_match_table, pdev->dev.of_node); + if (!match_id) + return -ENODEV; + + drv->reg_data = match_id->data; + platform_set_drvdata(pdev, drv); + + /* Write the SPM sequences first.. */ + addr = drv->reg_base + drv->reg_data->reg_offset[SPM_REG_SEQ_ENTRY]; + __iowrite32_copy(addr, drv->reg_data->seq, + ARRAY_SIZE(drv->reg_data->seq) / 4); + + /* + * ..and then the control registers. + * On some SoC if the control registers are written first and if the + * CPU was held in reset, the reset signal could trigger the SPM state + * machine, before the sequences are completely written. + */ + spm_register_write(drv, SPM_REG_CFG, drv->reg_data->spm_cfg); + spm_register_write(drv, SPM_REG_DLY, drv->reg_data->spm_dly); + spm_register_write(drv, SPM_REG_PMIC_DLY, drv->reg_data->pmic_dly); + spm_register_write(drv, SPM_REG_PMIC_DATA_0, + drv->reg_data->pmic_data[0]); + spm_register_write(drv, SPM_REG_PMIC_DATA_1, + drv->reg_data->pmic_data[1]); + + /* Set up Standby as the default low power mode */ + spm_set_low_power_mode(drv, PM_SLEEP_MODE_STBY); + + return 0; +} + +static struct platform_driver spm_driver = { + .probe = spm_dev_probe, + .driver = { + .name = "qcom_spm", + .of_match_table = spm_match_table, + }, +}; + +static int __init qcom_spm_init(void) +{ + return platform_driver_register(&spm_driver); +} +arch_initcall(qcom_spm_init); diff --git a/include/soc/qcom/spm.h b/include/soc/qcom/spm.h new file mode 100644 index 000000000000..dfdb9e9edfc6 --- /dev/null +++ b/include/soc/qcom/spm.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved. + * Copyright (c) 2014,2015, Linaro Ltd. 
+ * Copyright (C) 2020, AngeloGioacchino Del Regno + */ + +#ifndef __SPM_H__ +#define __SPM_H__ + +#include + +#define MAX_PMIC_DATA 2 +#define MAX_SEQ_DATA 64 + +enum pm_sleep_mode { + PM_SLEEP_MODE_STBY, + PM_SLEEP_MODE_RET, + PM_SLEEP_MODE_SPC, + PM_SLEEP_MODE_PC, + PM_SLEEP_MODE_NR, +}; + +struct spm_reg_data { + const u16 *reg_offset; + u32 spm_cfg; + u32 spm_dly; + u32 pmic_dly; + u32 pmic_data[MAX_PMIC_DATA]; + u32 avs_ctl; + u32 avs_limit; + u8 seq[MAX_SEQ_DATA]; + u8 start_index[PM_SLEEP_MODE_NR]; +}; + +struct spm_driver_data { + struct cpuidle_driver cpuidle_driver; + void __iomem *reg_base; + const struct spm_reg_data *reg_data; +}; + +void spm_set_low_power_mode(struct spm_driver_data *drv, + enum pm_sleep_mode mode); + +#endif /* __SPM_H__ */
From patchwork Thu Nov 26 18:45:51 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: AngeloGioacchino Del Regno
X-Patchwork-Id: 333109
From: AngeloGioacchino Del Regno
To: linux-arm-msm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-pm@vger.kernel.org, ulf.hansson@linaro.org, jorge.ramirez-ortiz@linaro.org, broonie@kernel.org, lgirdwood@gmail.com, daniel.lezcano@linaro.org, nks@flawful.org, bjorn.andersson@linaro.org, agross@kernel.org, robh+dt@kernel.org, viresh.kumar@linaro.org, rjw@rjwysocki.net, konrad.dybcio@somainline.org, martin.botka@somainline.org, marijn.suijten@somainline.org, phone-devel@vger.kernel.org, AngeloGioacchino Del Regno
Subject: [PATCH 05/13] soc: qcom: cpr: Move common functions to new file
Date: Thu, 26 Nov 2020 19:45:51 +0100
Message-Id: <20201126184559.3052375-6-angelogioacchino.delregno@somainline.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org>
References: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org>
Precedence: bulk
List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org

In preparation for implementing a new driver that will handle CPRv3, CPRv4 and CPR-Hardened, factor out the common functions into a new file.
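To give an idea of how the factored-out helpers are meant to be consumed, here is a minimal, hypothetical probe fragment built only on the declarations in cpr-common.h below (the function name and the dev_dbg message are illustrative, not part of the patch):

#include <linux/device.h>
#include <linux/err.h>
#include "cpr-common.h"

/*
 * Hypothetical caller: thread id 0 is passed to cpr_get_fuses(), so the
 * NVMEM cells are looked up as "cpr0_ring_osc1", "cpr0_init_voltage1", ...
 */
static int example_init_fuses(struct device *dev, int num_fuse_corners)
{
	const struct cpr_fuse *fuses;
	u32 data;
	int ret;

	fuses = cpr_get_fuses(dev, 0, num_fuse_corners);
	if (IS_ERR(fuses))
		return PTR_ERR(fuses);

	/* Read one of the generated cells, e.g. the first ring oscillator index */
	ret = cpr_read_efuse(dev, fuses[0].ring_osc, &data);
	if (ret)
		return ret;

	dev_dbg(dev, "fuse corner 0: ring_osc_idx = %u\n", data);
	return 0;
}

Passing the thread id (always 0 for CPRv1, which has no threads) is what produces the "cpr0_"-prefixed NVMEM cell names that the QCS404 device tree is switched over to later in this series.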
Signed-off-by: AngeloGioacchino Del Regno --- drivers/soc/qcom/Makefile | 2 +- drivers/soc/qcom/cpr-common.c | 382 +++++++++++++++++++++++++++++ drivers/soc/qcom/cpr-common.h | 113 +++++++++ drivers/soc/qcom/cpr.c | 441 +++------------------------------- 4 files changed, 523 insertions(+), 415 deletions(-) create mode 100644 drivers/soc/qcom/cpr-common.c create mode 100644 drivers/soc/qcom/cpr-common.h diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile index 24514c722832..778a2a5f07bb 100644 --- a/drivers/soc/qcom/Makefile +++ b/drivers/soc/qcom/Makefile @@ -3,7 +3,7 @@ CFLAGS_rpmh-rsc.o := -I$(src) obj-$(CONFIG_QCOM_AOSS_QMP) += qcom_aoss.o obj-$(CONFIG_QCOM_GENI_SE) += qcom-geni-se.o obj-$(CONFIG_QCOM_COMMAND_DB) += cmd-db.o -obj-$(CONFIG_QCOM_CPR) += cpr.o +obj-$(CONFIG_QCOM_CPR) += cpr-common.o cpr.o obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o obj-$(CONFIG_QCOM_OCMEM) += ocmem.o diff --git a/drivers/soc/qcom/cpr-common.c b/drivers/soc/qcom/cpr-common.c new file mode 100644 index 000000000000..70e1e0f441db --- /dev/null +++ b/drivers/soc/qcom/cpr-common.c @@ -0,0 +1,382 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2013-2015, The Linux Foundation. All rights reserved. + * Copyright (c) 2019, Linaro Limited + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "cpr-common.h" + +int cpr_read_efuse(struct device *dev, const char *cname, u32 *data) +{ + struct nvmem_cell *cell; + ssize_t len; + char *ret; + int i; + + *data = 0; + + cell = nvmem_cell_get(dev, cname); + if (IS_ERR(cell)) { + if (PTR_ERR(cell) != -EPROBE_DEFER) + dev_err(dev, "undefined cell %s\n", cname); + return PTR_ERR(cell); + } + + ret = nvmem_cell_read(cell, &len); + nvmem_cell_put(cell); + if (IS_ERR(ret)) { + dev_err(dev, "can't read cell %s\n", cname); + return PTR_ERR(ret); + } + + for (i = 0; i < len; i++) + *data |= ret[i] << (8 * i); + + kfree(ret); + dev_dbg(dev, "efuse read(%s) = %x, bytes %zd\n", cname, *data, len); + + return 0; +} + +int cpr_populate_ring_osc_idx(struct device *dev, + struct fuse_corner *fuse_corner, + const struct cpr_fuse *cpr_fuse, + int num_fuse_corners) +{ + struct fuse_corner *end = fuse_corner + num_fuse_corners; + u32 data; + int ret; + + for (; fuse_corner < end; fuse_corner++, cpr_fuse++) { + ret = cpr_read_efuse(dev, cpr_fuse->ring_osc, + &data); + if (ret) + return ret; + fuse_corner->ring_osc_idx = data; + } + + return 0; +} + +int cpr_read_fuse_uV(int init_v_width, int step_size_uV, int ref_uV, + int adj, int step_volt, const char *init_v_efuse, + struct device *dev) +{ + int steps, uV; + u32 bits = 0; + int ret; + + ret = cpr_read_efuse(dev, init_v_efuse, &bits); + if (ret) + return ret; + + steps = bits & (BIT(init_v_width - 1) - 1); + /* Not two's complement.. 
instead highest bit is sign bit */ + if (bits & BIT(init_v_width - 1)) + steps = -steps; + + uV = ref_uV + steps * step_size_uV; + + /* Apply open-loop fixed adjustments to fused values */ + uV += adj; + + return DIV_ROUND_UP(uV, step_volt) * step_volt; +} + +const struct cpr_fuse *cpr_get_fuses(struct device *dev, int tid, + int num_fuse_corners) +{ + struct cpr_fuse *fuses; + int i; + + fuses = devm_kcalloc(dev, num_fuse_corners, + sizeof(struct cpr_fuse), + GFP_KERNEL); + if (!fuses) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < num_fuse_corners; i++) { + char tbuf[50]; + + snprintf(tbuf, sizeof(tbuf), "cpr%d_ring_osc%d", tid, i + 1); + fuses[i].ring_osc = devm_kstrdup(dev, tbuf, GFP_KERNEL); + if (!fuses[i].ring_osc) + return ERR_PTR(-ENOMEM); + + snprintf(tbuf, sizeof(tbuf), + "cpr%d_init_voltage%d", tid, i + 1); + fuses[i].init_voltage = devm_kstrdup(dev, tbuf, + GFP_KERNEL); + if (!fuses[i].init_voltage) + return ERR_PTR(-ENOMEM); + + snprintf(tbuf, sizeof(tbuf), "cpr%d_quotient%d", tid, i + 1); + fuses[i].quotient = devm_kstrdup(dev, tbuf, GFP_KERNEL); + if (!fuses[i].quotient) + return ERR_PTR(-ENOMEM); + + snprintf(tbuf, sizeof(tbuf), + "cpr%d_quotient_offset%d", tid, i + 1); + fuses[i].quotient_offset = devm_kstrdup(dev, tbuf, + GFP_KERNEL); + if (!fuses[i].quotient_offset) + return ERR_PTR(-ENOMEM); + } + + return fuses; +} + +int cpr_populate_fuse_common(struct device *dev, + struct fuse_corner_data *fdata, + const struct cpr_fuse *cpr_fuse, + struct fuse_corner *fuse_corner, + int step_volt, int init_v_width, + int init_v_step) +{ + int uV, ret; + + /* Populate uV */ + uV = cpr_read_fuse_uV(init_v_width, init_v_step, + fdata->ref_uV, fdata->volt_oloop_adjust, + step_volt, cpr_fuse->init_voltage, dev); + if (uV < 0) + return uV; + + /* + * Update SoC voltages: platforms might choose a different + * regulators than the one used to characterize the algorithms + * (ie, init_voltage_step). + */ + fdata->min_uV = roundup(fdata->min_uV, step_volt); + fdata->max_uV = roundup(fdata->max_uV, step_volt); + + fuse_corner->min_uV = fdata->min_uV; + fuse_corner->max_uV = fdata->max_uV; + fuse_corner->uV = clamp(uV, fuse_corner->min_uV, fuse_corner->max_uV); + + /* Populate target quotient by scaling */ + ret = cpr_read_efuse(dev, cpr_fuse->quotient, &fuse_corner->quot); + if (ret) + return ret; + + fuse_corner->quot *= fdata->quot_scale; + fuse_corner->quot += fdata->quot_offset; + fuse_corner->quot += fdata->quot_adjust; + + return 0; +} + +/* + * Returns: Index of the initial corner or negative number for error. + */ +int cpr_find_initial_corner(struct device *dev, struct clk *cpu_clk, + struct corner *corners, int num_corners) +{ + unsigned long rate; + struct corner *iter, *corner; + const struct corner *end; + unsigned int ret = 0; + + if (!cpu_clk) + return -EINVAL; + + end = &corners[num_corners - 1]; + rate = clk_get_rate(cpu_clk); + + /* + * Some bootloaders set a CPU clock frequency that is not defined + * in the OPP table. When running at an unlisted frequency, + * cpufreq_online() will change to the OPP which has the lowest + * frequency, at or above the unlisted frequency. + * Since cpufreq_online() always "rounds up" in the case of an + * unlisted frequency, this function always "rounds down" in case + * of an unlisted frequency. That way, when cpufreq_online() + * triggers the first ever call to cpr_set_performance_state(), + * it will correctly determine the direction as UP. 
+ */ + for (iter = corners; iter <= end; iter++) { + if (iter->freq > rate) + break; + ret++; + if (iter->freq == rate) { + corner = iter; + break; + } + if (iter->freq < rate) + corner = iter; + } + + if (!corner) { + dev_err(dev, "boot up corner not found\n"); + return -EINVAL; + } + + dev_dbg(dev, "boot up perf state: %u\n", ret); + + return ret; +} + +u32 cpr_get_fuse_corner(struct dev_pm_opp *opp, u32 tid) +{ + struct device_node *np; + u32 fc; + + np = dev_pm_opp_get_of_node(opp); + if (of_property_read_u32_index(np, "qcom,opp-fuse-level", tid, &fc)) { + pr_debug("%s: missing 'qcom,opp-fuse-level' property\n", + __func__); + fc = 0; + } + + of_node_put(np); + + return fc; +} + +unsigned long cpr_get_opp_hz_for_req(struct dev_pm_opp *ref, + struct device *cpu_dev) +{ + u64 rate = 0; + struct device_node *ref_np; + struct device_node *desc_np; + struct device_node *child_np = NULL; + struct device_node *child_req_np = NULL; + + desc_np = dev_pm_opp_of_get_opp_desc_node(cpu_dev); + if (!desc_np) + return 0; + + ref_np = dev_pm_opp_get_of_node(ref); + if (!ref_np) + goto out_ref; + + do { + of_node_put(child_req_np); + child_np = of_get_next_available_child(desc_np, child_np); + child_req_np = of_parse_phandle(child_np, "required-opps", 0); + } while (child_np && child_req_np != ref_np); + + if (child_np && child_req_np == ref_np) + of_property_read_u64(child_np, "opp-hz", &rate); + + of_node_put(child_req_np); + of_node_put(child_np); + of_node_put(ref_np); +out_ref: + of_node_put(desc_np); + + return (unsigned long) rate; +} + +int cpr_calculate_scaling(const char *quot_offset, + struct device *dev, + const struct fuse_corner_data *fdata, + const struct corner *corner) +{ + u32 quot_diff = 0; + unsigned long freq_diff; + int scaling; + const struct fuse_corner *fuse, *prev_fuse; + int ret; + + fuse = corner->fuse_corner; + prev_fuse = fuse - 1; + + if (quot_offset) { + ret = cpr_read_efuse(dev, quot_offset, "_diff); + if (ret) + return ret; + + quot_diff *= fdata->quot_offset_scale; + quot_diff += fdata->quot_offset_adjust; + } else { + quot_diff = fuse->quot - prev_fuse->quot; + } + + freq_diff = fuse->max_freq - prev_fuse->max_freq; + freq_diff /= 1000000; /* Convert to MHz */ + scaling = 1000 * quot_diff / freq_diff; + return min(scaling, fdata->max_quot_scale); +} + +int cpr_interpolate(const struct corner *corner, int step_volt, + const struct fuse_corner_data *fdata) +{ + unsigned long f_high, f_low, f_diff; + int uV_high, uV_low, uV; + u64 temp, temp_limit; + const struct fuse_corner *fuse, *prev_fuse; + + fuse = corner->fuse_corner; + prev_fuse = fuse - 1; + + f_high = fuse->max_freq; + f_low = prev_fuse->max_freq; + uV_high = fuse->uV; + uV_low = prev_fuse->uV; + f_diff = fuse->max_freq - corner->freq; + + /* + * Don't interpolate in the wrong direction. This could happen + * if the adjusted fuse voltage overlaps with the previous fuse's + * adjusted voltage. + */ + if (f_high <= f_low || uV_high <= uV_low || f_high <= corner->freq) + return corner->uV; + + temp = f_diff * (uV_high - uV_low); + do_div(temp, f_high - f_low); + + /* + * max_volt_scale has units of uV/MHz while freq values + * have units of Hz. Divide by 1000000 to convert to. 
+ */ + temp_limit = f_diff * fdata->max_volt_scale; + do_div(temp_limit, 1000000); + + uV = uV_high - min(temp, temp_limit); + return roundup(uV, step_volt); +} + +int cpr_check_vreg_constraints(struct device *dev, struct regulator *vreg, + struct fuse_corner *f) +{ + int ret; + + ret = regulator_is_supported_voltage(vreg, f->min_uV, f->min_uV); + if (!ret) { + dev_err(dev, "min uV: %d not supported by regulator\n", + f->min_uV); + return -EINVAL; + } + + ret = regulator_is_supported_voltage(vreg, f->max_uV, f->max_uV); + if (!ret) { + dev_err(dev, "max uV: %d not supported by regulator\n", + f->max_uV); + return -EINVAL; + } + + return 0; +} diff --git a/drivers/soc/qcom/cpr-common.h b/drivers/soc/qcom/cpr-common.h new file mode 100644 index 000000000000..83a1f7c941b8 --- /dev/null +++ b/drivers/soc/qcom/cpr-common.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include +#include +#include + +enum voltage_change_dir { + NO_CHANGE, + DOWN, + UP, +}; + +struct fuse_corner_data { + int ref_uV; + int max_uV; + int min_uV; + int range_uV; + /* fuse volt: closed/open loop */ + int volt_cloop_adjust; + int volt_oloop_adjust; + int max_volt_scale; + int max_quot_scale; + /* fuse quot */ + int quot_offset; + int quot_scale; + int quot_adjust; + /* fuse quot_offset */ + int quot_offset_scale; + int quot_offset_adjust; +}; + +struct cpr_fuse { + char *ring_osc; + char *init_voltage; + char *quotient; + char *quotient_offset; +}; + +struct fuse_corner { + int min_uV; + int max_uV; + int uV; + int quot; + int step_quot; + const struct reg_sequence *accs; + int num_accs; + unsigned long max_freq; + u8 ring_osc_idx; +}; + +struct corner { + int min_uV; + int max_uV; + int uV; + int last_uV; + int quot_adjust; + u32 save_ctl; + u32 save_irq; + unsigned long freq; + bool is_open_loop; + struct fuse_corner *fuse_corner; +}; + +struct corner_data { + unsigned int fuse_corner; + unsigned long freq; +}; + +struct acc_desc { + unsigned int enable_reg; + u32 enable_mask; + + struct reg_sequence *config; + struct reg_sequence *settings; + int num_regs_per_fuse; +}; + +struct cpr_acc_desc { + const struct cpr_desc *cpr_desc; + const struct acc_desc *acc_desc; +}; + + +int cpr_read_efuse(struct device *dev, const char *cname, u32 *data); +int cpr_populate_ring_osc_idx(struct device *dev, + struct fuse_corner *fuse_corner, + const struct cpr_fuse *cpr_fuse, + int num_fuse_corners); +int cpr_read_fuse_uV(int init_v_width, int step_size_uV, int ref_uV, + int adj, int step_volt, const char *init_v_efuse, + struct device *dev); +const struct cpr_fuse *cpr_get_fuses(struct device *dev, int tid, + int num_fuse_corners); +int cpr_populate_fuse_common(struct device *dev, + struct fuse_corner_data *fdata, + const struct cpr_fuse *cpr_fuse, + struct fuse_corner *fuse_corner, + int step_volt, int init_v_width, + int init_v_step); +int cpr_find_initial_corner(struct device *dev, struct clk *cpu_clk, + struct corner *corners, int num_corners); +u32 cpr_get_fuse_corner(struct dev_pm_opp *opp, u32 tid); +unsigned long cpr_get_opp_hz_for_req(struct dev_pm_opp *ref, + struct device *cpu_dev); +int cpr_calculate_scaling(const char *quot_offset, + struct device *dev, + const struct fuse_corner_data *fdata, + const struct corner *corner); +int cpr_interpolate(const struct corner *corner, int step_volt, + const struct fuse_corner_data *fdata); +int cpr_check_vreg_constraints(struct device *dev, struct regulator *vreg, + struct fuse_corner *f); diff --git a/drivers/soc/qcom/cpr.c b/drivers/soc/qcom/cpr.c index 
b24cc77d1889..29121567956b 100644 --- a/drivers/soc/qcom/cpr.c +++ b/drivers/soc/qcom/cpr.c @@ -25,6 +25,7 @@ #include #include #include +#include "cpr-common.h" /* Register Offsets for RB-CPR and Bit Definitions */ @@ -124,45 +125,12 @@ #define FUSE_REVISION_UNKNOWN (-1) -enum voltage_change_dir { - NO_CHANGE, - DOWN, - UP, -}; - -struct cpr_fuse { - char *ring_osc; - char *init_voltage; - char *quotient; - char *quotient_offset; -}; - -struct fuse_corner_data { - int ref_uV; - int max_uV; - int min_uV; - int max_volt_scale; - int max_quot_scale; - /* fuse quot */ - int quot_offset; - int quot_scale; - int quot_adjust; - /* fuse quot_offset */ - int quot_offset_scale; - int quot_offset_adjust; -}; - struct cpr_fuses { int init_voltage_step; int init_voltage_width; struct fuse_corner_data *fuse_corner_data; }; -struct corner_data { - unsigned int fuse_corner; - unsigned long freq; -}; - struct cpr_desc { unsigned int num_fuse_corners; int min_diff_quot; @@ -184,44 +152,6 @@ struct cpr_desc { bool reduce_to_corner_uV; }; -struct acc_desc { - unsigned int enable_reg; - u32 enable_mask; - - struct reg_sequence *config; - struct reg_sequence *settings; - int num_regs_per_fuse; -}; - -struct cpr_acc_desc { - const struct cpr_desc *cpr_desc; - const struct acc_desc *acc_desc; -}; - -struct fuse_corner { - int min_uV; - int max_uV; - int uV; - int quot; - int step_quot; - const struct reg_sequence *accs; - int num_accs; - unsigned long max_freq; - u8 ring_osc_idx; -}; - -struct corner { - int min_uV; - int max_uV; - int uV; - int last_uV; - int quot_adjust; - u32 save_ctl; - u32 save_irq; - unsigned long freq; - struct fuse_corner *fuse_corner; -}; - struct cpr_drv { unsigned int num_corners; unsigned int ref_clk_khz; @@ -801,95 +731,16 @@ static int cpr_set_performance_state(struct generic_pm_domain *domain, return ret; } -static int cpr_read_efuse(struct device *dev, const char *cname, u32 *data) -{ - struct nvmem_cell *cell; - ssize_t len; - char *ret; - int i; - - *data = 0; - - cell = nvmem_cell_get(dev, cname); - if (IS_ERR(cell)) { - if (PTR_ERR(cell) != -EPROBE_DEFER) - dev_err(dev, "undefined cell %s\n", cname); - return PTR_ERR(cell); - } - - ret = nvmem_cell_read(cell, &len); - nvmem_cell_put(cell); - if (IS_ERR(ret)) { - dev_err(dev, "can't read cell %s\n", cname); - return PTR_ERR(ret); - } - - for (i = 0; i < len; i++) - *data |= ret[i] << (8 * i); - - kfree(ret); - dev_dbg(dev, "efuse read(%s) = %x, bytes %zd\n", cname, *data, len); - - return 0; -} - -static int -cpr_populate_ring_osc_idx(struct cpr_drv *drv) -{ - struct fuse_corner *fuse = drv->fuse_corners; - struct fuse_corner *end = fuse + drv->desc->num_fuse_corners; - const struct cpr_fuse *fuses = drv->cpr_fuses; - u32 data; - int ret; - - for (; fuse < end; fuse++, fuses++) { - ret = cpr_read_efuse(drv->dev, fuses->ring_osc, - &data); - if (ret) - return ret; - fuse->ring_osc_idx = data; - } - - return 0; -} - -static int cpr_read_fuse_uV(const struct cpr_desc *desc, - const struct fuse_corner_data *fdata, - const char *init_v_efuse, - int step_volt, - struct cpr_drv *drv) -{ - int step_size_uV, steps, uV; - u32 bits = 0; - int ret; - - ret = cpr_read_efuse(drv->dev, init_v_efuse, &bits); - if (ret) - return ret; - - steps = bits & ~BIT(desc->cpr_fuses.init_voltage_width - 1); - /* Not two's complement.. 
instead highest bit is sign bit */ - if (bits & BIT(desc->cpr_fuses.init_voltage_width - 1)) - steps = -steps; - - step_size_uV = desc->cpr_fuses.init_voltage_step; - - uV = fdata->ref_uV + steps * step_size_uV; - return DIV_ROUND_UP(uV, step_volt) * step_volt; -} - static int cpr_fuse_corner_init(struct cpr_drv *drv) { const struct cpr_desc *desc = drv->desc; - const struct cpr_fuse *fuses = drv->cpr_fuses; + const struct cpr_fuse *cpr_fuse = drv->cpr_fuses; const struct acc_desc *acc_desc = drv->acc_desc; - int i; - unsigned int step_volt; struct fuse_corner_data *fdata; struct fuse_corner *fuse, *end; - int uV; const struct reg_sequence *accs; - int ret; + unsigned int step_volt; + int i, ret; accs = acc_desc->settings; @@ -902,24 +753,16 @@ static int cpr_fuse_corner_init(struct cpr_drv *drv) end = &fuse[desc->num_fuse_corners - 1]; fdata = desc->cpr_fuses.fuse_corner_data; - for (i = 0; fuse <= end; fuse++, fuses++, i++, fdata++) { - /* - * Update SoC voltages: platforms might choose a different - * regulators than the one used to characterize the algorithms - * (ie, init_voltage_step). - */ - fdata->min_uV = roundup(fdata->min_uV, step_volt); - fdata->max_uV = roundup(fdata->max_uV, step_volt); - - /* Populate uV */ - uV = cpr_read_fuse_uV(desc, fdata, fuses->init_voltage, - step_volt, drv); - if (uV < 0) - return uV; + for (i = 0; fuse <= end; fuse++, cpr_fuse++, i++, fdata++) { + ret = cpr_populate_fuse_common( + drv->dev, fdata, cpr_fuse, + fuse, step_volt, + desc->cpr_fuses.init_voltage_width, + desc->cpr_fuses.init_voltage_step); + if (ret) + return ret; - fuse->min_uV = fdata->min_uV; - fuse->max_uV = fdata->max_uV; - fuse->uV = clamp(uV, fuse->min_uV, fuse->max_uV); + fuse->step_quot = desc->step_quot[fuse->ring_osc_idx]; if (fuse == end) { /* @@ -931,16 +774,6 @@ static int cpr_fuse_corner_init(struct cpr_drv *drv) end->max_uV = max(end->max_uV, end->uV); } - /* Populate target quotient by scaling */ - ret = cpr_read_efuse(drv->dev, fuses->quotient, &fuse->quot); - if (ret) - return ret; - - fuse->quot *= fdata->quot_scale; - fuse->quot += fdata->quot_offset; - fuse->quot += fdata->quot_adjust; - fuse->step_quot = desc->step_quot[fuse->ring_osc_idx]; - /* Populate acc settings */ fuse->accs = accs; fuse->num_accs = acc_desc->num_regs_per_fuse; @@ -957,25 +790,9 @@ static int cpr_fuse_corner_init(struct cpr_drv *drv) else if (fuse->uV < fuse->min_uV) fuse->uV = fuse->min_uV; - ret = regulator_is_supported_voltage(drv->vdd_apc, - fuse->min_uV, - fuse->min_uV); - if (!ret) { - dev_err(drv->dev, - "min uV: %d (fuse corner: %d) not supported by regulator\n", - fuse->min_uV, i); - return -EINVAL; - } - - ret = regulator_is_supported_voltage(drv->vdd_apc, - fuse->max_uV, - fuse->max_uV); - if (!ret) { - dev_err(drv->dev, - "max uV: %d (fuse corner: %d) not supported by regulator\n", - fuse->max_uV, i); - return -EINVAL; - } + ret = cpr_check_vreg_constraints(drv->dev, drv->vdd_apc, fuse); + if (ret) + return ret; dev_dbg(drv->dev, "fuse corner %d: [%d %d %d] RO%hhu quot %d squot %d\n", @@ -986,126 +803,6 @@ static int cpr_fuse_corner_init(struct cpr_drv *drv) return 0; } -static int cpr_calculate_scaling(const char *quot_offset, - struct cpr_drv *drv, - const struct fuse_corner_data *fdata, - const struct corner *corner) -{ - u32 quot_diff = 0; - unsigned long freq_diff; - int scaling; - const struct fuse_corner *fuse, *prev_fuse; - int ret; - - fuse = corner->fuse_corner; - prev_fuse = fuse - 1; - - if (quot_offset) { - ret = cpr_read_efuse(drv->dev, quot_offset, "_diff); - if 
(ret) - return ret; - - quot_diff *= fdata->quot_offset_scale; - quot_diff += fdata->quot_offset_adjust; - } else { - quot_diff = fuse->quot - prev_fuse->quot; - } - - freq_diff = fuse->max_freq - prev_fuse->max_freq; - freq_diff /= 1000000; /* Convert to MHz */ - scaling = 1000 * quot_diff / freq_diff; - return min(scaling, fdata->max_quot_scale); -} - -static int cpr_interpolate(const struct corner *corner, int step_volt, - const struct fuse_corner_data *fdata) -{ - unsigned long f_high, f_low, f_diff; - int uV_high, uV_low, uV; - u64 temp, temp_limit; - const struct fuse_corner *fuse, *prev_fuse; - - fuse = corner->fuse_corner; - prev_fuse = fuse - 1; - - f_high = fuse->max_freq; - f_low = prev_fuse->max_freq; - uV_high = fuse->uV; - uV_low = prev_fuse->uV; - f_diff = fuse->max_freq - corner->freq; - - /* - * Don't interpolate in the wrong direction. This could happen - * if the adjusted fuse voltage overlaps with the previous fuse's - * adjusted voltage. - */ - if (f_high <= f_low || uV_high <= uV_low || f_high <= corner->freq) - return corner->uV; - - temp = f_diff * (uV_high - uV_low); - do_div(temp, f_high - f_low); - - /* - * max_volt_scale has units of uV/MHz while freq values - * have units of Hz. Divide by 1000000 to convert to. - */ - temp_limit = f_diff * fdata->max_volt_scale; - do_div(temp_limit, 1000000); - - uV = uV_high - min(temp, temp_limit); - return roundup(uV, step_volt); -} - -static unsigned int cpr_get_fuse_corner(struct dev_pm_opp *opp) -{ - struct device_node *np; - unsigned int fuse_corner = 0; - - np = dev_pm_opp_get_of_node(opp); - if (of_property_read_u32(np, "qcom,opp-fuse-level", &fuse_corner)) - pr_err("%s: missing 'qcom,opp-fuse-level' property\n", - __func__); - - of_node_put(np); - - return fuse_corner; -} - -static unsigned long cpr_get_opp_hz_for_req(struct dev_pm_opp *ref, - struct device *cpu_dev) -{ - u64 rate = 0; - struct device_node *ref_np; - struct device_node *desc_np; - struct device_node *child_np = NULL; - struct device_node *child_req_np = NULL; - - desc_np = dev_pm_opp_of_get_opp_desc_node(cpu_dev); - if (!desc_np) - return 0; - - ref_np = dev_pm_opp_get_of_node(ref); - if (!ref_np) - goto out_ref; - - do { - of_node_put(child_req_np); - child_np = of_get_next_available_child(desc_np, child_np); - child_req_np = of_parse_phandle(child_np, "required-opps", 0); - } while (child_np && child_req_np != ref_np); - - if (child_np && child_req_np == ref_np) - of_property_read_u64(child_np, "opp-hz", &rate); - - of_node_put(child_req_np); - of_node_put(child_np); - of_node_put(ref_np); -out_ref: - of_node_put(desc_np); - - return (unsigned long) rate; -} - static int cpr_corner_init(struct cpr_drv *drv) { const struct cpr_desc *desc = drv->desc; @@ -1143,7 +840,7 @@ static int cpr_corner_init(struct cpr_drv *drv) opp = dev_pm_opp_find_level_exact(&drv->pd.dev, level); if (IS_ERR(opp)) return -EINVAL; - fc = cpr_get_fuse_corner(opp); + fc = cpr_get_fuse_corner(opp, 0); if (!fc) { dev_pm_opp_put(opp); return -EINVAL; @@ -1219,7 +916,7 @@ static int cpr_corner_init(struct cpr_drv *drv) corner->uV = fuse->uV; if (prev_fuse && cdata[i - 1].freq == prev_fuse->max_freq) { - scaling = cpr_calculate_scaling(quot_offset, drv, + scaling = cpr_calculate_scaling(quot_offset, drv->dev, fdata, corner); if (scaling < 0) return scaling; @@ -1257,47 +954,6 @@ static int cpr_corner_init(struct cpr_drv *drv) return 0; } -static const struct cpr_fuse *cpr_get_fuses(struct cpr_drv *drv) -{ - const struct cpr_desc *desc = drv->desc; - struct cpr_fuse *fuses; - int i; 
- - fuses = devm_kcalloc(drv->dev, desc->num_fuse_corners, - sizeof(struct cpr_fuse), - GFP_KERNEL); - if (!fuses) - return ERR_PTR(-ENOMEM); - - for (i = 0; i < desc->num_fuse_corners; i++) { - char tbuf[32]; - - snprintf(tbuf, 32, "cpr_ring_osc%d", i + 1); - fuses[i].ring_osc = devm_kstrdup(drv->dev, tbuf, GFP_KERNEL); - if (!fuses[i].ring_osc) - return ERR_PTR(-ENOMEM); - - snprintf(tbuf, 32, "cpr_init_voltage%d", i + 1); - fuses[i].init_voltage = devm_kstrdup(drv->dev, tbuf, - GFP_KERNEL); - if (!fuses[i].init_voltage) - return ERR_PTR(-ENOMEM); - - snprintf(tbuf, 32, "cpr_quotient%d", i + 1); - fuses[i].quotient = devm_kstrdup(drv->dev, tbuf, GFP_KERNEL); - if (!fuses[i].quotient) - return ERR_PTR(-ENOMEM); - - snprintf(tbuf, 32, "cpr_quotient_offset%d", i + 1); - fuses[i].quotient_offset = devm_kstrdup(drv->dev, tbuf, - GFP_KERNEL); - if (!fuses[i].quotient_offset) - return ERR_PTR(-ENOMEM); - } - - return fuses; -} - static void cpr_set_loop_allowed(struct cpr_drv *drv) { drv->loop_disabled = false; @@ -1329,54 +985,6 @@ static int cpr_init_parameters(struct cpr_drv *drv) return 0; } -static int cpr_find_initial_corner(struct cpr_drv *drv) -{ - unsigned long rate; - const struct corner *end; - struct corner *iter; - unsigned int i = 0; - - if (!drv->cpu_clk) { - dev_err(drv->dev, "cannot get rate from NULL clk\n"); - return -EINVAL; - } - - end = &drv->corners[drv->num_corners - 1]; - rate = clk_get_rate(drv->cpu_clk); - - /* - * Some bootloaders set a CPU clock frequency that is not defined - * in the OPP table. When running at an unlisted frequency, - * cpufreq_online() will change to the OPP which has the lowest - * frequency, at or above the unlisted frequency. - * Since cpufreq_online() always "rounds up" in the case of an - * unlisted frequency, this function always "rounds down" in case - * of an unlisted frequency. That way, when cpufreq_online() - * triggers the first ever call to cpr_set_performance_state(), - * it will correctly determine the direction as UP. 
- */ - for (iter = drv->corners; iter <= end; iter++) { - if (iter->freq > rate) - break; - i++; - if (iter->freq == rate) { - drv->corner = iter; - break; - } - if (iter->freq < rate) - drv->corner = iter; - } - - if (!drv->corner) { - dev_err(drv->dev, "boot up corner not found\n"); - return -EINVAL; - } - - dev_dbg(drv->dev, "boot up perf state: %u\n", i); - - return 0; -} - static const struct cpr_desc qcs404_cpr_desc = { .num_fuse_corners = 3, .min_diff_quot = CPR_FUSE_MIN_QUOT_DIFF, @@ -1564,8 +1172,9 @@ static int cpr_pd_attach_dev(struct generic_pm_domain *domain, if (ret) goto unlock; - ret = cpr_find_initial_corner(drv); - if (ret) + ret = cpr_find_initial_corner(drv->dev, drv->cpu_clk, drv->corners, + drv->num_corners); + if (ret < 0) goto unlock; if (acc_desc->config) @@ -1650,6 +1259,7 @@ static int cpr_probe(struct platform_device *pdev) struct resource *res; struct device *dev = &pdev->dev; struct cpr_drv *drv; + const struct cpr_desc *desc; int irq, ret; const struct cpr_acc_desc *data; struct device_node *np; @@ -1665,6 +1275,7 @@ static int cpr_probe(struct platform_device *pdev) drv->dev = dev; drv->desc = data->cpr_desc; drv->acc_desc = data->acc_desc; + desc = drv->desc; drv->fuse_corners = devm_kcalloc(dev, drv->desc->num_fuse_corners, sizeof(*drv->fuse_corners), @@ -1705,11 +1316,13 @@ static int cpr_probe(struct platform_device *pdev) if (ret) return ret; - drv->cpr_fuses = cpr_get_fuses(drv); + drv->cpr_fuses = cpr_get_fuses(drv->dev, 0, desc->num_fuse_corners); if (IS_ERR(drv->cpr_fuses)) return PTR_ERR(drv->cpr_fuses); - ret = cpr_populate_ring_osc_idx(drv); + ret = cpr_populate_ring_osc_idx(drv->dev, drv->fuse_corners, + drv->cpr_fuses, + desc->num_fuse_corners); if (ret) return ret; From patchwork Thu Nov 26 18:45:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: AngeloGioacchino Del Regno X-Patchwork-Id: 333111 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BBEFEC6379D for ; Thu, 26 Nov 2020 18:55:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 75277221F8 for ; Thu, 26 Nov 2020 18:55:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2405040AbgKZSy6 (ORCPT ); Thu, 26 Nov 2020 13:54:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50770 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2405028AbgKZSy5 (ORCPT ); Thu, 26 Nov 2020 13:54:57 -0500 Received: from relay06.th.seeweb.it (relay06.th.seeweb.it [IPv6:2001:4b7a:2000:18::167]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 21128C061A52; Thu, 26 Nov 2020 10:54:57 -0800 (PST) Received: from IcarusMOD.eternityproject.eu (unknown [2.237.20.237]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by m-r2.th.seeweb.it (Postfix) with ESMTPSA id 33EAE40609; Thu, 26 Nov 2020 19:46:35 +0100 (CET) From: 
AngeloGioacchino Del Regno To: linux-arm-msm@vger.kernel.org Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-pm@vger.kernel.org, ulf.hansson@linaro.org, jorge.ramirez-ortiz@linaro.org, broonie@kernel.org, lgirdwood@gmail.com, daniel.lezcano@linaro.org, nks@flawful.org, bjorn.andersson@linaro.org, agross@kernel.org, robh+dt@kernel.org, viresh.kumar@linaro.org, rjw@rjwysocki.net, konrad.dybcio@somainline.org, martin.botka@somainline.org, marijn.suijten@somainline.org, phone-devel@vger.kernel.org, AngeloGioacchino Del Regno Subject: [PATCH 06/13] arm64: qcom: qcs404: Change CPR nvmem-names Date: Thu, 26 Nov 2020 19:45:52 +0100 Message-Id: <20201126184559.3052375-7-angelogioacchino.delregno@somainline.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> References: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org The CPR driver's common functions were split and put in another file in order to support newer CPR revisions: to simplify the commonization, the expected names of the fuses had to be changed in order for both new and old support to use the same fuse name retrieval function and keeping the naming consistent. The thread id was added to the fuse name and, since CPRv1 does not support threads, it is expected to always read ID 0, which means that the expected name here is now "cpr0_(fuse_name)" instead of "cpr_(fuse_name)": luckily, QCS404 is the only user so change it accordingly. Signed-off-by: AngeloGioacchino Del Regno --- arch/arm64/boot/dts/qcom/qcs404.dtsi | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi index b654b802e95c..5d5a33c7eb82 100644 --- a/arch/arm64/boot/dts/qcom/qcs404.dtsi +++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi @@ -1168,19 +1168,19 @@ cpr: power-controller@b018000 { <&cpr_efuse_ring2>, <&cpr_efuse_ring3>, <&cpr_efuse_revision>; - nvmem-cell-names = "cpr_quotient_offset1", - "cpr_quotient_offset2", - "cpr_quotient_offset3", - "cpr_init_voltage1", - "cpr_init_voltage2", - "cpr_init_voltage3", - "cpr_quotient1", - "cpr_quotient2", - "cpr_quotient3", - "cpr_ring_osc1", - "cpr_ring_osc2", - "cpr_ring_osc3", - "cpr_fuse_revision"; + nvmem-cell-names = "cpr0_quotient_offset1", + "cpr0_quotient_offset2", + "cpr0_quotient_offset3", + "cpr0_init_voltage1", + "cpr0_init_voltage2", + "cpr0_init_voltage3", + "cpr0_quotient1", + "cpr0_quotient2", + "cpr0_quotient3", + "cpr0_ring_osc1", + "cpr0_ring_osc2", + "cpr0_ring_osc3", + "cpr0_fuse_revision"; }; timer@b120000 { From patchwork Thu Nov 26 18:45:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: AngeloGioacchino Del Regno X-Patchwork-Id: 333106 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BAC2C83021 for ; Thu, 26 Nov 2020 18:55:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
mail.kernel.org (Postfix) with ESMTP id 73E3921D91 for ; Thu, 26 Nov 2020 18:55:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2405030AbgKZSz3 (ORCPT ); Thu, 26 Nov 2020 13:55:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2405035AbgKZSy5 (ORCPT ); Thu, 26 Nov 2020 13:54:57 -0500 Received: from relay07.th.seeweb.it (relay07.th.seeweb.it [IPv6:2001:4b7a:2000:18::168]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F279CC061A49 for ; Thu, 26 Nov 2020 10:54:56 -0800 (PST) Received: from IcarusMOD.eternityproject.eu (unknown [2.237.20.237]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by m-r2.th.seeweb.it (Postfix) with ESMTPSA id 149A8405F1; Thu, 26 Nov 2020 19:46:36 +0100 (CET) From: AngeloGioacchino Del Regno To: linux-arm-msm@vger.kernel.org Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-pm@vger.kernel.org, ulf.hansson@linaro.org, jorge.ramirez-ortiz@linaro.org, broonie@kernel.org, lgirdwood@gmail.com, daniel.lezcano@linaro.org, nks@flawful.org, bjorn.andersson@linaro.org, agross@kernel.org, robh+dt@kernel.org, viresh.kumar@linaro.org, rjw@rjwysocki.net, konrad.dybcio@somainline.org, martin.botka@somainline.org, marijn.suijten@somainline.org, phone-devel@vger.kernel.org, AngeloGioacchino Del Regno Subject: [PATCH 08/13] soc: qcom: Add support for Core Power Reduction v3, v4 and Hardened Date: Thu, 26 Nov 2020 19:45:54 +0100 Message-Id: <20201126184559.3052375-9-angelogioacchino.delregno@somainline.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> References: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This commit introduces a new driver, based on the one for CPR v1, to enable support for the newer Qualcomm Core Power Reduction hardware, known downstream as CPR3, CPR4 and CPRh. These newer versions of the hardware introduce several new features, including voltage reduction for the GPU, security hardening and a new way of controlling CPU DVFS, consisting of internal communication between microcontrollers, specifically the CPR-Hardened and the Operating State Manager (OSM). CPR v3, v4 and CPRh are present in a broad range of SoCs, from mid-range to high-end parts including, but not limited to, MSM8953/8996/8998 and SDM630/636/660/845.
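As a rough, illustrative sketch (not code added by this series, and the exact call chain described here is an assumption): on CPR3/CPR4 the consumer-side flow is the same as with CPR v1 — the cpufreq driver requests a CPU OPP and the OPP core forwards the matching required-opp as a genpd performance state to the CPR power domain, which lands in this driver's cpr_set_performance_state(). Assuming the CPU OPP table carries required-opps entries pointing at the CPR domain, a consumer request looks roughly like this:

	#include <linux/cpu.h>
	#include <linux/pm_opp.h>

	/* Hypothetical helper, for illustration only. */
	static int example_scale_cpu0(unsigned long target_hz)
	{
		struct device *cpu_dev = get_cpu_device(0);

		if (!cpu_dev)
			return -ENODEV;

		/*
		 * Picks the OPP matching target_hz and, through the
		 * required-opps link, requests the corresponding
		 * performance state (i.e. the CPR corner) from the
		 * attached CPR power domain.
		 */
		return dev_pm_opp_set_rate(cpu_dev, target_hz);
	}

On CPR-Hardened, by contrast, corner selection is handled in firmware through the OSM, so no such runtime request comes from Linux: the driver only programs the corner look-up table and then enables the OSM/CPRh interface.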
Signed-off-by: AngeloGioacchino Del Regno --- drivers/soc/qcom/Kconfig | 17 + drivers/soc/qcom/Makefile | 1 + drivers/soc/qcom/cpr3.c | 2474 +++++++++++++++++++++++++++++++++++++ 3 files changed, 2492 insertions(+) create mode 100644 drivers/soc/qcom/cpr3.c diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig index 37f22dcd20af..9d3f35fc4ab1 100644 --- a/drivers/soc/qcom/Kconfig +++ b/drivers/soc/qcom/Kconfig @@ -42,6 +42,23 @@ config QCOM_CPR To compile this driver as a module, choose M here: the module will be called qcom-cpr +config QCOM_CPR3 + tristate "QCOM Core Power Reduction (CPR v3/v4/Hardened) support" + depends on ARCH_QCOM && HAS_IOMEM + select PM_OPP + select REGMAP + help + Say Y here to enable support for the CPR hardware found on a broad + variety of Qualcomm SoCs like MSM8996, MSM8998, SDM630, SDM660, + SDM845 and others. + + This driver populates OPP tables and makes adjustments to them + based on feedback from the CPR hardware. If you want to do CPU + and/or GPU frequency scaling say Y here. + + To compile this driver as a module, choose M here: the module will + be called qcom-cpr3 + config QCOM_GENI_SE tristate "QCOM GENI Serial Engine Driver" depends on ARCH_QCOM || COMPILE_TEST diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile index 778a2a5f07bb..0500283c2dc5 100644 --- a/drivers/soc/qcom/Makefile +++ b/drivers/soc/qcom/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_QCOM_AOSS_QMP) += qcom_aoss.o obj-$(CONFIG_QCOM_GENI_SE) += qcom-geni-se.o obj-$(CONFIG_QCOM_COMMAND_DB) += cmd-db.o obj-$(CONFIG_QCOM_CPR) += cpr-common.o cpr.o +obj-$(CONFIG_QCOM_CPR3) += cpr-common.o cpr3.o obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o obj-$(CONFIG_QCOM_OCMEM) += ocmem.o diff --git a/drivers/soc/qcom/cpr3.c b/drivers/soc/qcom/cpr3.c new file mode 100644 index 000000000000..a7e83ef8f478 --- /dev/null +++ b/drivers/soc/qcom/cpr3.c @@ -0,0 +1,2474 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2013-2020, The Linux Foundation. All rights reserved. + * Copyright (c) 2019 Linaro Limited + * Copyright (c) 2020, AngeloGioacchino Del Regno + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "cpr-common.h" + +#define CPR3_RO_COUNT 16 +#define CPR3_RO_MASK GENMASK(CPR3_RO_COUNT - 1, 0) + +/* CPR3 registers */ +#define CPR3_REG_CPR_VERSION 0x0 +#define CPRH_CPR_VERSION_4P5 0x40050000 + +#define CPR3_REG_CPR_CTL 0x4 +#define CPR3_CPR_CTL_LOOP_EN_MASK BIT(0) +#define CPR3_CPR_CTL_IDLE_CLOCKS_MASK GENMASK(4, 0) +#define CPR3_CPR_CTL_IDLE_CLOCKS_SHIFT 1 +#define CPR3_CPR_CTL_COUNT_MODE_MASK GENMASK(1, 0) +#define CPR3_CPR_CTL_COUNT_MODE_SHIFT 6 +#define CPR3_CPR_CTL_COUNT_MODE_ALL_AT_ONCE_MIN 0 +#define CPR3_CPR_CTL_COUNT_MODE_ALL_AT_ONCE_MAX 1 +#define CPR3_CPR_CTL_COUNT_MODE_STAGGERED 2 +#define CPR3_CPR_CTL_COUNT_MODE_ALL_AT_ONCE_AGE 3 +#define CPR3_CPR_CTL_COUNT_REPEAT_MASK GENMASK(22, 0) +#define CPR3_CPR_CTL_COUNT_REPEAT_SHIFT 9 + +#define CPR3_REG_CPR_STATUS 0x8 +#define CPR3_CPR_STATUS_BUSY_MASK BIT(0) + +/* + * This register is not present on controllers that support HW closed-loop + * except CPR4 APSS controller. 
+ */ +#define CPR3_REG_CPR_TIMER_AUTO_CONT 0xC + +#define CPR3_REG_CPR_STEP_QUOT 0x14 +#define CPR3_CPR_STEP_QUOT_MIN_MASK GENMASK(5, 0) +#define CPR3_CPR_STEP_QUOT_MIN_SHIFT 0 +#define CPR3_CPR_STEP_QUOT_MAX_MASK GENMASK(5, 0) +#define CPR3_CPR_STEP_QUOT_MAX_SHIFT 6 +#define CPRH_DELTA_QUOT_STEP_FACTOR 4 + +#define CPR3_REG_GCNT(ro) (0xA0 + 0x4 * (ro)) +#define CPR3_REG_SENSOR_OWNER(sensor) (0x200 + 0x4 * (sensor)) + +#define CPR3_REG_CONT_CMD 0x800 +#define CPR3_CONT_CMD_ACK 0x1 +#define CPR3_CONT_CMD_NACK 0x0 + +#define CPR3_REG_THRESH(thread) (0x808 + 0x440 * (thread)) +#define CPR3_THRESH_CONS_DOWN_MASK GENMASK(3, 0) +#define CPR3_THRESH_CONS_DOWN_SHIFT 0 +#define CPR3_THRESH_CONS_UP_MASK GENMASK(3, 0) +#define CPR3_THRESH_CONS_UP_SHIFT 4 +#define CPR3_THRESH_DOWN_THRESH_MASK GENMASK(4, 0) +#define CPR3_THRESH_DOWN_THRESH_SHIFT 8 +#define CPR3_THRESH_UP_THRESH_MASK GENMASK(4, 0) +#define CPR3_THRESH_UP_THRESH_SHIFT 13 + +#define CPR3_REG_RO_MASK(thread) (0x80C + 0x440 * (thread)) + +#define CPR3_REG_RESULT0(thread) (0x810 + 0x440 * (thread)) +#define CPR3_RESULT0_BUSY_MASK BIT(0) +#define CPR3_RESULT0_STEP_DN_MASK BIT(1) +#define CPR3_RESULT0_STEP_UP_MASK BIT(2) +#define CPR3_RESULT0_ERROR_STEPS_MASK GENMASK(4, 0) +#define CPR3_RESULT0_ERROR_STEPS_SHIFT 3 +#define CPR3_RESULT0_ERROR_MASK GENMASK(11, 0) +#define CPR3_RESULT0_ERROR_SHIFT 8 + +#define CPR3_REG_RESULT1(thread) (0x814 + 0x440 * (thread)) +#define CPR3_RESULT1_QUOT_MIN_MASK GENMASK(11, 0) +#define CPR3_RESULT1_QUOT_MIN_SHIFT 0 +#define CPR3_RESULT1_QUOT_MAX_MASK GENMASK(11, 0) +#define CPR3_RESULT1_QUOT_MAX_SHIFT 12 +#define CPR3_RESULT1_RO_MIN_MASK GENMASK(3, 0) +#define CPR3_RESULT1_RO_MIN_SHIFT 24 +#define CPR3_RESULT1_RO_MAX_MASK GENMASK(3, 0) +#define CPR3_RESULT1_RO_MAX_SHIFT 28 + +#define CPR3_REG_RESULT2(thread) (0x818 + 0x440 * (thread)) +#define CPR3_RESULT2_STEP_QUOT_MIN_MASK GENMASK(5, 0) +#define CPR3_RESULT2_STEP_QUOT_MIN_SHIFT 0 +#define CPR3_RESULT2_STEP_QUOT_MAX_MASK GENMASK(5, 0) +#define CPR3_RESULT2_STEP_QUOT_MAX_SHIFT 6 +#define CPR3_RESULT2_SENSOR_MIN_MASK GENMASK(7, 0) +#define CPR3_RESULT2_SENSOR_MIN_SHIFT 16 +#define CPR3_RESULT2_SENSOR_MAX_MASK GENMASK(7, 0) +#define CPR3_RESULT2_SENSOR_MAX_SHIFT 24 + +#define CPR3_REG_IRQ_EN 0x81C +#define CPR3_REG_IRQ_CLEAR 0x820 +#define CPR3_REG_IRQ_STATUS 0x824 +#define CPR3_IRQ_UP BIT(3) +#define CPR3_IRQ_MID BIT(2) +#define CPR3_IRQ_DOWN BIT(1) +#define CPR3_IRQ_ALL (CPR3_IRQ_UP | CPR3_IRQ_MID | CPR3_IRQ_DOWN) + +#define CPR3_REG_TARGET_QUOT(thread, ro) (0x840 + 0x440 * (thread) + 0x4 * (ro)) + +/* Registers found only on controllers that support HW closed-loop. 
*/ +#define CPR3_REG_PD_THROTTLE 0xE8 + +#define CPR3_REG_HW_CLOSED_LOOP_DISABLED 0x3000 +#define CPR3_REG_CPR_TIMER_MID_CONT 0x3004 +#define CPR3_REG_CPR_TIMER_UP_DN_CONT 0x3008 + +/* CPR4 controller specific registers and bit definitions */ +#define CPR4_REG_CPR_TIMER_CLAMP 0x10 +#define CPR4_CPR_TIMER_CLAMP_THREAD_AGGREGATION_EN BIT(27) + +#define CPR4_REG_MISC 0x700 +#define CPR4_MISC_RESET_STEP_QUOT_LOOP_EN BIT(2) +#define CPR4_MISC_THREAD_HAS_ALWAYS_VOTE_EN BIT(3) + +#define CPR4_REG_SAW_ERROR_STEP_LIMIT 0x7A4 +#define CPR4_SAW_ERROR_STEP_LIMIT_UP_MASK GENMASK(4, 0) +#define CPR4_SAW_ERROR_STEP_LIMIT_UP_SHIFT 0 +#define CPR4_SAW_ERROR_STEP_LIMIT_DN_MASK GENMASK(4, 0) +#define CPR4_SAW_ERROR_STEP_LIMIT_DN_SHIFT 5 + +#define CPR4_REG_MARGIN_TEMP_CORE_TIMERS 0x7A8 +#define CPR4_MARGIN_TEMP_CORE_TIMERS_SETTLE_VOLTAGE_COUNT_MASK GENMASK(10, 0) +#define CPR4_MARGIN_TEMP_CORE_TIMERS_SETTLE_VOLTAGE_COUNT_SHFT 18 + +#define CPR4_REG_MARGIN_ADJ_CTL 0x7F8 +#define CPR4_MARGIN_ADJ_HW_CLOSED_LOOP_EN BIT(4) +#define CPR4_MARGIN_ADJ_PER_RO_KV_MARGIN_EN BIT(7) +#define CPR4_MARGIN_ADJ_PMIC_STEP_SIZE_MASK GENMASK(4, 0) +#define CPR4_MARGIN_ADJ_PMIC_STEP_SIZE_SHIFT 12 + +#define CPR4_REG_CPR_MASK_THREAD(thread) (0x80C + 0x440 * (thread)) +#define CPR4_CPR_MASK_THREAD_DISABLE_THREAD BIT(31) +#define CPR4_CPR_MASK_THREAD_RO_MASK4THREAD_MASK GENMASK(15, 0) + +/* CPRh controller specific registers and bit definitions */ +#define __CPRH_REG_CORNER(rbase, tbase, tid, cnum) (rbase + (tbase * tid) + \ + (0x4 * cnum)) +#define CPRH_REG_CORNER(d, t, c) __CPRH_REG_CORNER(d->reg_corner, \ + d->reg_corner_tid, \ + t, c); + +#define CPRH_CTL_OSM_ENABLED BIT(0) +#define CPRH_CTL_BASE_VOLTAGE_MASK GENMASK(10, 1) +#define CPRH_CTL_BASE_VOLTAGE_SHIFT 1 +#define CPRH_CTL_MODE_SWITCH_DELAY_MASK GENMASK(24, 17) +#define CPRH_CTL_MODE_SWITCH_DELAY_SHIFT 17 +#define CPRH_CTL_VOLTAGE_MULTIPLIER_MASK GENMASK(28, 25) +#define CPRH_CTL_VOLTAGE_MULTIPLIER_SHIFT 25 + +#define CPRH_CORNER_INIT_VOLTAGE_MASK GENMASK(7, 0) +#define CPRH_CORNER_INIT_VOLTAGE_SHIFT 0 +#define CPRH_CORNER_FLOOR_VOLTAGE_MASK GENMASK(15, 8) +#define CPRH_CORNER_FLOOR_VOLTAGE_SHIFT 8 +#define CPRH_CORNER_QUOT_DELTA_MASK GENMASK(24, 16) +#define CPRH_CORNER_QUOT_DELTA_SHIFT 16 +#define CPRH_CORNER_RO_SEL_MASK GENMASK(28, 25) +#define CPRH_CORNER_RO_SEL_SHIFT 25 +#define CPRH_CORNER_CPR_CL_DISABLE BIT(29) + +#define CPRH_CORNER_INIT_VOLTAGE_MAX_VALUE 255 +#define CPRH_CORNER_FLOOR_VOLTAGE_MAX_VALUE 255 +#define CPRH_CORNER_QUOT_DELTA_MAX_VALUE 511 + +enum cpr_type { + CTRL_TYPE_CPR3, + CTRL_TYPE_CPR4, + CTRL_TYPE_CPRH, + CTRL_TYPE_MAX, +}; + +/* + * struct cpr_thread_desc - CPR Thread-specific parameters + * + * @controller_id: Identifier of the CPR controller expected by the HW + * @ro_scaling_factor: Scaling factor for each ring oscillator entry + * @hw_tid: Identifier of the CPR thread expected by the HW + * @init_voltage_step: Voltage in uV for each step read from the fuse array + * @init_voltage_width: Bitwidth of the voltage read from the fuse array + * @sensor_range_start: First sensor ID used by a thread + * @sensor_range_end: Last sensor ID used by a thread + * @num_fuse_corners: Number of valid entries in fuse_corner_data + * @fuse_corner_data: Parameters for calculation of each fuse corner + */ +struct cpr_thread_desc { + int controller_id; + int hw_tid; + const int *ro_scaling_factor; + int init_voltage_step; + int init_voltage_width; + int sensor_range_start; + int sensor_range_end; + unsigned int num_fuse_corners; + struct fuse_corner_data 
*fuse_corner_data; +}; + +/* + * struct cpr_desc - Driver instance-wide CPR parameters + * + * @cpr_type: Type (base version) of the CPR controller + * @num_threads: Max. number of threads supported by this controller + * @timer_delay_us: Loop delay time in uS + * @timer_updn_delay_us: Voltage after-up/before-down delay time in uS + * @timer_cons_up: Wait between consecutive up requests in uS + * @timer_cons_down: Wait between consecutive down requests in uS + * @up_threshold: Generic corner up threshold + * @down_threshold: Generic corner down threshold + * @idle_clocks: CPR Sensor: idle timer in cpr clocks unit + * @count_mode: CPR Sensor: counting mode + * @count_repeat: CPR Sensor: number of times to repeat reading + * @step_quot_init_min: Minimum achievable step quotient + * @step_quot_init_max: Maximum achievable step quotient + * @gcnt_us: CPR measurement interval in uS + * @vreg_step_fixed: Regulator voltage per step (if vreg unusable) + * @vreg_step_up_limit: Num. of steps up at once before re-measuring sensors + * @vreg_step_down_limit: Num. of steps dn at once before re-measuring sensors + * @vdd_settle_time_us: Settling timer to account for one VDD supply step + * @corner_settle_time_us: Settle time for corner switch request + * @apm_threshold: Array Power Mux (APM) voltage threshold + * @apm_hysteresis: Hysteresis for APM V-threshold related calculations + * @cpr_base_voltage: Safety: Absolute minimum voltage (uV) on this CPR + * @cpr_max_voltage: Safety: Absolute maximum voltage (uV) on this CPR + * @pd_throttle_val: CPR Power Domain throttle during voltage switch + * @threads: Array containing "CPR Thread" specific parameters + * @reduce_to_fuse_uV: Reduce corner max volts (if higher) to fuse ceiling + * @reduce_to_corner_uV: Reduce corner max volts (if higher) to corner ceil. 
+ * @hw_closed_loop_en: Enable CPR HW Closed-Loop voltage auto-adjustment + */ +struct cpr_desc { + enum cpr_type cpr_type; + unsigned int num_threads; + unsigned int timer_delay_us; + unsigned int timer_updn_delay_us; + unsigned int timer_cons_up; + unsigned int timer_cons_down; + unsigned int up_threshold; + unsigned int down_threshold; + unsigned int idle_clocks; + unsigned int count_mode; + unsigned int count_repeat; + unsigned int step_quot_init_min; + unsigned int step_quot_init_max; + unsigned int gcnt_us; + unsigned int vreg_step_fixed; + unsigned int vreg_step_up_limit; + unsigned int vreg_step_down_limit; + unsigned int vdd_settle_time_us; + unsigned int corner_settle_time_us; + int apm_threshold; + int apm_hysteresis; + u32 cpr_base_voltage; + u32 cpr_max_voltage; + u32 pd_throttle_val; + + const struct cpr_thread_desc **threads; + bool reduce_to_fuse_uV; + bool reduce_to_corner_uV; + bool hw_closed_loop_en; +}; + +struct cpr_drv; +struct cpr_thread { + int num_corners; + int id; + bool enabled; + void __iomem *base; + struct clk *cpu_clk; + struct corner *corner; + struct corner *corners; + struct fuse_corner *fuse_corners; + struct cpr_drv *drv; + struct generic_pm_domain pd; + struct device *attached_cpu_dev; + struct work_struct restart_work; + bool restarting; + + const struct cpr_fuse *cpr_fuses; + const struct cpr_thread_desc *desc; +}; + +struct cpr_drv { + int num_threads; + int irq; + unsigned int ref_clk_khz; + struct device *dev; + struct mutex lock; + struct regulator *vreg; + struct regmap *tcsr; + u32 gcnt; + u32 speed_bin; + u32 fusing_rev; + u32 last_uV; + u32 cpr_hw_rev; + u32 reg_corner; + u32 reg_corner_tid; + u32 reg_ctl; + u32 reg_status; + int fuse_level_set; + int extra_corners; + unsigned int vreg_step; + bool enabled; + + struct cpr_thread *threads; + struct genpd_onecell_data cell_data; + + const struct cpr_desc *desc; + const struct acc_desc *acc_desc; + struct dentry *debugfs; +}; + +static void cpr_write(struct cpr_thread *thread, u32 offset, u32 value) +{ + writel_relaxed(value, thread->base + offset); +} + +static u32 cpr_read(struct cpr_thread *thread, u32 offset) +{ + return readl_relaxed(thread->base + offset); +} + +static void +cpr_masked_write(struct cpr_thread *thread, u32 offset, u32 mask, u32 value) +{ + u32 val; + + val = readl_relaxed(thread->base + offset); + val &= ~mask; + val |= value & mask; + writel_relaxed(val, thread->base + offset); +} + +static void cpr_irq_clr(struct cpr_thread *thread) +{ + cpr_write(thread, CPR3_REG_IRQ_CLEAR, CPR3_IRQ_ALL); +} + +static void cpr_irq_clr_nack(struct cpr_thread *thread) +{ + cpr_irq_clr(thread); + cpr_write(thread, CPR3_REG_CONT_CMD, CPR3_CONT_CMD_NACK); +} + +static void cpr_irq_clr_ack(struct cpr_thread *thread) +{ + cpr_irq_clr(thread); + cpr_write(thread, CPR3_REG_CONT_CMD, CPR3_CONT_CMD_ACK); +} + +static void cpr_irq_set(struct cpr_thread *thread, u32 int_bits) +{ + /* On CPR-hardened, interrupts are managed by and on firmware */ + if (thread->drv->desc->cpr_type == CTRL_TYPE_CPRH) + return; + + cpr_write(thread, CPR3_REG_IRQ_EN, int_bits); +} + +/** + * cpr_ctl_enable - Enable CPR thread + * @thread: Structure holding CPR thread-specific parameters + */ +static void cpr_ctl_enable(struct cpr_thread *thread) +{ + if (thread->drv->enabled && !thread->restarting) + cpr_masked_write(thread, CPR3_REG_CPR_CTL, + CPR3_CPR_CTL_LOOP_EN_MASK, + CPR3_CPR_CTL_LOOP_EN_MASK); +} + +/** + * cpr_ctl_disable - Disable CPR thread + * @thread: Structure holding CPR thread-specific parameters + */ 
+static void cpr_ctl_disable(struct cpr_thread *thread) +{ + const struct cpr_desc *desc = thread->drv->desc; + + if (desc->cpr_type != CTRL_TYPE_CPRH) { + cpr_irq_set(thread, 0); + cpr_irq_clr(thread); + } + + cpr_masked_write(thread, CPR3_REG_CPR_CTL, + CPR3_CPR_CTL_LOOP_EN_MASK, 0); +} + +/** + * cpr_ctl_is_enabled - Check if thread is enabled + * @thread: Structure holding CPR thread-specific parameters + * + * Returns: true if the CPR is enabled, false if it is disabled. + */ +static bool cpr_ctl_is_enabled(struct cpr_thread *thread) +{ + u32 reg_val; + + reg_val = cpr_read(thread, CPR3_REG_CPR_CTL); + return reg_val & CPR3_CPR_CTL_LOOP_EN_MASK; +} + +/** + * cpr_check_any_thread_busy - Check if HW is done processing + * @thread: Structure holding CPR thread-specific parameters + * + * Returns: true if the CPR is busy, false if it is ready. + */ +static bool cpr_check_any_thread_busy(struct cpr_thread *thread) +{ + int i; + + for (i = 0; i < thread->drv->num_threads; i++) + if (cpr_read(thread, CPR3_REG_RESULT0(i)) & + CPR3_RESULT0_BUSY_MASK) + return true; + + return false; +} + +static void cpr_restart_worker(struct work_struct *work) +{ + struct cpr_thread *thread = container_of(work, struct cpr_thread, + restart_work); + struct cpr_drv *drv = thread->drv; + int i; + + mutex_lock(&drv->lock); + + thread->restarting = true; + cpr_ctl_disable(thread); + disable_irq(drv->irq); + + mutex_unlock(&drv->lock); + + for (i = 0; i < 20; i++) { + u32 cpr_status = cpr_read(thread, CPR3_REG_CPR_STATUS); + u32 ctl = cpr_read(thread, CPR3_REG_CPR_CTL); + + if ((cpr_status & CPR3_CPR_STATUS_BUSY_MASK) && + !(ctl & CPR3_CPR_CTL_LOOP_EN_MASK)) + break; + + udelay(10); + } + + cpr_irq_clr(thread); + + for (i = 0; i < 20; i++) { + u32 status = cpr_read(thread, CPR3_REG_IRQ_STATUS); + + if (!(status & CPR3_IRQ_ALL)) + break; + udelay(10); + } + + mutex_lock(&drv->lock); + + thread->restarting = false; + enable_irq(drv->irq); + cpr_ctl_enable(thread); + + mutex_unlock(&drv->lock); +} + +/** + * cpr_corner_restore - Restore saved corner level + * @thread: Structure holding CPR thread-specific parameters + * @corner: Structure holding the saved corner level + */ +static void cpr_corner_restore(struct cpr_thread *thread, + struct corner *corner) +{ + struct cpr_drv *drv = thread->drv; + struct fuse_corner *fuse = corner->fuse_corner; + const struct cpr_thread_desc *tdesc = thread->desc; + u32 ro_sel = fuse->ring_osc_idx; + + cpr_write(thread, CPR3_REG_GCNT(ro_sel), drv->gcnt); + + cpr_write(thread, CPR3_REG_RO_MASK(tdesc->hw_tid), + CPR3_RO_MASK & ~BIT(ro_sel)); + + cpr_write(thread, CPR3_REG_TARGET_QUOT(tdesc->hw_tid, ro_sel), + fuse->quot - corner->quot_adjust); + + if (drv->desc->cpr_type == CTRL_TYPE_CPR4) { + cpr_masked_write(thread, + CPR4_REG_CPR_MASK_THREAD(tdesc->hw_tid), + CPR4_CPR_MASK_THREAD_DISABLE_THREAD | + CPR4_CPR_MASK_THREAD_RO_MASK4THREAD_MASK, 0); + } + + thread->corner = corner; + corner->last_uV = corner->uV; +} + +/** + * cpr_set_acc - Set fuse level to the mem-acc + * @thread: Structure holding CPR thread-specific parameters + * @f: Fuse level + */ +static void cpr_set_acc(struct cpr_drv *drv, int f) +{ + const struct acc_desc *desc = drv->acc_desc; + struct reg_sequence *s = desc->settings; + int n = desc->num_regs_per_fuse; + + if (!s || f == drv->fuse_level_set) + return; + + regmap_multi_reg_write(drv->tcsr, s + (n * f), n); + drv->fuse_level_set = f; +} + +/** + * cpr_post_voltage - Actions to execute before setting voltage + * @thread: Structure holding CPR thread-specific 
parameters + * @dir: Enumeration for voltage change direction + * @fuse_level: Fuse corner for mem-acc, if supported. + * + * Returns: Zero for success or negative number on errors. + */ +static int cpr_pre_voltage(struct cpr_thread *thread, + enum voltage_change_dir dir, + int fuse_level) +{ + struct cpr_drv *drv = thread->drv; + + if (drv->desc->cpr_type == CTRL_TYPE_CPR3 && + drv->desc->pd_throttle_val) + cpr_write(thread, CPR3_REG_PD_THROTTLE, + drv->desc->pd_throttle_val); + + if (drv->tcsr && dir == DOWN) + cpr_set_acc(drv, fuse_level); + + return 0; +} + +/** + * cpr_post_voltage - Actions to execute after setting voltage + * @thread: Structure holding CPR thread-specific parameters + * @dir: Enumeration for voltage change direction + * @fuse_level: Fuse corner for mem-acc, if supported. + * + * Returns: Zero for success or negative number on errors. + */ +static int cpr_post_voltage(struct cpr_thread *thread, + enum voltage_change_dir dir, + int fuse_level) +{ + struct cpr_drv *drv = thread->drv; + + if (drv->tcsr && dir == UP) + cpr_set_acc(drv, fuse_level); + + if (drv->desc->cpr_type == CTRL_TYPE_CPR3) + cpr_write(thread, CPR3_REG_PD_THROTTLE, 0); + + return 0; +} + +/** + * cpr_commit_state - Set the newly requested voltage + * @thread: Structure holding CPR thread-specific parameters + * + * Returns: IRQ_SUCCESS for success, IRQ_NONE if the CPR is disabled. + */ +static int cpr_commit_state(struct cpr_thread *thread) +{ + struct cpr_drv *drv = thread->drv; + int min_uV = 0, max_uV = 0, new_uV = 0, fuse_level = 0; + enum voltage_change_dir dir; + u32 next_irqmask = 0; + int ret, i; + + /* On CPRhardened, control states are managed in firmware */ + if (drv->desc->cpr_type == CTRL_TYPE_CPRH) + return 0; + + for (i = 0; i < drv->num_threads; i++) { + struct cpr_thread *thread = &drv->threads[i]; + + if (!thread->corner) + continue; + + fuse_level = max(fuse_level, + (int) (thread->corner->fuse_corner - + &thread->fuse_corners[0])); + + max_uV = max(max_uV, thread->corner->max_uV); + min_uV = max(min_uV, thread->corner->min_uV); + new_uV = max(new_uV, thread->corner->last_uV); + } + dev_vdbg(drv->dev, "%s: new uV: %d, last uV: %d\n", + __func__, new_uV, drv->last_uV); + + /* + * Safety measure: if the voltage is out of the globally allowed + * range, then go out and warn the user. + * This should *never* happen. + */ + if (new_uV > drv->desc->cpr_max_voltage || + new_uV < drv->desc->cpr_base_voltage) { + dev_warn(drv->dev, "Voltage (%u uV) out of range.", new_uV); + return -EINVAL; + } + + if (new_uV == drv->last_uV || fuse_level == drv->fuse_level_set) + goto out; + + if (fuse_level > drv->fuse_level_set) + dir = UP; + else + dir = DOWN; + + ret = cpr_pre_voltage(thread, fuse_level, dir); + if (ret) + return ret; + + dev_vdbg(drv->dev, "setting voltage: %d\n", new_uV); + + ret = regulator_set_voltage(drv->vreg, new_uV, new_uV); + if (ret) { + dev_err_ratelimited(drv->dev, "failed to set voltage %d: %d\n", + new_uV, ret); + return ret; + } + + ret = cpr_post_voltage(thread, fuse_level, dir); + if (ret) + return ret; + + drv->last_uV = new_uV; +out: + if (new_uV > min_uV) + next_irqmask |= CPR3_IRQ_DOWN; + if (new_uV < max_uV) + next_irqmask |= CPR3_IRQ_UP; + + cpr_irq_set(thread, next_irqmask); + + return 0; +} + +static unsigned int cpr_get_cur_perf_state(struct cpr_thread *thread) +{ + return thread->corner ? 
thread->corner - thread->corners + 1 : 0; +} + +/** + * cpr_scale - Calculate new voltage for the received direction + * @thread: Structure holding CPR thread-specific parameters + * @dir: Enumeration for voltage change direction + * + * The CPR scales one by one: this function calculates the new + * voltage to set when a voltage-UP or voltage-DOWN request comes + * and stores it into the per-thread structure that gets passed. + */ +static void cpr_scale(struct cpr_thread *thread, enum voltage_change_dir dir) +{ + struct cpr_drv *drv = thread->drv; + const struct cpr_thread_desc *tdesc = thread->desc; + u32 val, error_steps; + int last_uV, new_uV; + struct corner *corner; + + if (dir != UP && dir != DOWN) + return; + + corner = thread->corner; + val = cpr_read(thread, CPR3_REG_RESULT0(tdesc->hw_tid)); + error_steps = val >> CPR3_RESULT0_ERROR_STEPS_SHIFT; + error_steps &= CPR3_RESULT0_ERROR_STEPS_MASK; + + last_uV = corner->last_uV; + + if (dir == UP) { + if (!(val & CPR3_RESULT0_STEP_UP_MASK)) + return; + + /* Calculate new voltage */ + new_uV = last_uV + drv->vreg_step; + new_uV = min(new_uV, corner->max_uV); + + dev_vdbg(drv->dev, + "[T%u] UP - new_uV=%d last_uV=%d p-state=%u st=%u\n", + thread->id, new_uV, last_uV, + cpr_get_cur_perf_state(thread), error_steps); + } else { + if (!(val & CPR3_RESULT0_STEP_DN_MASK)) + return; + + /* Calculate new voltage */ + new_uV = last_uV - drv->vreg_step; + new_uV = max(new_uV, corner->min_uV); + dev_vdbg(drv->dev, + "[T%u] DOWN - new_uV=%d last_uV=%d p-state=%u st=%u\n", + thread->id, new_uV, last_uV, + cpr_get_cur_perf_state(thread), error_steps); + } + corner->last_uV = new_uV; +} + +/** + * cpr_irq_handler - Handle CPR3/CPR4 status interrupts + * @irq: Number of the interrupt + * @dev: Pointer to the cpr_thread structure + * + * Handle the interrupts coming from non-hardened CPR HW as to get + * an ok to scale voltages immediately, or to pass error status to + * the hardware (either success/ACK or failure/NACK). + * + * Returns: IRQ_SUCCESS for success, IRQ_NONE if the CPR is disabled. 
+ */ +static irqreturn_t cpr_irq_handler(int irq, void *dev) +{ + struct cpr_thread *thread = dev; + struct cpr_drv *drv = thread->drv; + irqreturn_t ret = IRQ_HANDLED; + int i, rc; + enum voltage_change_dir dir = NO_CHANGE; + u32 val; + + mutex_lock(&drv->lock); + + val = cpr_read(thread, CPR3_REG_IRQ_STATUS); + + dev_vdbg(drv->dev, "IRQ_STATUS = %#02x\n", val); + + if (!cpr_ctl_is_enabled(thread)) { + dev_vdbg(drv->dev, "CPR is disabled\n"); + ret = IRQ_NONE; + } else if (cpr_check_any_thread_busy(thread)) { + cpr_irq_clr_nack(thread); + dev_dbg(drv->dev, "CPR measurement is not ready\n"); + } else { + /* + * Following sequence of handling is as per each IRQ's + * priority + */ + if (val & CPR3_IRQ_UP) + dir = UP; + else if (val & CPR3_IRQ_DOWN) + dir = DOWN; + + if (dir != NO_CHANGE) { + for (i = 0; i < drv->num_threads; i++) { + thread = &drv->threads[i]; + cpr_scale(thread, dir); + } + + rc = cpr_commit_state(thread); + if (rc) + cpr_irq_clr_nack(thread); + else + cpr_irq_clr_ack(thread); + } else if (val & CPR3_IRQ_MID) { + dev_dbg(drv->dev, "IRQ occurred for Mid Flag\n"); + } else { + dev_warn(drv->dev, + "IRQ occurred for unknown flag (%#08x)\n", + val); + schedule_work(&thread->restart_work); + } + } + + mutex_unlock(&drv->lock); + + return ret; +} + +static int cpr_switch(struct cpr_drv *drv) +{ + int i, ret; + bool enabled = false; + + if (drv->desc->cpr_type == CTRL_TYPE_CPRH) + return 0; + + for (i = 0; i < drv->num_threads && !enabled; i++) + enabled = drv->threads[i].enabled; + + dev_vdbg(drv->dev, "%s: enabled = %d\n", __func__, enabled); + + if (enabled == drv->enabled) + return 0; + + if (enabled) { + ret = regulator_enable(drv->vreg); + if (ret) + return ret; + + drv->enabled = enabled; + + for (i = 0; i < drv->num_threads; i++) + if (drv->threads[i].corner) + break; + + if (i < drv->num_threads) { + cpr_irq_clr(&drv->threads[i]); + + cpr_commit_state(&drv->threads[i]); + cpr_ctl_enable(&drv->threads[i]); + } + } else { + for (i = 0; i < drv->num_threads && !enabled; i++) + cpr_ctl_disable(&drv->threads[i]); + + drv->enabled = enabled; + + ret = regulator_disable(drv->vreg); + if (ret < 0) + return ret; + } + + return 0; +} + +/** + * cpr_enable - Enables a CPR thread + * @thread: Structure holding CPR thread-specific parameters + * + * Returns: Zero for success or negative number on errors. + */ +static int cpr_enable(struct cpr_thread *thread) +{ + struct cpr_drv *drv = thread->drv; + int ret; + + dev_dbg(drv->dev, "Enabling thread %d\n", thread->id); + + mutex_lock(&drv->lock); + + thread->enabled = true; + ret = cpr_switch(thread->drv); + + mutex_unlock(&drv->lock); + + return ret; +} + +/** + * cpr_disable - Disables a CPR thread + * @thread: Structure holding CPR thread-specific parameters + * + * Returns: Zero for success or negative number on errors. + */ +static int cpr_disable(struct cpr_thread *thread) +{ + struct cpr_drv *drv = thread->drv; + int ret; + + dev_dbg(drv->dev, "Disabling thread %d\n", thread->id); + + mutex_lock(&drv->lock); + + thread->enabled = false; + ret = cpr_switch(thread->drv); + + mutex_unlock(&drv->lock); + + return ret; +} + +/** + * cpr_configure - Configure main HW parameters + * @thread: Structure holding CPR thread-specific parameters + * + * This function configures the main CPR hardware parameters, such as + * internal timers (and delays), sensor ownerships, activates and/or + * deactivates cpr-threads and others, as one sequence for all of the + * versions supported in this driver. 
By design, the function may + * return a success earlier if the sequence for "a previous version" + * has ended. + * + * NOTE: The CPR must be clocked before calling this function! + * + * Returns: Zero for success or negative number on errors. + */ +static int cpr_configure(struct cpr_thread *thread) +{ + struct cpr_drv *drv = thread->drv; + const struct cpr_desc *desc = drv->desc; + const struct cpr_thread_desc *tdesc = thread->desc; + int i; + u32 val; + + /* Disable interrupt and CPR */ + cpr_irq_set(thread, 0); + cpr_write(thread, CPR3_REG_CPR_CTL, 0); + + /* Init and save gcnt */ + drv->gcnt = drv->ref_clk_khz * desc->gcnt_us; + do_div(drv->gcnt, 1000); + + /* Program the delay count for the timer */ + val = drv->ref_clk_khz * desc->timer_delay_us; + do_div(val, 1000); + if (desc->cpr_type == CTRL_TYPE_CPR3) { + cpr_write(thread, CPR3_REG_CPR_TIMER_MID_CONT, val); + + val = drv->ref_clk_khz * desc->timer_updn_delay_us; + do_div(val, 1000); + cpr_write(thread, CPR3_REG_CPR_TIMER_UP_DN_CONT, val); + } else { + cpr_write(thread, CPR3_REG_CPR_TIMER_AUTO_CONT, val); + } + dev_dbg(drv->dev, "Timer count: %#0x (for %d us)\n", val, + desc->timer_delay_us); + + /* Program the control register */ + val = desc->idle_clocks << CPR3_CPR_CTL_IDLE_CLOCKS_SHIFT + | desc->count_mode << CPR3_CPR_CTL_COUNT_MODE_SHIFT + | desc->count_repeat << CPR3_CPR_CTL_COUNT_REPEAT_SHIFT; + cpr_write(thread, CPR3_REG_CPR_CTL, val); + + /* Configure CPR default step quotients */ + val = desc->step_quot_init_min << CPR3_CPR_STEP_QUOT_MIN_SHIFT + | desc->step_quot_init_max << CPR3_CPR_STEP_QUOT_MAX_SHIFT; + cpr_write(thread, CPR3_REG_CPR_STEP_QUOT, val); + + /* + * Configure the CPR sensor ownership always on thread 0 + * TODO: SDM845 has different ownership for sensors!! + */ + for (i = tdesc->sensor_range_start; i < tdesc->sensor_range_end; i++) + cpr_write(thread, CPR3_REG_SENSOR_OWNER(i), 0); + + /* Program Consecutive Up & Down */ + val = desc->timer_cons_up << CPR3_THRESH_CONS_UP_SHIFT; + val |= desc->timer_cons_down << CPR3_THRESH_CONS_DOWN_SHIFT; + val |= desc->up_threshold << CPR3_THRESH_UP_THRESH_SHIFT; + val |= desc->down_threshold << CPR3_THRESH_DOWN_THRESH_SHIFT; + cpr_write(thread, CPR3_REG_THRESH(tdesc->hw_tid), val); + + /* Mask all ring oscillators for all threads initially */ + cpr_write(thread, CPR3_REG_RO_MASK(tdesc->hw_tid), CPR3_RO_MASK); + + /* HW Closed-loop control */ + if (desc->cpr_type == CTRL_TYPE_CPR3) + cpr_write(thread, CPR3_REG_HW_CLOSED_LOOP_DISABLED, + !desc->hw_closed_loop_en); + else + cpr_masked_write(thread, CPR4_REG_MARGIN_ADJ_CTL, + CPR4_MARGIN_ADJ_HW_CLOSED_LOOP_EN, + desc->hw_closed_loop_en ? 
+ CPR4_MARGIN_ADJ_HW_CLOSED_LOOP_EN : 0); + + /* Additional configuration for CPR4 and beyond */ + if (desc->cpr_type < CTRL_TYPE_CPR4) + return 0; + + /* Disable threads initially only on non-hardened CPR4 */ + if (desc->cpr_type == CTRL_TYPE_CPR4) + cpr_masked_write(thread, CPR4_REG_CPR_MASK_THREAD(1), + CPR4_CPR_MASK_THREAD_DISABLE_THREAD | + CPR4_CPR_MASK_THREAD_RO_MASK4THREAD_MASK, + CPR4_CPR_MASK_THREAD_DISABLE_THREAD | + CPR4_CPR_MASK_THREAD_RO_MASK4THREAD_MASK); + + if (drv->num_threads == 2) + cpr_masked_write(thread, CPR4_REG_MISC, + CPR4_MISC_RESET_STEP_QUOT_LOOP_EN | + CPR4_MISC_THREAD_HAS_ALWAYS_VOTE_EN, + CPR4_MISC_RESET_STEP_QUOT_LOOP_EN | + CPR4_MISC_THREAD_HAS_ALWAYS_VOTE_EN); + + val = drv->vreg_step; + do_div(val, 1000); + cpr_masked_write(thread, CPR4_REG_MARGIN_ADJ_CTL, + CPR4_MARGIN_ADJ_PMIC_STEP_SIZE_MASK, + val << CPR4_MARGIN_ADJ_PMIC_STEP_SIZE_SHIFT); + + cpr_masked_write(thread, CPR4_REG_SAW_ERROR_STEP_LIMIT, + CPR4_SAW_ERROR_STEP_LIMIT_DN_MASK, + desc->vreg_step_down_limit << + CPR4_SAW_ERROR_STEP_LIMIT_DN_SHIFT); + + cpr_masked_write(thread, CPR4_REG_SAW_ERROR_STEP_LIMIT, + CPR4_SAW_ERROR_STEP_LIMIT_UP_MASK, + desc->vreg_step_up_limit << + CPR4_SAW_ERROR_STEP_LIMIT_UP_SHIFT); + + cpr_masked_write(thread, CPR4_REG_MARGIN_ADJ_CTL, + CPR4_MARGIN_ADJ_PER_RO_KV_MARGIN_EN, + CPR4_MARGIN_ADJ_PER_RO_KV_MARGIN_EN); + + if (drv->num_threads > 1) + cpr_masked_write(thread, CPR4_REG_CPR_TIMER_CLAMP, + CPR4_CPR_TIMER_CLAMP_THREAD_AGGREGATION_EN, + CPR4_CPR_TIMER_CLAMP_THREAD_AGGREGATION_EN); + + /* Settling timer to account for one VDD supply step */ + if (desc->vdd_settle_time_us > 0) { + u32 m = CPR4_MARGIN_TEMP_CORE_TIMERS_SETTLE_VOLTAGE_COUNT_MASK; + u32 s = CPR4_MARGIN_TEMP_CORE_TIMERS_SETTLE_VOLTAGE_COUNT_SHFT; + + cpr_masked_write(thread, CPR4_REG_MARGIN_TEMP_CORE_TIMERS, + m, desc->vdd_settle_time_us << s); + } + + /* Additional configuration for CPR-hardened */ + if (desc->cpr_type < CTRL_TYPE_CPRH) + return 0; + + /* Settling timer to account for one corner-switch request */ + if (desc->corner_settle_time_us > 0) + cpr_masked_write(thread, drv->reg_ctl, + CPRH_CTL_MODE_SWITCH_DELAY_MASK, + desc->corner_settle_time_us << + CPRH_CTL_MODE_SWITCH_DELAY_SHIFT); + + /* Base voltage and multiplier values for CPRh internal calculations */ + cpr_masked_write(thread, drv->reg_ctl, + CPRH_CTL_BASE_VOLTAGE_MASK, + (DIV_ROUND_UP(desc->cpr_base_voltage, + drv->vreg_step) << + CPRH_CTL_BASE_VOLTAGE_SHIFT)); + + cpr_masked_write(thread, drv->reg_ctl, + CPRH_CTL_VOLTAGE_MULTIPLIER_MASK, + DIV_ROUND_UP(drv->vreg_step, 1000) << + CPRH_CTL_VOLTAGE_MULTIPLIER_SHIFT); + + return 0; +} + +static int cpr_set_performance_state(struct generic_pm_domain *domain, + unsigned int state) +{ + struct cpr_thread *thread = container_of(domain, struct cpr_thread, pd); + struct cpr_drv *drv = thread->drv; + struct corner *corner, *end; + int ret = 0; + + /* On CPRhardened, performance states are managed in firmware */ + if (drv->desc->cpr_type == CTRL_TYPE_CPRH) + return 0; + + mutex_lock(&drv->lock); + + dev_dbg(drv->dev, + "setting perf state: %u (prev state: %u thread: %u)\n", + state, cpr_get_cur_perf_state(thread), thread->id); + + /* + * Determine new corner we're going to. + * Remove one since lowest performance state is 1. 
+ */ + corner = thread->corners + state - 1; + end = &thread->corners[thread->num_corners - 1]; + if (corner > end || corner < thread->corners) { + ret = -EINVAL; + goto unlock; + } + + cpr_ctl_disable(thread); + + cpr_irq_clr(thread); + if (thread->corner != corner) + cpr_corner_restore(thread, corner); + + ret = cpr_commit_state(thread); + if (ret) + goto unlock; + + cpr_ctl_enable(thread); +unlock: + mutex_unlock(&drv->lock); + + dev_dbg(drv->dev, + "set perf state %u on thread %u\n", state, thread->id); + + return ret; +} + +/** + * cpr3_adjust_quot - Adjust the closed-loop quotients + * @thread: Structure holding CPR thread-specific parameters + * + * Calculates the quotient adjustment factor based on closed-loop + * quotients and ring oscillator factor. + * + * Returns: Adjusted quotient + */ +static int cpr3_adjust_quot(int ring_osc_factor, int volt_closed_loop) +{ + s64 temp; + + if (ring_osc_factor == 0 || volt_closed_loop == 0) + return 0; + + temp = (s64)(ring_osc_factor * volt_closed_loop); + + return (int)div_s64(temp, 1000000); +} + +/** + * cpr_fuse_corner_init - Calculate fuse corner table + * @thread: Structure holding CPR thread-specific parameters + * + * This function populates the fuse corners table by reading the + * values from the fuses, eventually adjusting them with a fixed + * per-corner offset and doing basic checks about them being + * supported by the regulator that is assigned to this CPR - if + * it is available (on CPR-Hardened, there is no usable vreg, as + * that is protected by the hypervisor). + * + * Returns: Zero for success, negative number on error + */ +static int cpr_fuse_corner_init(struct cpr_thread *thread) +{ + struct cpr_drv *drv = thread->drv; + const struct cpr_thread_desc *desc = thread->desc; + const struct cpr_fuse *cpr_fuse = thread->cpr_fuses; + struct fuse_corner_data *fdata; + struct fuse_corner *fuse, *end; + int i, ret; + + /* Populate fuse_corner members */ + fuse = thread->fuse_corners; + end = &fuse[desc->num_fuse_corners - 1]; + fdata = desc->fuse_corner_data; + + for (i = 0; fuse <= end; fuse++, cpr_fuse++, i++, fdata++) { + int ro_fac = desc->ro_scaling_factor[fuse->ring_osc_idx]; + + ret = cpr_populate_fuse_common(drv->dev, fdata, cpr_fuse, + fuse, drv->vreg_step, + desc->init_voltage_width, + desc->init_voltage_step); + if (ret) + return ret; + + /* + * Adjust the fuse quot with per-fuse-corner closed-loop + * voltage adjustment parameters. + */ + fuse->quot += cpr3_adjust_quot(ro_fac, + fdata->volt_cloop_adjust); + + /* CPRh: no regulator access... */ + if (drv->desc->cpr_type == CTRL_TYPE_CPRH) + goto skip_pvs_restrict; + + /* Re-check if corner voltage range is supported by regulator */ + ret = cpr_check_vreg_constraints(drv->dev, drv->vreg, fuse); + if (ret) + return ret; + +skip_pvs_restrict: + dev_dbg(drv->dev, + "fuse corner %d: [%d %d %d] RO%hhu quot %d\n", + i, fuse->min_uV, fuse->uV, fuse->max_uV, + fuse->ring_osc_idx, fuse->quot); + } + + return 0; +} + +/* + * cprh_corner_adjust_opps - Set voltage on each CPU OPP table entry + * + * On CPR-Hardened, the voltage level is controlled internally through + * the OSM hardware: in order to initialize the latter, we have to + * communicate the voltage to its driver, so that it will be able to + * write the right parameters (as they have to be set both on the CPRh + * and on the OSM) on it. + * This function is called only for CPRh. + * + * Return: Zero for success, negative number for error. 
+ */ +static int cprh_corner_adjust_opps(struct cpr_thread *thread) +{ + struct corner *corner = thread->corners; + struct cpr_drv *drv = thread->drv; + struct opp_table *tbl; + int i, ret; + + tbl = dev_pm_opp_get_opp_table(thread->attached_cpu_dev); + if (IS_ERR(tbl)) + return PTR_ERR(tbl); + + for (i = 0; i < thread->num_corners; i++) { + ret = dev_pm_opp_adjust_voltage(thread->attached_cpu_dev, + corner[i].freq, + corner[i].uV, + corner[i].min_uV, + corner[i].max_uV); + if (ret) + break; + + dev_dbg(drv->dev, + "OPP voltage adjusted for %lu kHz, %d uV\n", + corner[i].freq, corner[i].uV); + } + + dev_pm_opp_put_opp_table(tbl); + return ret; +} + +/** + * cpr3_corner_init - Calculate and set-up corners for the CPR HW + * @thread: Structure holding CPR thread-specific parameters + * + * This function calculates all the corner parameters by comparing + * and interpolating the values read from the various set-points + * read from the fuses (also called "fuse corners") to generate and + * program to the CPR a lookup table that describes each voltage + * step, mapped to a performance level (or corner number). + * + * It also programs other essential parameters on the CPR and - if + * we are dealing with CPR-Hardened, it will also enable the internal + * interface between the Operating State Manager (OSM) and the CPRh + * in order to achieve CPU DVFS. + * + * Returns: Zero for success, negative number on error + */ +static int cpr3_corner_init(struct cpr_thread *thread) +{ + struct cpr_drv *drv = thread->drv; + const struct cpr_desc *desc = drv->desc; + const struct cpr_thread_desc *tdesc = thread->desc; + const struct cpr_fuse *fuses = thread->cpr_fuses; + int i, ret, total_corners, level, scaling = 0; + unsigned int fnum, fc; + const char *quot_offset; + const struct fuse_corner_data *fdata; + struct fuse_corner *fuse, *prev_fuse; + struct corner *corner, *end; + struct corner_data *cdata; + struct dev_pm_opp *opp; + bool apply_scaling; + unsigned long freq; + u32 ring_osc_mask = CPR3_RO_MASK, min_quotient = U32_MAX; + + corner = thread->corners; + end = &corner[thread->num_corners - 1]; + + cdata = devm_kcalloc(drv->dev, + thread->num_corners + drv->extra_corners, + sizeof(struct corner_data), + GFP_KERNEL); + if (!cdata) + return -ENOMEM; + + for (level = 1; level <= thread->num_corners; level++) { + opp = dev_pm_opp_find_level_exact(&thread->pd.dev, level); + if (IS_ERR(opp)) + return -EINVAL; + + /* + * If there is only one specified qcom,opp-fuse-level, then + * it is assumed that this only one is global and valid for + * all IDs, so try to get the specific one but, on failure, + * go for the global one. 
+ */ + fc = cpr_get_fuse_corner(opp, thread->id); + if (fc == 0) { + fc = cpr_get_fuse_corner(opp, 0); + if (fc == 0) { + dev_err(drv->dev, + "qcom,opp-fuse-level is missing!\n"); + dev_pm_opp_put(opp); + return -EINVAL; + } + } + fnum = fc - 1; + + freq = cpr_get_opp_hz_for_req(opp, thread->attached_cpu_dev); + if (!freq) { + thread->num_corners = max(level - 1, 0); + end = &thread->corners[thread->num_corners - 1]; + break; + } + cdata[level - 1].fuse_corner = fnum; + cdata[level - 1].freq = freq; + + fuse = &thread->fuse_corners[fnum]; + dev_dbg(drv->dev, "freq: %lu level: %u fuse level: %u\n", + freq, dev_pm_opp_get_level(opp) - 1, fnum); + if (freq > fuse->max_freq) + fuse->max_freq = freq; + dev_pm_opp_put(opp); + } + + if (thread->num_corners < 2) { + dev_err(drv->dev, "need at least 2 OPPs to use CPR\n"); + return -EINVAL; + } + + /* + * Get the quotient adjustment scaling factor, according to: + * + * scaling = min(1000 * (QUOT(corner_N) - QUOT(corner_N-1)) + * / (freq(corner_N) - freq(corner_N-1)), max_factor) + * + * QUOT(corner_N): quotient read from fuse for fuse corner N + * QUOT(corner_N-1): quotient read from fuse for fuse corner (N - 1) + * freq(corner_N): max frequency in MHz supported by fuse corner N + * freq(corner_N-1): max frequency in MHz supported by fuse corner + * (N - 1) + * + * Then walk through the corners mapped to each fuse corner + * and calculate the quotient adjustment for each one using the + * following formula: + * + * quot_adjust = (freq_max - freq_corner) * scaling / 1000 + * + * freq_max: max frequency in MHz supported by the fuse corner + * freq_corner: frequency in MHz corresponding to the corner + * scaling: calculated from above equation + * + * + * + + + * | v | + * q | f c o | f c + * u | c l | c + * o | f t | f + * t | c a | c + * | c f g | c f + * | e | + * +--------------- +---------------- + * 0 1 2 3 4 5 6 0 1 2 3 4 5 6 + * corner corner + * + * c = corner + * f = fuse corner + * + */ + for (apply_scaling = false, i = 0; corner <= end; corner++, i++) { + unsigned long freq_diff_mhz; + + fnum = cdata[i].fuse_corner; + fdata = &tdesc->fuse_corner_data[fnum]; + quot_offset = fuses[fnum].quotient_offset; + fuse = &thread->fuse_corners[fnum]; + ring_osc_mask &= (u16)(~BIT(fuse->ring_osc_idx)); + if (fnum) + prev_fuse = &thread->fuse_corners[fnum - 1]; + else + prev_fuse = NULL; + + corner->fuse_corner = fuse; + corner->freq = cdata[i].freq; + corner->uV = fuse->uV; + + if (prev_fuse) { + scaling = cpr_calculate_scaling(quot_offset, drv->dev, + fdata, corner); + if (scaling < 0) + return scaling; + + apply_scaling = true; + } else if (corner->freq == fuse->max_freq) { + /* This is a fuse corner; don't scale anything */ + apply_scaling = false; + } + + if (apply_scaling) { + freq_diff_mhz = fuse->max_freq - corner->freq; + do_div(freq_diff_mhz, 1000000); /* now in MHz */ + + corner->quot_adjust = scaling * freq_diff_mhz; + do_div(corner->quot_adjust, 1000); + + corner->uV = cpr_interpolate(corner, + drv->vreg_step, fdata); + } + + /* Negative fuse quotients are nonsense. 
*/ + if (fuse->quot < corner->quot_adjust) + return -EINVAL; + + min_quotient = min(min_quotient, + (u32)(fuse->quot - corner->quot_adjust)); + corner->max_uV = fuse->max_uV; + corner->min_uV = fuse->min_uV; + corner->uV = clamp(corner->uV, corner->min_uV, corner->max_uV); + corner->last_uV = corner->uV; + + /* Reduce the ceiling voltage if needed */ + if (desc->reduce_to_corner_uV && corner->uV < corner->max_uV) + corner->max_uV = corner->uV; + else if (desc->reduce_to_fuse_uV && fuse->uV < corner->max_uV) + corner->max_uV = max(corner->min_uV, fuse->uV); + + corner->min_uV = max(corner->max_uV - fdata->range_uV, + corner->min_uV); + /* + * Adjust per-corner floor and ceiling voltages so that + * they do not overlap the memory Array Power Mux (APM) + * threshold voltage. + * The apm_hysteresis is checked because it is available + * only on CPR controllers that need to fullfill this + * requirement. + */ + if (desc->apm_hysteresis > 0 && + desc->apm_threshold > corner->min_uV && + desc->apm_threshold < corner->max_uV) { + if (corner->uV >= desc->apm_threshold) { + corner->min_uV = max(corner->min_uV, + desc->apm_threshold - + desc->apm_hysteresis); + if (corner->min_uV > corner->uV) + corner->uV = corner->min_uV; + } else { + corner->max_uV = desc->apm_threshold; + corner->max_uV -= drv->vreg_step; + } + } + + dev_dbg(drv->dev, "corner %d: [%d %d %d] scaling %d quot %d\n", i, + corner->min_uV, corner->uV, corner->max_uV, scaling, + fuse->quot - corner->quot_adjust); + } + + /* Additional setup for CPRh only */ + if (desc->cpr_type < CTRL_TYPE_CPRH) + return 0; + + /* If the OPPs can't be adjusted, programming the CPRh is useless */ + ret = cprh_corner_adjust_opps(thread); + if (ret) { + dev_err(drv->dev, "Cannot adjust CPU OPP voltages: %d\n", ret); + return ret; + } + + /* + * Unmask the ring oscillator(s) that we're going to use: it seems + * to be mandatory to do this *before* sending the rest of the + * CPRhardened specific configuration. + */ + dev_dbg(drv->dev, + "Unmasking ring oscillators with mask 0x%x\n", ring_osc_mask); + cpr_write(thread, CPR3_REG_RO_MASK(tdesc->hw_tid), ring_osc_mask); + + total_corners = thread->num_corners; + + /* If the APM extra corner exists, add it now. */ + if (desc->apm_threshold && desc->apm_hysteresis && + drv->extra_corners) { + thread->corners[total_corners].uV = desc->apm_threshold; + thread->corners[total_corners].min_uV = desc->apm_threshold; + thread->corners[total_corners].max_uV = desc->apm_threshold; + thread->corners[total_corners].is_open_loop = true; + + /* + * Also add and disable an opp with zero frequency: other + * drivers will need to know what is the APM threshold voltage. 
+ */ + ret = dev_pm_opp_add(thread->attached_cpu_dev, 0, + desc->apm_threshold); + if (ret) + return ret; + + ret = dev_pm_opp_disable(thread->attached_cpu_dev, 0); + if (ret) + return ret; + + dev_dbg(drv->dev, "corner %d (APM): [%d %d %d] Open-Loop\n", + total_corners, desc->apm_threshold, + desc->apm_threshold, desc->apm_threshold); + + total_corners += drv->extra_corners; + } + + for (i = 0; i < total_corners; i++) { + int volt_oloop_steps, volt_floor_steps, delta_quot_steps; + int ring_osc; + u32 val; + + fnum = cdata[i].fuse_corner; + fuse = &thread->fuse_corners[fnum]; + + val = thread->corners[i].uV - desc->cpr_base_voltage; + volt_oloop_steps = DIV_ROUND_UP(val, drv->vreg_step); + + val = thread->corners[i].min_uV - desc->cpr_base_voltage; + volt_floor_steps = DIV_ROUND_UP(val, drv->vreg_step); + + val = fuse->quot - thread->corners[i].quot_adjust; + val -= min_quotient; + delta_quot_steps = DIV_ROUND_UP(val, + CPRH_DELTA_QUOT_STEP_FACTOR); + + /* + * If we are accessing corners that are not used as + * an active DCVS set-point, then always select RO 0 + * and zero out the delta quotient + */ + if (i >= thread->num_corners) { + ring_osc = 0; + delta_quot_steps = 0; + } else { + ring_osc = fuse->ring_osc_idx; + } + + if (volt_oloop_steps > CPRH_CORNER_INIT_VOLTAGE_MAX_VALUE || + volt_floor_steps > CPRH_CORNER_FLOOR_VOLTAGE_MAX_VALUE || + delta_quot_steps > CPRH_CORNER_QUOT_DELTA_MAX_VALUE) { + dev_err(drv->dev, + "Invalid cfg: oloop=%d, floor=%d, delta=%d\n", + volt_oloop_steps, volt_floor_steps, + delta_quot_steps); + return -EINVAL; + } + /* Green light: Go, Go, Go! */ + val = CPRH_REG_CORNER(drv, tdesc->hw_tid, i); + + /* Set the minimal quotient for all corners */ + cpr_write(thread, CPR3_REG_TARGET_QUOT(tdesc->hw_tid, i), + min_quotient); + + /* Set number of open-loop steps */ + cpr_masked_write(thread, val, + CPRH_CORNER_INIT_VOLTAGE_MASK, + volt_oloop_steps << + CPRH_CORNER_INIT_VOLTAGE_SHIFT); + + /* Set number of floor voltage steps */ + cpr_masked_write(thread, val, + CPRH_CORNER_FLOOR_VOLTAGE_MASK, + volt_floor_steps << + CPRH_CORNER_FLOOR_VOLTAGE_SHIFT); + + /* Set number of target quotient delta steps */ + cpr_masked_write(thread, val, + CPRH_CORNER_QUOT_DELTA_MASK, + delta_quot_steps << + CPRH_CORNER_QUOT_DELTA_SHIFT); + + /* Select ring oscillator for this corner */ + cpr_masked_write(thread, val, + CPRH_CORNER_RO_SEL_MASK, + ring_osc << CPRH_CORNER_RO_SEL_SHIFT); + + dev_dbg(drv->dev, + "steps [%d]: open-loop %d, floor %d, delta_quot %d\n", + i, volt_oloop_steps, volt_floor_steps, + delta_quot_steps); + + /* Open loop only corner is for APM crossover */ + if (thread->corners[i].is_open_loop) { + dev_dbg(drv->dev, + "Disabling Closed-Loop on corner %d\n", i); + cpr_masked_write(thread, val, + CPRH_CORNER_CPR_CL_DISABLE, + CPRH_CORNER_CPR_CL_DISABLE); + } + } + /* + * Make sure that all the writes go through before enabling + * internal communication between the OSM and the CPRh + * controllers, otherwise there is a high risk of hardware + * lockups due to under-voltage for the selected CPU clock. + * + * Please note that the CPR-hardened gets set-up in Linux but + * then gets actually used in firmware (and only by the OSM); + * after handing it off we will have no more control on it, + * hence the only way to get things up properly for sure here + * is a write barrier. 
+ */ + wmb(); + + /* Enable the interface between OSM and CPRh */ + cpr_masked_write(thread, drv->reg_ctl, + CPRH_CTL_OSM_ENABLED, + CPRH_CTL_OSM_ENABLED); + + /* On success, free cdata manually */ + devm_kfree(drv->dev, cdata); + return 0; +} + +/** + * cpr3_init_parameters - Initialize CPR global parameters + * @drv: Main driver structure + * + * Initial "integrity" checks and setup for the thread-independent parameters. + * + * Returns: Zero for success, negative number on error + */ +static int cpr3_init_parameters(struct cpr_drv *drv) +{ + const struct cpr_desc *desc = drv->desc; + struct clk *clk; + + clk = devm_clk_get(drv->dev, "ref"); + if (IS_ERR(clk)) + return PTR_ERR(clk); + + drv->ref_clk_khz = clk_get_rate(clk); + do_div(drv->ref_clk_khz, 1000); + + /* On CPRh this clock is not always-on... */ + if (desc->cpr_type == CTRL_TYPE_CPRH) + clk_prepare_enable(clk); + else + devm_clk_put(drv->dev, clk); + + if (desc->timer_cons_up > CPR3_THRESH_CONS_UP_MASK || + desc->timer_cons_down > CPR3_THRESH_CONS_DOWN_MASK || + desc->up_threshold > CPR3_THRESH_UP_THRESH_MASK || + desc->down_threshold > CPR3_THRESH_DOWN_THRESH_MASK || + desc->idle_clocks > CPR3_CPR_CTL_IDLE_CLOCKS_MASK || + desc->count_mode > CPR3_CPR_CTL_COUNT_MODE_MASK || + desc->count_repeat > CPR3_CPR_CTL_COUNT_REPEAT_MASK || + desc->step_quot_init_min > CPR3_CPR_STEP_QUOT_MIN_MASK || + desc->step_quot_init_max > CPR3_CPR_STEP_QUOT_MAX_MASK) + return -EINVAL; + + /* + * Read the CPR version register only from CPR3 onwards: + * this is needed to get the additional register offsets. + * + * Note: When threaded, even if multi-controller, there + * is no chance to have different versions at the + * same time in the same domain, so it is safe to + * check this only on the first controller/thread. + */ + drv->cpr_hw_rev = cpr_read(&drv->threads[0], + CPR3_REG_CPR_VERSION); + dev_dbg(drv->dev, + "CPR hardware revision: 0x%x\n", drv->cpr_hw_rev); + + if (drv->cpr_hw_rev >= CPRH_CPR_VERSION_4P5) { + drv->reg_corner = 0x3500; + drv->reg_corner_tid = 0xa0; + drv->reg_ctl = 0x3a80; + drv->reg_status = 0x3a84; + } else { + drv->reg_corner = 0x3a00; + drv->reg_corner_tid = 0; + drv->reg_ctl = 0x3aa0; + drv->reg_status = 0x3aa4; + } + + dev_dbg(drv->dev, "up threshold = %u, down threshold = %u\n", + desc->up_threshold, desc->down_threshold); + + return 0; +} + +/** + * cpr3_find_initial_corner - Finds boot-up p-state and enables CPR + * @thread: Structure holding CPR thread-specific parameters + * + * Differently from CPRv1, from CPRv3 onwards when we successfully find + * the target boot-up performance state, we must refresh the HW + * immediately to guarantee system stability and to avoid overheating + * during the boot process, thing that would more likely happen without + * this driver doing its job. 
+ * + * Returns: Zero for success, negative number on error + */ +static int cpr3_find_initial_corner(struct cpr_thread *thread) +{ + struct cpr_drv *drv = thread->drv; + struct corner *corner; + int uV, idx; + + idx = cpr_find_initial_corner(drv->dev, thread->cpu_clk, + thread->corners, + thread->num_corners); + if (idx < 0) + return idx; + + cpr_ctl_disable(thread); + + corner = &thread->corners[idx]; + cpr_corner_restore(thread, corner); + + uV = regulator_get_voltage(drv->vreg); + uV = clamp(uV, corner->min_uV, corner->max_uV); + + corner->last_uV = uV; + if (!drv->last_uV) + drv->last_uV = uV; + + cpr_commit_state(thread); + thread->enabled = true; + cpr_switch(drv); + + return 0; +} + +static const int sdm630_gold_scaling_factor[] = { + 4040, 3230, 0, 2210, + 2560, 2450, 2230, 2220, + 2410, 2300, 2560, 2470, + 1600, 3120, 2620, 2280, +}; + +static const int sdm630_silver_scaling_factor[] = { + 3600, 3600, 3830, 2430, + 2520, 2700, 1790, 1760, + 1970, 1880, 2110, 2010, + 2510, 4900, 4370, 4780, +}; + +static const struct cpr_thread_desc sdm630_thread_gold = { + .controller_id = 0, + .hw_tid = 0, + .ro_scaling_factor = sdm630_gold_scaling_factor, + .sensor_range_start = 0, + .sensor_range_end = 6, + .init_voltage_step = 10000, + .init_voltage_width = 6, + .num_fuse_corners = 5, + .fuse_corner_data = (struct fuse_corner_data[]){ + /* fuse corner 0 */ + { + .ref_uV = 644000, + .max_uV = 724000, + .min_uV = 588000, + .range_uV = 40000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 15000, + .max_volt_scale = 10, + .max_quot_scale = 300, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + /* fuse corner 1 */ + { + .ref_uV = 788000, + .max_uV = 788000, + .min_uV = 652000, + .range_uV = 40000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 5000, + .max_volt_scale = 320, + .max_quot_scale = 275, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + /* fuse corner 2 */ + { + .ref_uV = 868000, + .max_uV = 868000, + .min_uV = 712000, + .range_uV = 40000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 5000, + .max_volt_scale = 350, + .max_quot_scale = 800, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + /* fuse corner 3 */ + { + .ref_uV = 988000, + .max_uV = 988000, + .min_uV = 784000, + .range_uV = 66000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 0, + .max_volt_scale = 868, + .max_quot_scale = 980, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + /* fuse corner 4 */ + { + .ref_uV = 1068000, + .max_uV = 1068000, + .min_uV = 844000, + .range_uV = 40000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 0, + .max_volt_scale = 868, + .max_quot_scale = 980, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + }, +}; + +static const struct cpr_thread_desc sdm630_thread_silver = { + .controller_id = 1, + .hw_tid = 0, + .ro_scaling_factor = sdm630_silver_scaling_factor, + .sensor_range_start = 0, + .sensor_range_end = 6, + .init_voltage_step = 10000, + .init_voltage_width = 6, + .num_fuse_corners = 3, + .fuse_corner_data = (struct fuse_corner_data[]){ + /* fuse corner 0 */ + { + .ref_uV = 644000, + .max_uV = 724000, + .min_uV = 588000, + .range_uV = 32000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 0, + .max_volt_scale = 
10, + .max_quot_scale = 360, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + /* fuse corner 1 */ + { + .ref_uV = 788000, + .max_uV = 788000, + .min_uV = 652000, + .range_uV = 40000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 0, + .max_volt_scale = 500, + .max_quot_scale = 550, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + /* fuse corner 2 */ + { + .ref_uV = 1068000, + .max_uV = 1068000, + .min_uV = 800000, + .range_uV = 40000, + .volt_cloop_adjust = -30000, + .volt_oloop_adjust = 0, + .max_volt_scale = 2370, + .max_quot_scale = 550, + .quot_offset = 0, + .quot_scale = 1, + .quot_adjust = 0, + .quot_offset_scale = 5, + .quot_offset_adjust = 0, + }, + }, +}; + +static const struct cpr_desc sdm630_cpr_desc = { + .cpr_type = CTRL_TYPE_CPRH, + .num_threads = 2, + .apm_threshold = 872000, + .apm_hysteresis = 20000, + .cpr_base_voltage = 400000, + .cpr_max_voltage = 1300000, + .step_quot_init_min = 12, + .step_quot_init_max = 14, + .timer_delay_us = 5000, + .timer_cons_up = 0, + .timer_cons_down = 2, + .up_threshold = 2, + .down_threshold = 2, + .idle_clocks = 15, + .count_mode = CPR3_CPR_CTL_COUNT_MODE_ALL_AT_ONCE_MIN, + .count_repeat = 14, + .gcnt_us = 1, + .vreg_step_fixed = 4000, + .vreg_step_up_limit = 1, + .vreg_step_down_limit = 1, + .vdd_settle_time_us = 34, + .corner_settle_time_us = 5, + .reduce_to_corner_uV = true, + .hw_closed_loop_en = true, + .threads = (const struct cpr_thread_desc *[]) { + &sdm630_thread_gold, + &sdm630_thread_silver, + }, +}; + +static const struct cpr_acc_desc sdm630_cpr_acc_desc = { + .cpr_desc = &sdm630_cpr_desc, +}; + +static unsigned int cpr_get_performance_state(struct generic_pm_domain *genpd, + struct dev_pm_opp *opp) +{ + return dev_pm_opp_get_level(opp); +} + +static int cpr_power_off(struct generic_pm_domain *domain) +{ + struct cpr_thread *thread = container_of(domain, struct cpr_thread, pd); + + return cpr_disable(thread); +} + +static int cpr_power_on(struct generic_pm_domain *domain) +{ + struct cpr_thread *thread = container_of(domain, struct cpr_thread, pd); + + return cpr_enable(thread); +} + +static void cpr_pd_detach_dev(struct generic_pm_domain *domain, + struct device *dev) +{ + struct cpr_thread *thread = container_of(domain, struct cpr_thread, pd); + struct cpr_drv *drv = thread->drv; + + mutex_lock(&drv->lock); + + dev_dbg(drv->dev, "detach callback for: %s\n", dev_name(dev)); + thread->attached_cpu_dev = NULL; + + mutex_unlock(&drv->lock); +} + +static int cpr_pd_attach_dev(struct generic_pm_domain *domain, + struct device *dev) +{ + struct cpr_thread *thread = container_of(domain, struct cpr_thread, pd); + struct cpr_drv *drv = thread->drv; + const struct acc_desc *acc_desc = drv->acc_desc; + int ret = 0; + + mutex_lock(&drv->lock); + + dev_dbg(drv->dev, "attach callback for: %s\n", dev_name(dev)); + + /* + * This driver only supports scaling voltage for a CPU cluster + * where all CPUs in the cluster share a single regulator. + * Therefore, save the struct device pointer only for the first + * CPU device that gets attached. There is no need to do any + * additional initialization when further CPUs get attached. + */ + if (thread->attached_cpu_dev) + goto unlock; + + /* + * cpr_scale_voltage() requires the direction (if we are changing + * to a higher or lower OPP). The first time + * cpr_set_performance_state() is called, there is no previous + * performance state defined. 
Therefore, we call + * cpr_find_initial_corner() that gets the CPU clock frequency + * set by the bootloader, so that we can determine the direction + * the first time cpr_set_performance_state() is called. + */ + thread->cpu_clk = devm_clk_get(dev, NULL); + if (drv->desc->cpr_type < CTRL_TYPE_CPRH && + IS_ERR(thread->cpu_clk)) { + ret = PTR_ERR(thread->cpu_clk); + if (ret != -EPROBE_DEFER) + dev_err(drv->dev, "could not get cpu clk: %d\n", ret); + goto unlock; + } + thread->attached_cpu_dev = dev; + + dev_dbg(drv->dev, "using cpu clk from: %s\n", + dev_name(thread->attached_cpu_dev)); + + /* + * Everything related to (virtual) corners has to be initialized + * here, when attaching to the power domain, since we need to know + * the maximum frequency for each fuse corner, and this is only + * available after the cpufreq driver has attached to us. + * The reason for this is that we need to know the highest + * frequency associated with each fuse corner. + */ + ret = dev_pm_opp_get_opp_count(&thread->pd.dev); + if (ret < 0) { + dev_err(drv->dev, "could not get OPP count\n"); + thread->attached_cpu_dev = NULL; + goto unlock; + } + thread->num_corners = ret; + + thread->corners = devm_kcalloc(drv->dev, + thread->num_corners + + drv->extra_corners, + sizeof(*thread->corners), + GFP_KERNEL); + if (!thread->corners) { + ret = -ENOMEM; + goto unlock; + } + + ret = cpr3_corner_init(thread); + if (ret) + goto unlock; + + if (drv->desc->cpr_type < CTRL_TYPE_CPRH) { + ret = cpr3_find_initial_corner(thread); + if (ret) + goto unlock; + + if (acc_desc->config) + regmap_multi_reg_write(drv->tcsr, acc_desc->config, + acc_desc->num_regs_per_fuse); + + /* Enable ACC if required */ + if (acc_desc->enable_mask) + regmap_update_bits(drv->tcsr, acc_desc->enable_reg, + acc_desc->enable_mask, + acc_desc->enable_mask); + } + dev_info(drv->dev, "thread %d initialized with %u OPPs\n", + thread->id, thread->num_corners); + +unlock: + mutex_unlock(&drv->lock); + + return ret; +} + +static int cpr3_debug_info_show(struct seq_file *s, void *unused) +{ + u32 ro_sel, ctl, irq_status, reg, quot; + struct cpr_thread *thread = s->private; + struct corner *corner = thread->corners; + struct fuse_corner *fuse = thread->fuse_corners; + unsigned int i; + + const struct { + const char *name; + uint32_t mask; + uint8_t shift; + } result0_fields[] = { + { "busy", 1, 0 }, + { "step_dn", 1, 1 }, + { "step_up", 1, 2 }, + { "error_steps", CPR3_RESULT0_ERROR_STEPS_MASK, + CPR3_RESULT0_ERROR_STEPS_SHIFT }, + { "error", CPR3_RESULT0_ERROR_MASK, CPR3_RESULT0_ERROR_SHIFT }, + { "negative", 1, 20 }, + }, result1_fields[] = { + { "quot_min", CPR3_RESULT1_QUOT_MIN_MASK, + CPR3_RESULT1_QUOT_MIN_SHIFT }, + { "quot_max", CPR3_RESULT1_QUOT_MAX_MASK, + CPR3_RESULT1_QUOT_MAX_SHIFT }, + { "ro_min", CPR3_RESULT1_RO_MIN_MASK, + CPR3_RESULT1_RO_MIN_SHIFT }, + { "ro_max", CPR3_RESULT1_RO_MAX_MASK, + CPR3_RESULT1_RO_MAX_SHIFT }, + }, result2_fields[] = { + { "qout_step_min", CPR3_RESULT2_STEP_QUOT_MIN_MASK, + CPR3_RESULT2_STEP_QUOT_MIN_SHIFT }, + { "qout_step_max", CPR3_RESULT2_STEP_QUOT_MAX_MASK, + CPR3_RESULT2_STEP_QUOT_MAX_SHIFT }, + { "sensor_min", CPR3_RESULT2_SENSOR_MIN_MASK, + CPR3_RESULT2_SENSOR_MIN_SHIFT }, + { "sensor_max", CPR3_RESULT2_SENSOR_MAX_MASK, + CPR3_RESULT2_SENSOR_MAX_SHIFT }, + }; + + if (thread->drv->desc->cpr_type < CTRL_TYPE_CPRH) + seq_printf(s, "current_volt = %d uV\n", thread->drv->last_uV); + + irq_status = cpr_read(thread, CPR3_REG_IRQ_STATUS); + seq_printf(s, "irq_status = %#02X\n", irq_status); + + ctl = cpr_read(thread, 
CPR3_REG_CPR_CTL); + seq_printf(s, "cpr_ctl = %#02X\n", ctl); + + seq_printf(s, "thread %d - hw tid: %d - enabled: %d:\n", + thread->id, thread->desc->hw_tid, thread->enabled); + seq_printf(s, "%d corners, derived from %d fuse corners\n", + thread->num_corners, thread->desc->num_fuse_corners); + + for (i = 0; i < thread->num_corners; i++, corner++) + seq_printf(s, "corner %d - uV=[%d %d %d] quot=%d freq=%lu\n", + i, corner->min_uV, corner->uV, corner->max_uV, + corner->quot_adjust, corner->freq); + + for (i = 0; i < thread->desc->num_fuse_corners; i++, fuse++) + seq_printf(s, "fuse %d - uV=[%d %d %d] quot=%d freq=%lu\n", + i, fuse->min_uV, fuse->uV, fuse->max_uV, + fuse->quot, corner->freq); + + seq_printf(s, "requested voltage: %d uV\n", + thread->corner->last_uV); + + ro_sel = corner->fuse_corner->ring_osc_idx; + quot = cpr_read(thread, CPR3_REG_TARGET_QUOT(i, ro_sel)); + seq_printf(s, "quot_target (%u) = %#02X\n", ro_sel, quot); + + reg = cpr_read(thread, CPR3_REG_RESULT0(i)); + seq_printf(s, "cpr_result_0 = %#02X\n [", reg); + for (i = 0; i < ARRAY_SIZE(result0_fields); i++) + seq_printf(s, "%s%s = %u", + i ? ", " : "", + result0_fields[i].name, + (reg >> result0_fields[i].shift) & + result0_fields[i].mask); + seq_puts(s, "]\n"); + reg = cpr_read(thread, CPR3_REG_RESULT1(i)); + seq_printf(s, "cpr_result_1 = %#02X\n [", reg); + for (i = 0; i < ARRAY_SIZE(result1_fields); i++) + seq_printf(s, "%s%s = %u", + i ? ", " : "", + result1_fields[i].name, + (reg >> result1_fields[i].shift) & + result1_fields[i].mask); + seq_puts(s, "]\n"); + reg = cpr_read(thread, CPR3_REG_RESULT2(i)); + seq_printf(s, "cpr_result_2 = %#02X\n [", reg); + for (i = 0; i < ARRAY_SIZE(result2_fields); i++) + seq_printf(s, "%s%s = %u", + i ? ", " : "", + result2_fields[i].name, + (reg >> result2_fields[i].shift) & + result2_fields[i].mask); + seq_puts(s, "]\n"); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(cpr3_debug_info); + +static void cpr3_debugfs_init(struct cpr_drv *drv) +{ + int i; + + drv->debugfs = debugfs_create_dir("qcom_cpr3", NULL); + + for (i = 0; i < drv->num_threads; i++) { + char buf[50]; + + snprintf(buf, sizeof(buf), "thread%d", i); + + debugfs_create_file(buf, 0444, drv->debugfs, &drv->threads[i], + &cpr3_debug_info_fops); + } +} + +/** + * cpr_thread_init - Initialize CPR thread related parameters + * @drv: Main driver structure + * @tid: Thread ID + * + * Returns: Zero for success, negative number on error + */ +static int cpr_thread_init(struct cpr_drv *drv, int tid) +{ + const struct cpr_desc *desc = drv->desc; + const struct cpr_thread_desc *tdesc = desc->threads[tid]; + struct cpr_thread *thread = &drv->threads[tid]; + int ret; + + thread->id = tid; + thread->drv = drv; + thread->desc = tdesc; + thread->fuse_corners = devm_kcalloc(drv->dev, + tdesc->num_fuse_corners + + drv->extra_corners, + sizeof(*thread->fuse_corners), + GFP_KERNEL); + if (!thread->fuse_corners) + return -ENOMEM; + + thread->cpr_fuses = cpr_get_fuses(drv->dev, tid, + tdesc->num_fuse_corners); + if (IS_ERR(thread->cpr_fuses)) + return PTR_ERR(thread->cpr_fuses); + + ret = cpr_populate_ring_osc_idx(thread->drv->dev, thread->fuse_corners, + thread->cpr_fuses, + tdesc->num_fuse_corners); + if (ret) + return ret; + + ret = cpr_fuse_corner_init(thread); + if (ret) + return ret; + + thread->pd.name = devm_kasprintf(drv->dev, GFP_KERNEL, + "%s_thread%d", + drv->dev->of_node->full_name, + thread->id); + if (!thread->pd.name) + return -EINVAL; + + thread->pd.power_off = cpr_power_off; + thread->pd.power_on = cpr_power_on; + 
thread->pd.set_performance_state = cpr_set_performance_state; + thread->pd.opp_to_performance_state = cpr_get_performance_state; + thread->pd.attach_dev = cpr_pd_attach_dev; + thread->pd.detach_dev = cpr_pd_detach_dev; + + /* Anything later than CPR1 must be always-on for now */ + thread->pd.flags = GENPD_FLAG_ALWAYS_ON; + + drv->cell_data.domains[tid] = &thread->pd; + + ret = pm_genpd_init(&thread->pd, NULL, false); + if (ret) + return ret; + + /* On CPRhardened, the interrupts are managed in firmware */ + if (desc->cpr_type < CTRL_TYPE_CPRH) { + INIT_WORK(&thread->restart_work, cpr_restart_worker); + + ret = devm_request_threaded_irq(drv->dev, drv->irq, + NULL, cpr_irq_handler, + IRQF_ONESHOT | + IRQF_TRIGGER_RISING, + "cpr", drv); + if (ret) + return ret; + } + + return 0; +} + +/** + * cpr3_resources_init - Initialize resources used by this driver + * @pdev: Platform device + * @drv: Main driver structure + * + * Returns: Zero for success, negative number on error + */ +static int cpr3_resources_init(struct platform_device *pdev, + struct cpr_drv *drv) +{ + const struct cpr_desc *desc = drv->desc; + struct cpr_thread *threads = drv->threads; + unsigned int i; + u8 cid_mask = 0; + + /* + * Here, we are accounting for the following usecases: + * - One controller + * - One or multiple threads on the same iospace + * + * - Multiple controllers + * - Each controller has its own iospace and each + * may have one or multiple threads in their + * parent controller's iospace + * + * Then, to avoid complicating the code for no reason, + * this also needs a mandatory order in the list of + * threads which implies that all of them from the same + * controllers are specified sequentially. As an example: + * + * C0-T0, C0-T1...C0-Tn, C1-T0, C1-T1...C1-Tn + */ + for (i = 0; i < desc->num_threads; i++) { + u8 cid = desc->threads[i]->controller_id; + + if (cid_mask & BIT(cid)) { + if (desc->threads[i - 1]->controller_id != cid) { + dev_err(drv->dev, + "Bad threads order. 
Please fix!\n"); + return -EINVAL; + } + threads[i].base = threads[i - 1].base; + continue; + } + + dev_dbg(drv->dev, "Mapping resource for CID %d\n", cid); + threads[i].base = devm_platform_ioremap_resource(pdev, cid); + if (IS_ERR(threads[i].base)) + return PTR_ERR(threads[i].base); + cid_mask |= BIT(cid); + } + + return 0; +} + +static int cpr_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct cpr_drv *drv; + const struct cpr_desc *desc; + const struct cpr_acc_desc *data; + struct device_node *np; + unsigned int i; + int ret; + + data = of_device_get_match_data(dev); + if (!data || !data->cpr_desc) + return -EINVAL; + + desc = data->cpr_desc; + + /* CPRh disallows MEM-ACC access from the HLOS */ + if (!data->acc_desc && desc->cpr_type < CTRL_TYPE_CPRH) + return -EINVAL; + + drv = devm_kzalloc(dev, sizeof(*drv), GFP_KERNEL); + if (!drv) + return -ENOMEM; + + drv->dev = dev; + drv->desc = desc; + drv->num_threads = desc->num_threads; + drv->threads = devm_kcalloc(dev, drv->num_threads, + sizeof(*drv->threads), GFP_KERNEL); + if (!drv->threads) + return -ENOMEM; + + drv->cell_data.num_domains = desc->num_threads; + drv->cell_data.domains = devm_kcalloc(drv->dev, + drv->cell_data.num_domains, + sizeof(*drv->cell_data.domains), + GFP_KERNEL); + if (!drv->cell_data.domains) + return -ENOMEM; + + if (data->acc_desc) + drv->acc_desc = data->acc_desc; + + mutex_init(&drv->lock); + + if (desc->cpr_type < CTRL_TYPE_CPRH) { + np = of_parse_phandle(dev->of_node, "acc-syscon", 0); + if (!np) + return -ENODEV; + + drv->tcsr = syscon_node_to_regmap(np); + of_node_put(np); + if (IS_ERR(drv->tcsr)) + return PTR_ERR(drv->tcsr); + } + + ret = cpr3_resources_init(pdev, drv); + if (ret) + return ret; + + drv->irq = platform_get_irq_optional(pdev, 0); + if ((desc->cpr_type != CTRL_TYPE_CPRH) && (drv->irq < 0)) + return -EINVAL; + + /* On CPR-Hardened (CPRh), vreg access is not allowed */ + drv->vreg = devm_regulator_get_optional(dev, "vdd"); + if (desc->cpr_type != CTRL_TYPE_CPRH && IS_ERR(drv->vreg)) + return PTR_ERR(drv->vreg); + + /* + * On at least CPR-Hardened, vreg is inaccessible and there is no + * way to read the linear step from that regulator, hence it is + * hardcoded in the driver. + * When vreg_step is not declared in the cpr data (or is zero), + * access to the vreg regulator is mandatory, as the step will be + * retrieved through the regulator API. + */ + if (desc->vreg_step_fixed) + drv->vreg_step = desc->vreg_step_fixed; + else + drv->vreg_step = regulator_get_linear_step(drv->vreg); + + if (!drv->vreg_step) + return -EINVAL; + + /* + * Initialize the fuse corners, since they simply depend + * on data in efuses. + * Everything related to (virtual) corners has to be + * initialized after attaching to the power domain, + * since it depends on the CPU's OPP table. 
+ */ + ret = cpr_read_efuse(dev, "cpr_fuse_revision", &drv->fusing_rev); + if (ret) + return ret; + + ret = cpr_read_efuse(dev, "cpr_speed_bin", &drv->speed_bin); + if (ret) + return ret; + + /* + * If the APM threshold and hysteresis params are specified, + * reserve a corner for APM crossover, used by OSM and CPRh + * HW to set the supply voltage during the APM switch routine + */ + if (desc->cpr_type == CTRL_TYPE_CPRH && + desc->apm_threshold && desc->apm_hysteresis) + drv->extra_corners++; + + /* Initialize all threads */ + for (i = 0; i < desc->num_threads; i++) { + ret = cpr_thread_init(drv, i); + if (ret) + return ret; + } + + /* Initialize global parameters */ + ret = cpr3_init_parameters(drv); + if (ret) + return ret; + + /* Write initial configuration on all threads */ + for (i = 0; i < desc->num_threads; i++) { + ret = cpr_configure(&drv->threads[i]); + if (ret) + return ret; + } + + ret = of_genpd_add_provider_onecell(dev->of_node, &drv->cell_data); + if (ret) + return ret; + + platform_set_drvdata(pdev, drv); + cpr3_debugfs_init(drv); + + return 0; +} + +static int cpr_remove(struct platform_device *pdev) +{ + struct cpr_drv *drv = platform_get_drvdata(pdev); + int i; + + of_genpd_del_provider(pdev->dev.of_node); + + for (i = 0; i < drv->num_threads; i++) { + cpr_ctl_disable(&drv->threads[i]); + cpr_irq_set(&drv->threads[i], 0); + pm_genpd_remove(&drv->threads[i].pd); + } + + debugfs_remove_recursive(drv->debugfs); + + return 0; +} + +static const struct of_device_id cpr3_match_table[] = { + { .compatible = "qcom,sdm630-cprh", .data = &sdm630_cpr_acc_desc }, + { } +}; +MODULE_DEVICE_TABLE(of, cpr3_match_table); + +static struct platform_driver cpr3_driver = { + .probe = cpr_probe, + .remove = cpr_remove, + .driver = { + .name = "qcom-cpr3", + .of_match_table = cpr3_match_table, + }, +}; +module_platform_driver(cpr3_driver) + +MODULE_DESCRIPTION("Core Power Reduction (CPR) v3/v4 driver"); +MODULE_LICENSE("GPL v2"); From patchwork Thu Nov 26 18:45:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: AngeloGioacchino Del Regno X-Patchwork-Id: 333105 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C14ADC8302A for ; Thu, 26 Nov 2020 18:55:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 82BB821D7F for ; Thu, 26 Nov 2020 18:55:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2405147AbgKZSzk (ORCPT ); Thu, 26 Nov 2020 13:55:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50762 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2405019AbgKZSy5 (ORCPT ); Thu, 26 Nov 2020 13:54:57 -0500 X-Greylist: delayed 499 seconds by postgrey-1.37 at lindbergh.monkeyblade.net; Thu, 26 Nov 2020 10:54:56 PST Received: from relay08.th.seeweb.it (relay08.th.seeweb.it [IPv6:2001:4b7a:2000:18::169]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA34CC061A47; Thu, 26 Nov 2020 10:54:56 -0800 (PST) Received: from 
IcarusMOD.eternityproject.eu (unknown [2.237.20.237]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by m-r2.th.seeweb.it (Postfix) with ESMTPSA id 53D6340635; Thu, 26 Nov 2020 19:46:37 +0100 (CET) From: AngeloGioacchino Del Regno To: linux-arm-msm@vger.kernel.org Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-pm@vger.kernel.org, ulf.hansson@linaro.org, jorge.ramirez-ortiz@linaro.org, broonie@kernel.org, lgirdwood@gmail.com, daniel.lezcano@linaro.org, nks@flawful.org, bjorn.andersson@linaro.org, agross@kernel.org, robh+dt@kernel.org, viresh.kumar@linaro.org, rjw@rjwysocki.net, konrad.dybcio@somainline.org, martin.botka@somainline.org, marijn.suijten@somainline.org, phone-devel@vger.kernel.org, AngeloGioacchino Del Regno Subject: [PATCH 11/13] dt-bindings: cpufreq: Convert qcom-cpufreq-hw to YAML binding Date: Thu, 26 Nov 2020 19:45:57 +0100 Message-Id: <20201126184559.3052375-12-angelogioacchino.delregno@somainline.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> References: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Convert the qcom-cpufreq-hw documentation to YAML binding as qcom,cpufreq-hw.yaml. Signed-off-by: AngeloGioacchino Del Regno --- .../bindings/cpufreq/cpufreq-qcom-hw.txt | 173 +--------------- .../bindings/cpufreq/qcom,cpufreq-hw.yaml | 196 ++++++++++++++++++ 2 files changed, 197 insertions(+), 172 deletions(-) create mode 100644 Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt index 9299028ee712..bd4e81f6f835 100644 --- a/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt +++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt @@ -1,172 +1 @@ -Qualcomm Technologies, Inc. CPUFREQ Bindings - -CPUFREQ HW is a hardware engine used by some Qualcomm Technologies, Inc. (QTI) -SoCs to manage frequency in hardware. It is capable of controlling frequency -for multiple clusters. - -Properties: -- compatible - Usage: required - Value type: - Definition: must be "qcom,cpufreq-hw" or "qcom,cpufreq-epss". - -- clocks - Usage: required - Value type: From common clock binding. - Definition: clock handle for XO clock and GPLL0 clock. - -- clock-names - Usage: required - Value type: From common clock binding. - Definition: must be "xo", "alternate". - -- reg - Usage: required - Value type: - Definition: Addresses and sizes for the memory of the HW bases in - each frequency domain. -- reg-names - Usage: Optional - Value type: - Definition: Frequency domain name i.e. - "freq-domain0", "freq-domain1". - -- #freq-domain-cells: - Usage: required. - Definition: Number of cells in a freqency domain specifier. - -* Property qcom,freq-domain -Devices supporting freq-domain must set their "qcom,freq-domain" property with -phandle to a cpufreq_hw followed by the Domain ID(0/1) in the CPU DT node. - - -Example: - -Example 1: Dual-cluster, Quad-core per cluster. CPUs within a cluster switch -DCVS state together. 
- -/ { - cpus { - #address-cells = <2>; - #size-cells = <0>; - - CPU0: cpu@0 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x0>; - enable-method = "psci"; - next-level-cache = <&L2_0>; - qcom,freq-domain = <&cpufreq_hw 0>; - L2_0: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - L3_0: l3-cache { - compatible = "cache"; - }; - }; - }; - - CPU1: cpu@100 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x100>; - enable-method = "psci"; - next-level-cache = <&L2_100>; - qcom,freq-domain = <&cpufreq_hw 0>; - L2_100: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - }; - }; - - CPU2: cpu@200 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x200>; - enable-method = "psci"; - next-level-cache = <&L2_200>; - qcom,freq-domain = <&cpufreq_hw 0>; - L2_200: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - }; - }; - - CPU3: cpu@300 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x300>; - enable-method = "psci"; - next-level-cache = <&L2_300>; - qcom,freq-domain = <&cpufreq_hw 0>; - L2_300: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - }; - }; - - CPU4: cpu@400 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x400>; - enable-method = "psci"; - next-level-cache = <&L2_400>; - qcom,freq-domain = <&cpufreq_hw 1>; - L2_400: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - }; - }; - - CPU5: cpu@500 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x500>; - enable-method = "psci"; - next-level-cache = <&L2_500>; - qcom,freq-domain = <&cpufreq_hw 1>; - L2_500: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - }; - }; - - CPU6: cpu@600 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x600>; - enable-method = "psci"; - next-level-cache = <&L2_600>; - qcom,freq-domain = <&cpufreq_hw 1>; - L2_600: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - }; - }; - - CPU7: cpu@700 { - device_type = "cpu"; - compatible = "qcom,kryo385"; - reg = <0x0 0x700>; - enable-method = "psci"; - next-level-cache = <&L2_700>; - qcom,freq-domain = <&cpufreq_hw 1>; - L2_700: l2-cache { - compatible = "cache"; - next-level-cache = <&L3_0>; - }; - }; - }; - - soc { - cpufreq_hw: cpufreq@17d43000 { - compatible = "qcom,cpufreq-hw"; - reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>; - reg-names = "freq-domain0", "freq-domain1"; - - clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>; - clock-names = "xo", "alternate"; - - #freq-domain-cells = <1>; - }; -} +This file has been moved to qcom,cpufreq-hw.yaml diff --git a/Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml b/Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml new file mode 100644 index 000000000000..94a56317b14b --- /dev/null +++ b/Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml @@ -0,0 +1,196 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: "http://devicetree.org/schemas/cpufreq/qcom,cpufreq-hw.yaml#" +$schema: "http://devicetree.org/meta-schemas/core.yaml#" + +title: Qualcomm Technologies, Inc. CPUFREQ HW Bindings + +description: | + CPUFREQ HW is a hardware engine used by some Qualcomm Technologies, Inc. (QTI) + SoCs to manage frequency in hardware. It is capable of controlling frequency + for multiple clusters. 
+ +maintainers: + - TBD + +properties: + compatible: + items: + - enum: + - qcom,cpufreq-hw + - qcom,cpufreq-hw-8998 + - qcom,cpufreq-epss + + reg: + minItems: 2 + maxItems: 2 + + reg-names: + description: + Frequency domain register region for each domain. + items: + - const: "freq-domain0" + - const: "freq-domain1" + + clock-names: + - const: xo + - const: ref + + clocks: + maxItems: 2 + + '#freq-domain-cells': + description: Number of cells in a freqency domain specifier. + const: 1 + + operating-points-v2: true + + qcom,freq-domain: + $ref: '/schemas/types.yaml#/definitions/phandle' + description: | + Devices supporting freq-domain must set their "qcom,freq-domain" + property with phandle to a cpufreq_hw followed by the Domain ID(0/1) + in the CPU DT node. + +required: + - compatible + - reg + - clock-names + - clocks + - "#freq-domain-cells" + +additionalProperties: false + +examples: + - | + // Example 1: Dual-cluster, Quad-core per cluster. CPUs within a + // cluster switch DCVS state together. + + #include + #include + + cpus { + #address-cells = <2>; + #size-cells = <0>; + + CPU0: cpu@0 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x0>; + enable-method = "psci"; + next-level-cache = <&L2_0>; + qcom,freq-domain = <&cpufreq_hw 0>; + L2_0: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + L3_0: l3-cache { + compatible = "cache"; + }; + }; + }; + + CPU1: cpu@100 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x100>; + enable-method = "psci"; + next-level-cache = <&L2_100>; + qcom,freq-domain = <&cpufreq_hw 0>; + L2_100: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + }; + }; + + CPU2: cpu@200 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x200>; + enable-method = "psci"; + next-level-cache = <&L2_200>; + qcom,freq-domain = <&cpufreq_hw 0>; + L2_200: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + }; + }; + + CPU3: cpu@300 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x300>; + enable-method = "psci"; + next-level-cache = <&L2_300>; + qcom,freq-domain = <&cpufreq_hw 0>; + L2_300: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + }; + }; + + CPU4: cpu@400 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x400>; + enable-method = "psci"; + next-level-cache = <&L2_400>; + qcom,freq-domain = <&cpufreq_hw 1>; + L2_400: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + }; + }; + + CPU5: cpu@500 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x500>; + enable-method = "psci"; + next-level-cache = <&L2_500>; + qcom,freq-domain = <&cpufreq_hw 1>; + L2_500: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + }; + }; + + CPU6: cpu@600 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x600>; + enable-method = "psci"; + next-level-cache = <&L2_600>; + qcom,freq-domain = <&cpufreq_hw 1>; + L2_600: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + }; + }; + + CPU7: cpu@700 { + device_type = "cpu"; + compatible = "qcom,kryo385"; + reg = <0x0 0x700>; + enable-method = "psci"; + next-level-cache = <&L2_700>; + qcom,freq-domain = <&cpufreq_hw 1>; + L2_700: l2-cache { + compatible = "cache"; + next-level-cache = <&L3_0>; + }; + }; + }; + + soc { + cpufreq_hw: cpufreq@17d43000 { + compatible = "qcom,cpufreq-hw"; + reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>; + reg-names = "freq-domain0", "freq-domain1"; + + clocks = <&rpmhcc 
RPMH_CXO_CLK>, <&gcc GPLL0>; + clock-names = "xo", "alternate"; + + #freq-domain-cells = <1>; + }; +... From patchwork Thu Nov 26 18:45:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: AngeloGioacchino Del Regno X-Patchwork-Id: 333108 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5111C64E7B for ; Thu, 26 Nov 2020 18:55:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5B9B221D1A for ; Thu, 26 Nov 2020 18:55:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2404883AbgKZSzS (ORCPT ); Thu, 26 Nov 2020 13:55:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2405053AbgKZSy6 (ORCPT ); Thu, 26 Nov 2020 13:54:58 -0500 Received: from relay08.th.seeweb.it (relay08.th.seeweb.it [IPv6:2001:4b7a:2000:18::169]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 67E1AC061A47 for ; Thu, 26 Nov 2020 10:54:58 -0800 (PST) Received: from IcarusMOD.eternityproject.eu (unknown [2.237.20.237]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by m-r2.th.seeweb.it (Postfix) with ESMTPSA id 2B91D4063F; Thu, 26 Nov 2020 19:46:38 +0100 (CET) From: AngeloGioacchino Del Regno To: linux-arm-msm@vger.kernel.org Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-pm@vger.kernel.org, ulf.hansson@linaro.org, jorge.ramirez-ortiz@linaro.org, broonie@kernel.org, lgirdwood@gmail.com, daniel.lezcano@linaro.org, nks@flawful.org, bjorn.andersson@linaro.org, agross@kernel.org, robh+dt@kernel.org, viresh.kumar@linaro.org, rjw@rjwysocki.net, konrad.dybcio@somainline.org, martin.botka@somainline.org, marijn.suijten@somainline.org, phone-devel@vger.kernel.org, AngeloGioacchino Del Regno Subject: [PATCH 13/13] dt-bindings: cpufreq: qcom-hw: Add bindings for 8998 Date: Thu, 26 Nov 2020 19:45:59 +0100 Message-Id: <20201126184559.3052375-14-angelogioacchino.delregno@somainline.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> References: <20201126184559.3052375-1-angelogioacchino.delregno@somainline.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org The OSM programming addition has been done under the qcom,cpufreq-hw-8998 compatible name: specify the requirement of two additional register spaces for this functionality. 
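For reference, a minimal node using the new compatible could look like the sketch below; the register addresses and sizes are placeholders (not values from a real MSM8998 board) and the clock phandles are simply reused from the existing binding example:

cpufreq_hw: cpufreq@179c0000 {
	compatible = "qcom,cpufreq-hw-8998";
	/* Two frequency domain regions plus the two OSM programming regions */
	reg = <0x179c0000 0x1400>, <0x179c1400 0x1400>,
	      <0x179c8000 0x1000>, <0x179c9000 0x1000>;
	reg-names = "freq-domain0", "freq-domain1",
		    "osm-domain0", "osm-domain1";
	clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
	clock-names = "xo", "alternate";
	#freq-domain-cells = <1>;
};
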
Signed-off-by: AngeloGioacchino Del Regno --- .../bindings/cpufreq/qcom,cpufreq-hw.yaml | 31 ++++++++++++++++--- 1 file changed, 27 insertions(+), 4 deletions(-) diff --git a/Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml b/Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml index 94a56317b14b..f64cea73037e 100644 --- a/Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml +++ b/Documentation/devicetree/bindings/cpufreq/qcom,cpufreq-hw.yaml @@ -23,17 +23,21 @@ properties: - qcom,cpufreq-epss reg: + description: Base addresses and sizes of the frequency domain register regions minItems: 2 maxItems: 2 reg-names: description: - Frequency domain register region for each domain. - items: - - const: "freq-domain0" - - const: "freq-domain1" + Frequency domain register region for each domain. If OSM programming + is not done by the bootloader and has to be done by this driver, + the OSM domain regions osm-domain[0-1] must also be provided. + minItems: 2 + maxItems: 2 clock-names: + minItems: 2 + maxItems: 2 - const: xo - const: ref @@ -53,9 +57,28 @@ properties: property with phandle to a cpufreq_hw followed by the Domain ID(0/1) in the CPU DT node. +allOf: + - if: + properties: + compatible: + contains: + const: qcom,cpufreq-hw-8998 + then: + properties: + reg: + minItems: 4 + maxItems: 4 + reg-names: + items: + - const: "freq-domain0" + - const: "freq-domain1" + - const: "osm-domain0" + - const: "osm-domain1" + required: - compatible - reg + - reg-names - clock-names - clocks - "#freq-domain-cells"