From patchwork Wed Jul 13 06:52:56 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 590220
From: Viresh Kumar
To: Bjorn Andersson, Manivannan Sadhasivam, Andy Gross, "Rafael J. Wysocki", Viresh Kumar
Cc: Vincent Guittot, Johan Hovold, Rob Herring, Krzysztof Kozlowski, linux-pm@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 1/4] dt-bindings: cpufreq-qcom-hw: Move clocks to CPU nodes
Date: Wed, 13 Jul 2022 12:22:56 +0530
Message-Id: <035fe13689dad6d3867a1d33f7d5e91d4637d14a.1657695140.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.31.1.272.g89b43f80a514
X-Mailing-List: linux-pm@vger.kernel.org

cpufreq-hw is a hardware engine, which takes care of frequency
management for CPUs. The engine manages the clocks for the CPU devices,
but it isn't the end consumer of those clocks; the CPUs are. For this
reason, it is incorrect to keep the clock related properties in the
cpufreq-hw node. They should really be present at the end user, i.e.
the CPU nodes.
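For illustration, the consumer-side layout this moves towards looks
roughly like the following. This is a minimal sketch, not part of the
patch; the `rpmhcc`/`gcc` phandles and clock specifiers follow the
SDM845 example in the binding:

```dts
/* Sketch: the CPU node, as the end consumer, now carries the clock
 * properties itself (labels follow the binding's SDM845 example). */
CPU0: cpu@0 {
        device_type = "cpu";
        compatible = "qcom,kryo385";
        reg = <0x0 0x0>;
        enable-method = "psci";
        /* The XO reference clock and the GPLL0 alternate, previously
         * listed in the cpufreq-hw node. */
        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
        clock-names = "xo", "alternate";
        qcom,freq-domain = <&cpufreq_hw 0>;
};
```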
The case is currently simple, as all the devices the engine manages,
i.e. the CPUs, share the same clock names. But what if the clock names
were different for different CPUs or clusters? How would keeping the
clock properties in the cpufreq-hw node work in that case?

This design also creates problems for frameworks like OPP, which expect
all such details (clocks) to be present in the end device node itself,
instead of in another related node.

Move the clocks properties to the nodes that use them instead.

Signed-off-by: Viresh Kumar
---
 .../bindings/cpufreq/cpufreq-qcom-hw.yaml     | 31 ++++++++++---------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
index 2f1b8b6852a0..2ef4eeeca9b9 100644
--- a/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
+++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
@@ -42,24 +42,12 @@ description: |
       - const: freq-domain1
       - const: freq-domain2
 
-  clocks:
-    items:
-      - description: XO Clock
-      - description: GPLL0 Clock
-
-  clock-names:
-    items:
-      - const: xo
-      - const: alternate
-
   '#freq-domain-cells':
     const: 1
 
 required:
   - compatible
   - reg
-  - clocks
-  - clock-names
   - '#freq-domain-cells'
 
 additionalProperties: false
@@ -81,6 +69,8 @@ additionalProperties: false
         reg = <0x0 0x0>;
         enable-method = "psci";
         next-level-cache = <&L2_0>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 0>;
         L2_0: l2-cache {
           compatible = "cache";
@@ -97,6 +87,8 @@ additionalProperties: false
         reg = <0x0 0x100>;
         enable-method = "psci";
         next-level-cache = <&L2_100>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 0>;
         L2_100: l2-cache {
           compatible = "cache";
@@ -110,6 +102,8 @@ additionalProperties: false
         reg = <0x0 0x200>;
         enable-method = "psci";
         next-level-cache = <&L2_200>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 0>;
         L2_200: l2-cache {
           compatible = "cache";
@@ -123,6 +117,8 @@ additionalProperties: false
         reg = <0x0 0x300>;
         enable-method = "psci";
         next-level-cache = <&L2_300>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 0>;
         L2_300: l2-cache {
           compatible = "cache";
@@ -136,6 +132,8 @@ additionalProperties: false
         reg = <0x0 0x400>;
         enable-method = "psci";
         next-level-cache = <&L2_400>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 1>;
         L2_400: l2-cache {
           compatible = "cache";
@@ -149,6 +147,8 @@ additionalProperties: false
         reg = <0x0 0x500>;
         enable-method = "psci";
         next-level-cache = <&L2_500>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 1>;
         L2_500: l2-cache {
           compatible = "cache";
@@ -162,6 +162,8 @@ additionalProperties: false
         reg = <0x0 0x600>;
         enable-method = "psci";
         next-level-cache = <&L2_600>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 1>;
         L2_600: l2-cache {
           compatible = "cache";
@@ -175,6 +177,8 @@ additionalProperties: false
         reg = <0x0 0x700>;
         enable-method = "psci";
         next-level-cache = <&L2_700>;
+        clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+        clock-names = "xo", "alternate";
         qcom,freq-domain = <&cpufreq_hw 1>;
         L2_700: l2-cache {
           compatible = "cache";
@@ -192,9 +196,6 @@ additionalProperties: false
       reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>;
       reg-names = "freq-domain0", "freq-domain1";
 
-      clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
-      clock-names = "xo", "alternate";
-
       #freq-domain-cells = <1>;
     };
   };