From patchwork Mon Jul 11 11:52:40 2022
X-Patchwork-Submitter: Leo Yan
X-Patchwork-Id: 589987
From: Leo Yan <leo.yan@linaro.org>
To: Georgi Djakov, Andy Gross, Bjorn Andersson, Krzysztof Kozlowski,
    Konrad Dybcio, Rob Herring, Bryan O'Donoghue,
    linux-arm-msm@vger.kernel.org, linux-pm@vger.kernel.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Leo Yan <leo.yan@linaro.org>
Subject: [PATCH v5 5/5] interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values
Date: Mon, 11 Jul 2022 19:52:40 +0800
Message-Id: <20220711115240.806236-6-leo.yan@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220711115240.806236-1-leo.yan@linaro.org>
References: <20220711115240.806236-1-leo.yan@linaro.org>

Use buckets to support bandwidth and clock rates.

Introduce a new function qcom_icc_bus_aggregate() which aggregates the
average and peak bandwidth for every bucket, and also computes the
maximum aggregated average bandwidth across all buckets; this maximum
is then used to calculate the final bandwidth request.

The clock rate can now be set per bucket: if a platform doesn't enable
interconnect path tags in its DT binding, the SLEEP bucket is used as
the default for all clocks; otherwise, the WAKE bucket drives the
active clock and the SLEEP bucket the remaining clocks.  The AMC bucket
is not used so far.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 drivers/interconnect/qcom/icc-rpm.c | 75 +++++++++++++++++++++++------
 1 file changed, 61 insertions(+), 14 deletions(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index d27b1582521f..f15f5deee6ef 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -302,18 +302,57 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
 	return 0;
 }
 
+/**
+ * qcom_icc_bus_aggregate - aggregate bandwidth by traversing all nodes
+ * @provider: generic interconnect provider
+ * @agg_avg: an array for aggregated average bandwidth of buckets
+ * @agg_peak: an array for aggregated peak bandwidth of buckets
+ * @max_agg_avg: pointer to max value of aggregated average bandwidth
+ */
+static void qcom_icc_bus_aggregate(struct icc_provider *provider,
+				   u64 *agg_avg, u64 *agg_peak,
+				   u64 *max_agg_avg)
+{
+	struct icc_node *node;
+	struct qcom_icc_node *qn;
+	int i;
+
+	/* Initialise aggregate values */
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+		agg_avg[i] = 0;
+		agg_peak[i] = 0;
+	}
+
+	*max_agg_avg = 0;
+
+	/*
+	 * Iterate nodes on the interconnect and aggregate bandwidth
+	 * requests for every bucket.
+	 */
+	list_for_each_entry(node, &provider->nodes, node_list) {
+		qn = node->data;
+		for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+			agg_avg[i] += qn->sum_avg[i];
+			agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
+		}
+	}
+
+	/* Find maximum values across all buckets */
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++)
+		*max_agg_avg = max_t(u64, *max_agg_avg, agg_avg[i]);
+}
+
 static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 {
 	struct qcom_icc_provider *qp;
 	struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
 	struct icc_provider *provider;
-	struct icc_node *n;
 	u64 sum_bw;
-	u64 max_peak_bw;
 	u64 rate;
-	u32 agg_avg = 0;
-	u32 agg_peak = 0;
+	u64 agg_avg[QCOM_ICC_NUM_BUCKETS], agg_peak[QCOM_ICC_NUM_BUCKETS];
+	u64 max_agg_avg, max_agg_peak;
 	int ret, i;
+	int bucket;
 
 	src_qn = src->data;
 	if (dst)
@@ -321,12 +360,9 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 	provider = src->provider;
 	qp = to_qcom_provider(provider);
 
-	list_for_each_entry(n, &provider->nodes, node_list)
-		provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
-				    &agg_avg, &agg_peak);
+	qcom_icc_bus_aggregate(provider, agg_avg, agg_peak, &max_agg_avg);
 
-	sum_bw = icc_units_to_bps(agg_avg);
-	max_peak_bw = icc_units_to_bps(agg_peak);
+	sum_bw = icc_units_to_bps(max_agg_avg);
 
 	ret = __qcom_icc_set(src, src_qn, sum_bw);
 	if (ret)
@@ -337,12 +373,23 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 		return ret;
 	}
 
-	rate = max(sum_bw, max_peak_bw);
-
-	do_div(rate, src_qn->buswidth);
-	rate = min_t(u64, rate, LONG_MAX);
-
 	for (i = 0; i < qp->num_clks; i++) {
+		/*
+		 * Use WAKE bucket for active clock, otherwise, use SLEEP bucket
+		 * for other clocks. If a platform doesn't set interconnect
+		 * path tags, by default use sleep bucket for all clocks.
+		 *
+		 * Note, AMC bucket is not supported yet.
+		 */
+		if (!strcmp(qp->bus_clks[i].id, "bus_a"))
+			bucket = QCOM_ICC_BUCKET_WAKE;
+		else
+			bucket = QCOM_ICC_BUCKET_SLEEP;
+
+		rate = icc_units_to_bps(max(agg_avg[bucket], agg_peak[bucket]));
+		do_div(rate, src_qn->buswidth);
+		rate = min_t(u64, rate, LONG_MAX);
+
 		if (qp->bus_clk_rate[i] == rate)
 			continue;
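
As a side note for readers unfamiliar with the bucket scheme: the change
above boils down to "sum the averages and take the maximum of the peaks
per bucket, then drive the active clock from the WAKE bucket and the
other clocks from the SLEEP bucket". The standalone sketch below is not
part of the patch; struct node, bus_aggregate(), the "bus"/"bus_a" clock
names and the buswidth value are simplified stand-ins for the driver's
qcom_icc_node, qcom_icc_bus_aggregate() and qp->bus_clks. It only
illustrates the arithmetic in plain C:

/* Standalone illustration of the per-bucket aggregation (not kernel code). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { BUCKET_AMC, BUCKET_WAKE, BUCKET_SLEEP, NUM_BUCKETS };

struct node {
	uint64_t sum_avg[NUM_BUCKETS];	/* average bandwidth per bucket (kB/s) */
	uint64_t max_peak[NUM_BUCKETS];	/* peak bandwidth per bucket (kB/s) */
};

static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }

/* Sum the averages and take the max of the peaks for every bucket. */
static void bus_aggregate(const struct node *nodes, int num_nodes,
			  uint64_t *agg_avg, uint64_t *agg_peak,
			  uint64_t *max_agg_avg)
{
	*max_agg_avg = 0;

	for (int i = 0; i < NUM_BUCKETS; i++) {
		agg_avg[i] = 0;
		agg_peak[i] = 0;

		for (int n = 0; n < num_nodes; n++) {
			agg_avg[i] += nodes[n].sum_avg[i];
			agg_peak[i] = max_u64(agg_peak[i], nodes[n].max_peak[i]);
		}

		/* Track the largest aggregated average across buckets */
		*max_agg_avg = max_u64(*max_agg_avg, agg_avg[i]);
	}
}

int main(void)
{
	/* Two nodes with different WAKE/SLEEP requests (values in kB/s). */
	struct node nodes[2] = {
		{ .sum_avg = { 0, 1000, 200 }, .max_peak = { 0, 2000, 400 } },
		{ .sum_avg = { 0,  500, 300 }, .max_peak = { 0, 1500, 600 } },
	};
	uint64_t agg_avg[NUM_BUCKETS], agg_peak[NUM_BUCKETS], max_agg_avg;
	const char *clks[] = { "bus", "bus_a" };	/* "bus_a" is the active clock */
	const uint64_t buswidth = 8;			/* bytes per cycle */

	bus_aggregate(nodes, 2, agg_avg, agg_peak, &max_agg_avg);

	for (int i = 0; i < 2; i++) {
		/* Active clock follows the WAKE bucket, the rest follow SLEEP. */
		int bucket = !strcmp(clks[i], "bus_a") ? BUCKET_WAKE
						       : BUCKET_SLEEP;
		/* kB/s -> bytes/s, then divide by bus width to get a clock rate */
		uint64_t rate = max_u64(agg_avg[bucket], agg_peak[bucket])
				* 1000 / buswidth;

		printf("%s: bucket %d, rate %llu Hz\n", clks[i], bucket,
		       (unsigned long long)rate);
	}

	return 0;
}

With the example numbers, the active clock "bus_a" ends up at 250000 Hz
(WAKE peak of 2000 kB/s over an 8-byte bus) and "bus" at 75000 Hz (SLEEP
peak of 600 kB/s), mirroring how the patch derives a rate per clock from
its bucket.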