From patchwork Sat Apr 16 15:40:12 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Leo Yan <leo.yan@linaro.org>
X-Patchwork-Id: 562671
From: Leo Yan <leo.yan@linaro.org>
To: Andy Gross, Bjorn Andersson, Georgi Djakov, Rob Herring,
 Krzysztof Kozlowski, Bryan O'Donoghue, Dmitry Baryshkov,
 linux-arm-msm@vger.kernel.org, linux-pm@vger.kernel.org,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Leo Yan <leo.yan@linaro.org>
Subject: [PATCH v1 4/5] interconnect: qcom: icc-rpm: Support multiple buckets
Date: Sat, 16 Apr 2022 23:40:12 +0800
Message-Id: <20220416154013.1357444-5-leo.yan@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220416154013.1357444-1-leo.yan@linaro.org>
References: <20220416154013.1357444-1-leo.yan@linaro.org>
The current interconnect rpm driver uses a single aggregated bandwidth to
calculate the clock rates for both the active and sleep clocks, so it has
no way to separate the bandwidth requests for these two kinds of clocks.

This patch follows the implementation of the interconnect rpmh driver to
support multiple buckets.  The rpmh driver provides three buckets for AMC,
WAKE and SLEEP; this driver only needs the WAKE and SLEEP buckets, but
keeping the same scheme as the rpmh driver lets us reuse the DT binding
and avoids defining duplicate data structures.

This patch introduces two callbacks: qcom_icc_pre_bw_aggregate() cleans up
the bucket values before aggregating bandwidth requests, and
qcom_icc_bw_aggregate() aggregates bandwidth into the buckets selected by
the path tag.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 drivers/interconnect/qcom/icc-rpm.c | 51 ++++++++++++++++++++++++++++-
 drivers/interconnect/qcom/icc-rpm.h |  6 ++++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index 2ffaf9ba08f9..41c108a96ea7 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -234,6 +234,54 @@ static int qcom_icc_rpm_set(int mas_rpm_id, int slv_rpm_id, u64 sum_bw)
 	return ret;
 }
 
+/**
+ * qcom_icc_pre_bw_aggregate - cleans up values before re-aggregating requests
+ * @node: icc node to operate on
+ */
+static void qcom_icc_pre_bw_aggregate(struct icc_node *node)
+{
+	struct qcom_icc_node *qn;
+	size_t i;
+
+	qn = node->data;
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+		qn->sum_avg[i] = 0;
+		qn->max_peak[i] = 0;
+	}
+}
+
+/**
+ * qcom_icc_bw_aggregate - aggregate bandwidth for buckets indicated by tag
+ * @node: node to aggregate
+ * @tag: tag to indicate which buckets to aggregate
+ * @avg_bw: new bandwidth to sum aggregate
+ * @peak_bw: new bandwidth to max aggregate
+ * @agg_avg: existing aggregate average bandwidth value
+ * @agg_peak: existing aggregate peak bandwidth value
+ */
+static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+				 u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	size_t i;
+	struct qcom_icc_node *qn;
+
+	qn = node->data;
+
+	if (!tag)
+		tag = QCOM_ICC_TAG_ALWAYS;
+
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+		if (tag & BIT(i)) {
+			qn->sum_avg[i] += avg_bw;
+			qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
+		}
+	}
+
+	*agg_avg += avg_bw;
+	*agg_peak = max_t(u32, *agg_peak, peak_bw);
+	return 0;
+}
+
 static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 {
 	struct qcom_icc_provider *qp;
@@ -395,7 +443,8 @@ int qnoc_probe(struct platform_device *pdev)
 	INIT_LIST_HEAD(&provider->nodes);
 	provider->dev = dev;
 	provider->set = qcom_icc_set;
-	provider->aggregate = icc_std_aggregate;
+	provider->pre_aggregate = qcom_icc_pre_bw_aggregate;
+	provider->aggregate = qcom_icc_bw_aggregate;
 	provider->xlate_extended = qcom_icc_xlate_extended;
 	provider->data = data;
 
diff --git a/drivers/interconnect/qcom/icc-rpm.h b/drivers/interconnect/qcom/icc-rpm.h
index f6c4ac960102..e8ee29ea132f 100644
--- a/drivers/interconnect/qcom/icc-rpm.h
+++ b/drivers/interconnect/qcom/icc-rpm.h
@@ -6,6 +6,8 @@
 #ifndef __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H
 #define __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H
 
+#include <dt-bindings/interconnect/qcom,icc.h>
+
 #define RPM_BUS_MASTER_REQ	0x73616d62
 #define RPM_BUS_SLAVE_REQ	0x766c7362
 
@@ -65,6 +67,8 @@ struct qcom_icc_qos {
  * @links: an array of nodes where we can go next while traversing
  * @num_links: the total number of @links
  * @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @sum_avg: current sum aggregate value of all avg bw requests
+ * @max_peak: current max aggregate value of all peak bw requests
  * @mas_rpm_id: RPM id for devices that are bus masters
  * @slv_rpm_id: RPM id for devices that are bus slaves
  * @qos: NoC QoS setting parameters
@@ -75,6 +79,8 @@ struct qcom_icc_node {
 	const u16 *links;
 	u16 num_links;
 	u16 buswidth;
+	u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
+	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
 	int mas_rpm_id;
 	int slv_rpm_id;
 	struct qcom_icc_qos qos;
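
For reference, below is a consumer-side sketch (illustrative only, not part
of this patch) of how the new buckets are exercised through the existing
interconnect framework API.  The device handle, the "cpu-mem" path name and
the bandwidth numbers are hypothetical; icc_set_tag(), icc_set_bw() and the
QCOM_ICC_TAG_* macros come from <linux/interconnect.h> and
<dt-bindings/interconnect/qcom,icc.h>:

#include <linux/interconnect.h>
#include <dt-bindings/interconnect/qcom,icc.h>

static int example_vote_wake_only(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* "cpu-mem" is a hypothetical interconnect path name. */
	path = of_icc_get(dev, "cpu-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Route the following votes into the WAKE bucket only. */
	icc_set_tag(path, QCOM_ICC_TAG_WAKE);

	/* avg/peak bandwidth in kBps; the values are arbitrary. */
	ret = icc_set_bw(path, 800000, 1600000);

	/*
	 * An untagged path (tag == 0) falls back to QCOM_ICC_TAG_ALWAYS
	 * in qcom_icc_bw_aggregate(), i.e. the vote lands in every bucket.
	 */
	icc_put(path);
	return ret;
}

Tagging with QCOM_ICC_TAG_SLEEP instead would steer the vote into the
sleep bucket only, which is what allows the driver to derive separate
rates for the active and sleep clocks.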