From patchwork Mon Nov 5 06:36:45 2018
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 150141
From: Viresh Kumar
To: ulf.hansson@linaro.org, "Rafael J. Wysocki", Kevin Hilman, Pavel Machek,
    Len Brown
Cc: Viresh Kumar, linux-pm@vger.kernel.org, Vincent Guittot, Stephen Boyd,
    Nishanth Menon, niklas.cassel@linaro.org, rnayak@codeaurora.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] PM / Domains: Factorize dev_pm_genpd_set_performance_state()
Date: Mon, 5 Nov 2018 12:06:45 +0530
X-Mailer: git-send-email 2.19.1.568.g152ad8e3369a
X-Mailing-List: linux-pm@vger.kernel.org

Separate out _genpd_set_performance_state() and
_genpd_reeval_performance_state() from dev_pm_genpd_set_performance_state()
to handle the performance state update logic. This will be used by a later
commit.
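[Editor's note: for context, a minimal consumer-side sketch of the API being
refactored here. The driver function, device pointer and pstate value are
illustrative assumptions; only dev_pm_genpd_set_performance_state() itself
comes from the genpd code touched by this series.]

#include <linux/device.h>
#include <linux/pm_domain.h>

/*
 * Hypothetical consumer: ask the power domain that "dev" is attached to
 * for a given performance state. The state value would normally be
 * derived from an OPP table entry; 2 is only a placeholder.
 */
static int foo_boost_performance(struct device *dev)
{
	unsigned int pstate = 2;	/* assumed, platform specific */
	int ret;

	ret = dev_pm_genpd_set_performance_state(dev, pstate);
	if (ret)
		dev_err(dev, "failed to set performance state: %d\n", ret);

	return ret;
}

With this patch applied, such a call lands in _genpd_reeval_performance_state(),
which aggregates the requests of all devices in the domain, and then in
_genpd_set_performance_state(), which programs the hardware if the domain is
powered on.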
Signed-off-by: Viresh Kumar
---
 drivers/base/power/domain.c | 104 +++++++++++++++++++++---------------
 1 file changed, 61 insertions(+), 43 deletions(-)

-- 
2.19.1.568.g152ad8e3369a

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 0d928359b880..6d2e9b3406f1 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -239,6 +239,62 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
 static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
 #endif
 
+static int _genpd_set_performance_state(struct generic_pm_domain *genpd,
+					unsigned int state)
+{
+	int ret;
+
+	if (!genpd_status_on(genpd))
+		goto out;
+
+	ret = genpd->set_performance_state(genpd, state);
+	if (ret)
+		return ret;
+
+out:
+	genpd->performance_state = state;
+	return 0;
+}
+
+static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
+					   unsigned int state)
+{
+	struct generic_pm_domain_data *pd_data;
+	struct pm_domain_data *pdd;
+
+	/* New requested state is same as Max requested state */
+	if (state == genpd->performance_state)
+		return 0;
+
+	/* New requested state is higher than Max requested state */
+	if (state > genpd->performance_state)
+		goto update_state;
+
+	/* Traverse all devices within the domain */
+	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
+		pd_data = to_gpd_data(pdd);
+
+		if (pd_data->performance_state > state)
+			state = pd_data->performance_state;
+	}
+
+	if (state == genpd->performance_state)
+		return 0;
+
+	/*
+	 * We aren't propagating performance state changes of a subdomain to its
+	 * masters as we don't have hardware that needs it. Over that, the
+	 * performance states of subdomain and its masters may not have
+	 * one-to-one mapping and would require additional information. We can
+	 * get back to this once we have hardware that needs it. For that
+	 * reason, we don't have to consider performance state of the subdomains
+	 * of genpd here.
+	 */
+
+update_state:
+	return _genpd_set_performance_state(genpd, state);
+}
+
 /**
  * dev_pm_genpd_set_performance_state- Set performance state of device's power
  *                                     domain.
@@ -257,10 +313,9 @@ static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
 int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
 {
 	struct generic_pm_domain *genpd;
-	struct generic_pm_domain_data *gpd_data, *pd_data;
-	struct pm_domain_data *pdd;
+	struct generic_pm_domain_data *gpd_data;
 	unsigned int prev;
-	int ret = 0;
+	int ret;
 
 	genpd = dev_to_genpd(dev);
 	if (IS_ERR(genpd))
@@ -281,47 +336,10 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
 	prev = gpd_data->performance_state;
 	gpd_data->performance_state = state;
 
-	/* New requested state is same as Max requested state */
-	if (state == genpd->performance_state)
-		goto unlock;
-
-	/* New requested state is higher than Max requested state */
-	if (state > genpd->performance_state)
-		goto update_state;
-
-	/* Traverse all devices within the domain */
-	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
-		pd_data = to_gpd_data(pdd);
-
-		if (pd_data->performance_state > state)
-			state = pd_data->performance_state;
-	}
-
-	if (state == genpd->performance_state)
-		goto unlock;
-
-	/*
-	 * We aren't propagating performance state changes of a subdomain to its
-	 * masters as we don't have hardware that needs it. Over that, the
-	 * performance states of subdomain and its masters may not have
-	 * one-to-one mapping and would require additional information. We can
-	 * get back to this once we have hardware that needs it. For that
-	 * reason, we don't have to consider performance state of the subdomains
-	 * of genpd here.
-	 */
-
-update_state:
-	if (genpd_status_on(genpd)) {
-		ret = genpd->set_performance_state(genpd, state);
-		if (ret) {
-			gpd_data->performance_state = prev;
-			goto unlock;
-		}
-	}
-
-	genpd->performance_state = state;
+	ret = _genpd_reeval_performance_state(genpd, state);
+	if (ret)
+		gpd_data->performance_state = prev;
 
-unlock:
 	genpd_unlock(genpd);
 
 	return ret;
From patchwork Mon Nov 5 06:36:46 2018
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 150142

From: Viresh Kumar
To: ulf.hansson@linaro.org, "Rafael J. Wysocki", Kevin Hilman, Len Brown,
    Pavel Machek
Cc: Viresh Kumar, linux-pm@vger.kernel.org, Vincent Guittot, Stephen Boyd,
    Nishanth Menon, niklas.cassel@linaro.org, rnayak@codeaurora.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] PM / Domains: Propagate performance state updates
Date: Mon, 5 Nov 2018 12:06:46 +0530
Message-Id: <7e1ea283f9eebce081af80ddb8d3ca5c9c76cd3b.1541399301.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.19.1.568.g152ad8e3369a
X-Mailing-List: linux-pm@vger.kernel.org

This commit updates the genpd core to start propagating performance state
updates to master domains that have their set_performance_state() callback
set.

A genpd now handles two types of performance states.
The first one is the performance state requirement put on the genpd by the
devices and sub-domains under it, which is already represented by
genpd->performance_state. The second type, introduced in this commit, is the
performance state requirement(s) put by the genpd on its master domain(s). A
separate value is required for each master of the genpd, so a new field is
added to struct gpd_link (link->performance_state), the structure that
represents the link between a genpd and its master. struct gpd_link also gets
another field, prev_performance_state, which the genpd core uses as a
temporary variable during transitions.

We need to propagate the performance state while powering on a genpd as well,
since performance state requirements from powered-off sub-domains are ignored.
For this reason _genpd_power_on() also receives an additional parameter,
depth, which is used for hierarchical locking within genpd.

Signed-off-by: Viresh Kumar
---
 drivers/base/power/domain.c | 107 +++++++++++++++++++++++++++++-------
 include/linux/pm_domain.h   |   4 ++
 2 files changed, 92 insertions(+), 19 deletions(-)

-- 
2.19.1.568.g152ad8e3369a
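[Editor's note: to make the propagation concrete, a hedged provider-side
sketch follows: two hypothetical domains, a master and a sub-domain, both
implementing .set_performance_state. All foo_* names and the callback body are
illustrative assumptions; only pm_genpd_init(), pm_genpd_add_subdomain() and
the .set_performance_state hook are existing genpd interfaces. In practice
both domains would also need OPP tables so that
dev_pm_opp_xlate_performance_state(), used in the diff below, can translate
states between them.]

#include <linux/pm_domain.h>

/* Hypothetical callback: program the hardware for the requested state. */
static int foo_pd_set_performance_state(struct generic_pm_domain *pd,
					unsigned int state)
{
	/* e.g. write a platform-specific corner/level register here */
	return 0;
}

static struct generic_pm_domain foo_master_pd = {
	.name = "foo_master",
	.set_performance_state = foo_pd_set_performance_state,
};

static struct generic_pm_domain foo_sub_pd = {
	.name = "foo_sub",
	.set_performance_state = foo_pd_set_performance_state,
};

static int foo_pd_register_domains(void)
{
	int ret;

	ret = pm_genpd_init(&foo_master_pd, NULL, false);
	if (ret)
		return ret;

	ret = pm_genpd_init(&foo_sub_pd, NULL, false);
	if (ret)
		return ret;

	/*
	 * With this patch, a performance state request on foo_sub is
	 * re-evaluated and also forwarded to foo_master through the new
	 * link->performance_state bookkeeping.
	 */
	return pm_genpd_add_subdomain(&foo_master_pd, &foo_sub_pd);
}

Consumers keep calling dev_pm_genpd_set_performance_state() exactly as before;
the aggregation and translation across the hierarchy happens inside the genpd
core shown in the diff below.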
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 6d2e9b3406f1..81e02c5f753f 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -239,28 +239,86 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
 static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
 #endif
 
+static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
+					   unsigned int state, int depth);
+
 static int _genpd_set_performance_state(struct generic_pm_domain *genpd,
-					unsigned int state)
+					unsigned int state, int depth)
 {
+	struct generic_pm_domain *master;
+	struct gpd_link *link;
+	unsigned int mstate;
 	int ret;
 
 	if (!genpd_status_on(genpd))
 		goto out;
 
+	/* Propagate to masters of genpd */
+	list_for_each_entry(link, &genpd->slave_links, slave_node) {
+		master = link->master;
+
+		if (!master->set_performance_state)
+			continue;
+
+		/* Find master's performance state */
+		mstate = dev_pm_opp_xlate_performance_state(genpd->opp_table,
+							    master->opp_table, state);
+		if (unlikely(!mstate))
+			goto err;
+
+		genpd_lock_nested(master, depth + 1);
+
+		link->prev_performance_state = link->performance_state;
+		link->performance_state = mstate;
+		ret = _genpd_reeval_performance_state(master, mstate, depth + 1);
+		if (ret)
+			link->performance_state = link->prev_performance_state;
+
+		genpd_unlock(master);
+
+		if (ret)
+			goto err;
+	}
+
 	ret = genpd->set_performance_state(genpd, state);
 	if (ret)
-		return ret;
+		goto err;
 
 out:
 	genpd->performance_state = state;
 	return 0;
+
+err:
+	/* Encountered an error, lets rollback */
+	list_for_each_entry_continue_reverse(link, &genpd->slave_links,
+					     slave_node) {
+		master = link->master;
+
+		if (!master->set_performance_state)
+			continue;
+
+		genpd_lock_nested(master, depth + 1);
+
+		mstate = link->prev_performance_state;
+		link->performance_state = mstate;
+
+		if (_genpd_reeval_performance_state(master, mstate, depth + 1)) {
+			pr_err("%s: Failed to roll back to %d performance state\n",
+			       master->name, mstate);
+		}
+
+		genpd_unlock(master);
+	}
+
+	return ret;
 }
 
 static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
-					   unsigned int state)
+					   unsigned int state, int depth)
 {
 	struct generic_pm_domain_data *pd_data;
 	struct pm_domain_data *pdd;
+	struct gpd_link *link;
 
 	/* New requested state is same as Max requested state */
 	if (state == genpd->performance_state)
@@ -278,21 +336,30 @@ static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
 			state = pd_data->performance_state;
 	}
 
-	if (state == genpd->performance_state)
-		return 0;
-
 	/*
-	 * We aren't propagating performance state changes of a subdomain to its
-	 * masters as we don't have hardware that needs it. Over that, the
-	 * performance states of subdomain and its masters may not have
-	 * one-to-one mapping and would require additional information. We can
-	 * get back to this once we have hardware that needs it. For that
-	 * reason, we don't have to consider performance state of the subdomains
-	 * of genpd here.
+	 * Traverse all powered-on subdomains within the domain. This can be
+	 * done without any additional locking as the link->performance_state
+	 * field is protected by the master genpd->lock, which is already taken.
+	 *
+	 * Also note that link->performance_state (subdomain's performance state
+	 * requirement to master domain) is different from
+	 * link->slave->performance_state (current performance state requirement
+	 * of the devices/sub-domains of the subdomain) and so can have a
+	 * different value.
 	 */
+	list_for_each_entry(link, &genpd->master_links, master_node) {
+		if (!genpd_status_on(link->slave))
+			continue;
+
+		if (link->performance_state > state)
+			state = link->performance_state;
+	}
+
+	if (state == genpd->performance_state)
+		return 0;
 
 update_state:
-	return _genpd_set_performance_state(genpd, state);
+	return _genpd_set_performance_state(genpd, state, depth);
 }
 
 /**
@@ -336,7 +403,7 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
 	prev = gpd_data->performance_state;
 	gpd_data->performance_state = state;
 
-	ret = _genpd_reeval_performance_state(genpd, state);
+	ret = _genpd_reeval_performance_state(genpd, state, 0);
 	if (ret)
 		gpd_data->performance_state = prev;
 
@@ -346,7 +413,8 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
 }
 EXPORT_SYMBOL_GPL(dev_pm_genpd_set_performance_state);
 
-static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
+static int
+_genpd_power_on(struct generic_pm_domain *genpd, bool timed, int depth)
 {
 	unsigned int state_idx = genpd->state_idx;
 	ktime_t time_start;
@@ -367,7 +435,8 @@ static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 	elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
 
 	if (unlikely(genpd->set_performance_state)) {
-		ret = genpd->set_performance_state(genpd, genpd->performance_state);
+		ret = _genpd_set_performance_state(genpd,
+						   genpd->performance_state, depth);
 		if (ret) {
 			pr_warn("%s: Failed to set performance state %d (%d)\n",
 				genpd->name, genpd->performance_state, ret);
@@ -557,7 +626,7 @@ static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
 		}
 	}
 
-	ret = _genpd_power_on(genpd, true);
+	ret = _genpd_power_on(genpd, true, depth);
 	if (ret)
 		goto err;
 
@@ -962,7 +1031,7 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
 		genpd_unlock(link->master);
 	}
 
-	_genpd_power_on(genpd, false);
+	_genpd_power_on(genpd, false, depth);
 
 	genpd->status = GPD_STATE_ACTIVE;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 9ad101362aef..dd364abb649a 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -136,6 +136,10 @@ struct gpd_link {
 	struct list_head master_node;
 	struct generic_pm_domain *slave;
 	struct list_head slave_node;
+
+	/* Sub-domain's per-master domain performance state */
+	unsigned int performance_state;
+	unsigned int prev_performance_state;
 };
 
 struct gpd_timing_data {