From patchwork Tue Sep 19 22:32:17 2017
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 113071
Delivered-To: patch@linaro.org
From: Viresh Kumar
To: Rafael Wysocki, ulf.hansson@linaro.org, Kevin Hilman
Cc: Viresh Kumar, linux-pm@vger.kernel.org, Vincent Guittot, Stephen Boyd,
    Nishanth Menon, robh+dt@kernel.org, lina.iyer@linaro.org,
    rnayak@codeaurora.org, sudeep.holla@arm.com, linux-kernel@vger.kernel.org,
    Len Brown, Pavel Machek, Andy Gross, David Brown
Subject: [PATCH V10 1/7] PM / Domains: Add support to select performance-state of domains
Date: Tue, 19 Sep 2017 15:32:17 -0700

Some platforms have the capability to configure the performance state of
their PM domains, and this patch updates the genpd core to support them.
Within the genpd core, performance levels are identified by positive integer
values; a lower value represents a lower performance state.

This patch adds two new genpd APIs:

- bool dev_pm_genpd_has_performance_state(struct device *dev)

  This can be called (only once) by device drivers to make sure that all
  dependencies are met and that the PM domain of the device supports
  performance states.
- int dev_pm_genpd_update_performance_state(struct device *dev,
					    unsigned long rate)

  This can be called (any number of times) by device drivers after they have
  called dev_pm_genpd_has_performance_state() once and it returned "true".
  The call updates the performance state of the PM domain of the device (for
  which this routine is called) and propagates it to the masters of the PM
  domain.

  This requires certain callbacks to be available in the genpd of the device;
  they are invoked internally by this routine, in the order in which they are
  described below.

  - dev_get_performance_state()

    This shall return the performance state (an integer value) corresponding
    to a target frequency for the device. The genpd core uses this state as
    the device's requested performance state while aggregating the requested
    states of all the devices and subdomains of a PM domain. Note that the
    same state value will be used by the device's PM domain and its masters
    hierarchy; master-specific states may be implemented later on, once more
    complex cases are available.

    Providing this callback is mandatory for any genpd which needs to manage
    performance states and is registered as the master of one or more
    devices. Domains which only have subdomains, and no devices, should not
    implement this callback.

  - genpd_set_performance_state()

    The aggregate of the performance states of the devices and subdomains of
    a genpd is then passed to this callback, which must change the
    performance state of the genpd. The same callback of each master of the
    genpd is also called, to propagate the change. Power domains which don't
    support setting performance states can omit these callbacks.
Tested-by: Rajendra Nayak
Signed-off-by: Viresh Kumar
---
 drivers/base/power/domain.c | 277 +++++++++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   |  23 ++++
 2 files changed, 298 insertions(+), 2 deletions(-)

-- 
2.7.4

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index e8ca5e2cf1e5..6d05c91cf44f 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -237,6 +237,267 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
 static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
 #endif
 
+/**
+ * dev_pm_genpd_has_performance_state - Checks if power domain does performance
+ * state management.
+ *
+ * @dev: Device whose power domain is getting inquired.
+ *
+ * This can be called by the user drivers, before they start calling
+ * dev_pm_genpd_update_performance_state(), to guarantee that all dependencies
+ * are met and the device's genpd supports performance states.
+ *
+ * It is assumed that the user driver guarantees that the genpd wouldn't be
+ * detached while this routine is getting called.
+ *
+ * Returns "true" if device's genpd supports performance states, "false"
+ * otherwise.
+ */
+bool dev_pm_genpd_has_performance_state(struct device *dev)
+{
+	struct generic_pm_domain *genpd = genpd_lookup_dev(dev);
+
+	return !IS_ERR_OR_NULL(genpd) && genpd->dev_get_performance_state;
+}
+EXPORT_SYMBOL_GPL(dev_pm_genpd_has_performance_state);
+
+static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
+					   int state, int depth);
+
+/* Returns -ve errors or 0 on success */
+static int _genpd_set_performance_state(struct generic_pm_domain *genpd,
+					int state, int depth)
+{
+	struct generic_pm_domain *master;
+	struct gpd_link *link;
+	int prev = genpd->performance_state, ret;
+
+	/* Propagate to masters of genpd */
+	list_for_each_entry(link, &genpd->slave_links, slave_node) {
+		master = link->master;
+
+		genpd_lock_nested(master, depth + 1);
+
+		link->performance_state = state;
+		ret = _genpd_reeval_performance_state(master, state, depth + 1);
+		if (ret)
+			link->performance_state = prev;
+
+		genpd_unlock(master);
+
+		if (ret)
+			goto err;
+	}
+
+	if (genpd->genpd_set_performance_state) {
+		ret = genpd->genpd_set_performance_state(genpd, state);
+		if (ret)
+			goto err;
+	}
+
+	/*
+	 * The masters are updated now, lets get genpd performance_state in
+	 * sync with those.
+	 */
+	genpd->performance_state = state;
+	return 0;
+
+err:
+	/* Encountered an error, lets rollback */
+	list_for_each_entry_continue_reverse(link, &genpd->slave_links,
+					     slave_node) {
+		master = link->master;
+
+		genpd_lock_nested(master, depth + 1);
+		link->performance_state = prev;
+		if (_genpd_reeval_performance_state(master, prev, depth + 1)) {
+			pr_err("%s: Failed to roll back to %d performance state\n",
+			       master->name, prev);
+		}
+		genpd_unlock(master);
+	}
+
+	return ret;
+}
+
+/*
+ * Re-evaluate performance state of a genpd. Finds the highest requested
+ * performance state by the devices and subdomains within the genpd and then
+ * change genpd's performance state (if required). The request is then
+ * propagated to the masters of the genpd.
+ *
+ * @genpd: PM domain whose state needs to be reevaluated.
+ * @state: Newly requested performance state of the device or subdomain for
+ *	   which this routine is called.
+ * @depth: nesting count for lockdep.
+ *
+ * Locking rules followed are:
+ *
+ * - Device's state (pd_data->performance_state) should be accessed from
+ *   within its master's lock protected section.
+ *
+ * - Subdomains have a separate state field (link->performance_state) per
+ *   master domain and is accessed only from within master's lock protected
+ *   section.
+ *
+ * - Subdomain's state (genpd->performance_state) should be accessed from
+ *   within its own lock protected section.
+ *
+ * - The locks are always taken in bottom->up order, i.e. subdomain first,
+ *   followed by its masters.
+ *
+ * Returns -ve errors or 0 on success.
+ */
+static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
+					   int state, int depth)
+{
+	struct generic_pm_domain_data *pd_data;
+	struct pm_domain_data *pdd;
+	struct gpd_link *link;
+
+	/* New requested state is same as Max requested state */
+	if (state == genpd->performance_state)
+		return 0;
+
+	/* New requested state is higher than Max requested state */
+	if (state > genpd->performance_state)
+		goto update_state;
+
+	/* Traverse all devices within the domain */
+	list_for_each_entry(pdd, &genpd->dev_list, list_node) {
+		pd_data = to_gpd_data(pdd);
+
+		if (pd_data->performance_state > state)
+			state = pd_data->performance_state;
+	}
+
+	/*
+	 * Traverse all subdomains within the domain. This can be done without
+	 * any additional locks as all link->performance_state fields are
+	 * protected by genpd->lock, which is already taken.
+	 */
+	list_for_each_entry(link, &genpd->master_links, master_node) {
+		if (link->performance_state > state)
+			state = link->performance_state;
+	}
+
+update_state:
+	return _genpd_set_performance_state(genpd, state, depth);
+}
+
+static void __genpd_dev_update_performance_state(struct device *dev, int state)
+{
+	struct generic_pm_domain_data *gpd_data;
+
+	spin_lock_irq(&dev->power.lock);
+
+	if (!dev->power.subsys_data || !dev->power.subsys_data->domain_data) {
+		WARN_ON(1);
+		goto unlock;
+	}
+
+	gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
+	gpd_data->performance_state = state;
+
+unlock:
+	spin_unlock_irq(&dev->power.lock);
+}
+
+/**
+ * dev_pm_genpd_update_performance_state - Update performance state of device's
+ * power domain for the target frequency for the device.
+ *
+ * @dev: Device for which the performance-state needs to be adjusted.
+ * @rate: Device's next frequency. This can be set as 0 when the device doesn't
+ *	  have any performance state constraints left (and so the device
+ *	  wouldn't participate anymore to find the target performance state of
+ *	  the genpd).
+ *
+ * The user drivers may want to call dev_pm_genpd_has_performance_state() (only
+ * once) before calling this routine (any number of times) to guarantee that
+ * all dependencies are met.
+ *
+ * It is assumed that the user driver guarantees that the genpd wouldn't be
+ * detached while this routine is getting called.
+ *
+ * Returns 0 on success and negative error values on failures.
+ */
+int dev_pm_genpd_update_performance_state(struct device *dev,
+					  unsigned long rate)
+{
+	struct generic_pm_domain *genpd;
+	int ret, state;
+
+	genpd = dev_to_genpd(dev);
+	if (IS_ERR(genpd))
+		return -ENODEV;
+
+	genpd_lock(genpd);
+
+	if (!genpd_status_on(genpd)) {
+		ret = -EBUSY;
+		goto unlock;
+	}
+
+	state = genpd->dev_get_performance_state(dev, rate);
+	if (state < 0) {
+		ret = state;
+		goto unlock;
+	}
+
+	ret = _genpd_reeval_performance_state(genpd, state, 0);
+	if (!ret) {
+		/*
+		 * Since we are passing "state" to
+		 * _genpd_reeval_performance_state() as well, we don't need to
+		 * call __genpd_dev_update_performance_state() before updating
+		 * genpd's state with the above call. Update it only after the
+		 * state of master domain is updated.
+		 */
+		__genpd_dev_update_performance_state(dev, state);
+	}
+
+unlock:
+	genpd_unlock(genpd);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_genpd_update_performance_state);
+
+static int _genpd_on_update_performance_state(struct generic_pm_domain *genpd,
+					      int depth)
+{
+	int ret, prev = genpd->prev_performance_state;
+
+	if (likely(!prev))
+		return 0;
+
+	ret = _genpd_set_performance_state(genpd, prev, depth);
+	if (ret) {
+		pr_err("%s: Failed to restore performance state to %d (%d)\n",
+		       genpd->name, prev, ret);
+	} else {
+		genpd->prev_performance_state = 0;
+	}
+
+	return ret;
+}
+
+static void _genpd_off_update_performance_state(struct generic_pm_domain *genpd,
+						int depth)
+{
+	int ret, state = genpd->performance_state;
+
+	if (likely(!state))
+		return;
+
+	ret = _genpd_set_performance_state(genpd, 0, depth);
+	if (ret) {
+		pr_err("%s: Failed to set performance state to 0 (%d)\n",
+		       genpd->name, ret);
+	} else {
+		genpd->prev_performance_state = state;
+	}
+}
+
 static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 {
 	unsigned int state_idx = genpd->state_idx;
@@ -388,6 +649,8 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
 			return ret;
 	}
 
+	_genpd_off_update_performance_state(genpd, depth);
+
 	genpd->status = GPD_STATE_POWER_OFF;
 
 	genpd_update_accounting(genpd);
@@ -437,15 +700,21 @@ static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
 		}
 	}
 
-	ret = _genpd_power_on(genpd, true);
+	ret = _genpd_on_update_performance_state(genpd, depth);
 	if (ret)
 		goto err;
 
+	ret = _genpd_power_on(genpd, true);
+	if (ret)
+		goto err_power_on;
+
 	genpd->status = GPD_STATE_ACTIVE;
 	genpd_update_accounting(genpd);
 
 	return 0;
 
+err_power_on:
+	_genpd_off_update_performance_state(genpd, depth);
 err:
 	list_for_each_entry_continue_reverse(link, &genpd->slave_links,
@@ -807,6 +1076,8 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
 	if (_genpd_power_off(genpd, false))
 		return;
 
+	_genpd_off_update_performance_state(genpd, depth);
+
 	genpd->status = GPD_STATE_POWER_OFF;
 
 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
@@ -852,7 +1123,9 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
 		genpd_unlock(link->master);
 	}
 
-	_genpd_power_on(genpd, false);
+	if (!_genpd_on_update_performance_state(genpd, depth))
+		if (_genpd_power_on(genpd, false))
+			_genpd_off_update_performance_state(genpd, depth);
 
 	genpd->status = GPD_STATE_ACTIVE;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 84f423d5633e..715cca7ac399 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -64,8 +64,13 @@ struct generic_pm_domain {
 	unsigned int device_count;	/* Number of devices */
 	unsigned int suspended_count;	/* System suspend device counter */
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
+	unsigned int performance_state;	/* Max requested performance state */
+	unsigned int prev_performance_state; /* Performance state before power off */
 	int (*power_off)(struct generic_pm_domain *domain);
 	int (*power_on)(struct generic_pm_domain *domain);
+	int (*dev_get_performance_state)(struct device *dev, unsigned long rate);
+	int (*genpd_set_performance_state)(struct generic_pm_domain *genpd,
+					   unsigned int state);
 	struct gpd_dev_ops dev_ops;
 	s64 max_off_time_ns;	/* Maximum allowed "suspended" time. */
 	bool max_off_time_changed;
@@ -102,6 +107,9 @@ struct gpd_link {
 	struct list_head master_node;
 	struct generic_pm_domain *slave;
 	struct list_head slave_node;
+
+	/* Sub-domain's per-master domain performance state */
+	unsigned int performance_state;
 };
 
 struct gpd_timing_data {
@@ -121,6 +129,7 @@ struct generic_pm_domain_data {
 	struct pm_domain_data base;
 	struct gpd_timing_data td;
 	struct notifier_block nb;
+	unsigned int performance_state;
 	void *data;
 };
 
@@ -148,6 +157,9 @@ extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 extern int pm_genpd_init(struct generic_pm_domain *genpd,
 			 struct dev_power_governor *gov, bool is_off);
 extern int pm_genpd_remove(struct generic_pm_domain *genpd);
+extern bool dev_pm_genpd_has_performance_state(struct device *dev);
+extern int dev_pm_genpd_update_performance_state(struct device *dev,
+						 unsigned long rate);
 
 extern struct dev_power_governor simple_qos_governor;
 extern struct dev_power_governor pm_domain_always_on_gov;
@@ -188,6 +200,17 @@ static inline int pm_genpd_remove(struct generic_pm_domain *genpd)
 	return -ENOTSUPP;
 }
 
+static inline bool dev_pm_genpd_has_performance_state(struct device *dev)
+{
+	return false;
+}
+
+static inline int dev_pm_genpd_update_performance_state(struct device *dev,
+							unsigned long rate)
+{
+	return -ENOTSUPP;
+}
+
 #define simple_qos_governor		(*(struct dev_power_governor *)(NULL))
 #define pm_domain_always_on_gov		(*(struct dev_power_governor *)(NULL))
 #endif