From patchwork Fri Mar 3 20:41:32 2017
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 94860
From: Lina Iyer <lina.iyer@linaro.org>
To: ulf.hansson@linaro.org, khilman@kernel.org, rjw@rjwysocki.net,
	linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: andy.gross@linaro.org, sboyd@codeaurora.org,
	linux-arm-msm@vger.kernel.org, brendan.jackman@arm.com,
	lorenzo.pieralisi@arm.com, sudeep.holla@arm.com, Juri.Lelli@arm.com,
	Lina Iyer <lina.iyer@linaro.org>
Subject: [PATCH V5 6/9] PM / cpu_domains: Add PM Domain governor for CPUs
Date: Fri, 3 Mar 2017 12:41:32 -0800
Message-Id: <1488573695-106680-7-git-send-email-lina.iyer@linaro.org>
In-Reply-To: <1488573695-106680-1-git-send-email-lina.iyer@linaro.org>
References: <1488573695-106680-1-git-send-email-lina.iyer@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

A PM domain comprising CPUs may be powered off when all the CPUs in the
domain are powered down. Powering down a CPU domain is generally an
expensive operation, so the power/performance trade-offs should be
considered.
The time between the last CPU powering down and the first CPU powering
up in a domain is the time available for the domain to sleep. Ideally,
this sleep time should fulfill the residency requirement of the domain's
idle state. To choose a state effectively, read the next wakeup time of
the cluster's CPUs and ensure that the selected idle state satisfies
both each CPU's PM QoS CPU_DMA_LATENCY constraint and the state's
residency.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 80 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 1 deletion(-)

-- 
2.7.4

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 04891dc..4f2a40e 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -16,9 +16,12 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
+#include
 
 #define CPU_PD_NAME_MAX 36
 
@@ -50,6 +53,81 @@ static inline struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
+static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
+{
+	struct generic_pm_domain *genpd = pd_to_genpd(pd);
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+	int qos_ns = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	s64 sleep_ns;
+	ktime_t earliest, next_wakeup;
+	int cpu;
+	int i;
+
+	/* Reset the last set genpd state, default to index 0 */
+	genpd->state_idx = 0;
+
+	/* We don't want to power down, if QoS is 0 */
+	if (!qos_ns)
+		return false;
+
+	/*
+	 * Find the sleep time for the cluster.
+	 * The time between now and the first wake up of any CPU in
+	 * this domain hierarchy is the time available for the
+	 * domain to be idle.
+	 *
+	 * We only care about the next wakeup for any online CPU in that
+	 * cluster. Hotplugging off any of the CPUs we care about will
+	 * wait on the genpd lock until we are done. Any other CPU hotplug
+	 * is not of consequence to our sleep time.
+	 */
+	earliest = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
+		next_wakeup = tick_nohz_get_next_wakeup(cpu);
+		if (earliest > next_wakeup)
+			earliest = next_wakeup;
+	}
+
+	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
+	if (sleep_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest sleep state that satisfies the residency
+	 * requirement and the QoS constraint.
+	 */
+	for (i = genpd->state_count - 1; i >= 0; i--) {
+		u64 state_sleep_ns;
+
+		state_sleep_ns = genpd->states[i].power_off_latency_ns +
+			genpd->states[i].power_on_latency_ns +
+			genpd->states[i].residency_ns;
+
+		/*
+		 * If we can't sleep to save power in the state, move on
+		 * to the next lower idle state.
+		 */
+		if (state_sleep_ns > sleep_ns)
+			continue;
+
+		/*
+		 * We also don't want to sleep more than we should to
+		 * guarantee QoS.
+		 */
+		if (state_sleep_ns < (qos_ns * NSEC_PER_USEC))
+			break;
+	}
+
+	if (i >= 0)
+		genpd->state_idx = i;
+
+	return (i >= 0);
+}
+
+static struct dev_power_governor cpu_pd_gov = {
+	.power_down_ok = cpu_pd_down_ok,
+};
+
 static int cpu_pd_power_on(struct generic_pm_domain *genpd)
 {
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
@@ -172,7 +250,7 @@ int cpu_pd_init(struct generic_pm_domain *genpd, const struct cpu_pd_ops *ops)
 	list_add_rcu(&pd->link, &of_cpu_pd_list);
 	mutex_unlock(&cpu_pd_list_lock);
 
-	ret = pm_genpd_init(genpd, &simple_qos_governor, false);
+	ret = pm_genpd_init(genpd, &cpu_pd_gov, false);
 	if (ret) {
 		pr_err("Unable to initialize domain %s\n", genpd->name);
 		goto fail;