From patchwork Wed Jun 20 17:22:04 2018
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 139372
From: Ulf Hansson <ulf.hansson@linaro.org>
To: "Rafael J . Wysocki", Sudeep Holla, Lorenzo Pieralisi, Mark Rutland, linux-pm@vger.kernel.org
Cc: Kevin Hilman, Lina Iyer, Lina Iyer, Ulf Hansson, Rob Herring, Daniel Lezcano, Thomas Gleixner, Vincent Guittot, Stephen Boyd, Juri Lelli, Geert Uytterhoeven, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v8 04/26] PM / Domains: Add support for CPU devices to genpd
Date: Wed, 20 Jun 2018 19:22:04 +0200
Message-Id: <20180620172226.15012-5-ulf.hansson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180620172226.15012-1-ulf.hansson@linaro.org>
References: <20180620172226.15012-1-ulf.hansson@linaro.org>

To enable a device belonging to a CPU to be attached to a PM domain managed
by genpd, let's make a few changes to genpd so that it becomes convenient to
manage the specifics around CPUs.

First, to be able to quickly find out which CPUs are attached to a genpd,
which typically becomes useful from a genpd governor as the following
changes will show, let's add a cpumask 'cpus' to the struct
generic_pm_domain. At the point when a device that belongs to a CPU is
attached to (or detached from) its corresponding PM domain via
genpd_add_device(), let's update the cpumask in genpd->cpus.
Moreover, propagate the update of the cpumask to the master domains, so
that genpd->cpus contains a cpumask that hierarchically reflects all CPUs
for a genpd, including CPUs attached to subdomains.

Second, unconditionally managing CPUs and the cpumask in genpd->cpus is
unnecessary for cases when only non-CPU devices are part of a genpd. Let's
avoid this by adding a new configuration bit, GENPD_FLAG_CPU_DOMAIN.
Clients must set the bit before they call pm_genpd_init(), to instruct
genpd that it shall deal with CPUs and thus manage the cpumask in
genpd->cpus.

Cc: Lina Iyer
Co-developed-by: Lina Iyer
Signed-off-by: Ulf Hansson
---
 drivers/base/power/domain.c | 69 ++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   |  3 ++
 2 files changed, 71 insertions(+), 1 deletion(-)

--
2.17.1

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 21d298e1820b..6149ce0bfa7b 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include <linux/cpu.h>
 
 #include "power.h"
 
@@ -126,6 +127,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
 #define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
 #define genpd_is_always_on(genpd)	(genpd->flags & GENPD_FLAG_ALWAYS_ON)
 #define genpd_is_active_wakeup(genpd)	(genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
+#define genpd_is_cpu_domain(genpd)	(genpd->flags & GENPD_FLAG_CPU_DOMAIN)
 
 static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
 		const struct generic_pm_domain *genpd)
@@ -1377,6 +1379,62 @@ static void genpd_free_dev_data(struct device *dev,
 	dev_pm_put_subsys_data(dev);
 }
 
+static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
+				   int cpu, bool set, unsigned int depth)
+{
+	struct gpd_link *link;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	list_for_each_entry(link, &genpd->slave_links, slave_node) {
+		struct generic_pm_domain *master = link->master;
+
+		genpd_lock_nested(master, depth + 1);
+		__genpd_update_cpumask(master, cpu, set, depth + 1);
+		genpd_unlock(master);
+	}
+
+	if (set)
+		cpumask_set_cpu(cpu, genpd->cpus);
+	else
+		cpumask_clear_cpu(cpu, genpd->cpus);
+}
+
+static void genpd_update_cpumask(struct generic_pm_domain *genpd,
+				 struct device *dev, bool set)
+{
+	bool is_cpu = false;
+	int cpu;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	for_each_possible_cpu(cpu) {
+		if (get_cpu_device(cpu) == dev) {
+			is_cpu = true;
+			break;
+		}
+	}
+
+	if (!is_cpu)
+		return;
+
+	__genpd_update_cpumask(genpd, cpu, set, 0);
+}
+
+static void genpd_set_cpumask(struct generic_pm_domain *genpd,
+			      struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, true);
+}
+
+static void genpd_clear_cpumask(struct generic_pm_domain *genpd,
+				struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, false);
+}
+
 static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 			    struct gpd_timing_data *td)
 {
@@ -1398,6 +1456,8 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (ret)
 		goto out;
 
+	genpd_set_cpumask(genpd, dev);
+
 	dev_pm_domain_set(dev, &genpd->domain);
 
 	genpd->device_count++;
@@ -1459,6 +1519,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
 	if (genpd->detach_dev)
 		genpd->detach_dev(genpd, dev);
 
+	genpd_clear_cpumask(genpd, dev);
 	dev_pm_domain_set(dev, NULL);
 
 	list_del_init(&pdd->list_node);
@@ -1686,11 +1747,16 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	if (genpd_is_always_on(genpd) && !genpd_status_on(genpd))
 		return -EINVAL;
 
+	if (!zalloc_cpumask_var(&genpd->cpus, GFP_KERNEL))
+		return -ENOMEM;
+
 	/* Use only one "off" state if there were no states declared */
 	if (genpd->state_count == 0) {
 		ret = genpd_set_default_power_state(genpd);
-		if (ret)
+		if (ret) {
+			free_cpumask_var(genpd->cpus);
 			return ret;
+		}
 	} else if (!gov) {
 		pr_warn("%s : no governor for states\n", genpd->name);
 	}
@@ -1736,6 +1802,7 @@ static int genpd_remove(struct generic_pm_domain *genpd)
 	list_del(&genpd->gpd_list_node);
 	genpd_unlock(genpd);
 	cancel_work_sync(&genpd->power_off_work);
+	free_cpumask_var(genpd->cpus);
 	kfree(genpd->free);
 
 	pr_debug("%s: removed %s\n", __func__, genpd->name);
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 27fca748344a..3f67ff0c1c69 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -16,12 +16,14 @@
 #include
 #include
 #include
+#include <linux/cpumask.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	 (1U << 0) /* PM domain uses PM clk */
 #define GENPD_FLAG_IRQ_SAFE	 (1U << 1) /* PM domain operates in atomic */
 #define GENPD_FLAG_ALWAYS_ON	 (1U << 2) /* PM domain is always powered on */
 #define GENPD_FLAG_ACTIVE_WAKEUP (1U << 3) /* Keep devices active if wakeup */
+#define GENPD_FLAG_CPU_DOMAIN	 (1U << 4) /* PM domain manages CPUs */
 
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -68,6 +70,7 @@ struct generic_pm_domain {
 	unsigned int suspended_count;	/* System suspend device counter */
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
 	unsigned int performance_state;	/* Aggregated max performance state */
+	cpumask_var_t cpus;		/* A cpumask of the attached CPUs */
 	int (*power_off)(struct generic_pm_domain *domain);
 	int (*power_on)(struct generic_pm_domain *domain);
 	unsigned int (*opp_to_performance_state)(struct generic_pm_domain *genpd,
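
For illustration only, here is a minimal, hypothetical sketch (not part of
this patch or series) of how a platform-specific genpd provider could opt in
to the new behaviour. All foo_* identifiers are invented; the only points
taken from the patch are that GENPD_FLAG_CPU_DOMAIN must be set in
genpd->flags before pm_genpd_init() is called, and that genpd then maintains
genpd->cpus as CPU devices get attached or detached.

/*
 * Hypothetical example, not part of this patch: a genpd provider for a
 * CPU cluster. All foo_* identifiers are made up for illustration.
 */
#include <linux/init.h>
#include <linux/pm_domain.h>

static int foo_cluster_power_on(struct generic_pm_domain *pd)
{
	/* Platform-specific code that powers up the cluster would go here. */
	return 0;
}

static int foo_cluster_power_off(struct generic_pm_domain *pd)
{
	/* Platform-specific code that powers down the cluster would go here. */
	return 0;
}

static struct generic_pm_domain foo_cluster_pd = {
	.name = "foo-cpu-cluster",
	.power_on = foo_cluster_power_on,
	.power_off = foo_cluster_power_off,
	/* Opt in: genpd shall track the attached CPUs in genpd->cpus. */
	.flags = GENPD_FLAG_CPU_DOMAIN,
};

static int __init foo_cluster_pd_init(void)
{
	/*
	 * The flag must be set before pm_genpd_init(); the cpumask is
	 * allocated here and updated later, when CPU devices are attached
	 * to (or detached from) the domain via genpd_add_device().
	 */
	return pm_genpd_init(&foo_cluster_pd, NULL, false);
}

As the commit message notes, later changes in the series are expected to let
a genpd governor consume genpd->cpus, which is also why the mask is
propagated hierarchically to the master domains.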