From patchwork Thu Jun 4 22:29:05 2015
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 49481
From: Lina Iyer
To: rjw@rjwysocki.net, ulf.hansson@linaro.org, khilman@linaro.org
Cc: mathieu.poirier@linaro.org, galak@codeaurora.org, linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, msivasub@codeaurora.org, agross@codeaurora.org, Lina Iyer
Subject: [PATCH RFC 2/3] PM / Domains: Support atomic PM domains
Date: Thu, 4 Jun 2015 16:29:05 -0600
Message-Id: <1433456946-53296-3-git-send-email-lina.iyer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1433456946-53296-1-git-send-email-lina.iyer@linaro.org>
References: <1433456946-53296-1-git-send-email-lina.iyer@linaro.org>

PM domains currently support being powered on and off only in process context. This restricts their use: the mutexes used in genpd allow the CPU to sleep while waiting for a lock, so devices and domains that must be powered on/off in irq-disabled contexts cannot use genpd.

Genpd inherently provides support for devices and domain hierarchies, and can be used to represent CPU clusters such as ARM's big.LITTLE, where each CPU cluster is in its own domain together with its caches and other peripheral hardware. Multiple such domains may in turn be part of another domain.
Because mutexes are used to protect and synchronize domain operations, while CPU idle operations are inherently atomic, genpd cannot currently be used for runtime suspend and resume of a CPU PM domain. Replacing the mutexes with spinlocks would allow the CPU domain to be powered off to save power when all of its CPUs are powered off. However, not all domains can operate in irq-safe contexts; most need to sleep during domain operations. Genpd therefore has to support both cases, and the irq-safe attribute is domain specific.

To achieve domain-specific locking, set the GENPD_FLAG_IRQ_SAFE flag when defining the domain. This determines whether the domain uses a spinlock instead of a mutex. Locking is abstracted through the genpd_lock_domain() and genpd_unlock_domain() functions, which use the flag to pick the locking primitive for the domain.

The restriction this imposes on the domain hierarchy is that subdomains and all devices in an irq-safe domain's hierarchy must also be irq-safe. Non-irq-safe domains may still contain irq-safe devices, but not the other way around.

Cc: Ulf Hansson
Cc: Rafael J. Wysocki
Cc: Kevin Hilman
Signed-off-by: Lina Iyer
---
 drivers/base/power/domain.c | 200 ++++++++++++++++++++++++++++++++++----------
 include/linux/pm_domain.h   |  11 ++-
 2 files changed, 164 insertions(+), 47 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index dfd7595..8b89d15 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -50,6 +50,71 @@
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
+static inline int genpd_lock_domain_noirq(struct generic_pm_domain *genpd,
+					unsigned int subclass)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	if (unlikely(subclass > 0))
+		spin_lock_irqsave_nested(&genpd->slock, flags, subclass);
+	else
+		spin_lock_irqsave(&genpd->slock, flags);
+
+	genpd->lock_flags = flags;
+
+	return 0;
+}
+
+static inline int genpd_unlock_domain_noirq(struct generic_pm_domain *genpd)
+	__releases(&genpd->slock)
+{
+	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
+	return 0;
+}
+
+static inline int genpd_lock_domain_irq(struct generic_pm_domain *genpd,
+					unsigned int subclass)
+	__acquires(&genpd->mlock)
+{
+	if (unlikely(subclass > 0))
+		mutex_lock_nested(&genpd->mlock, subclass);
+	else
+		mutex_lock(&genpd->mlock);
+
+	return 0;
+}
+
+static inline int genpd_lock_domain_interruptible_irq(
+					struct generic_pm_domain *genpd)
+	__acquires(&genpd->mlock)
+{
+	return mutex_lock_interruptible(&genpd->mlock);
+}
+
+static inline int genpd_unlock_domain_irq(struct generic_pm_domain *genpd)
+	__releases(&genpd->mlock)
+{
+	mutex_unlock(&genpd->mlock);
+	return 0;
+}
+
+#define genpd_lock_domain(genpd) \
+	(genpd->irq_safe ? genpd_lock_domain_noirq(genpd, 0) \
+			: genpd_lock_domain_irq(genpd, 0))
+
+#define genpd_lock_domain_nested(genpd) \
+	(genpd->irq_safe ? genpd_lock_domain_noirq(genpd, SINGLE_DEPTH_NESTING)\
+			: genpd_lock_domain_irq(genpd, SINGLE_DEPTH_NESTING))
+
+#define genpd_unlock_domain(genpd) \
+	(genpd->irq_safe ? genpd_unlock_domain_noirq(genpd) \
+			: genpd_unlock_domain_irq(genpd))
+
+#define genpd_lock_domain_interruptible(genpd) \
+	(genpd->irq_safe ? genpd_lock_domain_noirq(genpd, 0) \
+			: genpd_lock_domain_interruptible_irq(genpd))
+
 static struct generic_pm_domain *pm_genpd_lookup_name(const char *domain_name)
 {
 	struct generic_pm_domain *genpd = NULL, *gpd;
@@ -262,9 +327,9 @@ int pm_genpd_poweron(struct generic_pm_domain *genpd)
 {
 	int ret;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 	ret = __pm_genpd_poweron(genpd);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	return ret;
 }
@@ -326,9 +391,9 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 		spin_unlock_irq(&dev->power.lock);
 
 		if (!IS_ERR(genpd)) {
-			mutex_lock(&genpd->lock);
+			genpd_lock_domain(genpd);
 			genpd->max_off_time_changed = true;
-			mutex_unlock(&genpd->lock);
+			genpd_unlock_domain(genpd);
 		}
 
 		dev = dev->parent;
@@ -387,7 +452,7 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
 			return -EBUSY;
 
 		if (pdd->dev->driver && (!pm_runtime_suspended(pdd->dev)
-		    || pdd->dev->power.irq_safe))
+		    || (pdd->dev->power.irq_safe && !genpd->irq_safe)))
 			not_suspended++;
 	}
@@ -453,9 +518,9 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 	pm_genpd_poweroff(genpd);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 }
@@ -478,12 +543,8 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/*
-	 * We can't allow to power off the PM domain if it holds an irq_safe
-	 * device. That's beacuse we use mutexes to protect data while power
-	 * off and on the PM domain, thus we can't execute in atomic context.
-	 */
-	if (dev->power.irq_safe)
+	/* We can't allow to power off a domain that is also not irq safe. */
+	if (dev->power.irq_safe && !genpd->irq_safe)
 		return -EBUSY;
 
 	stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL;
@@ -500,11 +561,19 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 		return ret;
 	}
 
-	mutex_lock(&genpd->lock);
+	/*
+	 * If power.irq_safe is set, this routine will be run with interrupts
+	 * off, so suspend only if the power domain is irq_safe.
+	 */
+	if (dev->power.irq_safe && !genpd->irq_safe)
+		return 0;
+
+	genpd_lock_domain(genpd);
+
 	genpd->in_progress++;
 	pm_genpd_poweroff(genpd);
 	genpd->in_progress--;
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	return 0;
 }
@@ -528,13 +597,16 @@ static int pm_genpd_runtime_resume(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/* If power.irq_safe, the PM domain is never powered off. */
-	if (dev->power.irq_safe)
+	/*
+	 * If power.irq_safe and domain is not, then
+	 * the PM domain is never powered off.
+	 */
+	if (dev->power.irq_safe && !genpd->irq_safe)
 		return genpd_start_dev_no_timing(genpd, dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 	ret = __pm_genpd_poweron(genpd);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	if (ret)
 		return ret;
@@ -729,14 +801,14 @@ static int pm_genpd_prepare(struct device *dev)
 	if (resume_needed(dev, genpd))
 		pm_runtime_resume(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 
 	if (genpd->prepared_count++ == 0) {
 		genpd->suspended_count = 0;
 		genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF;
 	}
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	if (genpd->suspend_power_off) {
 		pm_runtime_put_noidle(dev);
@@ -754,12 +826,12 @@ static int pm_genpd_prepare(struct device *dev)
 
 	ret = pm_generic_prepare(dev);
 	if (ret) {
-		mutex_lock(&genpd->lock);
+		genpd_lock_domain(genpd);
 
 		if (--genpd->prepared_count == 0)
 			genpd->suspend_power_off = false;
 
-		mutex_unlock(&genpd->lock);
+		genpd_unlock_domain(genpd);
 
 		pm_runtime_enable(dev);
 	}
@@ -1116,13 +1188,13 @@ static void pm_genpd_complete(struct device *dev)
 	if (IS_ERR(genpd))
 		return;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 	run_complete = !genpd->suspend_power_off;
 	if (--genpd->prepared_count == 0)
 		genpd->suspend_power_off = false;
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	if (run_complete) {
 		pm_generic_complete(dev);
@@ -1266,11 +1338,18 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
 		return -EINVAL;
 
+	/* Devices in an IRQ safe PM Domain have to be irq safe too */
+	if (genpd->irq_safe && !dev->power.irq_safe) {
+		dev_warn(dev,
+			"Devices in an irq-safe domain have to be irq safe.\n");
+		return -EINVAL;
+	}
+
 	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1287,7 +1366,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
 
 out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	if (ret)
 		genpd_free_dev_data(dev, gpd_data);
@@ -1331,7 +1410,7 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 	gpd_data = to_gpd_data(pdd);
 	dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1346,14 +1425,14 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 
 	list_del_init(&pdd->list_node);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	genpd_free_dev_data(dev, gpd_data);
 
 	return 0;
 
 out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 	dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
 	return ret;
@@ -1374,12 +1453,23 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	/*
+	 * If the domain can be powered on/off in an irq safe
+	 * context, ensure that the subdomain can also be
+	 * powered on/off in that context.
+	 */
+	if (genpd->irq_safe && !subdomain->irq_safe) {
+		pr_warn("Incompatible sub-domain %s of irq-safe domain %s\n",
+				subdomain->name, genpd->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
 
-	mutex_lock(&genpd->lock);
-	mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock_domain(genpd);
+	genpd_lock_domain_nested(subdomain);
 
 	if (genpd->status == GPD_STATE_POWER_OFF
 	    &&  subdomain->status != GPD_STATE_POWER_OFF) {
@@ -1402,8 +1492,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		genpd_sd_counter_inc(genpd);
 
 out:
-	mutex_unlock(&subdomain->lock);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(subdomain);
+	genpd_unlock_domain(genpd);
 
 	return ret;
 }
@@ -1451,13 +1541,13 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
 		return -EINVAL;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 
 	list_for_each_entry(link, &genpd->master_links, master_node) {
 		if (link->slave != subdomain)
 			continue;
 
-		mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING);
+		genpd_lock_domain_nested(subdomain);
 
 		list_del(&link->master_node);
 		list_del(&link->slave_node);
@@ -1465,13 +1555,13 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 		if (subdomain->status != GPD_STATE_POWER_OFF)
 			genpd_sd_counter_dec(genpd);
 
-		mutex_unlock(&subdomain->lock);
+		genpd_unlock_domain(subdomain);
 
 		ret = 0;
 		break;
 	}
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	return ret;
 }
@@ -1495,11 +1585,17 @@ int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
 	if (IS_ERR_OR_NULL(genpd) || state < 0)
 		return -EINVAL;
 
+	if (genpd->irq_safe) {
+		pr_err("Domain %s is irq safe, cannot attach to cpuidle\n",
+				genpd->name);
+		return -EINVAL;
+	}
+
 	cpuidle_data = kzalloc(sizeof(*cpuidle_data), GFP_KERNEL);
 	if (!cpuidle_data)
 		return -ENOMEM;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 
 	if (genpd->cpuidle_data) {
 		ret = -EEXIST;
@@ -1526,7 +1622,7 @@ int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
 	genpd_recalc_cpu_exit_latency(genpd);
 
 out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 	return ret;
 
 err:
@@ -1563,7 +1659,7 @@ int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
 	if (IS_ERR_OR_NULL(genpd))
 		return -EINVAL;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock_domain(genpd);
 
 	cpuidle_data = genpd->cpuidle_data;
 	if (!cpuidle_data) {
@@ -1581,7 +1677,7 @@ int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
 	kfree(cpuidle_data);
 
 out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 	return ret;
 }
@@ -1642,6 +1738,17 @@ static int pm_genpd_default_restore_state(struct device *dev)
 	return cb ? cb(dev) : 0;
 }
 
+static void genpd_lock_domain_init(struct generic_pm_domain *genpd)
+{
+	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+		spin_lock_init(&genpd->slock);
+		genpd->irq_safe = true;
+	} else {
+		mutex_init(&genpd->mlock);
+		genpd->irq_safe = false;
+	}
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1657,7 +1764,7 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->lock);
+	genpd_lock_domain_init(genpd);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	genpd->in_progress = 0;
@@ -2042,7 +2149,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	struct gpd_link *link;
 	int ret;
 
-	ret = mutex_lock_interruptible(&genpd->lock);
+	ret = genpd_lock_domain_interruptible(genpd);
 	if (ret)
 		return -ERESTARTSYS;
@@ -2062,7 +2169,8 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	}
 
 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
-		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd->irq_safe ? GFP_ATOMIC : GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
@@ -2073,7 +2181,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock_domain(genpd);
 
 	return 0;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index b2725e6..dc7cb53 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -16,9 +16,11 @@
 #include
 #include
 #include
+#include
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
+#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
 
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -49,7 +51,6 @@ struct generic_pm_domain {
 	struct list_head master_links;	/* Links with PM domain as a master */
 	struct list_head slave_links;	/* Links with PM domain as a slave */
 	struct list_head dev_list;	/* List of devices */
-	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
 	const char *name;
@@ -74,6 +75,14 @@ struct generic_pm_domain {
 	void (*detach_dev)(struct generic_pm_domain *domain,
 			   struct device *dev);
 	unsigned int flags;		/* Bit field of configs for genpd */
+	bool irq_safe;
+	union {
+		struct mutex mlock;
+		struct {
+			spinlock_t slock;
+			unsigned long lock_flags;
+		};
+	};
 };
 
 static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)