
[RFC,08/27] PM / Domains: Support IRQ safe PM domains

Message ID 1447799871-56374-9-git-send-email-lina.iyer@linaro.org
State New

Commit Message

Lina Iyer Nov. 17, 2015, 10:37 p.m. UTC
Generic Power Domains currently support turning on/off only in process
context. This prevents the use of PM domains for domains that could be
powered on/off in a context where IRQs are disabled. Many such domains
exist today and, because of this limitation, do not get powered off when
the IRQ safe devices in that domain are powered off.

However, not all domains can operate in IRQ safe contexts. Genpd
therefore has to support both cases, where the domain may or may not
operate in IRQ safe contexts. Configuring genpd to use an appropriate
lock for each domain allows domains that have IRQ safe devices to
runtime suspend and resume in atomic context.

To achieve domain specific locking, set the domain's ->flags to
GENPD_FLAG_IRQ_SAFE while defining the domain. This indicates that genpd
should use a spinlock instead of a mutex for locking the domain. Locking
is abstracted through genpd_lock() and genpd_unlock() functions that use
the flag to determine the appropriate lock to be used for that domain.

Domains whose suspend and resume latencies are low enough to run with
IRQs disabled may now be able to save power when their component devices
and sub-domains are idle at runtime.
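
As an illustration (a minimal sketch; the domain name and callbacks are
hypothetical, and the callbacks must not sleep), a provider would mark
its domain IRQ safe before registering it:

	static struct generic_pm_domain my_pd = {
		.name = "my_irq_safe_pd",
		.power_on = my_pd_power_on,	/* must not sleep */
		.power_off = my_pd_power_off,	/* must not sleep */
		.flags = GENPD_FLAG_IRQ_SAFE,	/* genpd uses a spinlock */
	};

	pm_genpd_init(&my_pd, NULL, false);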

The restriction this imposes on the domain hierarchy is that sub-domains
and all devices in an IRQ safe domain's hierarchy also have to be IRQ
safe, so that we don't try to lock a mutex while holding a spinlock.
Non-IRQ safe domains may continue to have devices and sub-domains that
may or may not be IRQ safe.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Krzysztof Kozłowski <k.kozlowski@samsung.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>

---
 Documentation/power/devices.txt |  11 ++-
 drivers/base/power/domain.c     | 197 ++++++++++++++++++++++++++++++++--------
 include/linux/pm_domain.h       |  12 ++-
 3 files changed, 178 insertions(+), 42 deletions(-)

-- 
2.1.4


Comments

Ulf Hansson Jan. 14, 2016, 2:42 p.m. UTC | #1
On 17 November 2015 at 23:37, Lina Iyer <lina.iyer@linaro.org> wrote:
> Generic Power Domains currently support turning on/off only in process
> context. This prevents the use of PM domains for domains that could be
> powered on/off in a context where IRQs are disabled. Many such domains
> exist today and, because of this limitation, do not get powered off when
> the IRQ safe devices in that domain are powered off.
>
> However, not all domains can operate in IRQ safe contexts. Genpd
> therefore has to support both cases, where the domain may or may not
> operate in IRQ safe contexts. Configuring genpd to use an appropriate
> lock for each domain allows domains that have IRQ safe devices to
> runtime suspend and resume in atomic context.
>
> To achieve domain specific locking, set the domain's ->flags to
> GENPD_FLAG_IRQ_SAFE while defining the domain. This indicates that genpd
> should use a spinlock instead of a mutex for locking the domain. Locking
> is abstracted through genpd_lock() and genpd_unlock() functions that use
> the flag to determine the appropriate lock to be used for that domain.

Please add a newline here.

> Domains whose suspend and resume latencies are low enough to run with

I believe I get the point, as we must not keep IRQs disabled for too
long. Perhaps that can be clarified a bit?

> IRQs disabled may now be able to save power when their component devices
> and sub-domains are idle at runtime.
>
> The restriction this imposes on the domain hierarchy is that sub-domains
> and all devices in an IRQ safe domain's hierarchy also have to be IRQ
> safe, so that we don't try to lock a mutex while holding a spinlock.

Isn't it the opposite of what you described here?

So holding a mutex while taking a spinlock should be okay, but not the
other way around.

Regarding power on:
*)
If we get a request to power on the master, its spinlock will be
taken, but its subdomain isn't touched.

**)
If we get a request to power on the subdomain, it will first try to
power on its master. That means we first take the mutex of the
subdomain, then, as the master is IRQ safe, we take its spinlock.
We power on the master, release its spinlock, continue powering on
the subdomain and then release its mutex.

Regarding power off:
*)
Trying to power off the master will fail unless the atomic "sd_count"
is zero. In that execution path, its subdomain isn't touched.

**)
Trying to power off a subdomain, thus executing in non-atomic context,
starts by taking its mutex and then continues doing the power off work.
When completed, the sd_count for its master is decreased and a power
off work is scheduled for the master, to allow it to be powered off at
some point later. Even if we didn't schedule a work and instead tried
to power off the master within the same context, it would be safe to
take a spinlock.

I haven't thought about the IRQ safe devices yet, but I guess similar
applies to them.
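
To illustrate the ordering constraint with the helpers from this patch
(a sketch only; error handling omitted):

	/* OK: process-context subdomain (mutex) powering on its
	 * IRQ safe master (spinlock). We may sleep before taking
	 * the spinlock, never while holding it. */
	genpd_lock(subdomain);		/* mutex, may sleep */
	genpd_lock(master);		/* spinlock, IRQs off while held */
	/* ... power on the master, no sleeping here ... */
	genpd_unlock(master);
	/* ... finish powering on the subdomain ... */
	genpd_unlock(subdomain);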

> Non-IRQ safe domains may continue to have devices and sub-domains that
> may or may not be IRQ safe.

According to my comments above, no, I don't think so. Non-IRQ safe
domains can't have subdomains that are IRQ safe.

Before I continue to review the code, let's align on the above, as I
think it's important we get this right first. :-)

[...]

Kind regards
Uffe
Lina Iyer Jan. 15, 2016, 4:57 p.m. UTC | #2
On Fri, Jan 15 2016 at 01:55 -0700, Ulf Hansson wrote:
[...]
>
>> *) Non-IRQ safe parent + IRQ-safe subdomain
>
> Let's name this case 1.
>
>>   Attach subdomain:
>>         - parent mutex held and subdomain spinlocked.
>>   Power-on:
>>         - subdomain assumes parent is always ON or powered on before the
>>         subdomain is powered ON.
>
> As stated above, what do you think of *not* dealing with this case? I
> think it's not a solution but a workaround!
>
>>   Power-off:
>>         - parent is not powered off even if the subdomain was the last
>>         active domain.
>>
>> **) IRQ-safe parent + IRQ-safe subdomain
>
> Let's name this case 2.
>
>>    Attach subdomain:
>>         - parent spinlock held and subdomain spinlocked.
>>    Power-on:
>>         - subdomain may power on the parent.
>>    Power-off:
>>         - last active sub-domain will power off the parent in the same
>>           context.
>
> From a genpd locking point of view, there should be no reason to power
> off in the same context. We should be able to deal with this via a
> scheduled work for this case as well, right!?

I agree it need not be a requirement. Maybe we can have an option to
schedule the work. In the case of CPU power management, scheduling a
work is not an option, especially when the domain and the CPUs are
collapsing.

> Although, I realize that it's very useful to power off in the same
> context to minimize latencies in the full blown CPU Cluster Power
> Management solution.
> Perhaps a separate genpd configuration flag could be added to enable
> this method, as I imagine that not *all* IRQ safe domains want to use
> it.
>
> What do you think?

Maybe we can provide the is_async option as a genpd flag.
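
Something along these lines, perhaps (just a sketch; the flag name is
made up, following the existing flag style):

	#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
	#define GENPD_FLAG_ASYNC_OFF	(1U << 2) /* power off masters from a work */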

>>
>> ***) IRQ-safe parent + non-IRQ-safe subdomain
>
> Let's name this case 3.
>
>>    Attach subdomain:
>>         - parent spinlock held and subdomain **cannot** be mutex locked.
>
> First, attaching shouldn't occur from atomic context.
>
> Second, the important part is that we pick the locks in the same order
> as elsewhere in genpd, and of course we must not pick a mutex while
> holding a spinlock.
>
> In all cases when adding subdomains, we should start by fetching the
> subdomain's lock *then* the lock for the parent. The current genpd code
> doesn't do that, and additionally it uses nested locks.
> I am going to send a patch that changes this behaviour, as I think it's
> wrong and may cause deadlocks!

Ah. Makes sense. I wish I had thought of it.
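
In other words (a sketch of the proposed ordering, reusing the helpers
from this patch):

	/* Proposed: take the subdomain's lock first, then the parent's,
	 * so a mutex is never requested while a spinlock is held. */
	genpd_lock(subdomain);
	genpd_lock_nested(genpd);
	/* ... link the subdomain to the parent ... */
	genpd_unlock(genpd);
	genpd_unlock(subdomain);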

> Following this approach means that case 1 can't be supported, as that
> would mean holding a spinlock while fetching a mutex.
>
>>    Power-on:
>>         - subdomain may power on the parent.
>>    Power-off:
>>         - last active subdomain will be able to power off the domain
>>
>> Except for the last case, we can support the others in addition to
>> what is currently available, with the restriction that
>>
>> - IRQ-safe parents can only have IRQ-safe subdomains and devices
>>
>> - Non-IRQ safe parents may have both IRQ-safe and non-IRQ-safe
>>   subdomains and devices, with the assumption that
>>         -- IRQ-safe subdomains and devices will not bother to power
>>         on/off their non-IRQ-safe parent.
>
> I think case 2 and case 3 should be okay to support, but I am really
> questioning case 1.

With your other patch, it does make sense to have cases 2 and 3 instead
of cases 1 and 2.
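
Putting the cases side by side, as discussed above:

	Case | Parent               | Subdomain            | Conclusion
	-----|----------------------|----------------------|-------------------
	 1   | non-IRQ safe (mutex) | IRQ safe (spinlock)  | can't be supported
	 2   | IRQ safe (spinlock)  | IRQ safe (spinlock)  | okay to support
	 3   | IRQ safe (spinlock)  | non-IRQ safe (mutex) | okay to support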

My only concern is that, viewing the SoC as a whole, there would be
cases where the parent domain could only be non-IRQ safe (turning off a
big regulator may require sleeping) while the subdomains are quicker,
IRQ-safe domains. Putting up this restriction chops up the
domain hierarchy. Imagine you could represent the whole power domain
organization in DT; because of this restriction, we may not be able
to do that.

Thanks,
Lina
Lina Iyer Jan. 18, 2016, 4:58 p.m. UTC | #3
On Fri, Jan 15 2016 at 15:08 -0700, Ulf Hansson wrote:
> [...]
>
>>> I think case 2 and case 3 should be okay to support, but I am really
>>> questioning case 1.
>>>
>> With your other patch, it does make sense to have cases 2 and 3
>> instead of cases 1 and 2.
>
> Okay!
>
>> My only concern is that, viewing the SoC as a whole, there would be
>> cases where the parent domain could only be non-IRQ safe (turning off a
>> big regulator may require sleeping) while the subdomains are quicker,
>> IRQ-safe domains. Putting up this restriction chops up the
>
> Just because they are "quick" (don't sleep) doesn't mean they have to
> set the IRQ safe flag for the domain.
>
> Let's compare how pm_runtime_irq_safe() is used for devices in
> drivers. Lots of corresponding runtime PM callbacks don't sleep, but
> that doesn't mean that the driver calls pm_runtime_irq_safe().
>
> Drivers that expect to invoke pm_runtime_get_sync() (or another
> synchronous runtime PM API) from atomic context require the
> corresponding runtime PM callbacks to be IRQ safe. Only under these
> circumstances is pm_runtime_irq_safe() used.

I agree.
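
For reference, that usage pattern looks roughly like this (a sketch
only; the driver and device names are made up):

	#include <linux/interrupt.h>
	#include <linux/platform_device.h>
	#include <linux/pm_runtime.h>

	static irqreturn_t foo_irq_handler(int irq, void *data)
	{
		struct device *dev = data;

		/* Legal in atomic context only because the driver
		 * declared its runtime PM callbacks IRQ safe. */
		pm_runtime_get_sync(dev);
		/* ... access the hardware ... */
		pm_runtime_put(dev);
		return IRQ_HANDLED;
	}

	static int foo_probe(struct platform_device *pdev)
	{
		pm_runtime_irq_safe(&pdev->dev); /* callbacks must not sleep */
		pm_runtime_enable(&pdev->dev);
		return 0;
	}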

The limitation and the interpretation could be entirely in s/w; there is
nothing preventing this in hardware. So we could have a processor
subsystem domain that collapses quickly, even during cpuidle. We would
define that domain as IRQ-safe because of its need to collapse and
resume while IRQs are disabled, but its parent may not have that
restriction; in fact, the parent could have another child that can only
operate as non-IRQ safe.

By imposing the limitation that parents of IRQ-safe domains cannot be
non-IRQ-safe, we have to chop off that relationship because of another
non-IRQ-safe sibling.
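
As a sketch of such a hierarchy (the domains are hypothetical):

	        non-IRQ safe parent (big regulator, may sleep)
	          /                             \
	  IRQ-safe CPU subsystem           non-IRQ safe sibling
	  (collapses during cpuidle)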

>> domain hierarchy. Imagine you could represent the whole power domain
>> organization in DT; because of this restriction, we may not be able
>> to do that.
>
> You do have a point. Considering my comment above, do you still think
> this may become an issue?

Yes. Please see my example below.

That said, I probably won't encounter this limitation in my current
series, but it's not hard to foresee this corner case.

Thanks,
Lina

Patch

diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
index 8ba6625..bde6141 100644
--- a/Documentation/power/devices.txt
+++ b/Documentation/power/devices.txt
@@ -607,7 +607,16 @@  individually.  Instead, a set of devices sharing a power resource can be put
 into a low-power state together at the same time by turning off the shared
 power resource.  Of course, they also need to be put into the full-power state
 together, by turning the shared power resource on.  A set of devices with this
-property is often referred to as a power domain.
+property is often referred to as a power domain. A power domain may also be
+nested inside another power domain.
+
+Devices, by default, operate in process context; a device that can operate in
+IRQ safe context has to be explicitly marked as IRQ safe. Power domains, by
+default, operate in process context but could have devices that are IRQ safe.
+Such power domains cannot be powered on/off during runtime PM. On the other
+hand, an IRQ safe PM domain can be powered on/off, suspended or resumed in
+atomic context, and may contain IRQ safe devices. Such domains may only
+contain IRQ safe devices or IRQ safe sub-domains.
 
 Support for power domains is provided through the pm_domain field of struct
 device.  This field is a pointer to an object of type struct dev_pm_domain,
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index abc81bd..8df43f8 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -44,6 +44,80 @@  static int pm_genpd_alloc_states_names(struct generic_pm_domain *genpd,
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
+static inline int genpd_lock_nosleep(struct generic_pm_domain *genpd,
+					unsigned int subclass)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	if (subclass > 0)
+		spin_lock_irqsave_nested(&genpd->slock, flags, subclass);
+	else
+		spin_lock_irqsave(&genpd->slock, flags);
+
+	genpd->lock_flags = flags;
+	return 0;
+}
+
+static inline void genpd_unlock_nosleep(struct generic_pm_domain *genpd)
+	__releases(&genpd->slock)
+{
+	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
+}
+
+static inline int genpd_lock_irq(struct generic_pm_domain *genpd,
+					unsigned int subclass)
+	__acquires(&genpd->mlock)
+{
+	if (subclass > 0)
+		mutex_lock_nested(&genpd->mlock, subclass);
+	else
+		mutex_lock(&genpd->mlock);
+	return 0;
+}
+
+static inline int genpd_lock_interruptible_irq(struct generic_pm_domain *genpd)
+	__acquires(&genpd->mlock)
+{
+	return mutex_lock_interruptible(&genpd->mlock);
+}
+
+static inline void genpd_unlock_irq(struct generic_pm_domain *genpd)
+	__releases(&genpd->mlock)
+{
+	mutex_unlock(&genpd->mlock);
+}
+
+static inline int genpd_lock(struct generic_pm_domain *genpd)
+{
+	return genpd->irq_safe ? genpd_lock_nosleep(genpd, 0)
+			: genpd_lock_irq(genpd, 0);
+}
+
+static inline int genpd_lock_nested(struct generic_pm_domain *genpd)
+{
+	return genpd->irq_safe ? genpd_lock_nosleep(genpd, SINGLE_DEPTH_NESTING)
+			: genpd_lock_irq(genpd, SINGLE_DEPTH_NESTING);
+}
+
+static inline int genpd_lock_interruptible(struct generic_pm_domain *genpd)
+{
+	return genpd->irq_safe ? genpd_lock_nosleep(genpd, 0)
+			: genpd_lock_interruptible_irq(genpd);
+}
+
+static inline void genpd_unlock(struct generic_pm_domain *genpd)
+{
+	return genpd->irq_safe ? genpd_unlock_nosleep(genpd)
+			: genpd_unlock_irq(genpd);
+}
+
+static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
+		struct generic_pm_domain *genpd)
+{
+	return dev->power.irq_safe && !genpd->irq_safe;
+}
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -238,9 +312,9 @@  static int genpd_poweron(struct generic_pm_domain *genpd)
 {
 	int ret;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	ret = __genpd_poweron(genpd);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 	return ret;
 }
 
@@ -282,9 +356,9 @@  static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 		spin_unlock_irq(&dev->power.lock);
 
 		if (!IS_ERR(genpd)) {
-			mutex_lock(&genpd->lock);
+			genpd_lock(genpd);
 			genpd->max_off_time_changed = true;
-			mutex_unlock(&genpd->lock);
+			genpd_unlock(genpd);
 		}
 
 		dev = dev->parent;
@@ -330,8 +404,17 @@  static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
-		if (!pm_runtime_suspended(pdd->dev) || pdd->dev->power.irq_safe)
+		/*
+		 * We do not want to power off the domain if the device is
+		 * not suspended or an IRQ safe device is part of this
+		 * non-IRQ safe domain.
+		 */
+		if (!pm_runtime_suspended(pdd->dev) ||
+			irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd))
 			not_suspended++;
+		WARN_ONCE(irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd),
+				"PM domain %s will not be powered off\n",
+				genpd->name);
 	}
 
 	if (not_suspended > 1 || (not_suspended == 1 && is_async))
@@ -381,9 +464,9 @@  static void genpd_power_off_work_fn(struct work_struct *work)
 
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, true);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 }
 
 /**
@@ -437,15 +520,18 @@  static int pm_genpd_runtime_suspend(struct device *dev)
 	}
 
 	/*
-	 * If power.irq_safe is set, this routine will be run with interrupts
-	 * off, so it can't use mutexes.
+	 * If power.irq_safe is set, this routine may be run with
+	 * IRQs disabled, so suspend only if the power domain is
+	 * irq_safe.
 	 */
-	if (dev->power.irq_safe)
+	WARN_ONCE(irq_safe_dev_in_no_sleep_domain(dev, genpd),
+			"genpd %s will not be powered off\n", genpd->name);
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
 		return 0;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, false);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
@@ -473,15 +559,18 @@  static int pm_genpd_runtime_resume(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/* If power.irq_safe, the PM domain is never powered off. */
-	if (dev->power.irq_safe) {
+	/*
+	 * As we don't power off a non-IRQ safe domain that holds
+	 * an IRQ safe device, we don't need to restore power to it.
+	 */
+	if (dev->power.irq_safe && !genpd->irq_safe) {
 		timed = false;
 		goto out;
 	}
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	ret = __genpd_poweron(genpd);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		return ret;
@@ -695,14 +784,14 @@  static int pm_genpd_prepare(struct device *dev)
 	if (resume_needed(dev, genpd))
 		pm_runtime_resume(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count++ == 0) {
 		genpd->suspended_count = 0;
 		genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF;
 	}
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (genpd->suspend_power_off) {
 		pm_runtime_put_noidle(dev);
@@ -720,12 +809,12 @@  static int pm_genpd_prepare(struct device *dev)
 
 	ret = pm_generic_prepare(dev);
 	if (ret) {
-		mutex_lock(&genpd->lock);
+		genpd_lock(genpd);
 
 		if (--genpd->prepared_count == 0)
 			genpd->suspend_power_off = false;
 
-		mutex_unlock(&genpd->lock);
+		genpd_unlock(genpd);
 		pm_runtime_enable(dev);
 	}
 
@@ -1083,13 +1172,13 @@  static void pm_genpd_complete(struct device *dev)
 	if (IS_ERR(genpd))
 		return;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	run_complete = !genpd->suspend_power_off;
 	if (--genpd->prepared_count == 0)
 		genpd->suspend_power_off = false;
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (run_complete) {
 		pm_generic_complete(dev);
@@ -1275,11 +1364,18 @@  int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
 		return -EINVAL;
 
+	if (genpd->irq_safe && !dev->power.irq_safe) {
+		dev_err(dev,
+			"PM Domain %s is IRQ safe; device has to IRQ safe.\n",
+			genpd->name);
+		return -EINVAL;
+	}
+
 	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1296,7 +1392,7 @@  int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		genpd_free_dev_data(dev, gpd_data);
@@ -1328,7 +1424,7 @@  int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 	gpd_data = to_gpd_data(pdd);
 	dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1343,14 +1439,14 @@  int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 
 	list_del_init(&pdd->list_node);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	genpd_free_dev_data(dev, gpd_data);
 
 	return 0;
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 	dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
 	return ret;
@@ -1371,12 +1467,23 @@  int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	/*
+	 * If the domain can be powered on/off in an IRQ safe
+	 * context, ensure that the subdomain can also be
+	 * powered on/off in that context.
+	 */
+	if (genpd->irq_safe && !subdomain->irq_safe) {
+		WARN("Sub-domain (%s) in an IRQ-safe domain (%s) has to be IRQ safe\n",
+				subdomain->name, genpd->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
 
-	mutex_lock(&genpd->lock);
-	mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(genpd);
+	genpd_lock_nested(subdomain);
 
 	if (genpd->status == GPD_STATE_POWER_OFF
 	    &&  subdomain->status != GPD_STATE_POWER_OFF) {
@@ -1399,8 +1506,8 @@  int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		genpd_sd_counter_inc(genpd);
 
  out:
-	mutex_unlock(&subdomain->lock);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(subdomain);
+	genpd_unlock(genpd);
 	if (ret)
 		kfree(link);
 	return ret;
@@ -1421,7 +1528,7 @@  int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
 		return -EINVAL;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (!list_empty(&subdomain->slave_links) || subdomain->device_count) {
 		pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
@@ -1434,7 +1541,7 @@  int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 		if (link->slave != subdomain)
 			continue;
 
-		mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING);
+		genpd_lock_nested(subdomain);
 
 		list_del(&link->master_node);
 		list_del(&link->slave_node);
@@ -1442,14 +1549,14 @@  int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 		if (subdomain->status != GPD_STATE_POWER_OFF)
 			genpd_sd_counter_dec(genpd);
 
-		mutex_unlock(&subdomain->lock);
+		genpd_unlock(subdomain);
 
 		ret = 0;
 		break;
 	}
 
 out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return ret;
 }
@@ -1612,6 +1719,17 @@  static int of_genpd_device_parse_states(struct device_node *np,
 	return err;
 }
 
+static void genpd_lock_init(struct generic_pm_domain *genpd)
+{
+	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+		spin_lock_init(&genpd->slock);
+		genpd->irq_safe = true;
+	} else {
+		mutex_init(&genpd->mlock);
+		genpd->irq_safe = false;
+	}
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1636,7 +1754,7 @@  void pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->lock);
+	genpd_lock_init(genpd);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -2100,7 +2218,7 @@  static int pm_genpd_summary_one(struct seq_file *s,
 	struct gpd_link *link;
 	int ret;
 
-	ret = mutex_lock_interruptible(&genpd->lock);
+	ret = genpd_lock_interruptible(genpd);
 	if (ret)
 		return -ERESTARTSYS;
 
@@ -2124,7 +2242,8 @@  static int pm_genpd_summary_one(struct seq_file *s,
 	}
 
 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
-		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd->irq_safe ? GFP_ATOMIC : GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
 
@@ -2135,7 +2254,7 @@  static int pm_genpd_summary_one(struct seq_file *s,
 
 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index fed024a..f1329ea 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -15,9 +15,11 @@ 
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
+#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
 
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -51,7 +53,6 @@  struct generic_pm_domain {
 	struct list_head master_links;	/* Links with PM domain as a master */
 	struct list_head slave_links;	/* Links with PM domain as a slave */
 	struct list_head dev_list;	/* List of devices */
-	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
 	const char *name;
@@ -75,7 +76,14 @@  struct generic_pm_domain {
 	struct genpd_power_state *states;
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
-
+	bool irq_safe;
+	union {
+		struct mutex mlock;
+		struct {
+			spinlock_t slock;
+			unsigned long lock_flags;
+		};
+	};
 };
 
 static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)