PM: runtime: Allow rpm_resume() to succeed when runtime PM is disabled

Message ID 20211026222626.39222-1-ulf.hansson@linaro.org

Commit Message

Ulf Hansson Oct. 26, 2021, 10:26 p.m. UTC
During system suspend, the PM core sets dev->power.is_suspended for the
device that is being suspended. This flag is also used in rpm_resume(),
allowing it to succeed by returning 1 when runtime PM has been disabled
and the runtime PM status is RPM_ACTIVE for the device.

To make this behaviour a bit more useful, let's drop the check for the
dev->power.is_suspended flag in rpm_resume(), as the behaviour doesn't
really need to be limited to system suspend anyway.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

---
 drivers/base/power/runtime.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.25.1
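
The body of the posted hunk is not quoted on this page. Based on the
commit message and the pre-patch rpm_resume() code visible in the hunks
quoted later in the thread, the change is presumably close to the
following reconstruction (the actual posted hunk may differ):

 repeat:
 	if (dev->power.runtime_error)
 		retval = -EINVAL;
-	else if (dev->power.disable_depth == 1 && dev->power.is_suspended
-	    && dev->power.runtime_status == RPM_ACTIVE)
+	else if (dev->power.disable_depth == 1
+	    && dev->power.runtime_status == RPM_ACTIVE)
 		retval = 1;
 	else if (dev->power.disable_depth > 0)
 		retval = -EACCES;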

Comments

Ulf Hansson Nov. 30, 2021, 4:41 p.m. UTC | #1
On Tue, 30 Nov 2021 at 14:02, Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Tue, Nov 30, 2021 at 12:58 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> >
> > [...]
> >
> > > > > > > >
> > > > > > > > Am I thinking correctly that this is mostly about working around the
> > > > > > > > limitations of pm_runtime_force_suspend()?
> > > > > > >
> > > > > > > No, this isn't related at all.
> > > > > > >
> > > > > > > The cpuidle-psci driver doesn't have PM callbacks, thus using
> > > > > > > pm_runtime_force_suspend() would not work here.
> > > > > >
> > > > > > Just wanted to send a ping on this to see if we can come to a
> > > > > > conclusion. Or maybe we did? :-)
> > > > > >
> > > > > > I think in the end, what slightly bothers me is that the behavior is
> > > > > > a bit inconsistent. Although, maybe it's the best we can do.
> > > > >
> > > > > I've been thinking about this and it looks like we can do better, but
> > > > > instead of talking about this I'd rather send a patch.
> > > >
> > > > Alright.
> > > >
> > > > I was thinking along the lines of making similar changes for
> > > > rpm_idle|suspend(). That would make the behaviour even more
> > > > consistent, I think.
> > > >
> > > > Perhaps that's what you have in mind? :-)
> > >
> > > Well, not exactly.
> > >
> > > The idea is to add another counter (called restrain_depth in the patch)
> > > to prevent rpm_resume() from running the callback when that is potentially
> > > problematic.  With that, it is possible to actually distinguish devices
> > > with PM-runtime enabled and it allows the PM-runtime status to be checked
> > > when it is still known to be meaningful.
> >
> > Hmm, I don't quite understand the benefit of introducing a new flag
> > for this. rpm_resume() already checks the disable_depth to understand
> > when it's safe to invoke the callback. Maybe there is a reason why
> > that isn't sufficient?
>
> The problem is that disable_depth > 0 may very well mean that runtime
> PM has not been enabled at all for the given device which IMO is a
> problem.
>
> As it stands, it is necessary to make assumptions, like disable_depth
> == 1 meaning that runtime PM is really enabled, but the PM core has
> disabled it temporarily, which is somewhat questionable.
>
> Another problem with disabling is that it causes rpm_resume() to fail
> even if the status is RPM_ACTIVE and it has to do that exactly because
> it cannot know why runtime PM has been disabled.  If it has never been
> enabled, rpm_resume() must fail, but if it has been disabled
> temporarily, rpm_resume() may return 1 when the status is RPM_ACTIVE.
>
> The new count allows the "enabled in general, but temporarily disabled
> at the moment" case to be handled cleanly.

My overall comment is that I fail to understand why we need to
distinguish between these two cases. To me, it shouldn't really
matter *why* runtime PM is (or has been) disabled for the device.

The important point is that the default state for a device is
RPM_SUSPENDED and someone has moved into RPM_ACTIVE, for whatever
reason. That should be sufficient to allow rpm_resume() to return '1'
when disable_depth > 0, shouldn't it?

>
> > >
> > > It requires quite a few changes, but is rather straightforward, unless I'm
> > > missing something.
> > >
> > > Please see the patch below.  I've only checked that it builds on x86-64.
> > >
> > > ---
> > >  drivers/base/power/main.c    |   18 +++----
> > >  drivers/base/power/runtime.c |  105 ++++++++++++++++++++++++++++++++++++-------
> > >  include/linux/pm.h           |    2
> > >  include/linux/pm_runtime.h   |    2
> > >  4 files changed, 101 insertions(+), 26 deletions(-)
> > >
> > > Index: linux-pm/include/linux/pm.h
> > > ===================================================================
> > > --- linux-pm.orig/include/linux/pm.h
> > > +++ linux-pm/include/linux/pm.h
> > > @@ -598,6 +598,7 @@ struct dev_pm_info {
> > >         atomic_t                usage_count;
> > >         atomic_t                child_count;
> > >         unsigned int            disable_depth:3;
> > > +       unsigned int            restrain_depth:3;       /* PM core private */
> > >         unsigned int            idle_notification:1;
> > >         unsigned int            request_pending:1;
> > >         unsigned int            deferred_resume:1;
> > > @@ -609,6 +610,7 @@ struct dev_pm_info {
> > >         unsigned int            use_autosuspend:1;
> > >         unsigned int            timer_autosuspends:1;
> > >         unsigned int            memalloc_noio:1;
> > > +       unsigned int            already_suspended:1;    /* PM core private */
> > >         unsigned int            links_count;
> > >         enum rpm_request        request;
> > >         enum rpm_status         runtime_status;
> > > Index: linux-pm/include/linux/pm_runtime.h
> > > ===================================================================
> > > --- linux-pm.orig/include/linux/pm_runtime.h
> > > +++ linux-pm/include/linux/pm_runtime.h
> > > @@ -46,6 +46,8 @@ extern void pm_runtime_enable(struct dev
> > >  extern void __pm_runtime_disable(struct device *dev, bool check_resume);
> > >  extern void pm_runtime_allow(struct device *dev);
> > >  extern void pm_runtime_forbid(struct device *dev);
> > > +extern void pm_runtime_restrain(struct device *dev);
> > > +extern void pm_runtime_relinquish(struct device *dev);
> > >  extern void pm_runtime_no_callbacks(struct device *dev);
> > >  extern void pm_runtime_irq_safe(struct device *dev);
> > >  extern void __pm_runtime_use_autosuspend(struct device *dev, bool use);
> > > Index: linux-pm/drivers/base/power/runtime.c
> > > ===================================================================
> > > --- linux-pm.orig/drivers/base/power/runtime.c
> > > +++ linux-pm/drivers/base/power/runtime.c
> > > @@ -744,11 +744,11 @@ static int rpm_resume(struct device *dev
> > >   repeat:
> > >         if (dev->power.runtime_error)
> > >                 retval = -EINVAL;
> > > -       else if (dev->power.disable_depth == 1 && dev->power.is_suspended
> > > -           && dev->power.runtime_status == RPM_ACTIVE)
> > > -               retval = 1;
> > >         else if (dev->power.disable_depth > 0)
> > >                 retval = -EACCES;
> > > +       else if (dev->power.restrain_depth > 0)
> > > +               retval = dev->power.runtime_status == RPM_ACTIVE ? 1 : -EAGAIN;
> > > +
> > >         if (retval)
> > >                 goto out;
> > >
> > > @@ -1164,9 +1164,9 @@ EXPORT_SYMBOL_GPL(pm_runtime_get_if_acti
> > >   * @dev: Device to handle.
> > >   * @status: New runtime PM status of the device.
> > >   *
> > > - * If runtime PM of the device is disabled or its power.runtime_error field is
> > > - * different from zero, the status may be changed either to RPM_ACTIVE, or to
> > > - * RPM_SUSPENDED, as long as that reflects the actual state of the device.
> > > + * If runtime PM of the device is disabled or restrained, or its
> > > + * power.runtime_error field is nonzero, the status may be changed either to
> > > + * RPM_ACTIVE, or to RPM_SUSPENDED, as long as that reflects its actual state.
> > >   * However, if the device has a parent and the parent is not active, and the
> > >   * parent's power.ignore_children flag is unset, the device's status cannot be
> > >   * set to RPM_ACTIVE, so -EBUSY is returned in that case.
> > > @@ -1195,13 +1195,16 @@ int __pm_runtime_set_status(struct devic
> > >         spin_lock_irq(&dev->power.lock);
> > >
> > >         /*
> > > -        * Prevent PM-runtime from being enabled for the device or return an
> > > -        * error if it is enabled already and working.
> > > +        * Prevent PM-runtime from being used for the device or return an
> > > +        * error if it is in use already.
> > >          */
> > > -       if (dev->power.runtime_error || dev->power.disable_depth)
> > > -               dev->power.disable_depth++;
> > > -       else
> > > +       if (dev->power.runtime_error || dev->power.disable_depth ||
> > > +           dev->power.restrain_depth) {
> > > +               pm_runtime_get_noresume(dev);
> >
> > Why do we need to bump the usage count here? Except for balancing with
> > pm_runtime_relinquish() a few lines below, of course?
>
> First off, I need the usage count to be greater than 0 to prevent the
> runtime suspend callback from running while "restrained" (and the
> suspend could check the restrain count, but that's one more check in
> the suspend path which isn't necessary if the usage counter is always
> bumped up upfront).

If disable_depth > 0 (or restrain_depth > 0), the runtime PM core
should prevent the runtime suspend callback from being invoked, no
matter whether the usage count has been bumped or not. Or did I get
that wrong?
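
For reference, the checks referred to here live in
rpm_check_suspend_allowed() in drivers/base/power/runtime.c, which reads
roughly as follows (paraphrased and abbreviated from the kernel source
of that time; the full function has a few more checks):

	static int rpm_check_suspend_allowed(struct device *dev)
	{
		int retval = 0;

		if (dev->power.runtime_error)
			retval = -EINVAL;
		else if (dev->power.disable_depth > 0)
			retval = -EACCES;	/* runtime PM disabled */
		else if (atomic_read(&dev->power.usage_count) > 0)
			retval = -EAGAIN;	/* usage count held */
		else if (!dev->power.ignore_children &&
			 atomic_read(&dev->power.child_count))
			retval = -EBUSY;	/* active children */

		return retval;
	}

So either condition blocks the suspend callback on its own: a nonzero
disable_depth (as noted here) and a nonzero usage count (as relied on
above).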

>
> Second, the PM core bumps up the usage counter during system-wide
> suspend, so bumping it up again isn't strictly needed if this
> "temporary disabling" is limited to the system-wide suspend-resume
> paths, but I'm not sure if it should be limited this way.
>
> I would prefer the "temporarily unavailable" case to be clearly
> different from the "disabled" one in any case, not just during
> system-wide suspend-resume.
>
> > > +               dev->power.restrain_depth++;
> > > +       } else {
> > >                 error = -EAGAIN;
> > > +       }
> > >
> > >         spin_unlock_irq(&dev->power.lock);
> > >
> > > @@ -1278,7 +1281,7 @@ int __pm_runtime_set_status(struct devic
> > >                 device_links_read_unlock(idx);
> > >         }
> > >
> > > -       pm_runtime_enable(dev);
> > > +       pm_runtime_relinquish(dev);
> > >
> > >         return error;
> > >  }
> > > @@ -1513,6 +1516,72 @@ void pm_runtime_allow(struct device *dev
> > >  EXPORT_SYMBOL_GPL(pm_runtime_allow);
> > >
> > >  /**
> > > + * pm_runtime_restrain - Temporarily block runtime PM of a device.
> > > + * @dev: Device to handle.
> > > + *
> > > + * Increase the device's usage count and its restrain_depth count.  If the
> > > + * latter was 0 initially, cancel the runtime PM work for @dev if pending and
> > > + * wait for all of the runtime PM operations on it in progress to complete.
> > > + *
> > > + * After this function has been called, attempts to runtime-suspend @dev will
> > > + * fail with -EAGAIN and attempts to runtime-resume it will succeed if its
> > > + * runtime PM status is RPM_ACTIVE and will fail with -EAGAIN otherwise.
> > > + *
> > > + * This function can only be called by the PM core.
> > > + */
> > > +void pm_runtime_restrain(struct device *dev)
> > > +{
> > > +       pm_runtime_get_noresume(dev);
> > > +
> > > +       spin_lock_irq(&dev->power.lock);
> > > +
> > > +       if (dev->power.restrain_depth++ > 0)
> > > +               goto out;
> > > +
> > > +       if (dev->power.disable_depth > 0) {
> > > +               dev->power.already_suspended = false;
> > > +               goto out;
> > > +       }
> > > +
> > > +       /* Update time accounting before blocking PM-runtime. */
> > > +       update_pm_runtime_accounting(dev);
> > > +
> > > +       __pm_runtime_barrier(dev);
> > > +
> > > +       dev->power.already_suspended = pm_runtime_status_suspended(dev);
> > > +
> > > +out:
> > > +       spin_unlock_irq(&dev->power.lock);
> > > +}
> >
> > What if someone calls pm_runtime_disable() after the PM core has
> > called pm_runtime_restrain() for a device? It looks like we may run
> > another round of __pm_runtime_barrier() and
> > update_pm_runtime_accounting(), does that really make sense?
>
> No, it doesn't, but it's a bug in the patch.  And there are other bugs in it ...
>
> In this particular case, __pm_runtime_disable() should check the
> "restrain" count and do nothing when it is nonzero.
>
> > > +
> > > +/**
> > > + * pm_runtime_relinquish - Unblock runtime PM of a device.
> > > + * @dev: Device to handle.
> > > + *
> > > + * Decrease the device's usage count and its restrain_depth count.
> > > + *
> > > + * This function can only be called by the PM core.
> > > + */
> > > +void pm_runtime_relinquish(struct device *dev)
> > > +{
> > > +       spin_lock_irq(&dev->power.lock);
> > > +
> > > +       if (dev->power.restrain_depth > 0) {
> > > +               dev->power.restrain_depth--;
> > > +
> > > +               /* About to unblock runtime PM, set accounting_timestamp to now */
> > > +               if (!dev->power.restrain_depth && !dev->power.disable_depth)
> > > +                       dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
> > > +       } else {
> > > +               dev_warn(dev, "Unbalanced %s!\n", __func__);
> > > +       }
> > > +
> > > +       spin_unlock_irq(&dev->power.lock);
> > > +
> > > +       pm_runtime_put_noidle(dev);
> > > +}
> > > +
> > > +/**
> > >   * pm_runtime_no_callbacks - Ignore runtime PM callbacks for a device.
> > >   * @dev: Device to handle.
> > >   *
> > > @@ -1806,8 +1875,10 @@ int pm_runtime_force_suspend(struct devi
> > >         int (*callback)(struct device *);
> > >         int ret;
> > >
> > > -       pm_runtime_disable(dev);
> > > -       if (pm_runtime_status_suspended(dev))
> > > +       pm_runtime_restrain(dev);
> > > +
> > > +       /* No suspend if the device has already been suspended by PM-runtime. */
> > > +       if (!dev->power.already_suspended)
> >
> > I assume you are looking at using pm_runtime_force_suspend|resume() to
> > support my use case for the cpuidle-psci driver? In other words,
> > replace pm_runtime_get_sync() and pm_runtime_put_sync_suspend() in
> > __psci_enter_domain_idle_state(), right?
>
> Not really.  I've been looking at a general "temporarily unavailable"
> vs "disabled" problem.

Okay, so I understand that you want to distinguish between these two
cases, but honestly I fail to understand *why* that is needed, sorry.

>
> > If so, that doesn't really fit well, I think. Not only because we
> > don't have system suspend/resume callbacks available, which is really
> > the proper place to call the pm_runtime_force_*() functions from, but
> > also because we don't want to call __pm_runtime_barrier(), etc, every
> > time in the idle path of a CPU. If anything, we should instead strive
> > towards a more lightweight path than what we currently have.
>
> So IMO this can be done with the new counter in place, because for
> anything called between device_suspend_late() and
> device_resume_early(), PM-runtime would be restrained by the PM core
> (it is disabled now), so rpm_resume() would return 1 for devices with
> PM-runtime status equal to RPM_ACTIVE (it fails now, unless the usage
> counter is exactly 1) and you resume the devices in question upfront,
> so it would be always safe to call rpm_resume() and rpm_suspend() for
> them during the noirq suspend and resume phases (it is now tricky,
> because it depends on the exact usage counter value).
>
> Between dpm_suspend_noirq() and dpm_resume_noirq(), you need to switch
> over to a different type of handling anyway, because all of the
> devices are expected to be suspended then.

Not sure I understand correctly, but I don't think I need to switch to
another handling. The devices in __psci_enter_domain_idle_state() are
managed as syscore devices with genpd; for the later system suspend
phases, this works well.
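
For context, the pattern in question in __psci_enter_domain_idle_state()
looks roughly like this (paraphrased from drivers/cpuidle/cpuidle-psci.c;
the s2idle variant and error handling are omitted):

	/* Runtime-suspend the CPU's PM domain device on idle entry ... */
	pm_runtime_put_sync_suspend(pd_dev);

	state = psci_get_domain_state();
	ret = psci_cpu_suspend_enter(state) ? -1 : idx;

	/* ... and runtime-resume the domain device on the way out of idle. */
	pm_runtime_get_sync(pd_dev);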

Perhaps you are also saying that the goal with your change is to allow
rpm_resume() to return 1 when the state is RPM_ACTIVE for the device
and when the PM core has called pm_runtime_restrain() instead of
__pm_runtime_disable()? Right?

Kind regards
Uffe
Rafael J. Wysocki Dec. 1, 2021, 1:49 p.m. UTC | #2
On Wed, Dec 1, 2021 at 10:02 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> On Tue, 30 Nov 2021 at 18:26, Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Tue, Nov 30, 2021 at 5:41 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > >
> > > On Tue, 30 Nov 2021 at 14:02, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Tue, Nov 30, 2021 at 12:58 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > >
> > > > > [...]
> > > > >
> > > > > > > > > > >
> > > > > > > > > > > Am I thinking correctly that this is mostly about working around the
> > > > > > > > > > > limitations of pm_runtime_force_suspend()?
> > > > > > > > > >
> > > > > > > > > > No, this isn't related at all.
> > > > > > > > > >
> > > > > > > > > > The cpuidle-psci driver doesn't have PM callbacks, thus using
> > > > > > > > > > pm_runtime_force_suspend() would not work here.
> > > > > > > > >
> > > > > > > > > Just wanted to send a ping on this to see if we can come to a
> > > > > > > > > conclusion. Or maybe we did? :-)
> > > > > > > > >
> > > > > > > > > I think in the end, what slightly bothers me is that the behavior is
> > > > > > > > > a bit inconsistent. Although, maybe it's the best we can do.
> > > > > > > >
> > > > > > > > I've been thinking about this and it looks like we can do better, but
> > > > > > > > instead of talking about this I'd rather send a patch.
> > > > > > >
> > > > > > > Alright.
> > > > > > >
> > > > > > > I was thinking along the lines of making similar changes for
> > > > > > > rpm_idle|suspend(). That would make the behaviour even more
> > > > > > > consistent, I think.
> > > > > > >
> > > > > > > Perhaps that's what you have in mind? :-)
> > > > > >
> > > > > > Well, not exactly.
> > > > > >
> > > > > > The idea is to add another counter (called restrain_depth in the patch)
> > > > > > to prevent rpm_resume() from running the callback when that is potentially
> > > > > > problematic.  With that, it is possible to actually distinguish devices
> > > > > > with PM-runtime enabled and it allows the PM-runtime status to be checked
> > > > > > when it is still known to be meaningful.
> > > > >
> > > > > Hmm, I don't quite understand the benefit of introducing a new flag
> > > > > for this. rpm_resume() already checks the disable_depth to understand
> > > > > when it's safe to invoke the callback. Maybe there is a reason why
> > > > > that isn't sufficient?
> > > >
> > > > The problem is that disable_depth > 0 may very well mean that runtime
> > > > PM has not been enabled at all for the given device which IMO is a
> > > > problem.
> > > >
> > > > As it stands, it is necessary to make assumptions, like disable_depth
> > > > == 1 meaning that runtime PM is really enabled, but the PM core has
> > > > disabled it temporarily, which is somewhat questionable.
> > > >
> > > > Another problem with disabling is that it causes rpm_resume() to fail
> > > > even if the status is RPM_ACTIVE and it has to do that exactly because
> > > > it cannot know why runtime PM has been disabled.  If it has never been
> > > > enabled, rpm_resume() must fail, but if it has been disabled
> > > > temporarily, rpm_resume() may return 1 when the status is RPM_ACTIVE.
> > > >
> > > > The new count allows the "enabled in general, but temporarily disabled
> > > > at the moment" case to be handled cleanly.
> > >
> > > My overall comment is that I fail to understand why we need to
> > > distinguish between these two cases. To me, it shouldn't really
> > > matter *why* runtime PM is (or has been) disabled for the device.
> >
> > It matters if you want to trust the status, because "disabled" means
> > "the status doesn't matter".
>
> Well, that doesn't really match how the runtime PM interface is being
> used today.

Well, I clearly disagree.

> For example, we have a whole bunch of helper functions, allowing us to
> update and check the runtime PM state of the device, even when the
> disable_depth > 0. Some functions, like pm_runtime_set_active() for
> example, even take parents and device-links into account.

That's true, but that's for a purpose.

If runtime PM becomes enabled after using pm_runtime_set_active(), the
status should better be consistent with the settings of the parent
etc.
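
As an illustration of that pattern, a driver whose hardware is already
powered up at probe time typically does something like the following
sketch (foo_probe() is a made-up example, not code from this thread):

	#include <linux/pm_runtime.h>

	static int foo_probe(struct device *dev)
	{
		int ret;

		/*
		 * Mark the device active while runtime PM is still
		 * disabled; this fails with -EBUSY if the parent is
		 * inactive and does not ignore its children.
		 */
		ret = pm_runtime_set_active(dev);
		if (ret)
			return ret;

		pm_runtime_enable(dev);	/* disable_depth: 1 -> 0 */
		return 0;
	}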

> >
> > If you want the status to stay meaningful, but prevent callbacks from
> > running, you need something else.
> >
> > > The important point is that the default state for a device is
> > > RPM_SUSPENDED and someone has moved into RPM_ACTIVE, for whatever
> > > reason. That should be sufficient to allow rpm_resume() to return '1'
> > > when disable_depth > 0, shouldn't it?
> >
> > No, because there is no rule by which the status of devices with
> > PM-runtime disabled must be RPM_SUSPENDED.
>
> That's not what I was trying to say.
>
> The initial/default runtime PM state for a device is RPM_SUSPENDED,
> which is being set in pm_runtime_init(). Although I agree that it
> can't be trusted that this state actually reflects the state of the
> HW, it's still a valid state for the device from a runtime PM point of
> view.

No, it is not.  It's just the default.

> However, and more importantly, if the state has moved to RPM_ACTIVE,
> someone must have deliberately moved the device into that state.

Sure, but it cannot be regarded as an indication of whether or not
runtime PM is supported and has ever been enabled for the given
device.

Again, there is no rule regarding the status value for devices with
runtime PM disabled, either way.

> For this reason, I believe it seems reasonable to trust it, both from HW
> point of view, but definitely also from a runtime PM point of view. If
> not, then what should we do?

Trust it only when runtime PM is enabled, ie. dev->power.disable_depth == 0.

That's exactly the reason why pm_runtime_suspended() returns false if
runtime PM is disabled for the target device, and if
pm_runtime_suspended() is called during system-wide suspend and
resume, it is not clear how to interpret its return value.

If it returns true outside the system suspend-resume code path, that
means "runtime PM has been enabled and the device has been
runtime-suspended" and I want it to mean exactly the same thing during
system-wide suspend and resume, so people don't need to wonder about
the context in which the code is running.
Ulf Hansson Dec. 1, 2021, 3:22 p.m. UTC | #3
On Wed, 1 Dec 2021 at 14:49, Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Wed, Dec 1, 2021 at 10:02 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> >
> > On Tue, 30 Nov 2021 at 18:26, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Tue, Nov 30, 2021 at 5:41 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > >
> > > > On Tue, 30 Nov 2021 at 14:02, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > >
> > > > > On Tue, Nov 30, 2021 at 12:58 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > > >
> > > > > > [...]
> > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Am I thinking correctly that this is mostly about working around the
> > > > > > > > > > > > limitations of pm_runtime_force_suspend()?
> > > > > > > > > > >
> > > > > > > > > > > No, this isn't related at all.
> > > > > > > > > > >
> > > > > > > > > > > The cpuidle-psci driver doesn't have PM callbacks, thus using
> > > > > > > > > > > pm_runtime_force_suspend() would not work here.
> > > > > > > > > >
> > > > > > > > > > Just wanted to send a ping on this to see if we can come to a
> > > > > > > > > > conclusion. Or maybe we did? :-)
> > > > > > > > > >
> > > > > > > > > > I think in the end, what slightly bothers me is that the behavior is
> > > > > > > > > > a bit inconsistent. Although, maybe it's the best we can do.
> > > > > > > > >
> > > > > > > > > I've been thinking about this and it looks like we can do better, but
> > > > > > > > > instead of talking about this I'd rather send a patch.
> > > > > > > >
> > > > > > > > Alright.
> > > > > > > >
> > > > > > > > I was thinking along the lines of making similar changes for
> > > > > > > > rpm_idle|suspend(). That would make the behaviour even more
> > > > > > > > consistent, I think.
> > > > > > > >
> > > > > > > > Perhaps that's what you have in mind? :-)
> > > > > > >
> > > > > > > Well, not exactly.
> > > > > > >
> > > > > > > The idea is to add another counter (called restrain_depth in the patch)
> > > > > > > to prevent rpm_resume() from running the callback when that is potentially
> > > > > > > problematic.  With that, it is possible to actually distinguish devices
> > > > > > > with PM-runtime enabled and it allows the PM-runtime status to be checked
> > > > > > > when it is still known to be meaningful.
> > > > > >
> > > > > > Hmm, I don't quite understand the benefit of introducing a new flag
> > > > > > for this. rpm_resume() already checks the disable_depth to understand
> > > > > > when it's safe to invoke the callback. Maybe there is a reason why
> > > > > > that isn't sufficient?
> > > > >
> > > > > The problem is that disable_depth > 0 may very well mean that runtime
> > > > > PM has not been enabled at all for the given device which IMO is a
> > > > > problem.
> > > > >
> > > > > As it stands, it is necessary to make assumptions, like disable_depth
> > > > > == 1 meaning that runtime PM is really enabled, but the PM core has
> > > > > disabled it temporarily, which is somewhat questionable.
> > > > >
> > > > > Another problem with disabling is that it causes rpm_resume() to fail
> > > > > even if the status is RPM_ACTIVE and it has to do that exactly because
> > > > > it cannot know why runtime PM has been disabled.  If it has never been
> > > > > enabled, rpm_resume() must fail, but if it has been disabled
> > > > > temporarily, rpm_resume() may return 1 when the status is RPM_ACTIVE.
> > > > >
> > > > > The new count allows the "enabled in general, but temporarily disabled
> > > > > at the moment" case to be handled cleanly.
> > > >
> > > > My overall comment is that I fail to understand why we need to
> > > > distinguish between these two cases. To me, it shouldn't really
> > > > matter *why* runtime PM is (or has been) disabled for the device.
> > >
> > > It matters if you want to trust the status, because "disabled" means
> > > "the status doesn't matter".
> >
> > Well, that doesn't really match how the runtime PM interface is being
> > used today.
>
> Well, I clearly disagree.

Alright, then we can agree to disagree. :-)

>
> > For example, we have a whole bunch of helper functions, allowing us to
> > update and check the runtime PM state of the device, even when the
> > disable_depth > 0. Some functions, like pm_runtime_set_active() for
> > example, even take parents and device-links into account.
>
> That's true, but that's for a purpose.
>
> If runtime PM becomes enabled after using pm_runtime_set_active(), the
> status should better be consistent with the settings of the parent
> etc.
>
> > >
> > > If you want the status to stay meaningful, but prevent callbacks from
> > > running, you need something else.
> > >
> > > > The important point is that the default state for a device is
> > > > RPM_SUSPENDED and someone has moved into RPM_ACTIVE, for whatever
> > > > reason. That should be sufficient to allow rpm_resume() to return '1'
> > > > when disable_depth > 0, shouldn't it?
> > >
> > > No, because there is no rule by which the status of devices with
> > > PM-runtime disabled must be RPM_SUSPENDED.
> >
> > That's not what I was trying to say.
> >
> > The initial/default runtime PM state for a device is RPM_SUSPENDED,
> > which is being set in pm_runtime_init(). Although I agree that it
> > can't be trusted that this state actually reflects the state of the
> > HW, it's still a valid state for the device from a runtime PM point of
> > view.
>
> No, it is not.  It's just the default.
>
> > However, and more importantly, if the state has moved to RPM_ACTIVE,
> > someone must have deliberately moved the device into that state.
>
> Sure, but it cannot be regarded as an indication of whether or not
> runtime PM is supported and has ever been enabled for the given
> device.
>
> Again, there is no rule regarding the status value for devices with
> runtime PM disabled, either way.

If I understand correctly, that means you think
pm_runtime_status_suspended() should really be converted to an
internal runtime PM interface, not being exported to users outside.
Right?

When it comes to the pm_runtime_active() interface, I noticed that
it's actually completely broken, as it returns "true" when runtime PM
has been disabled for the device - no matter whether the state is
RPM_ACTIVE or not. It's quite heavily used, so I guess the behaviour
is what the callers expect?

>
> > For this reason, I believe it seems reasonable to trust it, both from HW
> > point of view, but definitely also from a runtime PM point of view. If
> > not, then what should we do?
>
> Trust it only when runtime PM is enabled, ie. dev->power.disable_depth == 0.
>
> That's exactly the reason why pm_runtime_suspended() returns false if
> runtime PM is disabled for the target device, and if
> pm_runtime_suspended() is called during system-wide suspend and
> resume, it is not clear how to interpret its return value.

To me, I don't think there is a problem interpreting
pm_runtime_suspended()'s return value. As you said, it tells whether
runtime PM has been enabled and whether the state is RPM_SUSPENDED.

For those that want to know the state when runtime PM has been
disabled, they can use pm_runtime_status_suspended().

What's the issue?
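
For context, the three helpers being contrasted here are one-liners in
include/linux/pm_runtime.h, roughly as follows in the kernels of that
time (paraphrased):

	/* True only if runtime PM is enabled and the status is RPM_SUSPENDED. */
	static inline bool pm_runtime_suspended(struct device *dev)
	{
		return dev->power.runtime_status == RPM_SUSPENDED
			&& !dev->power.disable_depth;
	}

	/* Reports the raw status, whether or not runtime PM is enabled. */
	static inline bool pm_runtime_status_suspended(struct device *dev)
	{
		return dev->power.runtime_status == RPM_SUSPENDED;
	}

	/* Also true whenever runtime PM is disabled, as noted above. */
	static inline bool pm_runtime_active(struct device *dev)
	{
		return dev->power.runtime_status == RPM_ACTIVE
			|| dev->power.disable_depth;
	}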

>
> If it returns true outside the system suspend-resume code path, that
> means "runtime PM has been enabled and the device has been
> runtime-suspended" and I want it to mean exactly the same thing during
> system-wide suspend and resume, so people don't need to wonder about
> the context in which the code is running.

Well, even if this seems nice, I wonder how useful this would be in the end.

Perhaps you have some concrete examples where this could improve things?

Kind regards
Uffe
Rafael J. Wysocki Dec. 1, 2021, 5:44 p.m. UTC | #4
On Wed, Dec 1, 2021 at 4:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> On Wed, 1 Dec 2021 at 14:49, Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Wed, Dec 1, 2021 at 10:02 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > >
> > > On Tue, 30 Nov 2021 at 18:26, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Tue, Nov 30, 2021 at 5:41 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > >
> > > > > On Tue, 30 Nov 2021 at 14:02, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > > >
> > > > > > On Tue, Nov 30, 2021 at 12:58 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > > > >
> > > > > > > [...]
> > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > Am I thinking correctly that this is mostly about working around the
> > > > > > > > > > > > > limitations of pm_runtime_force_suspend()?
> > > > > > > > > > > >
> > > > > > > > > > > > No, this isn't related at all.
> > > > > > > > > > > >
> > > > > > > > > > > > The cpuidle-psci driver doesn't have PM callbacks, thus using
> > > > > > > > > > > > pm_runtime_force_suspend() would not work here.
> > > > > > > > > > >
> > > > > > > > > > > Just wanted to send a ping on this to see if we can come to a
> > > > > > > > > > > conclusion. Or maybe we did? :-)
> > > > > > > > > > >
> > > > > > > > > > > I think in the end, what slightly bothers me is that the behavior is
> > > > > > > > > > > a bit inconsistent. Although, maybe it's the best we can do.
> > > > > > > > > >
> > > > > > > > > > I've been thinking about this and it looks like we can do better, but
> > > > > > > > > > instead of talking about this I'd rather send a patch.
> > > > > > > > >
> > > > > > > > > Alright.
> > > > > > > > >
> > > > > > > > > I was thinking along the lines of making similar changes for
> > > > > > > > > rpm_idle|suspend(). That would make the behaviour even more
> > > > > > > > > consistent, I think.
> > > > > > > > >
> > > > > > > > > Perhaps that's what you have in mind? :-)
> > > > > > > >
> > > > > > > > Well, not exactly.
> > > > > > > >
> > > > > > > > The idea is to add another counter (called restrain_depth in the patch)
> > > > > > > > to prevent rpm_resume() from running the callback when that is potentially
> > > > > > > > problematic.  With that, it is possible to actually distinguish devices
> > > > > > > > with PM-runtime enabled and it allows the PM-runtime status to be checked
> > > > > > > > when it is still known to be meaningful.
> > > > > > >
> > > > > > > Hmm, I don't quite understand the benefit of introducing a new flag
> > > > > > > for this. rpm_resume() already checks the disable_depth to understand
> > > > > > > when it's safe to invoke the callback. Maybe there is a reason why
> > > > > > > that isn't sufficient?
> > > > > >
> > > > > > The problem is that disable_depth > 0 may very well mean that runtime
> > > > > > PM has not been enabled at all for the given device which IMO is a
> > > > > > problem.
> > > > > >
> > > > > > As it stands, it is necessary to make assumptions, like disable_depth
> > > > > > == 1 meaning that runtime PM is really enabled, but the PM core has
> > > > > > disabled it temporarily, which is somewhat questionable.
> > > > > >
> > > > > > Another problem with disabling is that it causes rpm_resume() to fail
> > > > > > even if the status is RPM_ACTIVE and it has to do that exactly because
> > > > > > it cannot know why runtime PM has been disabled.  If it has never been
> > > > > > enabled, rpm_resume() must fail, but if it has been disabled
> > > > > > temporarily, rpm_resume() may return 1 when the status is RPM_ACTIVE.
> > > > > >
> > > > > > The new count allows the "enabled in general, but temporarily disabled
> > > > > > at the moment" case to be handled cleanly.
> > > > >
> > > > > My overall comment is that I fail to understand why we need to
> > > > > distinguish between these two cases. To me, it shouldn't really
> > > > > matter *why* runtime PM is (or has been) disabled for the device.
> > > >
> > > > It matters if you want to trust the status, because "disabled" means
> > > > "the status doesn't matter".
> > >
> > > Well, that doesn't really match how the runtime PM interface is being
> > > used today.
> >
> > Well, I clearly disagree.
>
> Alright, then we can agree to disagree. :-)
>
> >
> > > For example, we have a whole bunch of helper functions, allowing us to
> > > update and check the runtime PM state of the device, even when the
> > > disable_depth > 0. Some functions, like pm_runtime_set_active() for
> > > example, even take parents and device-links into account.
> >
> > That's true, but that's for a purpose.
> >
> > If runtime PM becomes enabled after using pm_runtime_set_active(), the
> > status should better be consistent with the settings of the parent
> > etc.
> >
> > > >
> > > > If you want the status to stay meaningful, but prevent callbacks from
> > > > running, you need something else.
> > > >
> > > > > The important point is that the default state for a device is
> > > > > RPM_SUSPENDED and someone has moved into RPM_ACTIVE, for whatever
> > > > > reason. That should be sufficient to allow rpm_resume() to return '1'
> > > > > when disable_depth > 0, shouldn't it?
> > > >
> > > > No, because there is no rule by which the status of devices with
> > > > PM-runtime disabled must be RPM_SUSPENDED.
> > >
> > > That's not what I was trying to say.
> > >
> > > The initial/default runtime PM state for a device is RPM_SUSPENDED,
> > > which is being set in pm_runtime_init(). Although I agree that it
> > > can't be trusted that this state actually reflects the state of the
> > > HW, it's still a valid state for the device from a runtime PM point of
> > > view.
> >
> > No, it is not.  It's just the default.
> >
> > > However, and more importantly, if the state has moved to RPM_ACTIVE,
> > > someone must have deliberately moved the device into that state.
> >
> > Sure, but it cannot be regarded as an indication of whether or not
> > runtime PM is supported and has ever been enabled for the given
> > device.
> >
> > Again, there is no rule regarding the status value for devices with
> > runtime PM disabled, either way.
>
> If I understand correctly, that means you think
> pm_runtime_status_suspended() should really be converted to an
> internal runtime PM interface, not being exported to users outside.
> Right?

Not really.

I'm just saying that its usefulness is limited.

My basic concern is that system-wide PM transitions must always invoke
callbacks for devices with PM-runtime disabled, because they may (or
may not) be functional regardless of the PM-runtime status and if they
are functional, they must be suspended.  And note that supporting
system-wide PM is not optional and the only way to kind of disable it
is to return an error from a device suspend callback (but that's nasty
for some use cases).

So the "Has PM-runtime been enabled?" question is really fundamental
for system-wide PM and it is not sufficient to look at the PM-runtime
status to find out, but if the PM core itself disables PM-runtime
(which it has to do at one point to prevent PM-runtime from racing
with system-wide PM), it is hard to answer definitively in general.

IMO the only way to make it possible to find that out in all cases is
to make the PM core retain the power.disable_depth value and that can
be done by making it use a different mechanism to prevent PM-runtime
callbacks from being run.

Alternatively, the current PM-runtime status could be "latched" during
the PM-runtime disable operation if power.disable_depth is 0 (and that
"latched" value would be initialized to "invalid" in case PM-runtime
is never enabled).

> When it comes to the pm_runtime_active() interface, I noticed that
> it's actually completely broken, as it returns "true" when runtime PM
> has been disabled for the device - no matter whether the state is
> RPM_ACTIVE or not. It's quite heavily used, so I guess the behaviour
> is what the callers expect?

I'm not sure about that after looking at some of them briefly.

I'm guessing that the original author of it wanted to use it in the
system-wide suspend path to decide whether or not to suspend the
device, but that obviously interferes with what the PM core does.

Anyway, it looks like its name should be changed to something like
pm_runtime_active_or_disabled(), so it reflects the functionality.

> >
> > > For this reason, I believe it seems reasonable to trust it, both from HW
> > > point of view, but definitely also from a runtime PM point of view. If
> > > not, then what should we do?
> >
> > Trust it only when runtime PM is enabled, ie. dev->power.disable_depth == 0.
> >
> > That's exactly the reason why pm_runtime_suspended() returns false if
> > runtime PM is disabled for the target device, and if
> > pm_runtime_suspended() is called during system-wide suspend and
> > resume, it is not clear how to interpret its return value.
>
> To me, I don't think there is a problem interpreting
> pm_runtime_suspended()'s return value. As you said, it tells whether
> runtime PM has been enabled and whether the state is RPM_SUSPENDED.
>
> For those that want to know the state when runtime PM has been
> disabled, they can use pm_runtime_status_suspended().
>
> What's the issue?

It may not match the real state of the device, which may be relevant.

> >
> > If it returns true outside the system suspend-resume code path, that
> > means "runtime PM has been enabled and the device has been
> > runtime-suspended" and I want it to mean exactly the same thing during
> > system-wide suspend and resume, so people don't need to wonder about
> > the context in which the code is running.
>
> Well, even if this seems nice, I wonder how useful this would be in the end.
>
> Perhaps you have some concrete examples where this could improve things?

For example, what if pm_runtime_force_suspend() is called for a device
with PM-runtime disabled and the PM-runtime status equal to
RPM_SUSPENDED?  As I said above, the callback must be invoked for it
in case it is functional and needs to be suspended, but that won't
happen AFAICS.

However, if pm_runtime_force_suspend() can tell whether or not
PM-runtime has been enabled for the device, it can do the right thing
(basically, avoid running the callback if it has been run once already
by PM-runtime, which is the case when PM-runtime is enabled and the
status is RPM_SUSPENDED).
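
For reference, the flow being described here matches the pre-patch
pm_runtime_force_suspend() code (the same lines touched by the hunk in
the first patch above), roughly:

	int pm_runtime_force_suspend(struct device *dev)
	{
		int (*callback)(struct device *);
		int ret;

		pm_runtime_disable(dev);

		/*
		 * The status alone decides here, so "runtime-suspended"
		 * cannot be told apart from "runtime PM never enabled,
		 * status left at its RPM_SUSPENDED default".
		 */
		if (pm_runtime_status_suspended(dev))
			return 0;

		callback = RPM_GET_CALLBACK(dev, runtime_suspend);
		ret = callback ? callback(dev) : 0;
		/* ... error handling and bookkeeping omitted ... */
		return ret;
	}
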
Rafael J. Wysocki Dec. 1, 2021, 8:11 p.m. UTC | #5
On Wednesday, December 1, 2021 6:44:08 PM CET Rafael J. Wysocki wrote:
> On Wed, Dec 1, 2021 at 4:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> >
> > On Wed, 1 Dec 2021 at 14:49, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Wed, Dec 1, 2021 at 10:02 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > >
> > > > On Tue, 30 Nov 2021 at 18:26, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > >
> > > > > On Tue, Nov 30, 2021 at 5:41 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > > >
> > > > > > On Tue, 30 Nov 2021 at 14:02, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > > > >
> > > > > > > On Tue, Nov 30, 2021 at 12:58 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > > > > >
> > > > > > > > [...]
> > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Am I thinking correctly that this is mostly about working around the
> > > > > > > > > > > > > > limitations of pm_runtime_force_suspend()?
> > > > > > > > > > > > >
> > > > > > > > > > > > > No, this isn't related at all.
> > > > > > > > > > > > >
> > > > > > > > > > > > > The cpuidle-psci driver doesn't have PM callbacks, thus using
> > > > > > > > > > > > > pm_runtime_force_suspend() would not work here.
> > > > > > > > > > > >
> > > > > > > > > > > > Just wanted to send a ping on this to see if we can come to a
> > > > > > > > > > > > conclusion. Or maybe we did? :-)
> > > > > > > > > > > >
> > > > > > > > > > > > I think in the end, what slightly bothers me is that the behavior is
> > > > > > > > > > > > a bit inconsistent. Although, maybe it's the best we can do.
> > > > > > > > > > >
> > > > > > > > > > > I've been thinking about this and it looks like we can do better, but
> > > > > > > > > > > instead of talking about this I'd rather send a patch.
> > > > > > > > > >
> > > > > > > > > > Alright.
> > > > > > > > > >
> > > > > > > > > > I was thinking along the lines of making similar changes for
> > > > > > > > > > rpm_idle|suspend(). That would make the behaviour even more
> > > > > > > > > > consistent, I think.
> > > > > > > > > >
> > > > > > > > > > Perhaps that's what you have in mind? :-)
> > > > > > > > >
> > > > > > > > > Well, not exactly.
> > > > > > > > >
> > > > > > > > > The idea is to add another counter (called restrain_depth in the patch)
> > > > > > > > > to prevent rpm_resume() from running the callback when that is potentially
> > > > > > > > > problematic.  With that, it is possible to actually distinguish devices
> > > > > > > > > with PM-runtime enabled and it allows the PM-runtime status to be checked
> > > > > > > > > when it is still known to be meaningful.
> > > > > > > >
> > > > > > > > Hmm, I don't quite understand the benefit of introducing a new flag
> > > > > > > > for this. rpm_resume() already checks the disable_depth to understand
> > > > > > > > when it's safe to invoke the callback. Maybe there is a reason why
> > > > > > > > that isn't sufficient?
> > > > > > >
> > > > > > > The problem is that disable_depth > 0 may very well mean that runtime
> > > > > > > PM has not been enabled at all for the given device which IMO is a
> > > > > > > problem.
> > > > > > >
> > > > > > > As it stands, it is necessary to make assumptions, like disable_depth
> > > > > > > == 1 meaning that runtime PM is really enabled, but the PM core has
> > > > > > > disabled it temporarily, which is somewhat questionable.
> > > > > > >
> > > > > > > Another problem with disabling is that it causes rpm_resume() to fail
> > > > > > > even if the status is RPM_ACTIVE and it has to do that exactly because
> > > > > > > it cannot know why runtime PM has been disabled.  If it has never been
> > > > > > > enabled, rpm_resume() must fail, but if it has been disabled
> > > > > > > temporarily, rpm_resume() may return 1 when the status is RPM_ACTIVE.
> > > > > > >
> > > > > > > The new count allows the "enabled in general, but temporarily disabled
> > > > > > > at the moment" case to be handled cleanly.
> > > > > >
> > > > > > My overall comment is that I fail to understand why we need to
> > > > > > distinguish between these two cases. To me, it shouldn't really
> > > > > > matter *why* runtime PM is (or has been) disabled for the device.
> > > > >
> > > > > It matters if you want to trust the status, because "disabled" means
> > > > > "the status doesn't matter".
> > > >
> > > > Well, that doesn't really match how the runtime PM interface is being
> > > > used today.
> > >
> > > Well, I clearly disagree.
> >
> > Alright, then we can agree to disagree. :-)
> >
> > >
> > > > For example, we have a whole bunch of helper functions, allowing us to
> > > > update and check the runtime PM state of the device, even when the
> > > > disable_depth > 0. Some functions, like pm_runtime_set_active() for
> > > > example, even take parents and device-links into account.
> > >
> > > That's true, but that's for a purpose.
> > >
> > > If runtime PM becomes enabled after using pm_runtime_set_active(), the
> > > status should better be consistent with the settings of the parent
> > > etc.
> > >
> > > > >
> > > > > If you want the status to stay meaningful, but prevent callbacks from
> > > > > running, you need something else.
> > > > >
> > > > > > The important point is that the default state for a device is
> > > > > > RPM_SUSPENDED and someone has moved into RPM_ACTIVE, for whatever
> > > > > > reason. That should be sufficient to allow rpm_resume() to return '1'
> > > > > > when disable_depth > 0, shouldn't it?
> > > > >
> > > > > No, because there is no rule by which the status of devices with
> > > > > PM-runtime disabled must be RPM_SUSPENDED.
> > > >
> > > > That's not what I was trying to say.
> > > >
> > > > The initial/default runtime PM state for a device is RPM_SUSPENDED,
> > > > which is being set in pm_runtime_init(). Although I agree that it
> > > > can't be trusted that this state actually reflects the state of the
> > > > HW, it's still a valid state for the device from a runtime PM point of
> > > > view.
> > >
> > > No, it is not.  It's just the default.
> > >
> > > > However, and more importantly, if the state has moved to RPM_ACTIVE,
> > > > someone must have deliberately moved the device into that state.
> > >
> > > Sure, but it cannot be regarded as an indication of whether or not
> > > runtime PM is supported and has ever been enabled for the given
> > > device.
> > >
> > > Again, there is no rule regarding the status value for devices with
> > > runtime PM disabled, either way.
> >
> > If I understand correctly, that means you think
> > pm_runtime_status_suspended() should really be converted to an
> > internal runtime PM interface, not being exported to users outside.
> > Right?
> 
> Not really.
> 
> I'm just saying that its usefulness is limited.
> 
> My basic concern is that system-wide PM transitions must always invoke
> callbacks for devices with PM-runtime disabled, because they may (or
> may not) be functional regardless of the PM-runtime status and if they
> are functional, they must be suspended.  And note that supporting
> system-wide PM is not optional and the only way to kind of disable it
> is to return an error from a device suspend callback (but that's nasty
> for some use cases).
> 
> So the "Has PM-runtime been enabled?" question is really fundamental
> for system-wide PM and it is not sufficient to look at the PM-runtime
> status to find out, but if the PM core itself disables PM-runtime
> (which it has to do at one point to prevent PM-runtime from racing
> with system-wide PM), it is hard to answer definitively in general.
> 
> IMO the only way to make it possible to find that out in all cases is
> to make the PM core retain the power.disable_depth value and that can
> be done by making it use a different mechanism to prevent PM-runtime
> callbacks from being run.
> 
> Alternatively, the current PM-runtime status could be "latched" during
> the PM-runtime disable operation if power.disable_depth is 0 (and that
> "latched" value would be initialized to "invalid" in case PM-runtime
> is never enabled).

Which would be something like the patch below (which additionally cleans up
pm_runtime_enable() while at it).

The idea being that if the status was RPM_ACTIVE last time when
power.disable_depth was changing from 0 to 1 and it is still RPM_ACTIVE, it
can be assumed to reflect what happened to the device last time when it was
using PM-runtime.

---
 drivers/base/power/runtime.c |   38 ++++++++++++++++++++------------------
 include/linux/pm.h           |    2 ++
 2 files changed, 22 insertions(+), 18 deletions(-)

Index: linux-pm/drivers/base/power/runtime.c
===================================================================
--- linux-pm.orig/drivers/base/power/runtime.c
+++ linux-pm/drivers/base/power/runtime.c
@@ -744,11 +744,10 @@ static int rpm_resume(struct device *dev
  repeat:
 	if (dev->power.runtime_error)
 		retval = -EINVAL;
-	else if (dev->power.disable_depth == 1 && dev->power.is_suspended
-	    && dev->power.runtime_status == RPM_ACTIVE)
-		retval = 1;
 	else if (dev->power.disable_depth > 0)
-		retval = -EACCES;
+		retval = dev->power.runtime_status == RPM_ACTIVE &&
+			 dev->power.last_status == RPM_ACTIVE ? 1 : -EACCES;
+
 	if (retval)
 		goto out;
 
@@ -1410,8 +1409,10 @@ void __pm_runtime_disable(struct device
 	/* Update time accounting before disabling PM-runtime. */
 	update_pm_runtime_accounting(dev);
 
-	if (!dev->power.disable_depth++)
+	if (!dev->power.disable_depth++) {
 		__pm_runtime_barrier(dev);
+		dev->power.last_status = dev->power.runtime_status;
+	}
 
  out:
 	spin_unlock_irq(&dev->power.lock);
@@ -1428,23 +1429,23 @@ void pm_runtime_enable(struct device *de
 
 	spin_lock_irqsave(&dev->power.lock, flags);
 
-	if (dev->power.disable_depth > 0) {
-		dev->power.disable_depth--;
-
-		/* About to enable runtime pm, set accounting_timestamp to now */
-		if (!dev->power.disable_depth)
-			dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
-	} else {
+	if (!dev->power.disable_depth) {
 		dev_warn(dev, "Unbalanced %s!\n", __func__);
+		goto out;
 	}
 
-	WARN(!dev->power.disable_depth &&
-	     dev->power.runtime_status == RPM_SUSPENDED &&
-	     !dev->power.ignore_children &&
-	     atomic_read(&dev->power.child_count) > 0,
-	     "Enabling runtime PM for inactive device (%s) with active children\n",
-	     dev_name(dev));
+	if (--dev->power.disable_depth > 0)
+		goto out;
+
+	dev->power.last_status = RPM_INVALID;
+	dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
+
+	if (dev->power.runtime_status == RPM_SUSPENDED &&
+	    !dev->power.ignore_children &&
+	    atomic_read(&dev->power.child_count) > 0)
+		dev_warn(dev, "Enabling runtime PM for inactive device with active children\n");
 
+out:
 	spin_unlock_irqrestore(&dev->power.lock, flags);
 }
 EXPORT_SYMBOL_GPL(pm_runtime_enable);
@@ -1640,6 +1641,7 @@ EXPORT_SYMBOL_GPL(__pm_runtime_use_autos
 void pm_runtime_init(struct device *dev)
 {
 	dev->power.runtime_status = RPM_SUSPENDED;
+	dev->power.last_status = RPM_INVALID;
 	dev->power.idle_notification = false;
 
 	dev->power.disable_depth = 1;
Index: linux-pm/include/linux/pm.h
===================================================================
--- linux-pm.orig/include/linux/pm.h
+++ linux-pm/include/linux/pm.h
@@ -499,6 +499,7 @@ const struct dev_pm_ops __maybe_unused n
  */
 
 enum rpm_status {
+	RPM_INVALID = -1,
 	RPM_ACTIVE = 0,
 	RPM_RESUMING,
 	RPM_SUSPENDED,
@@ -612,6 +613,7 @@ struct dev_pm_info {
 	unsigned int		links_count;
 	enum rpm_request	request;
 	enum rpm_status		runtime_status;
+	enum rpm_status		last_status;
 	int			runtime_error;
 	int			autosuspend_delay;
 	u64			last_busy;
Ulf Hansson Dec. 2, 2021, 11:28 a.m. UTC | #6
On Wed, 1 Dec 2021 at 21:11, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> On Wednesday, December 1, 2021 6:44:08 PM CET Rafael J. Wysocki wrote:
> > On Wed, Dec 1, 2021 at 4:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > >
> > > On Wed, 1 Dec 2021 at 14:49, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Wed, Dec 1, 2021 at 10:02 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > >
> > > > > On Tue, 30 Nov 2021 at 18:26, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > > >
> > > > > > On Tue, Nov 30, 2021 at 5:41 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > > > >
> > > > > > > On Tue, 30 Nov 2021 at 14:02, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > > > > >
> > > > > > > > On Tue, Nov 30, 2021 at 12:58 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > > > > > >
> > > > > > > > > [...]
> > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Am I thinking correctly that this is mostly about working around the
> > > > > > > > > > > > > > > limitations of pm_runtime_force_suspend()?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > No, this isn't related at all.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The cpuidle-psci driver doesn't have PM callbacks, thus using
> > > > > > > > > > > > > > pm_runtime_force_suspend() would not work here.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Just wanted to send a ping on this to see if we can come to a
> > > > > > > > > > > > > conclusion. Or maybe we did? :-)
> > > > > > > > > > > > >
> > > > > > > > > > > > > I think in the end, what slightly bothers me is that the behavior is
> > > > > > > > > > > > > a bit inconsistent. Although, maybe it's the best we can do.
> > > > > > > > > > > >
> > > > > > > > > > > > I've been thinking about this and it looks like we can do better, but
> > > > > > > > > > > > instead of talking about this I'd rather send a patch.
> > > > > > > > > > >
> > > > > > > > > > > Alright.
> > > > > > > > > > >
> > > > > > > > > > > I was thinking along the lines of making similar changes for
> > > > > > > > > > > rpm_idle|suspend(). That would make the behaviour even more
> > > > > > > > > > > consistent, I think.
> > > > > > > > > > >
> > > > > > > > > > > Perhaps that's what you have in mind? :-)
> > > > > > > > > >
> > > > > > > > > > Well, not exactly.
> > > > > > > > > >
> > > > > > > > > > The idea is to add another counter (called restrain_depth in the patch)
> > > > > > > > > > to prevent rpm_resume() from running the callback when that is potentially
> > > > > > > > > > problematic.  With that, it is possible to actually distinguish devices
> > > > > > > > > > with PM-runtime enabled and it allows the PM-runtime status to be checked
> > > > > > > > > > when it is still known to be meaningful.
> > > > > > > > >
> > > > > > > > > Hmm, I don't quite understand the benefit of introducing a new flag
> > > > > > > > > for this. rpm_resume() already checks the disable_depth to understand
> > > > > > > > > when it's safe to invoke the callback. Maybe there is a reason why
> > > > > > > > > that isn't sufficient?
> > > > > > > >
> > > > > > > > The problem is that disable_depth > 0 may very well mean that runtime
> > > > > > > > PM has not been enabled at all for the given device which IMO is a
> > > > > > > > problem.
> > > > > > > >
> > > > > > > > As it stands, it is necessary to make assumptions, like disable_depth
> > > > > > > > == 1 meaning that runtime PM is really enabled, but the PM core has
> > > > > > > > disabled it temporarily, which is somewhat questionable.
> > > > > > > >
> > > > > > > > Another problem with disabling is that it causes rpm_resume() to fail
> > > > > > > > even if the status is RPM_ACTIVE and it has to do that exactly because
> > > > > > > > it cannot know why runtime PM has been disabled.  If it has never been
> > > > > > > > enabled, rpm_resume() must fail, but if it has been disabled
> > > > > > > > temporarily, rpm_resume() may return 1 when the status is RPM_ACTIVE.
> > > > > > > >
> > > > > > > > The new counter allows the "enabled in general, but temporarily disabled
> > > > > > > > at the moment" case to be handled cleanly.
> > > > > > >
> > > > > > > My overall comment is that I fail to understand why we need to
> > > > > > > distinguish between these two cases. To me, it shouldn't really
> > > > > > > matter *why* runtime PM is (or has been) disabled for the device.
> > > > > >
> > > > > > It matters if you want to trust the status, because "disabled" means
> > > > > > "the status doesn't matter".
> > > > >
> > > > > Well, that doesn't really match how the runtime PM interface is being
> > > > > used today.
> > > >
> > > > Well, I clearly disagree.
> > >
> > > Alright, then we can agree to disagree. :-)
> > >
> > > >
> > > > > For example, we have a whole bunch of helper functions, allowing us to
> > > > > update and check the runtime PM state of the device, even when the
> > > > > disable_depth > 0. Some functions, like pm_runtime_set_active() for
> > > > > example, even take parents and device-links into account.
> > > >
> > > > That's true, but that's for a purpose.
> > > >
> > > > If runtime PM becomes enabled after using pm_runtime_set_active(), the
> > > > status had better be consistent with the settings of the parent
> > > > etc.
> > > >
> > > > > >
> > > > > > If you want the status to stay meaningful, but prevent callbacks from
> > > > > > running, you need something else.
> > > > > >
> > > > > > > The important point is that the default state for a device is
> > > > > > > RPM_SUSPENDED and someone has moved it into RPM_ACTIVE, for whatever
> > > > > > > reason. That should be sufficient to allow rpm_resume() to return '1'
> > > > > > > when disable_depth > 0, shouldn't it?
> > > > > >
> > > > > > No, because there is no rule by which the status of devices with
> > > > > > PM-runtime disabled must be RPM_SUSPENDED.
> > > > >
> > > > > That's not what I was trying to say.
> > > > >
> > > > > The initial/default runtime PM state for a device is RPM_SUSPENDED,
> > > > > which is set in pm_runtime_init(). Although I agree that it
> > > > > can't be trusted that this state actually reflects the state of the
> > > > > HW, it's still a valid state for the device from a runtime PM point of
> > > > > view.
> > > >
> > > > No, it is not.  It's just the default.
> > > >
> > > > > However, and more importantly, if the state has moved to RPM_ACTIVE,
> > > > > someone must have deliberately moved the device into that state.
> > > >
> > > > Sure, but it cannot be regarded as an indication of whether or not
> > > > runtime PM is supported and has ever been enabled for the given
> > > > device.
> > > >
> > > > Again, there is no rule regarding the status value for devices with
> > > > runtime PM disabled, either way.
> > >
> > > If I understand correctly, that means you think
> > > pm_runtime_status_suspended() should really be converted to an
> > > internal runtime PM interface, not being exported to users outside.
> > > Right?
> >
> > Not really.
> >
> > I'm just saying that its usefulness is limited.
> >
> > My basic concern is that system-wide PM transitions must always invoke
> > callbacks for devices with PM-runtime disabled, because they may (or
> > may not) be functional regardless of the PM-runtime status, and if they
> > are functional, they must be suspended.  And note that supporting
> > system-wide PM is not optional and the only way to kind of disable it
> > is to return an error from a device suspend callback (but that's nasty
> > for some use cases).
> >
> > So the "Has PM-runtime been enabled?" question is really fundamental
> > for system-wide PM and it is not sufficient to look at the PM-runtime
> > status to find out, but if the PM-core itself disables PM-runtime
> > (which is has to do at one point to prevent PM-runtime from racing
> > with system-wide PM), it is hard to answer definitely in general.
> >
> > IMO the only way to make it possible to find that out in all cases is
> > to make the PM core retain the power.disable_depth value, and that can
> > be done by making it use a different mechanism to prevent PM-runtime
> > callbacks from being run.
> >
> > Alternatively, the current PM-runtime status could be "latched" during
> > the PM-runtime disable operation if power.disable_depth is 0 (and that
> > "latched" value would be initialized to "invalid" in case PM-runtime
> > is never enabled).
>
> Which would be something like the patch below (which additionally cleans up
> pm_runtime_enable() while at it).
>
> The idea being that if the status was RPM_ACTIVE the last time
> power.disable_depth changed from 0 to 1, and it is still RPM_ACTIVE, it
> can be assumed to reflect what last happened to the device while it was
> using PM-runtime.

Alright, this sounds reasonable to me. I have also looked at the code
below and it looks good to me.

Do you intend to post a formal patch? In any case, feel free to add my
reviewed-by tag.

Kind regards
Uffe

>
> ---
>  drivers/base/power/runtime.c |   38 ++++++++++++++++++++------------------
>  include/linux/pm.h           |    2 ++
>  2 files changed, 22 insertions(+), 18 deletions(-)
>
> Index: linux-pm/drivers/base/power/runtime.c
> ===================================================================
> --- linux-pm.orig/drivers/base/power/runtime.c
> +++ linux-pm/drivers/base/power/runtime.c
> @@ -744,11 +744,10 @@ static int rpm_resume(struct device *dev
>   repeat:
>         if (dev->power.runtime_error)
>                 retval = -EINVAL;
> -       else if (dev->power.disable_depth == 1 && dev->power.is_suspended
> -           && dev->power.runtime_status == RPM_ACTIVE)
> -               retval = 1;
>         else if (dev->power.disable_depth > 0)
> -               retval = -EACCES;
> +               retval = dev->power.runtime_status == RPM_ACTIVE &&
> +                        dev->power.last_status == RPM_ACTIVE ? 1 : -EACCES;
> +
>         if (retval)
>                 goto out;
>
> @@ -1410,8 +1409,10 @@ void __pm_runtime_disable(struct device
>         /* Update time accounting before disabling PM-runtime. */
>         update_pm_runtime_accounting(dev);
>
> -       if (!dev->power.disable_depth++)
> +       if (!dev->power.disable_depth++) {
>                 __pm_runtime_barrier(dev);
> +               dev->power.last_status = dev->power.runtime_status;
> +       }
>
>   out:
>         spin_unlock_irq(&dev->power.lock);
> @@ -1428,23 +1429,23 @@ void pm_runtime_enable(struct device *de
>
>         spin_lock_irqsave(&dev->power.lock, flags);
>
> -       if (dev->power.disable_depth > 0) {
> -               dev->power.disable_depth--;
> -
> -               /* About to enable runtime pm, set accounting_timestamp to now */
> -               if (!dev->power.disable_depth)
> -                       dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
> -       } else {
> +       if (!dev->power.disable_depth) {
>                 dev_warn(dev, "Unbalanced %s!\n", __func__);
> +               goto out;
>         }
>
> -       WARN(!dev->power.disable_depth &&
> -            dev->power.runtime_status == RPM_SUSPENDED &&
> -            !dev->power.ignore_children &&
> -            atomic_read(&dev->power.child_count) > 0,
> -            "Enabling runtime PM for inactive device (%s) with active children\n",
> -            dev_name(dev));
> +       if (--dev->power.disable_depth > 0)
> +               goto out;
> +
> +       dev->power.last_status = RPM_INVALID;
> +       dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
> +
> +       if (dev->power.runtime_status == RPM_SUSPENDED &&
> +           !dev->power.ignore_children &&
> +           atomic_read(&dev->power.child_count) > 0)
> +               dev_warn(dev, "Enabling runtime PM for inactive device with active children\n");
>
> +out:
>         spin_unlock_irqrestore(&dev->power.lock, flags);
>  }
>  EXPORT_SYMBOL_GPL(pm_runtime_enable);
> @@ -1640,6 +1641,7 @@ EXPORT_SYMBOL_GPL(__pm_runtime_use_autos
>  void pm_runtime_init(struct device *dev)
>  {
>         dev->power.runtime_status = RPM_SUSPENDED;
> +       dev->power.last_status = RPM_INVALID;
>         dev->power.idle_notification = false;
>
>         dev->power.disable_depth = 1;
> Index: linux-pm/include/linux/pm.h
> ===================================================================
> --- linux-pm.orig/include/linux/pm.h
> +++ linux-pm/include/linux/pm.h
> @@ -499,6 +499,7 @@ const struct dev_pm_ops __maybe_unused n
>   */
>
>  enum rpm_status {
> +       RPM_INVALID = -1,
>         RPM_ACTIVE = 0,
>         RPM_RESUMING,
>         RPM_SUSPENDED,
> @@ -612,6 +613,7 @@ struct dev_pm_info {
>         unsigned int            links_count;
>         enum rpm_request        request;
>         enum rpm_status         runtime_status;
> +       enum rpm_status         last_status;
>         int                     runtime_error;
>         int                     autosuspend_delay;
>         u64                     last_busy;
>
>
>
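
To make the semantics concrete, here is a minimal user-space model of the
last_status latch in the patch quoted above. This is an illustrative sketch
only: the struct is cut down to the three fields the patch touches, the
model_* helpers are hypothetical stand-ins for rpm_resume() and
__pm_runtime_disable(), and locking, usage counters, queued requests and
the actual callbacks are all omitted.

/*
 * Standalone model of the last_status latching logic; compile with any
 * C compiler and run. Not kernel code.
 */
#include <stdio.h>

enum rpm_status { RPM_INVALID = -1, RPM_ACTIVE = 0, RPM_SUSPENDED = 2 };

struct dev_power {
        int disable_depth;
        enum rpm_status runtime_status;
        enum rpm_status last_status;
};

/* Mirrors the disabled-device branch of rpm_resume() in the patch. */
static int model_rpm_resume(struct dev_power *p)
{
        if (p->disable_depth > 0)
                return (p->runtime_status == RPM_ACTIVE &&
                        p->last_status == RPM_ACTIVE) ? 1 : -13 /* -EACCES */;
        return 0; /* the resume callback would run here */
}

/* Mirrors the latch added to __pm_runtime_disable(). */
static void model_disable(struct dev_power *p)
{
        if (!p->disable_depth++)
                p->last_status = p->runtime_status;
}

int main(void)
{
        /* Initial state as set by pm_runtime_init(): disabled, no history. */
        struct dev_power dev = {
                .disable_depth = 1,
                .runtime_status = RPM_SUSPENDED,
                .last_status = RPM_INVALID,
        };

        /* Status forced to RPM_ACTIVE, but runtime PM never enabled. */
        dev.runtime_status = RPM_ACTIVE;   /* like pm_runtime_set_active() */
        printf("never enabled: %d\n", model_rpm_resume(&dev));

        /* Enabled once, then temporarily disabled while RPM_ACTIVE. */
        dev.disable_depth = 0;             /* like pm_runtime_enable() */
        model_disable(&dev);               /* as system suspend does */
        printf("temporarily disabled: %d\n", model_rpm_resume(&dev));
        return 0;
}

Run, this prints -13 (-EACCES) for the never-enabled device and 1 for the
temporarily disabled one, which is exactly the distinction the latch makes
possible.
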
Rafael J. Wysocki Dec. 2, 2021, 4:18 p.m. UTC | #7
On Thu, Dec 2, 2021 at 12:29 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> [...]
>
> Alright, this sounds reasonable to me. I have also looked at the code
> below and it looks good to me.
>
> Do you intend to post a formal patch? In any case, feel free to add my
> reviewed-by tag.

I will, thank you!
Alan Stern Dec. 2, 2021, 4:50 p.m. UTC | #8
On Thu, Dec 02, 2021 at 05:18:26PM +0100, Rafael J. Wysocki wrote:
> On Thu, Dec 2, 2021 at 12:29 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> >
> > Alright, this sounds reasonable to me. I have also looked at the code
> > below and it looks good to me.
> >
> > Do you intend to post a formal patch? In any case, feel free to add my
> > reviewed-by tag.
> 
> I will, thank you!

Don't forget to check for spelling errors (I noticed one in the first version of 
the patch you posted).  And please include a documentation update.

On the whole, the idea seems like a good improvement.

Alan Stern
Rafael J. Wysocki Dec. 2, 2021, 6:01 p.m. UTC | #9
On Thu, Dec 2, 2021 at 5:50 PM Alan Stern <stern@rowland.harvard.edu> wrote:
>
> On Thu, Dec 02, 2021 at 05:18:26PM +0100, Rafael J. Wysocki wrote:
> > On Thu, Dec 2, 2021 at 12:29 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > >
> > > Alright, this sounds reasonable to me. I have also looked at the code
> > > below and it looks good to me.
> > >
> > > Do you intend to post a formal patch? In any case, feel free to add my
> > > reviewed-by tag.
> >
> > I will, thank you!
>
> Don't forget to check for spelling errors (I noticed one in the first version of
> the patch you posted).  And please include a documentation update.

I will.

> On the whole, the idea seems like a good improvement.

Thanks!
diff mbox series

Patch

diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index ec94049442b9..fadc278e3a66 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -742,8 +742,8 @@  static int rpm_resume(struct device *dev, int rpmflags)
  repeat:
 	if (dev->power.runtime_error)
 		retval = -EINVAL;
-	else if (dev->power.disable_depth == 1 && dev->power.is_suspended
-	    && dev->power.runtime_status == RPM_ACTIVE)
+	else if (dev->power.disable_depth > 0 &&
+		dev->power.runtime_status == RPM_ACTIVE)
 		retval = 1;
 	else if (dev->power.disable_depth > 0)
 		retval = -EACCES;
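
As a footnote to this original version of the patch: the behavioural change
in the hunk above can be restated as two small predicates. A sketch for
illustration only; old_check() and new_check() are hypothetical names, not
kernel functions.

#include <stdbool.h>
#include <stdio.h>

/* Pre-patch condition for rpm_resume() to return 1 while disabled. */
static bool old_check(int disable_depth, bool is_suspended, bool active)
{
        return disable_depth == 1 && is_suspended && active;
}

/* Condition proposed by this patch: any disable depth will do. */
static bool new_check(int disable_depth, bool active)
{
        return disable_depth > 0 && active;
}

int main(void)
{
        /*
         * An RPM_ACTIVE device with runtime PM disabled twice, outside
         * system suspend (is_suspended clear): the old check fails, the
         * new one succeeds.
         */
        printf("old: %d, new: %d\n",
               old_check(2, false, true), new_check(2, true));
        return 0;
}

This prints "old: 0, new: 1", matching the intent stated in the commit
message.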