
mmc: core: Enable runtime PM management of host devices

Message ID 1427454915-12893-1-git-send-email-ulf.hansson@linaro.org
State New

Commit Message

Ulf Hansson March 27, 2015, 11:15 a.m. UTC
Currently, those host drivers which have deployed runtime PM deal with
the runtime PM reference counting entirely by themselves.

Since host drivers don't know when the core will send the next request
through some of the host_ops callbacks, they need to handle runtime PM
get/put for each and every request.
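
For illustration, the per-request pattern a host driver typically ends up
with today looks roughly like this (a minimal sketch; the foo_* name is
made up, and real drivers often split the get/put between the request and
completion paths):

#include <linux/mmc/host.h>
#include <linux/pm_runtime.h>

static void foo_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
        struct device *dev = mmc_dev(mmc);

        /* Resume the controller before touching any of its registers. */
        pm_runtime_get_sync(dev);

        /* ... program the controller and start the transfer ... */

        /*
         * Drop the reference again; once the usage count hits zero, a
         * runtime PM suspend gets scheduled via the autosuspend timer.
         */
        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);
}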

In quite many cases this per-request get/put has negative effects, since
it leads to a high frequency of scheduled runtime PM suspend operations.
That is because the runtime PM reference count will normally reach zero
between consecutive requests.

We can decrease that frequency by enabling the core to deal with the
runtime PM reference counting of the host device. Since the core often
knows that it will send a sequence of requests, it makes sense for it
to keep a runtime PM reference count during these periods.

More exactly, let's increase the runtime PM reference count by invoking
pm_runtime_get_sync() from __mmc_claim_host(). Restore that action by
invoking pm_runtime_mark_last_busy() and pm_runtime_put_autosuspend()
in mmc_release_host(). In this way a runtime PM reference count is kept
throughout the complete claim -> release host cycle.
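
With that in place, any core code path that brackets a sequence of
requests with a claim/release pair holds a single runtime PM reference
for the whole sequence. A sketch of a caller (the mmc_send_status() call
is just an arbitrary example of a request issued while the host is
claimed):

        u32 status;
        int err;

        mmc_claim_host(host);   /* first claim: pm_runtime_get_sync(mmc_dev(host)) */

        /*
         * Any number of requests can be issued here without the host
         * device's usage count ever dropping to zero in between them.
         */
        err = mmc_send_status(host->card, &status);

        mmc_release_host(host); /* last release: pm_runtime_mark_last_busy() +
                                 * pm_runtime_put_autosuspend()
                                 */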

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/mmc/core/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Comments

Ulf Hansson March 30, 2015, 8:48 a.m. UTC | #1
On 29 March 2015 at 12:53, Konstantin Dorfman <kdorfman@codeaurora.org> wrote:
>
>
> On 03/28/2015 12:52 AM, NeilBrown wrote:
>>
>> On Fri, 27 Mar 2015 12:15:15 +0100 Ulf Hansson <ulf.hansson@linaro.org>
>> wrote:
>>
>>> Currently, those host drivers which have deployed runtime PM deal with
>>> the runtime PM reference counting entirely by themselves.
>>>
>>> Since host drivers don't know when the core will send the next request
>>> through some of the host_ops callbacks, they need to handle runtime PM
>>> get/put for each and every request.
>>>
>>> In quite many cases this per-request get/put has negative effects, since
>>> it leads to a high frequency of scheduled runtime PM suspend operations.
>>> That is because the runtime PM reference count will normally reach zero
>>> between consecutive requests.
>
>
> It would be nice to have some numbers about this suspend/resume jitter of
> platform device pm due to time between consecutive requests within burst of
> requests.
>
> Probably we can tune autosuspend timer of specific platform device to avoid
> this.

No, that's not going to do the trick.

What the $subject patch does is decrease the frequency at which the
host device's runtime PM reference count reaches zero, thus preventing
us from telling the runtime PM core to schedule a suspend operation
when we actually know it isn't needed.

>
> Implementing the patch in a way that you are suggesting will imply that the
> mmc core will always use double buffering request polling scheme and the
> existing mechanism of "request burst" definition.

The $subject patch will improve behaviour when handling "bursts", but
that's just one case where the mmc core knows that it's going to send a
sequence of commands/requests.

For example, the core also sends sequences from the mmc_rescan() work,
during system PM suspend, etc.

Kind regards
Uffe
Ulf Hansson March 30, 2015, 8:58 a.m. UTC | #2
On 27 March 2015 at 22:52, NeilBrown <neilb@suse.de> wrote:
> On Fri, 27 Mar 2015 12:15:15 +0100 Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
>> Currently, those host drivers which have deployed runtime PM deal with
>> the runtime PM reference counting entirely by themselves.
>>
>> Since host drivers don't know when the core will send the next request
>> through some of the host_ops callbacks, they need to handle runtime PM
>> get/put for each and every request.
>>
>> In quite many cases this per-request get/put has negative effects, since
>> it leads to a high frequency of scheduled runtime PM suspend operations.
>> That is because the runtime PM reference count will normally reach zero
>> between consecutive requests.
>
> I don't understand why this is a problem.
> All the drivers use put_autosuspend, so the suspend doesn't happen for
> (typically) 50ms, so the actual suspend won't happen if there is a sequence
> of requests.

It's not a problem, but it's suboptimal.
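
For reference, the (typically) 50 ms delay you mention comes from the
autosuspend setup a host driver does at probe time, roughly along these
lines (a sketch, not copied from any particular driver; dev is the
controller's struct device):

        pm_runtime_set_active(dev);
        pm_runtime_set_autosuspend_delay(dev, 50);      /* delay in ms */
        pm_runtime_use_autosuspend(dev);
        pm_runtime_enable(dev);

So a pm_runtime_put_autosuspend() that brings the usage count to zero
only arms a timer; the controller is actually suspended 50 ms after the
last pm_runtime_mark_last_busy(), and only if the count is still zero by
then.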

>
> Is it just the scheduling of a suspend - even without the suspend happening -
> that is causing problems?  If so, maybe the runtime_pm code needs optimising?

Correct. Though, I don't think we can improve the behaviour of the
runtime PM core in this regard.

If the reference count reaches zero due to a "put", the runtime PM core
will need to perform some actions to schedule the suspend operation - no
matter what.

So to me, this is only an optimization of how we make use of the runtime PM API.

Kind regards
Uffe

Patch

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 709ada9..c296bc0 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -897,6 +897,7 @@  int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
 	DECLARE_WAITQUEUE(wait, current);
 	unsigned long flags;
 	int stop;
+	bool pm = false;
 
 	might_sleep();
 
@@ -916,13 +917,18 @@  int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
 		host->claimed = 1;
 		host->claimer = current;
 		host->claim_cnt += 1;
+		if (host->claim_cnt == 1)
+			pm = true;
 	} else
 		wake_up(&host->wq);
 	spin_unlock_irqrestore(&host->lock, flags);
 	remove_wait_queue(&host->wq, &wait);
+
+	if (pm)
+		pm_runtime_get_sync(mmc_dev(host));
+
 	return stop;
 }
-
 EXPORT_SYMBOL(__mmc_claim_host);
 
 /**
@@ -947,6 +953,8 @@  void mmc_release_host(struct mmc_host *host)
 		host->claimer = NULL;
 		spin_unlock_irqrestore(&host->lock, flags);
 		wake_up(&host->wq);
+		pm_runtime_mark_last_busy(mmc_dev(host));
+		pm_runtime_put_autosuspend(mmc_dev(host));
 	}
 }
 EXPORT_SYMBOL(mmc_release_host);