[01/12] mmc: core: Throttle polling rate for CMD6

Message ID 20200204085449.32585-2-ulf.hansson@linaro.org
State New
Series
  • mmc: core: Improve code for polling and HW busy detect

Commit Message

Ulf Hansson Feb. 4, 2020, 8:54 a.m.
In mmc_poll_for_busy() we loop continuously, either by sending a CMD13 or
by invoking the ->card_busy() host ops, to detect when the card stops
signaling busy. This behaviour is problematic as it may cause CPU hogging,
especially when the busy signaling time reaches beyond a few ms.

Let's fix the issue by adding a throttling mechanism, that inserts a
usleep_range() in between the polling attempts. The sleep range starts at
16-32us, but increases for each loop by a factor of 2, up until the range
reaches ~32-64ms. In this way, we are able to keep the loop fine-grained
enough for short busy signaling times, while also not hogging the CPU for
longer times.

Note that this change is inspired by the similar throttling mechanism
already used in mmc_do_erase().

Reported-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

---
 drivers/mmc/core/mmc_ops.c | 8 ++++++++
 1 file changed, 8 insertions(+)

-- 
2.17.1

Comments

Ludovic Barre Feb. 12, 2020, 1:51 p.m. | #1
hi Ulf

On 2/4/20 9:54 AM, Ulf Hansson wrote:
> In mmc_poll_for_busy() we loop continuously, either by sending a CMD13 or
> by invoking the ->card_busy() host ops, as to detect when the card stops
> signaling busy. This behaviour is problematic as it may cause CPU hogging,
> especially when the busy signal time reaches beyond a few ms.
> 
> Let's fix the issue by adding a throttling mechanism, that inserts a
> usleep_range() in between the polling attempts. The sleep range starts at
> 16-32us, but increases for each loop by a factor of 2, up until the range

Just to align the comment and the code: in the code, the first usleep range
starts at 32-64us.

> reaches ~32-64ms. In this way, we are able to keep the loop fine-grained
> enough for short busy signaling times, while also not hogging the CPU for
> longer times.
> 
> Note that, this change is inspired by the similar throttling mechanism that
> we already use for mmc_do_erase().
> 
> Reported-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> ---
>   drivers/mmc/core/mmc_ops.c | 8 ++++++++
>   1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
> index da425ee2d9bf..446a37cc2a86 100644
> --- a/drivers/mmc/core/mmc_ops.c
> +++ b/drivers/mmc/core/mmc_ops.c
> @@ -456,6 +456,7 @@ static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
>   	struct mmc_host *host = card->host;
>   	int err;
>   	unsigned long timeout;
> +	unsigned int udelay = 32, udelay_max = 32768;
>   	u32 status = 0;
>   	bool expired = false;
>   	bool busy = false;
> @@ -500,6 +501,13 @@ static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
>   				mmc_hostname(host), __func__);
>   			return -ETIMEDOUT;
>   		}
> +
> +		/* Throttle the polling rate to avoid hogging the CPU. */
> +		if (busy) {
> +			usleep_range(udelay, udelay * 2);
> +			if (udelay < udelay_max)
> +				udelay *= 2;
> +		}
>   	} while (busy);
>   
>   	return 0;
>
Ludovic Barre Feb. 12, 2020, 2:24 p.m. | #2
On 2/12/20 3:18 PM, Ulf Hansson wrote:
> On Wed, 12 Feb 2020 at 14:51, Ludovic BARRE <ludovic.barre@st.com> wrote:
>>
>> hi Ulf
>>
>> On 2/4/20 9:54 AM, Ulf Hansson wrote:
>>> In mmc_poll_for_busy() we loop continuously, either by sending a CMD13 or
>>> by invoking the ->card_busy() host ops, as to detect when the card stops
>>> signaling busy. This behaviour is problematic as it may cause CPU hogging,
>>> especially when the busy signal time reaches beyond a few ms.
>>>
>>> Let's fix the issue by adding a throttling mechanism, that inserts a
>>> usleep_range() in between the polling attempts. The sleep range starts at
>>> 16-32us, but increases for each loop by a factor of 2, up until the range
>>
>> Just to align comment and code: in the code the first usleep range start
>> at 32-64us.
> 
> Yeah, good point, thanks. I was trying different values, but forgot to
> update the commit message. :-)

I tested the series on mmci (sdmmc variant), with and without
MMC_CAP_WAIT_WHILE_BUSY, and it seems OK.

Yes, I'm still reviewing the other patches of the series, but this patch
is OK.

Reviewed-by: Ludovic Barre <ludovic.barre@st.com>

> 
> Other than that, does the change look good to you?
> 
> [...]
> 
> Kind regards
> Uffe
>

Patch

diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
index da425ee2d9bf..446a37cc2a86 100644
--- a/drivers/mmc/core/mmc_ops.c
+++ b/drivers/mmc/core/mmc_ops.c
@@ -456,6 +456,7 @@  static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
 	struct mmc_host *host = card->host;
 	int err;
 	unsigned long timeout;
+	unsigned int udelay = 32, udelay_max = 32768;
 	u32 status = 0;
 	bool expired = false;
 	bool busy = false;
@@ -500,6 +501,13 @@  static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
 				mmc_hostname(host), __func__);
 			return -ETIMEDOUT;
 		}
+
+		/* Throttle the polling rate to avoid hogging the CPU. */
+		if (busy) {
+			usleep_range(udelay, udelay * 2);
+			if (udelay < udelay_max)
+				udelay *= 2;
+		}
 	} while (busy);
 
 	return 0;