[1/3] mmc: sdio: Add API to manage SDIO IRQs from a workqueue

Message ID 1492518724-30511-2-git-send-email-ulf.hansson@linaro.org
State New
Headers show
Series mmc: Improve/fix support for SDIO IRQs | expand

Commit Message

Ulf Hansson April 18, 2017, 12:32 p.m. UTC
For hosts not supporting MMC_CAP2_SDIO_IRQ_NOTHREAD but MMC_CAP_SDIO_IRQ,
the SDIO IRQs are processed from a dedicated kernel thread. For these
cases, the host calls mmc_signal_sdio_irq() from its ISR to signal a new
SDIO IRQ.

Signaling an SDIO IRQ causes the host's ->enable_sdio_irq() callback to be
invoked to temporarily disable the IRQs, before the kernel thread is woken
up to process them. When processing of the IRQs is completed, they are
re-enabled by the kernel thread, again by invoking the host's
->enable_sdio_irq().

The observation from this is that the execution path is unnecessarily
complex, as the host driver already knows that it needs to temporarily
disable the IRQs before signaling a new one. Moreover, replacing the
kernel thread with a work/workqueue would greatly simplify the code.

To address the above problems, let's continue to build upon the support
for MMC_CAP2_SDIO_IRQ_NOTHREAD, as it already allows SDIO IRQs to be
processed without using the clumsy kernel thread, and it also avoids the
ping-ponging calls of the host's ->enable_sdio_irq() callback for each
processed IRQ.

Therefore, let's add a new API, sdio_signal_irq(), which enables hosts to
signal/process SDIO IRQs by using a work/workqueue, rather than the
kernel thread.

Also add a new host callback, ->ack_sdio_irq(), which the work invokes
once the SDIO IRQs have been processed. This informs the host about when
it can re-enable the SDIO IRQs. Potentially, we could re-use the existing
->enable_sdio_irq() callback for this, however it has turned out that it's
more convenient for hosts to get this information via a separate callback.

Hosts need to enable MMC_CAP2_SDIO_IRQ_NOTHREAD to use this new feature,
however the feature is optional for existing hosts already supporting
MMC_CAP2_SDIO_IRQ_NOTHREAD.

It's likely that all hosts can convert to use MMC_CAP2_SDIO_IRQ_NOTHREAD
and benefit from this feature, but further changes will have to tell.
Until then, the old path using the kernel thread remains available.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

---
 drivers/mmc/core/host.c     |  2 ++
 drivers/mmc/core/sdio_irq.c | 32 ++++++++++++++++++++++++++++++--
 drivers/mmc/core/sdio_ops.h |  2 ++
 include/linux/mmc/host.h    |  3 +++
 4 files changed, 37 insertions(+), 2 deletions(-)

-- 
2.7.4
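
As an illustration of the intended usage, here is a minimal sketch of how
a host driver could hook into the new API. The foo_* names, registers and
helpers are made up for the example; only sdio_signal_irq(), the
->ack_sdio_irq() callback and MMC_CAP2_SDIO_IRQ_NOTHREAD come from this
series, so treat it as a sketch rather than a reference conversion.

static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
	struct foo_host *host = dev_id;

	if (foo_read_int_status(host) & FOO_INT_SDIO) {
		/* Temporarily mask the SDIO IRQ source in hardware. */
		foo_mask_sdio_irq(host, true);
		/* Let the core process the card's pending IRQs from its work. */
		sdio_signal_irq(host->mmc);
	}
	return IRQ_HANDLED;
}

static void foo_ack_sdio_irq(struct mmc_host *mmc)
{
	struct foo_host *host = mmc_priv(mmc);

	/* The work has processed the card IRQs; unmask the source again. */
	foo_mask_sdio_irq(host, false);
}

static const struct mmc_host_ops foo_ops = {
	.enable_sdio_irq	= foo_enable_sdio_irq,
	.ack_sdio_irq		= foo_ack_sdio_irq,
	/* ... */
};

/* ...and in the probe path: mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD; */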

Comments

Doug Anderson April 18, 2017, 9:43 p.m. UTC | #1
Hi,

On Tue, Apr 18, 2017 at 5:32 AM, Ulf Hansson <ulf.hansson@linaro.org> wrote:
> [...]

So one other subtle problem with the new approach is that you totally
lose all of the polling logic in sdio_irq_thread().

...so if I take your series and then comment out "cap-sdio-irq;" in
the veyron "dtsi" then things stop working.  Right now dw_mmc only
enables SDIO Interrupts if that bit is set and relies on polling
otherwise.  Presumably there's not a _huge_ reason why you would need
to make dw_mmc work without actual SDIO IRQ signaling, but the way the
code is structured right now things will probably break for some users
out there.

One note is that I remember on exynos5250-snow that we needed to
enable a hybrid interrupt/polling mechanism.  The problem we ran into
was terribly rare and it was never root caused if there was just some
subtle bug or if certain versions of dw_mmc sometimes just dropped
interrupts (and the patch was never upstreamed), so possibly we don't
care.  ...but having the polling code there as a fallback seems like
it could have a benefit.
Ulf Hansson April 19, 2017, 10:48 a.m. UTC | #2
On 18 April 2017 at 23:43, Doug Anderson <dianders@google.com> wrote:
> Hi,
>
> On Tue, Apr 18, 2017 at 5:32 AM, Ulf Hansson <ulf.hansson@linaro.org> wrote:
>> [...]
>
> So one other subtle problem with the new approach is that you totally
> lose all of the polling logic in sdio_irq_thread().

The polling is still there, as I haven't removed the kthread in this series.

I was also thinking of the next step, which could move the polling
inside the work, simply by having it re-schedule itself.
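
As a rough sketch of that idea only (it assumes sdio_irq_work were turned
into a delayed_work and picks an arbitrary poll period; the real
implementation may well look different):

void sdio_irq_work(struct work_struct *work)
{
	struct mmc_host *host =
		container_of(work, struct mmc_host, sdio_irq_work.work);

	mmc_claim_host(host);
	__sdio_run_irqs(host);
	if (host->ops->ack_sdio_irq)
		host->ops->ack_sdio_irq(host);
	mmc_release_host(host);

	/* No real SDIO IRQ line? Then keep polling by re-arming the work. */
	if (!(host->caps & MMC_CAP_SDIO_IRQ))
		queue_delayed_work(system_freezable_wq, &host->sdio_irq_work,
				   msecs_to_jiffies(10));
}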

>
> ...so if I take your series and then comment out "cap-sdio-irq;" in
> the veyron "dtsi" then things stop working.  Right now dw_mmc only
> enables SDIO Interrupts if that bit is set and relies on polling
> otherwise.  Presumably there's not a _huge_ reason why you would need
> to make dw_mmc work without actual SDIO IRQ signaling, but the way the
> code is structured right now things will probably break for some users
> out there.

Did you actually test this, or was the conclusion theoretical?

I *was* actually thinking of the polling case and I think it should be
addressed, unless I am missing something of course.

More precisely, in patch2, I make sure MMC_CAP2_SDIO_IRQ_NOTHREAD
becomes set for dw_mmc, only *if* MMC_CAP_SDIO_IRQ is set. That means
the polling with the kthread is done for cases when MMC_CAP_SDIO_IRQ
is unset. Right!?
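
In other words, roughly this in the dw_mmc slot setup (illustrative
snippet only; the exact hunk is in patch 2):

if (mmc->caps & MMC_CAP_SDIO_IRQ)
	mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD;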

>
> One note is that I remember on exynos5250-snow that we needed to
> enable a hybrid interrupt/polling mechanism.  The problem we ran into
> was terribly rare and it was never root caused if there was just some
> subtle bug or if certain versions of dw_mmc sometimes just dropped
> interrupts (and the patch was never upstreamed), so possibly we don't
> care.  ...but having the polling code there as a fallback seems like
> it could have a benefit.

I see. To be clear, removing the polling is not my intent and isn't
what the series tries to do.

Kind regards
Uffe
Doug Anderson April 19, 2017, 7:29 p.m. UTC | #3
Hi,

On Wed, Apr 19, 2017 at 3:48 AM, Ulf Hansson <ulf.hansson@linaro.org> wrote:
> On 18 April 2017 at 23:43, Doug Anderson <dianders@google.com> wrote:
>> Hi,
>>
>> On Tue, Apr 18, 2017 at 5:32 AM, Ulf Hansson <ulf.hansson@linaro.org> wrote:
>>> [...]
>>
>> So one other subtle problem with the new approach is that you totally
>> lose all of the polling logic in sdio_irq_thread().
>
> The polling is still there, as I haven't removed the kthread in this series.

The code still exists, but it won't be called, right?

Oh!  Shoot, I see that you only enable the new code in dw_mmc when the
cap is set.  Hrm.

> I was also thinking of the next step, which could move the polling
> inside the work, simply by having it re-schedule itself.
>
>> ...so if I take your series and then comment out "cap-sdio-irq;" in
>> the veyron "dtsi" then things stop working.  Right now dw_mmc only
>> enables SDIO Interrupts if that bit is set and relies on polling
>> otherwise.  Presumably there's not a _huge_ reason why you would need
>> to make dw_mmc work without actual SDIO IRQ signaling, but the way the
>> code is structured right now things will probably break for some users
>> out there.
>
> Did you actually test this, or was the conclusion theoretical?

I did, but I had confirmation bias so upon the first sign of failure I
decided "I must be right--it doesn't work".  :(  Maybe something else
was causing problems.  Trying again now.

OK, let's see:

With "cap-sdio-irq" commented out but without your 3 patches:
=> Boot up, "ifconfig eth0 down; ping -c 20 8.8.8.8".  Seems OK.

With "cap-sdio-irq" commented out but _with_ your 3 patches:
=> Boot up, "ifconfig eth0 down; ping -c 20 8.8.8.8".  Seems OK.

So I guess the conclusion is that I missed the part about your patch
only enabling the new features if MMC_CAP_SDIO_IRQ.  Sorry.  :(
...and then I must have hit some other unrelated failure that I can't
reproduce now and assumed it was your patch's fault.

So basically I would say that I've lightly tested your code.  It's not
code I've stressed a ton, but it survived some basic tests anyway...
:)  The code also looks pretty sane to me.


> I *was* actually thinking of the polling case and I think it should be
> addressed, unless I am missing something of course.
>
> More precisely, in patch2, I make sure MMC_CAP2_SDIO_IRQ_NOTHREAD
> becomes set for dw_mmc, only *if* MMC_CAP_SDIO_IRQ is set. That means
> the polling with the kthread is done for cases when MMC_CAP_SDIO_IRQ
> is unset. Right!?

Yeah, looks right to me now that I have my glasses on.


>> One note is that I remember on exynos5250-snow that we needed to
>> enable a hybrid interrupt/polling mechanism.  The problem we ran into
>> was terribly rare and it was never root caused if there was just some
>> subtle bug or if certain versions of dw_mmc sometimes just dropped
>> interrupts (and the patch was never upstreamed), so possibly we don't
>> care.  ...but having the polling code there as a fallback seems like
>> it could have a benefit.
>
> I see. To be clear, removing the polling is not my intent and isn't
> what the series tries to do.

OK, makes sense.  Just figured I'd mention this in case you had future
plans around this code.  :)
Ulf Hansson April 20, 2017, 12:14 p.m. UTC | #4
[...]

>
> With "cap-sdio-irq" commented out but without your 3 patches:
> => Boot up, "ifconfig eth0 down; ping -c 20 8.8.8.8".  Seems OK.
>
> With "cap-sdio-irq" commented out but _with_ your 3 patches:
> => Boot up, "ifconfig eth0 down; ping -c 20 8.8.8.8".  Seems OK.
>
> So I guess the conclusion is that I missed the part about your patch
> only enabling the new features if MMC_CAP_SDIO_IRQ.  Sorry.  :(
> ...and then I must have hit some other unrelated failure that I can't
> reproduce now and assumed it was your patch's fault.
>
> So basically I would say that I've lightly tested your code.  It's not
> code I've stressed a ton, but it survived some basic tests anyway...
> :)  The code also looks pretty sane to me.

Thanks a lot for your feedback and for running a new round of tests.
This seems promising then!

When you have the time, it would be awesome if you could run yet another
round of tests with the new version of the series. I posted it yesterday
evening, my local time.

I would also be very interested to know if converting to the work
queue approach has any impact on throughput. Maybe you have some
simple test suite to also verify that?

[...]

Kind regards
Uffe
Doug Anderson April 28, 2017, 8:31 p.m. UTC | #5
Ulf

On Thu, Apr 20, 2017 at 5:14 AM, Ulf Hansson <ulf.hansson@linaro.org> wrote:
> [...]
>
>> [...]
>
> Thanks a lot for your feedback and for running a new round of tests.
> This seems promising then!
>
> When you have the time, it would be awesome if you could run yet another
> round of tests with the new version of the series. I posted it yesterday
> evening, my local time.

Sorry for taking so long to get to this.  I've done this now and
things seem to be working fine with your new series.  As with before,
I'm not doing really stressful testing here, but what I've done seems
OK.


> I would also be very interested to know if converting to the work
> queue approach has any impact on throughput. Maybe you have some
> simple test suite to also verify that?

I don't personally.  To really get something repeatable for WiFi, you
need things in a chamber and need a test server setup.

I did this quick test with the device sitting on my desk (with
whatever the closest WiFi access point was), though:

ifconfig eth0 down
for i in $(seq 5); do
  curl http://SomeBigFile > /dev/null
done


From this very simple test, the performance looks like a wash.

--

Before your changes:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 54.0M  100 54.0M    0     0  2753k      0  0:00:20  0:00:20 --:--:-- 2952k
100 54.0M  100 54.0M    0     0  1073k      0  0:00:51  0:00:51 --:--:--  902k
100 54.0M  100 54.0M    0     0  2494k      0  0:00:22  0:00:22 --:--:-- 3061k
100 54.0M  100 54.0M    0     0  2455k      0  0:00:22  0:00:22 --:--:-- 2890k
100 54.0M  100 54.0M    0     0  1934k      0  0:00:28  0:00:28 --:--:-- 2301k

After your changes:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 54.0M  100 54.0M    0     0  2376k      0  0:00:23  0:00:23 --:--:-- 2739k
100 54.0M  100 54.0M    0     0  2303k      0  0:00:24  0:00:24 --:--:-- 2698k
100 54.0M  100 54.0M    0     0  2512k      0  0:00:22  0:00:22 --:--:-- 2359k
100 54.0M  100 54.0M    0     0  1774k      0  0:00:31  0:00:31 --:--:-- 1742k
100 54.0M  100 54.0M    0     0  2124k      0  0:00:26  0:00:26 --:--:-- 2753k

--

It's possible that Brian Norris (CCed) might have an easy way to do a
better test?

-Doug

Patch

diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 3f8c85d..77058cb 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -30,6 +30,7 @@ 
 #include "host.h"
 #include "slot-gpio.h"
 #include "pwrseq.h"
+#include "sdio_ops.h"
 
 #define cls_dev_to_mmc_host(d)	container_of(d, struct mmc_host, class_dev)
 
@@ -379,6 +380,7 @@  struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 	spin_lock_init(&host->lock);
 	init_waitqueue_head(&host->wq);
 	INIT_DELAYED_WORK(&host->detect, mmc_rescan);
+	INIT_WORK(&host->sdio_irq_work, sdio_irq_work);
 	setup_timer(&host->retune_timer, mmc_retune_timer, (unsigned long)host);
 
 	/*
diff --git a/drivers/mmc/core/sdio_irq.c b/drivers/mmc/core/sdio_irq.c
index 6d4b720..1b6006d 100644
--- a/drivers/mmc/core/sdio_irq.c
+++ b/drivers/mmc/core/sdio_irq.c
@@ -92,15 +92,43 @@  static int process_sdio_pending_irqs(struct mmc_host *host)
 	return ret;
 }
 
-void sdio_run_irqs(struct mmc_host *host)
+static void __sdio_run_irqs(struct mmc_host *host)
 {
-	mmc_claim_host(host);
 	host->sdio_irq_pending = true;
 	process_sdio_pending_irqs(host);
+}
+
+void sdio_run_irqs(struct mmc_host *host)
+{
+	mmc_claim_host(host);
+	__sdio_run_irqs(host);
 	mmc_release_host(host);
 }
 EXPORT_SYMBOL_GPL(sdio_run_irqs);
 
+void sdio_irq_work(struct work_struct *work)
+{
+	struct mmc_host *host =
+		container_of(work, struct mmc_host, sdio_irq_work);
+
+	mmc_claim_host(host);
+	__sdio_run_irqs(host);
+	if (host->ops->ack_sdio_irq)
+		host->ops->ack_sdio_irq(host);
+	mmc_release_host(host);
+}
+
+void sdio_signal_irq(struct mmc_host *host)
+{
+	/*
+	 * The system_freezable_wq helps us to avoid processing IRQs while being
+	 * system PM suspended. Instead these IRQs becomes deferred and managed
+	 * when userspace is unfrozen.
+	 */
+	queue_work(system_freezable_wq, &host->sdio_irq_work);
+}
+EXPORT_SYMBOL_GPL(sdio_signal_irq);
+
 static int sdio_irq_thread(void *_host)
 {
 	struct mmc_host *host = _host;
diff --git a/drivers/mmc/core/sdio_ops.h b/drivers/mmc/core/sdio_ops.h
index bed8a83..836e405 100644
--- a/drivers/mmc/core/sdio_ops.h
+++ b/drivers/mmc/core/sdio_ops.h
@@ -17,6 +17,7 @@ 
 
 struct mmc_host;
 struct mmc_card;
+struct work_struct;
 
 int mmc_send_io_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr);
 int mmc_io_rw_direct(struct mmc_card *card, int write, unsigned fn,
@@ -25,6 +26,7 @@  int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
 	unsigned addr, int incr_addr, u8 *buf, unsigned blocks, unsigned blksz);
 int sdio_reset(struct mmc_host *host);
 unsigned int mmc_align_data_size(struct mmc_card *card, unsigned int sz);
+void sdio_irq_work(struct work_struct *work);
 
 static inline bool mmc_is_io_op(u32 opcode)
 {
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 21385ac..f03df539 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -130,6 +130,7 @@  struct mmc_host_ops {
 	int	(*get_cd)(struct mmc_host *host);
 
 	void	(*enable_sdio_irq)(struct mmc_host *host, int enable);
+	void	(*ack_sdio_irq)(struct mmc_host *host);
 
 	/* optional callback for HC quirks */
 	void	(*init_card)(struct mmc_host *host, struct mmc_card *card);
@@ -358,6 +359,7 @@  struct mmc_host {
 
 	unsigned int		sdio_irqs;
 	struct task_struct	*sdio_irq_thread;
+	struct work_struct	sdio_irq_work;
 	bool			sdio_irq_pending;
 	atomic_t		sdio_irq_thread_abort;
 
@@ -428,6 +430,7 @@  static inline void mmc_signal_sdio_irq(struct mmc_host *host)
 }
 
 void sdio_run_irqs(struct mmc_host *host);
+void sdio_signal_irq(struct mmc_host *host);
 
 #ifdef CONFIG_REGULATOR
 int mmc_regulator_get_ocrmask(struct regulator *supply);