[RFC,tip/core/rcu,21/41] rcu: Inform RCU of irq_exit() activity

Message ID 1328125319-5205-21-git-send-email-paulmck@linux.vnet.ibm.com
State Superseded

Commit Message

Paul E. McKenney Feb. 1, 2012, 7:41 p.m. UTC
From: "Paul E. McKenney" <paul.mckenney@linaro.org>

This is a port to TINY_RCU of Peter Zijlstra's mainline commit ec433f0c5
("softirq,rcu: Inform RCU of irq_exit() activity").

The rcu_read_unlock_special() function relies on in_irq() to exclude
scheduler activity from interrupt level.  This fails because irq_exit()
can invoke the scheduler after clearing the preempt_count() bits that
in_irq() uses to determine that it is at interrupt level.  This situation
can result in failures as follows:

     $task			IRQ		SoftIRQ

     rcu_read_lock()

     /* do stuff */

     <preempt> |= UNLOCK_BLOCKED

     rcu_read_unlock()
       --t->rcu_read_lock_nesting

    			irq_enter();
    			/* do stuff, don't use RCU */
    			irq_exit();
    			  sub_preempt_count(IRQ_EXIT_OFFSET);
    			  invoke_softirq()

    					ttwu();
    					  spin_lock_irq(&pi->lock)
    					  rcu_read_lock();
    					  /* do stuff */
    					  rcu_read_unlock();
    					    rcu_read_unlock_special()
    					      rcu_report_exp_rnp()
    					        ttwu()
    					          spin_lock_irq(&pi->lock) /* deadlock */

       rcu_read_unlock_special(t);

This can be triggered 'easily' when force_irqthreads is set, because
invoke_softirq() then immediately does a ttwu() of ksoftirqd/# instead of
running the pending softirqs in place, but even without that the scenario
above can happen.
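For context, the ordering inside irq_exit() that defeats the in_irq() test
looked roughly like this at the time (a simplified sketch, not the verbatim
kernel source):

     void irq_exit(void)
     {
     	/*
     	 * The hardirq bits are dropped from preempt_count() first,
     	 * so from this point on in_irq() returns false ...
     	 */
     	sub_preempt_count(IRQ_EXIT_OFFSET);

     	/*
     	 * ... yet softirq processing, which can wake tasks and thus
     	 * enter the scheduler, only runs afterwards.
     	 */
     	if (!in_interrupt() && local_softirq_pending())
     		invoke_softirq();
     }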

Cure this by also excluding softirqs from the rcu_read_unlock_special()
handler and ensuring the force_irqthreads ksoftirqd/# wakeup is done
from full softirq context.
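The mainline side of that cure (in the commit being ported) wraps the
ksoftirqd wakeup in softirq accounting so that in_serving_softirq() holds
across the ttwu(); roughly (a sketch, not the exact patch text):

     static inline void invoke_softirq(void)
     {
     	if (!force_irqthreads) {
     		__do_softirq();
     	} else {
     		/*
     		 * Mark this context as serving softirqs before the
     		 * wakeup, so rcu_read_unlock_special() sees it as such.
     		 */
     		__local_bh_disable((unsigned long)__builtin_return_address(0),
     				   SOFTIRQ_OFFSET);
     		wakeup_softirqd();
     		__local_bh_enable(SOFTIRQ_OFFSET);
     	}
     }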

[ Alternatively, delaying the ->rcu_read_lock_nesting decrement
  until after the special handling would make the thing more robust
  in the face of interrupts as well.  And there is a separate patch
  for that. ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcutiny_plugin.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

Comments

Josh Triplett Feb. 2, 2012, 2:30 a.m. UTC | #1
On Wed, Feb 01, 2012 at 11:41:39AM -0800, Paul E. McKenney wrote:
> [ Alternatively, delaying the ->rcu_read_lock_nesting decrement
>   until after the special handling would make the thing more robust
>   in the face of interrupts as well.  And there is a separate patch
>   for that. ]

Where does that separate patch live, and should it replace this one?

- Josh Triplett
Paul E. McKenney Feb. 2, 2012, 5:30 p.m. UTC | #2
On Wed, Feb 01, 2012 at 06:30:33PM -0800, Josh Triplett wrote:
> On Wed, Feb 01, 2012 at 11:41:39AM -0800, Paul E. McKenney wrote:
> > [ Alternatively, delaying the ->rcu_read_lock_nesting decrement
> >   until after the special handling would make the thing more robust
> >   in the face of interrupts as well.  And there is a separate patch
> >   for that. ]
> 
> Where does that separate patch live, and should it replace this one?

It is #18 in this series: "rcu: Protect __rcu_read_unlock() against
scheduler-using irq handlers".  Both are needed.  I will rework the
commit message appropriately.
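
Roughly, that patch has __rcu_read_unlock() park ->rcu_read_lock_nesting at
a large negative value while the special handling runs, so that any irq
handler entering an RCU read-side critical section in that window cannot
mistake itself for the outermost unlock (a sketch of the approach, not the
exact patch text):

     void __rcu_read_unlock(void)
     {
     	struct task_struct *t = current;

     	barrier();  /* decrement before reading ->rcu_read_unlock_special */
     	if (t->rcu_read_lock_nesting != 1) {
     		--t->rcu_read_lock_nesting;
     	} else {
     		/* Park nesting at INT_MIN across the special handling. */
     		t->rcu_read_lock_nesting = INT_MIN;
     		barrier();
     		if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
     			rcu_read_unlock_special(t);
     		barrier();
     		t->rcu_read_lock_nesting = 0;  /* now truly outside */
     	}
     }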

							Thanx, Paul

Patch

diff --git a/kernel/rcutiny_plugin.h b/kernel/rcutiny_plugin.h
index 3f6c07e..9d7d985 100644
--- a/kernel/rcutiny_plugin.h
+++ b/kernel/rcutiny_plugin.h
@@ -570,7 +570,7 @@  static noinline void rcu_read_unlock_special(struct task_struct *t)
 		rcu_preempt_cpu_qs();
 
 	/* Hardware IRQ handlers cannot block. */
-	if (in_irq()) {
+	if (in_irq() || in_serving_softirq()) {
 		local_irq_restore(flags);
 		return;
 	}