[tip/core/rcu,10/23] rcu: Allow RCU quiescent-state forcing to be preempted

Message ID 1346350718-30937-10-git-send-email-paulmck@linux.vnet.ibm.com
State Superseded

Commit Message

Paul E. McKenney Aug. 30, 2012, 6:18 p.m. UTC
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

RCU quiescent-state forcing is currently carried out without preemption
points, which can result in excessive latency spikes on large systems
(many hundreds or thousands of CPUs).  This patch therefore inserts
a voluntary preemption point into force_qs_rnp(), which should greatly
reduce the magnitude of these spikes.

Reported-by: Mike Galbraith <mgalbraith@suse.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcutree.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
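
For reference, a minimal, self-contained sketch of the general pattern this commit applies (not taken from the patch; scan_many_items(), item_lock, and NR_ITEMS are made up for illustration): a potentially long kernel loop calls cond_resched() once per iteration, with no locks held, so the scheduler can preempt it voluntarily instead of letting it run the whole scan at once.

    #include <linux/kernel.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>

    #define NR_ITEMS 4096

    static DEFINE_SPINLOCK(item_lock);
    static int items[NR_ITEMS];

    /* Walk a large data set, yielding between iterations if a reschedule is pending. */
    static void scan_many_items(void)
    {
    	int i;

    	for (i = 0; i < NR_ITEMS; i++) {
    		cond_resched();		/* voluntary preemption point, no locks held */
    		spin_lock(&item_lock);	/* keep each critical section short */
    		items[i]++;
    		spin_unlock(&item_lock);
    	}
    }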

Comments

Josh Triplett Sept. 2, 2012, 5:23 a.m. UTC | #1
On Thu, Aug 30, 2012 at 11:18:25AM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> RCU quiescent-state forcing is currently carried out without preemption
> points, which can result in excessive latency spikes on large systems
> (many hundreds or thousands of CPUs).  This patch therefore inserts
> a voluntary preemption point into force_qs_rnp(), which should greatly
> reduce the magnitude of these spikes.
> 
> Reported-by: Mike Galbraith <mgalbraith@suse.de>
> Reported-by: Dimitri Sivanich <sivanich@sgi.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Reviewed-by: Josh Triplett <josh@joshtriplett.org>

>  kernel/rcutree.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
> 
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index 79c2c28..cce73ff 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -1784,6 +1784,7 @@ static void force_qs_rnp(struct rcu_state *rsp, int (*f)(struct rcu_data *))
>  	struct rcu_node *rnp;
>  
>  	rcu_for_each_leaf_node(rsp, rnp) {
> +		cond_resched();
>  		mask = 0;
>  		raw_spin_lock_irqsave(&rnp->lock, flags);
>  		if (!rcu_gp_in_progress(rsp)) {
> -- 
> 1.7.8
>

Patch

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 79c2c28..cce73ff 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1784,6 +1784,7 @@ static void force_qs_rnp(struct rcu_state *rsp, int (*f)(struct rcu_data *))
 	struct rcu_node *rnp;
 
 	rcu_for_each_leaf_node(rsp, rnp) {
+		cond_resched();
 		mask = 0;
 		raw_spin_lock_irqsave(&rnp->lock, flags);
 		if (!rcu_gp_in_progress(rsp)) {
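
Note that the new cond_resched() sits at the top of the rcu_for_each_leaf_node() loop, before raw_spin_lock_irqsave() is taken, so no rcu_node lock is held at the preemption point. When no reschedule is pending the call is essentially a no-op, so the common-case cost is one check per leaf rcu_node per forcing pass.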