[tip/core/rcu,13/15] rcu: Introduce for_each_rcu_flavor() and use it

Message ID: 1339794370-28119-13-git-send-email-paulmck@linux.vnet.ibm.com
State: New

Commit Message

Paul E. McKenney June 15, 2012, 9:06 p.m.
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The arrival of TREE_PREEMPT_RCU some years back included some ugly
code involving either #ifdef or #ifdef'ed wrapper functions to iterate
over all non-SRCU flavors of RCU.  This commit therefore introduces
a for_each_rcu_flavor() iterator over the rcu_state structures for each
flavor of RCU to clean up a bit of the ugliness.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcutree.c        |   53 +++++++++++++---------
 kernel/rcutree.h        |   12 ++---
 kernel/rcutree_plugin.h |  116 -----------------------------------------------
 3 files changed, 36 insertions(+), 145 deletions(-)

Comments

Josh Triplett June 15, 2012, 11:52 p.m. | #1
On Fri, Jun 15, 2012 at 02:06:08PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> 
> The arrival of TREE_PREEMPT_RCU some years back included some ugly
> code involving either #ifdef or #ifdef'ed wrapper functions to iterate
> over all non-SRCU flavors of RCU.  This commit therefore introduces
> a for_each_rcu_flavor() iterator over the rcu_state structures for each
> flavor of RCU to clean up a bit of the ugliness.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Great cleanup!

A few comments below, though.

>  kernel/rcutree.c        |   53 +++++++++++++---------
>  kernel/rcutree.h        |   12 ++---
>  kernel/rcutree_plugin.h |  116 -----------------------------------------------
>  3 files changed, 36 insertions(+), 145 deletions(-)

Awesome diffstat.

> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index bd4e41c..75ad92a 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -84,6 +84,7 @@ struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh, call_rcu_bh);
>  DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
>  
>  static struct rcu_state *rcu_state;
> +LIST_HEAD(rcu_struct_flavors);

Does any means exist to turn this into a constant array known at compile
time rather than a runtime linked list?  Having this as a compile-time
constant may allow the compiler to unroll for_each_rcu_flavor and
potentially inline the calls inside it.

> @@ -2539,9 +2548,10 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp, int preemptible)
>  
>  static void __cpuinit rcu_prepare_cpu(int cpu)
>  {
> -	rcu_init_percpu_data(cpu, &rcu_sched_state, 0);
> -	rcu_init_percpu_data(cpu, &rcu_bh_state, 0);
> -	rcu_preempt_init_percpu_data(cpu);
> +	struct rcu_state *rsp;
> +
> +	for_each_rcu_flavor(rsp)
> +		rcu_init_percpu_data(cpu, rsp, 0);

This results in passing 0 as the "preemptible" parameter of
rcu_init_percpu_data, which seems wrong if the preemptible parameter has
any meaning at all. :)

> @@ -2577,18 +2588,15 @@ static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
>  		 * touch any data without introducing corruption. We send the
>  		 * dying CPU's callbacks to an arbitrarily chosen online CPU.
>  		 */
> -		rcu_cleanup_dying_cpu(&rcu_bh_state);
> -		rcu_cleanup_dying_cpu(&rcu_sched_state);
> -		rcu_preempt_cleanup_dying_cpu();
> -		rcu_cleanup_after_idle(cpu);
> +		for_each_rcu_flavor(rsp)
> +			rcu_cleanup_dying_cpu(rsp);

Why did rcu_cleanup_after_idle go away here?

- Josh Triplett
Paul E. McKenney June 16, 2012, 1:01 a.m. | #2
On Fri, Jun 15, 2012 at 04:52:40PM -0700, Josh Triplett wrote:
> On Fri, Jun 15, 2012 at 02:06:08PM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > 
> > The arrival of TREE_PREEMPT_RCU some years back included some ugly
> > code involving either #ifdef or #ifdef'ed wrapper functions to iterate
> > over all non-SRCU flavors of RCU.  This commit therefore introduces
> > a for_each_rcu_flavor() iterator over the rcu_state structures for each
> > flavor of RCU to clean up a bit of the ugliness.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> Great cleanup!
> 
> A few comments below, though.
> 
> >  kernel/rcutree.c        |   53 +++++++++++++---------
> >  kernel/rcutree.h        |   12 ++---
> >  kernel/rcutree_plugin.h |  116 -----------------------------------------------
> >  3 files changed, 36 insertions(+), 145 deletions(-)
> 
> Awesome diffstat.

;-)

> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index bd4e41c..75ad92a 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -84,6 +84,7 @@ struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh, call_rcu_bh);
> >  DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
> >  
> >  static struct rcu_state *rcu_state;
> > +LIST_HEAD(rcu_struct_flavors);
> 
> Does any means exist to turn this into a constant array known at compile
> time rather than a runtime linked list?  Having this as a compile-time
> constant may allow the compiler to unroll for_each_rcu_flavor and
> potentially inline the calls inside it.

I could do that, but none of the traversals is anywhere near performance
critical, and all the ways I can think of to do this are uglier than
the list.
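The self-registering list pattern Paul is defending can be sketched in miniature. Below is a userspace stand-in with a hand-rolled intrusive list and a `container_of`-style walk in place of the kernel's list.h and `list_for_each_entry()`; the names mirror the patch but everything here is illustrative, not kernel code:

```c
/* Minimal userspace sketch of the self-registering flavor list.
 * A hand-rolled intrusive list stands in for the kernel's list.h;
 * names mirror the patch but are illustrative only. */
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

struct rcu_state {
	const char *name;
	struct list_head flavors;	/* links this state onto the global list */
};

static struct list_head rcu_struct_flavors =
	LIST_HEAD_INIT(rcu_struct_flavors);

/* container_of-style walk, as list_for_each_entry() would do. */
#define for_each_rcu_flavor(rsp) \
	for ((rsp) = (struct rcu_state *)((char *)rcu_struct_flavors.next - \
					  offsetof(struct rcu_state, flavors)); \
	     &(rsp)->flavors != &rcu_struct_flavors; \
	     (rsp) = (struct rcu_state *)((char *)(rsp)->flavors.next - \
					 offsetof(struct rcu_state, flavors)))

/* Each flavor registers itself at init time, as rcu_init_one()
 * does with list_add() in the patch. */
static void register_flavor(struct rcu_state *rsp)
{
	list_add_tail(&rsp->flavors, &rcu_struct_flavors);
}

static struct rcu_state rcu_sched_state = { .name = "rcu_sched" };
static struct rcu_state rcu_bh_state   = { .name = "rcu_bh" };

static int count_flavors(void)
{
	struct rcu_state *rsp;
	int n = 0;

	for_each_rcu_flavor(rsp)
		n++;
	return n;
}
```

The point of the design is visible in `register_flavor()`: adding a new flavor requires no edit to any central table, which is the "automated response" Paul cites below.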

> > @@ -2539,9 +2548,10 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp, int preemptible)
> >  
> >  static void __cpuinit rcu_prepare_cpu(int cpu)
> >  {
> > -	rcu_init_percpu_data(cpu, &rcu_sched_state, 0);
> > -	rcu_init_percpu_data(cpu, &rcu_bh_state, 0);
> > -	rcu_preempt_init_percpu_data(cpu);
> > +	struct rcu_state *rsp;
> > +
> > +	for_each_rcu_flavor(rsp)
> > +		rcu_init_percpu_data(cpu, rsp, 0);
> 
> This results in passing 0 as the "preemptible" parameter of
> rcu_init_percpu_data, which seems wrong if the preemptible parameter has
> any meaning at all. :)

Good catch!  Hmmm...  Probably best to move this to the rcu_state
initialization.
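One hypothetical way to "move this to the rcu_state initialization", sketched here purely for illustration (this is not the actual follow-up patch): record preemptibility in the rcu_state structure itself, so the per-CPU init path reads it rather than taking it as a parameter:

```c
/* Hypothetical sketch: fold the "preemptible" flag into rcu_state
 * so for_each_rcu_flavor() callers need no per-call argument.
 * Userspace stand-in types; not the actual follow-up patch. */
struct rcu_state {
	const char *name;
	int preemptible;	/* set once when the flavor is defined */
};

static struct rcu_state rcu_sched_state   = { "rcu_sched",   0 };
static struct rcu_state rcu_preempt_state = { "rcu_preempt", 1 };

/* rcu_init_percpu_data() would then consult rsp->preemptible
 * instead of its third parameter. */
static int flavor_is_preemptible(const struct rcu_state *rsp)
{
	return rsp->preemptible;
}
```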

> > @@ -2577,18 +2588,15 @@ static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
> >  		 * touch any data without introducing corruption. We send the
> >  		 * dying CPU's callbacks to an arbitrarily chosen online CPU.
> >  		 */
> > -		rcu_cleanup_dying_cpu(&rcu_bh_state);
> > -		rcu_cleanup_dying_cpu(&rcu_sched_state);
> > -		rcu_preempt_cleanup_dying_cpu();
> > -		rcu_cleanup_after_idle(cpu);
> > +		for_each_rcu_flavor(rsp)
> > +			rcu_cleanup_dying_cpu(rsp);
> 
> Why did rcu_cleanup_after_idle go away here?

Because I fat-fingered it.  Thank you very much for spotting this.  It
would have been nasty to find otherwise.

							Thanx, Paul
Josh Triplett June 16, 2012, 5:35 a.m. | #3
On Fri, Jun 15, 2012 at 06:01:49PM -0700, Paul E. McKenney wrote:
> On Fri, Jun 15, 2012 at 04:52:40PM -0700, Josh Triplett wrote:
> > On Fri, Jun 15, 2012 at 02:06:08PM -0700, Paul E. McKenney wrote:
> > > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > > index bd4e41c..75ad92a 100644
> > > --- a/kernel/rcutree.c
> > > +++ b/kernel/rcutree.c
> > > @@ -84,6 +84,7 @@ struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh, call_rcu_bh);
> > >  DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
> > >  
> > >  static struct rcu_state *rcu_state;
> > > +LIST_HEAD(rcu_struct_flavors);
> > 
> > Does any means exist to turn this into a constant array known at compile
> > time rather than a runtime linked list?  Having this as a compile-time
> > constant may allow the compiler to unroll for_each_rcu_flavor and
> > potentially inline the calls inside it.
> 
> I could do that, but none of the traversals is anywhere near performance
> critical, and all the ways I can think of to do this are uglier than
> the list.

All of the struct rcu_state instances exist at compile time, so you can
just create an array of pointers to them:

static struct rcu_state *const rcu_struct_flavors[] = {
    &rcu_sched_state,
    &rcu_bh_state,
#ifdef CONFIG_TREE_PREEMPT_RCU
    &rcu_preempt_state,
#endif
};

Then just define for_each_rcu_flavor to iterate over that compile-time
constant array.  Any reason that wouldn't work?

- Josh Triplett
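Josh's array-based alternative can be completed into a minimal userspace sketch (stand-in types, illustrative names). Note the extra index variable the iteration macro needs, one small way the list-based `list_for_each_entry()` form reads more cleanly:

```c
/* Sketch of the compile-time-constant array alternative.
 * Userspace stand-in types; names are illustrative only. */
#include <stddef.h>

struct rcu_state {
	const char *name;
};

static struct rcu_state rcu_sched_state = { "rcu_sched" };
static struct rcu_state rcu_bh_state    = { "rcu_bh" };

static struct rcu_state *const rcu_struct_flavors[] = {
	&rcu_sched_state,
	&rcu_bh_state,
	/* &rcu_preempt_state would be listed under CONFIG_TREE_PREEMPT_RCU */
};

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Index-based iterator over the constant array; the comma operator
 * assigns rsp before each iteration's body runs. */
#define for_each_rcu_flavor(i, rsp) \
	for ((i) = 0; \
	     (i) < ARRAY_SIZE(rcu_struct_flavors) && \
		 ((rsp) = rcu_struct_flavors[(i)], 1); \
	     (i)++)

static int count_flavors(void)
{
	struct rcu_state *rsp;
	size_t i;
	int n = 0;

	for_each_rcu_flavor(i, rsp)
		n++;
	return n;
}
```

Because the array is a compile-time constant, the compiler can in principle unroll this loop, which is the optimization opportunity Josh raises; the trade-off Paul raises in his reply is that the table must be kept in sync by hand.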
Paul E. McKenney June 16, 2012, 6:36 a.m. | #4
On Fri, Jun 15, 2012 at 10:35:00PM -0700, Josh Triplett wrote:
> On Fri, Jun 15, 2012 at 06:01:49PM -0700, Paul E. McKenney wrote:
> > On Fri, Jun 15, 2012 at 04:52:40PM -0700, Josh Triplett wrote:
> > > On Fri, Jun 15, 2012 at 02:06:08PM -0700, Paul E. McKenney wrote:
> > > > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > > > index bd4e41c..75ad92a 100644
> > > > --- a/kernel/rcutree.c
> > > > +++ b/kernel/rcutree.c
> > > > @@ -84,6 +84,7 @@ struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh, call_rcu_bh);
> > > >  DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
> > > >  
> > > >  static struct rcu_state *rcu_state;
> > > > +LIST_HEAD(rcu_struct_flavors);
> > > 
> > > Does any means exist to turn this into a constant array known at compile
> > > time rather than a runtime linked list?  Having this as a compile-time
> > > constant may allow the compiler to unroll for_each_rcu_flavor and
> > > potentially inline the calls inside it.
> > 
> > I could do that, but none of the traversals is anywhere near performance
> > critical, and all the ways I can think of to do this are uglier than
> > the list.
> 
> All of the struct rcu_state instances exist at compile time, so you can
> just create an array of pointers to them:
> 
> static struct rcu_state *const rcu_struct_flavors[] = {
>     &rcu_sched_state,
>     &rcu_bh_state,
> #ifdef CONFIG_TREE_PREEMPT_RCU
>     &rcu_preempt_state,
> #endif
> };
> 
> Then just define for_each_rcu_flavor to iterate over that compile-time
> constant array.  Any reason that wouldn't work?

It could work, but I like the automated response of the current system.
Your array would add one more thing that would need to be manually
kept consistent.  Now, if any of the traversals were on a fastpath,
that would be different.

							Thanx, Paul

Patch

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index bd4e41c..75ad92a 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -84,6 +84,7 @@  struct rcu_state rcu_bh_state = RCU_STATE_INITIALIZER(rcu_bh, call_rcu_bh);
 DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);
 
 static struct rcu_state *rcu_state;
+LIST_HEAD(rcu_struct_flavors);
 
 /* Increase (but not decrease) the CONFIG_RCU_FANOUT_LEAF at boot time. */
 static int rcu_fanout_leaf = CONFIG_RCU_FANOUT_LEAF;
@@ -859,9 +860,10 @@  static int rcu_panic(struct notifier_block *this, unsigned long ev, void *ptr)
  */
 void rcu_cpu_stall_reset(void)
 {
-	rcu_sched_state.jiffies_stall = jiffies + ULONG_MAX / 2;
-	rcu_bh_state.jiffies_stall = jiffies + ULONG_MAX / 2;
-	rcu_preempt_stall_reset();
+	struct rcu_state *rsp;
+
+	for_each_rcu_flavor(rsp)
+		rsp->jiffies_stall = jiffies + ULONG_MAX / 2;
 }
 
 static struct notifier_block rcu_panic_block = {
@@ -1824,10 +1826,11 @@  __rcu_process_callbacks(struct rcu_state *rsp)
  */
 static void rcu_process_callbacks(struct softirq_action *unused)
 {
+	struct rcu_state *rsp;
+
 	trace_rcu_utilization("Start RCU core");
-	__rcu_process_callbacks(&rcu_sched_state);
-	__rcu_process_callbacks(&rcu_bh_state);
-	rcu_preempt_process_callbacks();
+	for_each_rcu_flavor(rsp)
+		__rcu_process_callbacks(rsp);
 	trace_rcu_utilization("End RCU core");
 }
 
@@ -2238,9 +2241,12 @@  static int __rcu_pending(struct rcu_state *rsp, struct rcu_data *rdp)
  */
 static int rcu_pending(int cpu)
 {
-	return __rcu_pending(&rcu_sched_state, &per_cpu(rcu_sched_data, cpu)) ||
-	       __rcu_pending(&rcu_bh_state, &per_cpu(rcu_bh_data, cpu)) ||
-	       rcu_preempt_pending(cpu);
+	struct rcu_state *rsp;
+
+	for_each_rcu_flavor(rsp)
+		if (__rcu_pending(rsp, per_cpu_ptr(rsp->rda, cpu)))
+			return 1;
+	return 0;
 }
 
 /*
@@ -2250,10 +2256,13 @@  static int rcu_pending(int cpu)
  */
 static int rcu_cpu_has_callbacks(int cpu)
 {
+	struct rcu_state *rsp;
+
 	/* RCU callbacks either ready or pending? */
-	return per_cpu(rcu_sched_data, cpu).nxtlist ||
-	       per_cpu(rcu_bh_data, cpu).nxtlist ||
-	       rcu_preempt_cpu_has_callbacks(cpu);
+	for_each_rcu_flavor(rsp)
+		if (per_cpu_ptr(rsp->rda, cpu)->nxtlist)
+			return 1;
+	return 0;
 }
 
 /*
@@ -2539,9 +2548,10 @@  rcu_init_percpu_data(int cpu, struct rcu_state *rsp, int preemptible)
 
 static void __cpuinit rcu_prepare_cpu(int cpu)
 {
-	rcu_init_percpu_data(cpu, &rcu_sched_state, 0);
-	rcu_init_percpu_data(cpu, &rcu_bh_state, 0);
-	rcu_preempt_init_percpu_data(cpu);
+	struct rcu_state *rsp;
+
+	for_each_rcu_flavor(rsp)
+		rcu_init_percpu_data(cpu, rsp, 0);
 }
 
 /*
@@ -2553,6 +2563,7 @@  static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
 	long cpu = (long)hcpu;
 	struct rcu_data *rdp = per_cpu_ptr(rcu_state->rda, cpu);
 	struct rcu_node *rnp = rdp->mynode;
+	struct rcu_state *rsp;
 
 	trace_rcu_utilization("Start CPU hotplug");
 	switch (action) {
@@ -2577,18 +2588,15 @@  static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
 		 * touch any data without introducing corruption. We send the
 		 * dying CPU's callbacks to an arbitrarily chosen online CPU.
 		 */
-		rcu_cleanup_dying_cpu(&rcu_bh_state);
-		rcu_cleanup_dying_cpu(&rcu_sched_state);
-		rcu_preempt_cleanup_dying_cpu();
-		rcu_cleanup_after_idle(cpu);
+		for_each_rcu_flavor(rsp)
+			rcu_cleanup_dying_cpu(rsp);
 		break;
 	case CPU_DEAD:
 	case CPU_DEAD_FROZEN:
 	case CPU_UP_CANCELED:
 	case CPU_UP_CANCELED_FROZEN:
-		rcu_cleanup_dead_cpu(cpu, &rcu_bh_state);
-		rcu_cleanup_dead_cpu(cpu, &rcu_sched_state);
-		rcu_preempt_cleanup_dead_cpu(cpu);
+		for_each_rcu_flavor(rsp)
+			rcu_cleanup_dead_cpu(cpu, rsp);
 		break;
 	default:
 		break;
@@ -2705,6 +2713,7 @@  static void __init rcu_init_one(struct rcu_state *rsp,
 		per_cpu_ptr(rsp->rda, i)->mynode = rnp;
 		rcu_boot_init_percpu_data(i, rsp);
 	}
+	list_add(&rsp->flavors, &rcu_struct_flavors);
 }
 
 /*
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index a294f7f..138fb33 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -408,8 +408,13 @@  struct rcu_state {
 	unsigned long gp_max;			/* Maximum GP duration in */
 						/*  jiffies. */
 	char *name;				/* Name of structure. */
+	struct list_head flavors;		/* List of RCU flavors. */
 };
 
+extern struct list_head rcu_struct_flavors;
+#define for_each_rcu_flavor(rsp) \
+	list_for_each_entry((rsp), &rcu_struct_flavors, flavors)
+
 /* Return values for rcu_preempt_offline_tasks(). */
 
 #define RCU_OFL_TASKS_NORM_GP	0x1		/* Tasks blocking normal */
@@ -451,25 +456,18 @@  static void rcu_stop_cpu_kthread(int cpu);
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 static void rcu_print_detail_task_stall(struct rcu_state *rsp);
 static int rcu_print_task_stall(struct rcu_node *rnp);
-static void rcu_preempt_stall_reset(void);
 static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
 #ifdef CONFIG_HOTPLUG_CPU
 static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
 				     struct rcu_node *rnp,
 				     struct rcu_data *rdp);
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
-static void rcu_preempt_cleanup_dead_cpu(int cpu);
 static void rcu_preempt_check_callbacks(int cpu);
-static void rcu_preempt_process_callbacks(void);
 void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
 #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU)
 static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
 			       bool wake);
 #endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */
-static int rcu_preempt_pending(int cpu);
-static int rcu_preempt_cpu_has_callbacks(int cpu);
-static void __cpuinit rcu_preempt_init_percpu_data(int cpu);
-static void rcu_preempt_cleanup_dying_cpu(void);
 static void __init __rcu_init_preempt(void);
 static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
 static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 686cb55..41ce563 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -545,16 +545,6 @@  static int rcu_print_task_stall(struct rcu_node *rnp)
 }
 
 /*
- * Suppress preemptible RCU's CPU stall warnings by pushing the
- * time of the next stall-warning message comfortably far into the
- * future.
- */
-static void rcu_preempt_stall_reset(void)
-{
-	rcu_preempt_state.jiffies_stall = jiffies + ULONG_MAX / 2;
-}
-
-/*
  * Check that the list of blocked tasks for the newly completed grace
  * period is in fact empty.  It is a serious bug to complete a grace
  * period that still has RCU readers blocked!  This function must be
@@ -655,14 +645,6 @@  static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
 /*
- * Do CPU-offline processing for preemptible RCU.
- */
-static void rcu_preempt_cleanup_dead_cpu(int cpu)
-{
-	rcu_cleanup_dead_cpu(cpu, &rcu_preempt_state);
-}
-
-/*
  * Check for a quiescent state from the current CPU.  When a task blocks,
  * the task is recorded in the corresponding CPU's rcu_node structure,
  * which is checked elsewhere.
@@ -682,14 +664,6 @@  static void rcu_preempt_check_callbacks(int cpu)
 		t->rcu_read_unlock_special |= RCU_READ_UNLOCK_NEED_QS;
 }
 
-/*
- * Process callbacks for preemptible RCU.
- */
-static void rcu_preempt_process_callbacks(void)
-{
-	__rcu_process_callbacks(&rcu_preempt_state);
-}
-
 #ifdef CONFIG_RCU_BOOST
 
 static void rcu_preempt_do_callbacks(void)
@@ -921,24 +895,6 @@  mb_ret:
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
 
-/*
- * Check to see if there is any immediate preemptible-RCU-related work
- * to be done.
- */
-static int rcu_preempt_pending(int cpu)
-{
-	return __rcu_pending(&rcu_preempt_state,
-			     &per_cpu(rcu_preempt_data, cpu));
-}
-
-/*
- * Does preemptible RCU have callbacks on this CPU?
- */
-static int rcu_preempt_cpu_has_callbacks(int cpu)
-{
-	return !!per_cpu(rcu_preempt_data, cpu).nxtlist;
-}
-
 /**
  * rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.
  */
@@ -949,23 +905,6 @@  void rcu_barrier(void)
 EXPORT_SYMBOL_GPL(rcu_barrier);
 
 /*
- * Initialize preemptible RCU's per-CPU data.
- */
-static void __cpuinit rcu_preempt_init_percpu_data(int cpu)
-{
-	rcu_init_percpu_data(cpu, &rcu_preempt_state, 1);
-}
-
-/*
- * Move preemptible RCU's callbacks from dying CPU to other online CPU
- * and record a quiescent state.
- */
-static void rcu_preempt_cleanup_dying_cpu(void)
-{
-	rcu_cleanup_dying_cpu(&rcu_preempt_state);
-}
-
-/*
  * Initialize preemptible RCU's state structures.
  */
 static void __init __rcu_init_preempt(void)
@@ -1042,14 +981,6 @@  static int rcu_print_task_stall(struct rcu_node *rnp)
 }
 
 /*
- * Because preemptible RCU does not exist, there is no need to suppress
- * its CPU stall warnings.
- */
-static void rcu_preempt_stall_reset(void)
-{
-}
-
-/*
  * Because there is no preemptible RCU, there can be no readers blocked,
  * so there is no need to check for blocked tasks.  So check only for
  * bogus qsmask values.
@@ -1077,14 +1008,6 @@  static int rcu_preempt_offline_tasks(struct rcu_state *rsp,
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
 /*
- * Because preemptible RCU does not exist, it never needs CPU-offline
- * processing.
- */
-static void rcu_preempt_cleanup_dead_cpu(int cpu)
-{
-}
-
-/*
  * Because preemptible RCU does not exist, it never has any callbacks
  * to check.
  */
@@ -1093,14 +1016,6 @@  static void rcu_preempt_check_callbacks(int cpu)
 }
 
 /*
- * Because preemptible RCU does not exist, it never has any callbacks
- * to process.
- */
-static void rcu_preempt_process_callbacks(void)
-{
-}
-
-/*
  * Queue an RCU callback for lazy invocation after a grace period.
  * This will likely be later named something like "call_rcu_lazy()",
  * but this change will require some way of tagging the lazy RCU
@@ -1141,22 +1056,6 @@  static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
 /*
- * Because preemptible RCU does not exist, it never has any work to do.
- */
-static int rcu_preempt_pending(int cpu)
-{
-	return 0;
-}
-
-/*
- * Because preemptible RCU does not exist, it never has callbacks
- */
-static int rcu_preempt_cpu_has_callbacks(int cpu)
-{
-	return 0;
-}
-
-/*
  * Because preemptible RCU does not exist, rcu_barrier() is just
  * another name for rcu_barrier_sched().
  */
@@ -1167,21 +1066,6 @@  void rcu_barrier(void)
 EXPORT_SYMBOL_GPL(rcu_barrier);
 
 /*
- * Because preemptible RCU does not exist, there is no per-CPU
- * data to initialize.
- */
-static void __cpuinit rcu_preempt_init_percpu_data(int cpu)
-{
-}
-
-/*
- * Because there is no preemptible RCU, there is no cleanup to do.
- */
-static void rcu_preempt_cleanup_dying_cpu(void)
-{
-}
-
-/*
  * Because preemptible RCU does not exist, it need not be initialized.
  */
 static void __init __rcu_init_preempt(void)