[2/3,RFC,paul/rcu/srcu] srcu: only increase the upper bit for srcu_read_lock()

Message ID 4F44B57C.3020104@cn.fujitsu.com
State New

Commit Message

Lai Jiangshan Feb. 22, 2012, 9:29 a.m. UTC
From de49bb517e6367776e2226b931346ab6c798b122 Mon Sep 17 00:00:00 2001
From: Lai Jiangshan <laijs@cn.fujitsu.com>
Date: Wed, 22 Feb 2012 10:41:59 +0800
Subject: [PATCH 2/3 RFC paul/rcu/srcu] srcu: only increase the upper bit for srcu_read_lock()

The algorithm and its smp_mb()s ensure that at most one
srcu_read_lock() occurs between the flip and the recheck on any
given CPU.  Incrementing the upper bit only in srcu_read_lock() is
therefore sufficient to guarantee that a single lock/unlock pair
changes the CPU's counter.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 include/linux/srcu.h |    2 +-
 kernel/srcu.c        |   11 +++++------
 2 files changed, 6 insertions(+), 7 deletions(-)

Comments

Peter Zijlstra Feb. 22, 2012, 9:50 a.m. UTC | #1
On Wed, 2012-02-22 at 17:29 +0800, Lai Jiangshan wrote:
> +       ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;

That just looks funny..
Paul E. McKenney Feb. 22, 2012, 9:20 p.m. UTC | #2
On Wed, Feb 22, 2012 at 05:29:32PM +0800, Lai Jiangshan wrote:
> >From de49bb517e6367776e2226b931346ab6c798b122 Mon Sep 17 00:00:00 2001
> From: Lai Jiangshan <laijs@cn.fujitsu.com>
> Date: Wed, 22 Feb 2012 10:41:59 +0800
> Subject: [PATCH 2/3 RFC paul/rcu/srcu] srcu: only increase the upper bit for srcu_read_lock()
> 
> The algorithm and its smp_mb()s ensure that at most one
> srcu_read_lock() occurs between the flip and the recheck on any
> given CPU.  Incrementing the upper bit only in srcu_read_lock() is
> therefore sufficient to guarantee that a single lock/unlock pair
> changes the CPU's counter.

Very nice!  Also makes it clearer that no combination of operations
including exactly one increment can possibly wrap back to the same value,
because the upper bit would be different.

In deference to Peter Zijlstra's sensibilities, I changed the:

	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;

to:

	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) -= 1;

I did manage to resist the temptation to instead say:

	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) -= +1;

;-)

Queued, thank you!

							Thanx, Paul

> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> ---
>  include/linux/srcu.h |    2 +-
>  kernel/srcu.c        |   11 +++++------
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index a478c8e..5b49d41 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
> @@ -35,7 +35,7 @@ struct srcu_struct_array {
>  };
> 
>  /* Bit definitions for field ->c above and ->snap below. */
> -#define SRCU_USAGE_BITS		2
> +#define SRCU_USAGE_BITS		1
>  #define SRCU_REF_MASK		(ULONG_MAX >> SRCU_USAGE_BITS)
>  #define SRCU_USAGE_COUNT	(SRCU_REF_MASK + 1)
> 
> diff --git a/kernel/srcu.c b/kernel/srcu.c
> index 17e95bc..a51ac48 100644
> --- a/kernel/srcu.c
> +++ b/kernel/srcu.c
> @@ -138,10 +138,10 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
> 
>  	/*
>  	 * Now, we check the ->snap array that srcu_readers_active_idx()
> -	 * filled in from the per-CPU counter values.  Since both
> -	 * __srcu_read_lock() and __srcu_read_unlock() increment the
> -	 * upper bits of the per-CPU counter, an increment/decrement
> -	 * pair will change the value of the counter.  Since there is
> +	 * filled in from the per-CPU counter values. Since
> +	 * __srcu_read_lock() increments the upper bits of
> +	 * the per-CPU counter, an increment/decrement pair will
> +	 * change the value of the counter.  Since there is
>  	 * only one possible increment, the only way to wrap the counter
>  	 * is to have a huge number of counter decrements, which requires
>  	 * a huge number of tasks and huge SRCU read-side critical-section
> @@ -234,8 +234,7 @@ void __srcu_read_unlock(struct srcu_struct *sp, int idx)
>  {
>  	preempt_disable();
>  	smp_mb(); /* C */  /* Avoid leaking the critical section. */
> -	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) +=
> -		SRCU_USAGE_COUNT - 1;
> +	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;
>  	preempt_enable();
>  }
>  EXPORT_SYMBOL_GPL(__srcu_read_unlock);
> -- 
> 1.7.4.4
>
Paul E. McKenney Feb. 22, 2012, 9:26 p.m. UTC | #3
On Wed, Feb 22, 2012 at 01:20:56PM -0800, Paul E. McKenney wrote:
> On Wed, Feb 22, 2012 at 05:29:32PM +0800, Lai Jiangshan wrote:
> > >From de49bb517e6367776e2226b931346ab6c798b122 Mon Sep 17 00:00:00 2001
> > From: Lai Jiangshan <laijs@cn.fujitsu.com>
> > Date: Wed, 22 Feb 2012 10:41:59 +0800
> > Subject: [PATCH 2/3 RFC paul/rcu/srcu] srcu: only increase the upper bit for srcu_read_lock()
> > 
> > The algorithm and its smp_mb()s ensure that at most one
> > srcu_read_lock() occurs between the flip and the recheck on any
> > given CPU.  Incrementing the upper bit only in srcu_read_lock() is
> > therefore sufficient to guarantee that a single lock/unlock pair
> > changes the CPU's counter.
> 
> Very nice!  Also makes it clearer that no combination of operations
> including exactly one increment can possibly wrap back to the same value,
> because the upper bit would be different.

Make that without underflow -- one increment and 2^31+1 decrements would
in fact return the counter to its original value, but that would require
cramming more than two billion tasks into a 32-bit address space, which
I believe to be sufficiently unlikely.  (Famous last words...)

							Thanx, Paul

> In deference to Peter Zijlstra's sensibilities, I changed the:
> 
> 	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;
> 
> to:
> 
> 	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) -= 1;
> 
> I did manage to resist the temptation to instead say:
> 
> 	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) -= +1;
> 
> ;-)
> 
> Queued, thank you!
> 
> 							Thanx, Paul
> 
> > Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> > ---
> >  include/linux/srcu.h |    2 +-
> >  kernel/srcu.c        |   11 +++++------
> >  2 files changed, 6 insertions(+), 7 deletions(-)
> > 
> > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > index a478c8e..5b49d41 100644
> > --- a/include/linux/srcu.h
> > +++ b/include/linux/srcu.h
> > @@ -35,7 +35,7 @@ struct srcu_struct_array {
> >  };
> > 
> >  /* Bit definitions for field ->c above and ->snap below. */
> > -#define SRCU_USAGE_BITS		2
> > +#define SRCU_USAGE_BITS		1
> >  #define SRCU_REF_MASK		(ULONG_MAX >> SRCU_USAGE_BITS)
> >  #define SRCU_USAGE_COUNT	(SRCU_REF_MASK + 1)
> > 
> > diff --git a/kernel/srcu.c b/kernel/srcu.c
> > index 17e95bc..a51ac48 100644
> > --- a/kernel/srcu.c
> > +++ b/kernel/srcu.c
> > @@ -138,10 +138,10 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
> > 
> >  	/*
> >  	 * Now, we check the ->snap array that srcu_readers_active_idx()
> > -	 * filled in from the per-CPU counter values.  Since both
> > -	 * __srcu_read_lock() and __srcu_read_unlock() increment the
> > -	 * upper bits of the per-CPU counter, an increment/decrement
> > -	 * pair will change the value of the counter.  Since there is
> > +	 * filled in from the per-CPU counter values. Since
> > +	 * __srcu_read_lock() increments the upper bits of
> > +	 * the per-CPU counter, an increment/decrement pair will
> > +	 * change the value of the counter.  Since there is
> >  	 * only one possible increment, the only way to wrap the counter
> >  	 * is to have a huge number of counter decrements, which requires
> >  	 * a huge number of tasks and huge SRCU read-side critical-section
> > @@ -234,8 +234,7 @@ void __srcu_read_unlock(struct srcu_struct *sp, int idx)
> >  {
> >  	preempt_disable();
> >  	smp_mb(); /* C */  /* Avoid leaking the critical section. */
> > -	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) +=
> > -		SRCU_USAGE_COUNT - 1;
> > +	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;
> >  	preempt_enable();
> >  }
> >  EXPORT_SYMBOL_GPL(__srcu_read_unlock);
> > -- 
> > 1.7.4.4
> >
Steven Rostedt Feb. 22, 2012, 9:39 p.m. UTC | #4
On Wed, 2012-02-22 at 13:26 -0800, Paul E. McKenney wrote:

> Make that without underflow -- one increment and 2^31+1 decrements would
> in fact return the counter to its original value, but that would require
> cramming more than two billion tasks into a 32-bit address space, which
> I believe to be sufficiently unlikely.  (Famous last words...)

I'll just expect to see you as President of the United States, counting
your money you won in the lottery, and being awarded a Nobel Prize for
curing cancer.

-- Steve
Paul E. McKenney Feb. 23, 2012, 1:01 a.m. UTC | #5
On Wed, Feb 22, 2012 at 04:39:15PM -0500, Steven Rostedt wrote:
> On Wed, 2012-02-22 at 13:26 -0800, Paul E. McKenney wrote:
> 
> > Make that without underflow -- one increment and 2^31+1 decrements would
> > in fact return the counter to its original value, but that would require
> > cramming more than two billion tasks into a 32-bit address space, which
> > I believe to be sufficiently unlikely.  (Famous last words...)
> 
> I'll just expect to see you as President of the United States, counting
> your money you won in the lottery, and being awarded a Nobel Prize for
> curing cancer.

Those possibilities also seem to me to be sufficiently unlikely.  ;-)

							Thanx, Paul

Patch

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index a478c8e..5b49d41 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -35,7 +35,7 @@ struct srcu_struct_array {
 };
 
 /* Bit definitions for field ->c above and ->snap below. */
-#define SRCU_USAGE_BITS		2
+#define SRCU_USAGE_BITS		1
 #define SRCU_REF_MASK		(ULONG_MAX >> SRCU_USAGE_BITS)
 #define SRCU_USAGE_COUNT	(SRCU_REF_MASK + 1)
 
diff --git a/kernel/srcu.c b/kernel/srcu.c
index 17e95bc..a51ac48 100644
--- a/kernel/srcu.c
+++ b/kernel/srcu.c
@@ -138,10 +138,10 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *sp, int idx)
 
 	/*
 	 * Now, we check the ->snap array that srcu_readers_active_idx()
-	 * filled in from the per-CPU counter values.  Since both
-	 * __srcu_read_lock() and __srcu_read_unlock() increment the
-	 * upper bits of the per-CPU counter, an increment/decrement
-	 * pair will change the value of the counter.  Since there is
+	 * filled in from the per-CPU counter values. Since
+	 * __srcu_read_lock() increments the upper bits of
+	 * the per-CPU counter, an increment/decrement pair will
+	 * change the value of the counter.  Since there is
 	 * only one possible increment, the only way to wrap the counter
 	 * is to have a huge number of counter decrements, which requires
 	 * a huge number of tasks and huge SRCU read-side critical-section
@@ -234,8 +234,7 @@ void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 {
 	preempt_disable();
 	smp_mb(); /* C */  /* Avoid leaking the critical section. */
-	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) +=
-		SRCU_USAGE_COUNT - 1;
+	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += -1;
 	preempt_enable();
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);