
rcu: shift by 1UL rather than 1 to fix sign extension error

Message ID 20161214105506.GA17982@leverpostej
State New

Commit Message

Mark Rutland Dec. 14, 2016, 10:55 a.m. UTC
On Wed, Dec 14, 2016 at 09:40:02AM +0800, Boqun Feng wrote:
> On Wed, Dec 14, 2016 at 08:47:55AM +0800, Boqun Feng wrote:
> > On Tue, Dec 13, 2016 at 10:36:47AM -0800, Paul E. McKenney wrote:
> > > On Wed, Dec 14, 2016 at 02:09:27AM +0800, Boqun Feng wrote:
> > > > #define for_each_leaf_node_cpu(rnp, mask, cpu) \
> > > >         for ((cpu) = (rnp)->grplo + find_first_bit(mask, MASK_BITS(mask)); \
> > > >             (cpu) >= (rnp)->grplo && (cpu) <= (rnp)->grphi; \
> > > >             (cpu) = (rnp)->grplo + find_next_bit(mask, ..., \
> > > >                     leaf_node_cpu_bit(rnp, cpu) + 1)) \
> > > >                 if (!cpu_possible(cpu)) \
> > > >                         continue; \
> > > >                 else
> > >
> > > What is the purpose of the cpu_possible() check?
> > >
> >
> > To filter out CPUs in range [grplo, grphi] but not in cpu_possible_mask.
>
> Hmm.. if rcu_cpu_starting(cpu) is never called with an "impossible" cpu,
> IOW, ->qsmask and ->expmask never mask "impossible" cpus, then this is
> just an over-careful check.
>
> I think I probably will remove this check eventually, let me audit the
> code a little more for safety ;-)

FWIW, back when I wrote bc75e99983df1efd ("rcu: Correctly handle sparse
possible cpus"), the only places I saw accesses to (percpu) data for
!possible cpus were the places I fixed up. IIRC I tested with a version
of the patch below.

That won't catch erroneous non-percpu accesses, but it covers part of
the problem, at least. ;)

Thanks,
Mark.

---->8----
From fcabcee9ce080073496c736c49e2378a0907c656 Mon Sep 17 00:00:00 2001
From: Mark Rutland <mark.rutland@arm.com>
Date: Mon, 16 May 2016 16:08:29 +0100
Subject: [PATCH] percpu: add possible cpu validation

Recently, the RCU tree code was seen to access per-cpu data for CPUs not
in cpu_possible_mask, for which a per-cpu region (and offset) had not
been allocated. Often this went unnoticed because valid (but erroneous)
pointers were generated, and the accesses hit some other data.

This patch adds a new CONFIG_DEBUG_PER_CPU option. When selected,
per_cpu_ptr will verify that the provided CPU id is possible, and hence
has a backing percpu area. When the CPU is not possible, we WARN, though
the access otherwise proceeds as normal, matching the
!CONFIG_DEBUG_PER_CPU behaviour.

As the option can adversely affect performance, it is disabled by
default.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>

---
 include/linux/percpu-defs.h | 16 ++++++++++++++--
 lib/Kconfig.debug           | 10 ++++++++++
 2 files changed, 24 insertions(+), 2 deletions(-)

-- 
1.9.1
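
To make the failure mode concrete, here is a minimal, hypothetical sketch
(not part of the patch; the per-cpu variable and the function are invented
for illustration) of the kind of loop that, with a sparse cpu_possible_mask,
silently computed plausible but bogus per-cpu pointers, and that would trip
the WARN_ON_ONCE() added by the patch once CONFIG_DEBUG_PER_CPU is enabled:

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/printk.h>

/* Hypothetical per-cpu counter, purely for illustration. */
static DEFINE_PER_CPU(unsigned long, demo_counter);

static void demo_sum_counters(void)
{
	unsigned long sum = 0;
	int cpu;

	/*
	 * With a sparse cpu_possible_mask, not every id below nr_cpu_ids
	 * is possible.  Without the patch, per_cpu_ptr() on such an id
	 * still yields a valid-looking pointer into unrelated data; with
	 * CONFIG_DEBUG_PER_CPU=y it now warns once.  The correct loop is
	 * of course for_each_possible_cpu(cpu).
	 */
	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
		sum += *per_cpu_ptr(&demo_counter, cpu);

	pr_info("demo sum: %lu\n", sum);
}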

Comments

Boqun Feng Dec. 14, 2016, 1:03 p.m. UTC | #1
On Wed, Dec 14, 2016 at 10:55:07AM +0000, Mark Rutland wrote:
> On Wed, Dec 14, 2016 at 09:40:02AM +0800, Boqun Feng wrote:
> > On Wed, Dec 14, 2016 at 08:47:55AM +0800, Boqun Feng wrote:
> > > On Tue, Dec 13, 2016 at 10:36:47AM -0800, Paul E. McKenney wrote:
> > > > On Wed, Dec 14, 2016 at 02:09:27AM +0800, Boqun Feng wrote:
> > > > > #define for_each_leaf_node_cpu(rnp, mask, cpu) \
> > > > >         for ((cpu) = (rnp)->grplo + find_first_bit(mask, MASK_BITS(mask)); \
> > > > >             (cpu) >= (rnp)->grplo && (cpu) <= (rnp)->grphi; \
> > > > >             (cpu) = (rnp)->grplo + find_next_bit(mask, ..., \
> > > > >                     leaf_node_cpu_bit(rnp, cpu) + 1)) \
> > > > >                 if (!cpu_possible(cpu)) \
> > > > >                         continue; \
> > > > >                 else
> > > >
> > > > What is the purpose of the cpu_possible() check?
> > > >
> > >
> > > To filter out CPUs in range [grplo, grphi] but not in cpu_possible_mask.
> >
> > Hmm.. if rcu_cpu_starting(cpu) is never called with an "impossible" cpu,
> > IOW, ->qsmask and ->expmask never mask "impossible" cpus, then this is
> > just an over-careful check.
> >
> > I think I probably will remove this check eventually, let me audit the
> > code a little more for safety ;-)
> 
> FWIW, back when I wrote bc75e99983df1efd ("rcu: Correctly handle sparse
> possible cpus"), the only places I saw accesses to (percpu) data for
> !possible cpus were the places I fixed up. IIRC I tested with a version
> of the patch below.
> 
> That won't catch erroneous non-percpu accesses, but it covers part of
> the problem, at least. ;)

Good point ;-) I will also add a WARN_ON_ONCE() in the
for_each_leaf_node_cpu() macro to help me catch anyone trying to play an
interesting game with those masks.

Regards,
Boqun
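
For illustration only, a rough sketch of what for_each_leaf_node_cpu() could
look like with the cpu_possible() filter replaced by such a WARN_ON_ONCE().
This is a reconstruction from the pseudocode quoted above, not an actual
patch; MASK_BITS() and leaf_node_cpu_bit() are the helpers assumed there,
and the mask is assumed to be passed as a bitmap pointer:

/*
 * Sketch: iterate over the CPUs of the leaf rcu_node @rnp whose bits are
 * set in @mask, warning once (and skipping the CPU) if a set bit maps to
 * a CPU that is not in cpu_possible_mask.
 */
#define for_each_leaf_node_cpu(rnp, mask, cpu) \
	for ((cpu) = (rnp)->grplo + find_first_bit(mask, MASK_BITS(mask)); \
	     (cpu) >= (rnp)->grplo && (cpu) <= (rnp)->grphi; \
	     (cpu) = (rnp)->grplo + find_next_bit(mask, MASK_BITS(mask), \
						  leaf_node_cpu_bit(rnp, cpu) + 1)) \
		if (WARN_ON_ONCE(!cpu_possible(cpu))) \
			continue; \
		else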

> Thanks,
> Mark.
> 
> ---->8----
> From fcabcee9ce080073496c736c49e2378a0907c656 Mon Sep 17 00:00:00 2001
> From: Mark Rutland <mark.rutland@arm.com>
> Date: Mon, 16 May 2016 16:08:29 +0100
> Subject: [PATCH] percpu: add possible cpu validation
> 
> Recently, the RCU tree code was seen to access per-cpu data for CPUs not
> in cpu_possible_mask, for which a per-cpu region (and offset) had not
> been allocated. Often this went unnoticed because valid (but erroneous)
> pointers were generated, and the accesses hit some other data.
> 
> This patch adds a new CONFIG_DEBUG_PER_CPU option. When selected,
> per_cpu_ptr will verify that the provided CPU id is possible, and hence
> has a backing percpu area. When the CPU is not possible, we WARN, though
> the access otherwise proceeds as normal, matching the
> !CONFIG_DEBUG_PER_CPU behaviour.
> 
> As the option can adversely affect performance, it is disabled by
> default.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> ---
>  include/linux/percpu-defs.h | 16 ++++++++++++++--
>  lib/Kconfig.debug           | 10 ++++++++++
>  2 files changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
> index 8f16299..1525352 100644
> --- a/include/linux/percpu-defs.h
> +++ b/include/linux/percpu-defs.h
> @@ -207,6 +207,16 @@
>  	(void)__vpp_verify;						\
>  } while (0)
>  
> +/*
> + * __verify_pcpu_cpu() verifies that @cpu is possible, and hence has a valid
> + * percpu region.
> + */
> +#ifdef CONFIG_DEBUG_PER_CPU
> +#define __verify_pcpu_cpu(cpu)	WARN_ON_ONCE(!cpu_possible(cpu))
> +#else
> +#define __verify_pcpu_cpu(cpu) ({ (void)(cpu); })
> +#endif
> +
>  #ifdef CONFIG_SMP
>  
>  /*
> @@ -219,8 +229,10 @@
>  
>  #define per_cpu_ptr(ptr, cpu)						\
>  ({									\
> +	int ____cpu = (cpu);						\
> +	__verify_pcpu_cpu(____cpu);					\
>  	__verify_pcpu_ptr(ptr);						\
> -	SHIFT_PERCPU_PTR((ptr), per_cpu_offset((cpu)));			\
> +	SHIFT_PERCPU_PTR((ptr), per_cpu_offset((____cpu)));		\
>  })
>  
>  #define raw_cpu_ptr(ptr)						\
> @@ -247,7 +259,7 @@
>  	(typeof(*(__p)) __kernel __force *)(__p);			\
>  })
>  
> -#define per_cpu_ptr(ptr, cpu)	({ (void)(cpu); VERIFY_PERCPU_PTR(ptr); })
> +#define per_cpu_ptr(ptr, cpu)	({ __verify_pcpu_cpu(cpu); VERIFY_PERCPU_PTR(ptr); })
>  #define raw_cpu_ptr(ptr)	per_cpu_ptr(ptr, 0)
>  #define this_cpu_ptr(ptr)	raw_cpu_ptr(ptr)
>  
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index a6c8db1..14678d2 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -665,6 +665,16 @@ config DEBUG_PER_CPU_MAPS
>  
>  	  Say N if unsure.
>  
> +config DEBUG_PER_CPU
> +	bool "Debug access to percpu objects"
> +	depends on DEBUG_KERNEL
> +	help
> +	  Say Y to verify that addresses are only generated for valid percpu
> +	  objects (i.e. for a possible CPU). This adds additional code and
> +	  decreases performance.
> +
> +	  Say N if unsure.
> +
>  config DEBUG_HIGHMEM
>  	bool "Highmem debugging"
>  	depends on DEBUG_KERNEL && HIGHMEM
> -- 
> 1.9.1
> 

Patch

diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h
index 8f16299..1525352 100644
--- a/include/linux/percpu-defs.h
+++ b/include/linux/percpu-defs.h
@@ -207,6 +207,16 @@ 
 	(void)__vpp_verify;						\
 } while (0)
 
+/*
+ * __verify_pcpu_cpu() verifies that @cpu is possible, and hence has a valid
+ * percpu region.
+ */
+#ifdef CONFIG_DEBUG_PER_CPU
+#define __verify_pcpu_cpu(cpu)	WARN_ON_ONCE(!cpu_possible(cpu))
+#else
+#define __verify_pcpu_cpu(cpu) ({ (void)(cpu); })
+#endif
+
 #ifdef CONFIG_SMP
 
 /*
@@ -219,8 +229,10 @@ 
 
 #define per_cpu_ptr(ptr, cpu)						\
 ({									\
+	int ____cpu = (cpu);						\
+	__verify_pcpu_cpu(____cpu);					\
 	__verify_pcpu_ptr(ptr);						\
-	SHIFT_PERCPU_PTR((ptr), per_cpu_offset((cpu)));			\
+	SHIFT_PERCPU_PTR((ptr), per_cpu_offset((____cpu)));		\
 })
 
 #define raw_cpu_ptr(ptr)						\
@@ -247,7 +259,7 @@ 
 	(typeof(*(__p)) __kernel __force *)(__p);			\
 })
 
-#define per_cpu_ptr(ptr, cpu)	({ (void)(cpu); VERIFY_PERCPU_PTR(ptr); })
+#define per_cpu_ptr(ptr, cpu)	({ __verify_pcpu_cpu(cpu); VERIFY_PERCPU_PTR(ptr); })
 #define raw_cpu_ptr(ptr)	per_cpu_ptr(ptr, 0)
 #define this_cpu_ptr(ptr)	raw_cpu_ptr(ptr)
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a6c8db1..14678d2 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -665,6 +665,16 @@  config DEBUG_PER_CPU_MAPS
 
 	  Say N if unsure.
 
+config DEBUG_PER_CPU
+	bool "Debug access to percpu objects"
+	depends on DEBUG_KERNEL
+	help
+	  Say Y to verify that addresses are only generated for valid percpu
+	  objects (i.e. for a possible CPU). This adds additional code and
+	  decreases performance.
+
+	  Say N if unsure.
+
 config DEBUG_HIGHMEM
 	bool "Highmem debugging"
 	depends on DEBUG_KERNEL && HIGHMEM