[ARM64,v4.4,V3,44/44] arm64: futex: Mask __user pointers prior to dereference

Message ID 965d727955b4a45ac1f12e67c6a433110ef94871.1567077734.git.viresh.kumar@linaro.org
State New
Series
  • V4.4 backport of arm64 Spectre patches

Commit Message

Viresh Kumar Aug. 29, 2019, 11:34 a.m.
From: Will Deacon <will.deacon@arm.com>


commit 91b2d3442f6a44dce875670d702af22737ad5eff upstream.

The arm64 futex code has some explicit dereferencing of user pointers
when performing atomic operations in response to a futex command. This
patch uses masking to limit any speculative futex operations to within
the user address space.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

---
 arch/arm64/include/asm/futex.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

-- 
2.21.0.rc0.269.g1a574e7a288b

Comments

Mark Rutland Aug. 30, 2019, 9:42 a.m. | #1
On Thu, Aug 29, 2019 at 05:04:29PM +0530, Viresh Kumar wrote:
> From: Will Deacon <will.deacon@arm.com>
> 
> commit 91b2d3442f6a44dce875670d702af22737ad5eff upstream.
> 
> The arm64 futex code has some explicit dereferencing of user pointers
> when performing atomic operations in response to a futex command. This
> patch uses masking to limit any speculative futex operations to within
> the user address space.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>


This would have made more sense immediately following patch 11, as in
mainline and the v4.9 backport. Having things applied in the same order
makes it much easier to compare and verify.

Regardless:

Reviewed-by: Mark Rutland <mark.rutland@arm.com> [v4.4 backport]


Mark.

> ---
>  arch/arm64/include/asm/futex.h | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> index 34d4d2e2f561..8ab6e83cb629 100644
> --- a/arch/arm64/include/asm/futex.h
> +++ b/arch/arm64/include/asm/futex.h
> @@ -53,9 +53,10 @@
>  	: "memory")
>  
>  static inline int
> -arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
> +arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
>  {
>  	int oldval = 0, ret, tmp;
> +	u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
>  
>  	pagefault_disable();
>  
> @@ -93,15 +94,17 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
>  }
>  
>  static inline int
> -futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
> +futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
>  			      u32 oldval, u32 newval)
>  {
>  	int ret = 0;
>  	u32 val, tmp;
> +	u32 __user *uaddr;
>  
> -	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
> +	if (!access_ok(VERIFY_WRITE, _uaddr, sizeof(u32)))
>  		return -EFAULT;
>  
> +	uaddr = __uaccess_mask_ptr(_uaddr);
>  	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
>  ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
>  "	prfm	pstl1strm, %2\n"
> -- 
> 2.21.0.rc0.269.g1a574e7a288b
Viresh Kumar Sept. 3, 2019, 5:15 a.m. | #2
On 30-08-19, 10:42, Mark Rutland wrote:
> On Thu, Aug 29, 2019 at 05:04:29PM +0530, Viresh Kumar wrote:
> > From: Will Deacon <will.deacon@arm.com>
> > 
> > commit 91b2d3442f6a44dce875670d702af22737ad5eff upstream.
> > 
> > The arm64 futex code has some explicit dereferencing of user pointers
> > when performing atomic operations in response to a futex command. This
> > patch uses masking to limit any speculative futex operations to within
> > the user address space.
> > 
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> 
> This would have made more sense immediately following patch 11, as in
> mainline and the v4.9 backport. Having things applied in the same order
> makes it much easier to compare and verify.

Ahh, indeed the order was that way in the arm64/kpti branch, but not
in the stable branch, where it got applied at the end, and I followed
that order :(

Fixed the ordering now. Thanks.

-- 
viresh

Patch

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 34d4d2e2f561..8ab6e83cb629 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -53,9 +53,10 @@
 	: "memory")
 
 static inline int
-arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
+arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
 {
 	int oldval = 0, ret, tmp;
+	u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
 
 	pagefault_disable();
 
@@ -93,15 +94,17 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
 }
 
 static inline int
-futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
 			      u32 oldval, u32 newval)
 {
 	int ret = 0;
 	u32 val, tmp;
+	u32 __user *uaddr;
 
-	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
+	if (!access_ok(VERIFY_WRITE, _uaddr, sizeof(u32)))
 		return -EFAULT;
 
+	uaddr = __uaccess_mask_ptr(_uaddr);
 	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
 ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
 "	prfm	pstl1strm, %2\n"