[RFCv2 4/4] bpf: inhibit speculated out-of-bounds pointers

Message ID: 20180105145750.53294-5-mark.rutland@arm.com
State: New
Series: API for inhibiting speculative arbitrary read primitives

Commit Message

Mark Rutland Jan. 5, 2018, 2:57 p.m. UTC
Note: this patch is an *example* use of the nospec API. It is understood
that this is incomplete, etc.

Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
memory accesses under a bounds check may be speculated even if the
bounds check fails, providing a primitive for building a side channel.

The eBPF map code has a number of such bounds-checked accesses in
map_lookup_elem implementations. This patch modifies these to use the
nospec helpers to inhibit such side channels.
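
For illustration, the pattern this series applies (helper semantics as
assumed from the earlier patches in this series: the result is
in-bounds or NULL, even under mis-speculation; a sketch, not the exact
helper implementation):

	if (unlikely(index >= max_entries))
		return NULL;

	/* may still be dereferenced under a mis-predicted branch: */
	elem = &base[index];

	/* forced in-bounds (or NULL) even under speculation: */
	elem = nospec_array_ptr(base, index, max_entries);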

The JITted lookup_elem implementations remain potentially vulnerable,
and are disabled (with JITted code falling back to the C
implementations).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/bpf/arraymap.c | 20 +++++++++++++-------
 kernel/bpf/cpumap.c   |  5 ++---
 kernel/bpf/devmap.c   |  3 ++-
 kernel/bpf/sockmap.c  |  3 ++-
 4 files changed, 19 insertions(+), 12 deletions(-)

-- 
2.11.0

Comments

Dan Williams Jan. 5, 2018, 4:38 p.m. UTC | #1
On Fri, Jan 5, 2018 at 6:57 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> Note: this patch is an *example* use of the nospec API. It is understood
> that this is incomplete, etc.
>
> Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
> memory accesses under a bounds check may be speculated even if the
> bounds check fails, providing a primitive for building a side channel.
>
> The eBPF map code has a number of such bounds-checked accesses in
> map_lookup_elem implementations. This patch modifies these to use the
> nospec helpers to inhibit such side channels.
>
> The JITted lookup_elem implementations remain potentially vulnerable,
> and are disabled (with JITted code falling back to the C
> implementations).

Do we still need this given this patch from the bpf folks?

    https://patchwork.ozlabs.org/patch/855911/
Mark Rutland Jan. 5, 2018, 4:48 p.m. UTC | #2
On Fri, Jan 05, 2018 at 08:38:43AM -0800, Dan Williams wrote:
> On Fri, Jan 5, 2018 at 6:57 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> > Note: this patch is an *example* use of the nospec API. It is understood
> > that this is incomplete, etc.
> >
> > Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
> > memory accesses under a bounds check may be speculated even if the
> > bounds check fails, providing a primitive for building a side channel.
> >
> > The eBPF map code has a number of such bounds-checked accesses in
> > map_lookup_elem implementations. This patch modifies these to use the
> > nospec helpers to inhibit such side channels.
> >
> > The JITted lookup_elem implementations remain potentially vulnerable,
> > and are disabled (with JITted code falling back to the C
> > implementations).
>
> Do we still need this given this patch from the bpf folks?
>
>     https://patchwork.ozlabs.org/patch/855911/

Probably not; it was just easier to update this example than to write
new ones.

I've started on the set of cases Elena reported. Most fall out quite
nicely, though in places where there's a lot of pointer arithmetic it's
somewhat more painful. I'll try to use those as examples in future,
unless someone beats me to implementing them. ;)

Thanks,
Mark.
Alexei Starovoitov Jan. 5, 2018, 5:04 p.m. UTC | #3
On Fri, Jan 05, 2018 at 02:57:50PM +0000, Mark Rutland wrote:
> Note: this patch is an *example* use of the nospec API. It is understood
> that this is incomplete, etc.
>
> Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
> memory accesses under a bounds check may be speculated even if the
> bounds check fails, providing a primitive for building a side channel.
>
> The eBPF map code has a number of such bounds-checked accesses in
> map_lookup_elem implementations. This patch modifies these to use the
> nospec helpers to inhibit such side channels.
>
> The JITted lookup_elem implementations remain potentially vulnerable,
> and are disabled (with JITted code falling back to the C
> implementations).
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> ---
>  kernel/bpf/arraymap.c | 20 +++++++++++++-------
>  kernel/bpf/cpumap.c   |  5 ++---
>  kernel/bpf/devmap.c   |  3 ++-
>  kernel/bpf/sockmap.c  |  3 ++-
>  4 files changed, 19 insertions(+), 12 deletions(-)

Mark, did you see my email with this patch yesterday?
https://patchwork.ozlabs.org/patch/855911/

btw your patch does not fix the variant 1 exploit.

The pre-embargo patches from Elena that add lfence in the bpf
interpreter and x64 JIT do not fix it either.

The exploit works via bpf_tail_call and not via map_lookup.
I'm trying to make both safer, with minimal run-time cost, with the
above patch.
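
The rough idea there is to clamp the index with an AND mask, sized to
the next power of two of max_entries, so that even a mis-predicted
bounds check cannot form an out-of-bounds address. A sketch (details
may differ from the actual patch):

	/* index_mask = roundup_pow_of_two(map->max_entries) - 1,
	 * computed once at map creation time.
	 */
	if (unlikely(index >= array->map.max_entries))
		return NULL;
	return array->value + array->elem_size * (index & array->index_mask);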

Also, as I tried to explain earlier, variant 1 relies on 64-bit
speculative address math in bpf_tail_call that was changed to 32-bit
math in October, so the risk is already close to zero.

If both x64 and arm folks can test the above patch, we will at least
be able to merge it and close one known hole in the tree. In parallel
we can work on adding nospec/osb primitives and sprinkling them all
over the kernel, but please do not use the bpf side as an 'example'.
It's unnecessary.

Thanks
Reshetova, Elena Jan. 8, 2018, 12:59 p.m. UTC | #4
> On Fri, Jan 05, 2018 at 02:57:50PM +0000, Mark Rutland wrote:
> > Note: this patch is an *example* use of the nospec API. It is understood
> > that this is incomplete, etc.
> >
> > Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
> > memory accesses under a bounds check may be speculated even if the
> > bounds check fails, providing a primitive for building a side channel.
> >
> > The eBPF map code has a number of such bounds-checked accesses in
> > map_lookup_elem implementations. This patch modifies these to use the
> > nospec helpers to inhibit such side channels.
> >
> > The JITted lookup_elem implementations remain potentially vulnerable,
> > and are disabled (with JITted code falling back to the C
> > implementations).
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > ---
> >  kernel/bpf/arraymap.c | 20 +++++++++++++-------
> >  kernel/bpf/cpumap.c   |  5 ++---
> >  kernel/bpf/devmap.c   |  3 ++-
> >  kernel/bpf/sockmap.c  |  3 ++-
> >  4 files changed, 19 insertions(+), 12 deletions(-)
>
> Mark, did you see my email with this patch yesterday?
> https://patchwork.ozlabs.org/patch/855911/
>
> btw your patch does not fix the variant 1 exploit.
>
> The pre-embargo patches from Elena that add lfence in the bpf
> interpreter and x64 JIT do not fix it either.
>
> The exploit works via bpf_tail_call and not via map_lookup.

Could you please clarify this part? The actual jump to the out-of-bounds
index is indeed made by bpf_tail_call, but the "speculation" bypassing
step happens when it does map_lookup_elem on the out-of-bounds index.
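
For reference, the interpreter's tail-call path does its own bounds
check and array load; paraphrased from kernel/bpf/core.c (a sketch, not
the exact code):

	/* JMP_TAIL_CALL, roughly: */
	struct bpf_array *array = container_of(map, struct bpf_array, map);

	if (unlikely(index >= array->map.max_entries))
		goto out;
	if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
		goto out;
	tail_call_cnt++;

	prog = READ_ONCE(array->ptrs[index]);	/* load that may be speculated */
	if (!prog)
		goto out;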


Best Regards,
Elena.

Patch

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 7c25426d3cf5..deaad334a100 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -117,15 +117,20 @@  static void *array_map_lookup_elem(struct bpf_map *map, void *key)
 {
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	u32 index = *(u32 *)key;
+	void *ptr, *high;
 
 	if (unlikely(index >= array->map.max_entries))
 		return NULL;
 
-	return array->value + array->elem_size * index;
+	ptr = array->value + array->elem_size * index;
+	high = array->value + array->elem_size * array->map.max_entries;
+
+	return nospec_ptr(ptr, array->value, high);
 }
 
 /* emit BPF instructions equivalent to C code of array_map_lookup_elem() */
-static u32 array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
+static u32 __maybe_unused array_map_gen_lookup(struct bpf_map *map,
+					       struct bpf_insn *insn_buf)
 {
 	struct bpf_insn *insn = insn_buf;
 	u32 elem_size = round_up(map->value_size, 8);
@@ -153,11 +158,14 @@  static void *percpu_array_map_lookup_elem(struct bpf_map *map, void *key)
 {
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	u32 index = *(u32 *)key;
+	void __percpu *pptr;
 
 	if (unlikely(index >= array->map.max_entries))
 		return NULL;
 
-	return this_cpu_ptr(array->pptrs[index]);
+	pptr = nospec_array_ptr(array->pptrs, index, array->map.max_entries);
+
+	return this_cpu_ptr(pptr);
 }
 
 int bpf_percpu_array_copy(struct bpf_map *map, void *key, void *value)
@@ -302,7 +310,6 @@  const struct bpf_map_ops array_map_ops = {
 	.map_lookup_elem = array_map_lookup_elem,
 	.map_update_elem = array_map_update_elem,
 	.map_delete_elem = array_map_delete_elem,
-	.map_gen_lookup = array_map_gen_lookup,
 };
 
 const struct bpf_map_ops percpu_array_map_ops = {
@@ -610,8 +617,8 @@  static void *array_of_map_lookup_elem(struct bpf_map *map, void *key)
 	return READ_ONCE(*inner_map);
 }
 
-static u32 array_of_map_gen_lookup(struct bpf_map *map,
-				   struct bpf_insn *insn_buf)
+static u32 __maybe_unused array_of_map_gen_lookup(struct bpf_map *map,
+						  struct bpf_insn *insn_buf)
 {
 	u32 elem_size = round_up(map->value_size, 8);
 	struct bpf_insn *insn = insn_buf;
@@ -644,5 +651,4 @@  const struct bpf_map_ops array_of_maps_map_ops = {
 	.map_fd_get_ptr = bpf_map_fd_get_ptr,
 	.map_fd_put_ptr = bpf_map_fd_put_ptr,
 	.map_fd_sys_lookup_elem = bpf_map_fd_sys_lookup_elem,
-	.map_gen_lookup = array_of_map_gen_lookup,
 };
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index ce5b669003b2..6769a0e30c8c 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -551,13 +551,12 @@  void cpu_map_free(struct bpf_map *map)
 struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key)
 {
 	struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map);
-	struct bpf_cpu_map_entry *rcpu;
 
 	if (key >= map->max_entries)
 		return NULL;
 
-	rcpu = READ_ONCE(cmap->cpu_map[key]);
-	return rcpu;
+	return READ_ONCE(*nospec_array_ptr(cmap->cpu_map, key,
+					   map->max_entries));
 }
 
 static void *cpu_map_lookup_elem(struct bpf_map *map, void *key)
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index ebdef54bf7df..5a1050d270a0 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -254,7 +254,8 @@  struct net_device  *__dev_map_lookup_elem(struct bpf_map *map, u32 key)
 	if (key >= map->max_entries)
 		return NULL;
 
-	dev = READ_ONCE(dtab->netdev_map[key]);
+	dev = READ_ONCE(*nospec_array_ptr(dtab->netdev_map, key,
+					  map->max_entries));
 	return dev ? dev->dev : NULL;
 }
 
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 5ee2e41893d9..e912de3cd4ce 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -630,7 +630,8 @@  struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
 	if (key >= map->max_entries)
 		return NULL;
 
-	return READ_ONCE(stab->sock_map[key]);
+	return READ_ONCE(*nospec_array_ptr(stab->sock_map, key,
+					   map->max_entries));
 }
 
 static int sock_map_delete_elem(struct bpf_map *map, void *key)