
[bpf-next,1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases

Message ID 160097310597.12106.6191783180902126213.stgit@john-Precision-5820-Tower
State New
Series [bpf-next,1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases

Commit Message

John Fastabend Sept. 24, 2020, 6:45 p.m. UTC
In the BPF_AND and BPF_OR ALU cases we have this pattern when the src and dst
tnums are constants:

 1 dst_reg->var_off = tnum_[op](dst_reg->var_off, src_reg.var_off)
 2 scalar32_min_max_[op]
 3       if (known) return
 4 scalar_min_max_[op]
 5       if (known)
 6          __mark_reg_known(dst_reg,
                   dst_reg->var_off.value [op] src_reg.var_off.value)

The result is that in (1) we calculate the var_off value and store it in the
dst_reg, and then in (6) we duplicate this logic, doing the op again on the
value.

The duplication comes from the tnum_[op] handlers, because they have
already done the value calculation. For example, this is tnum_and():

 struct tnum tnum_and(struct tnum a, struct tnum b)
 {
	u64 alpha, beta, v;

	alpha = a.value | a.mask;
	beta = b.value | b.mask;
	v = a.value & b.value;
	return TNUM(v, alpha & beta & ~v);
 }

So let's remove the redundant op calculation. It's confusing for readers
and unnecessary. It's also harmless, because those ops are idempotent:
r1 & r1 = r1 and r1 | r1 = r1.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 kernel/bpf/verifier.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

Comments

Alexei Starovoitov Sept. 25, 2020, 11:56 p.m. UTC | #1
On Thu, Sep 24, 2020 at 11:45 AM John Fastabend
<john.fastabend@gmail.com> wrote:
> In BPF_AND and BPF_OR alu cases we have this pattern when the src and dst
> tnum is a constant.
[...]
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>


Applied. Thanks for the follow up.
In the future please always cc bpf@vger for two reasons:
- to get proper 'Link:' integrated in git commit
- to get them into a new instance of
https://patchwork.kernel.org/project/bpf/list
  which we will start using soon to send automatic 'applied' emails.
John Fastabend Sept. 26, 2020, 4:36 a.m. UTC | #2
Alexei Starovoitov wrote:
> On Thu, Sep 24, 2020 at 11:45 AM John Fastabend
> <john.fastabend@gmail.com> wrote:
[...]
> Applied. Thanks for the follow up.
> In the future please always cc bpf@vger for two reasons:
> - to get proper 'Link:' integrated in git commit
> - to get them into a new instance of
> https://patchwork.kernel.org/project/bpf/list

+1

>   which we will start using soon to send automatic 'applied' emails.



Apologies, I updated some scripts and unfortunately a typo dropped a '-'
and cut bpf@vger off the CC list. I also just used the scripts to land
two more patches without bpf@vger; happy to resend with CC included
if folks want. Sorry for the extra work/noise.

Thanks.

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 15ab889b0a3f..33fcc18fdd39 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5818,8 +5818,7 @@  static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
 	u64 umax_val = src_reg->umax_value;
 
 	if (src_known && dst_known) {
-		__mark_reg_known(dst_reg, dst_reg->var_off.value &
-					  src_reg->var_off.value);
+		__mark_reg_known(dst_reg, dst_reg->var_off.value);
 		return;
 	}
 
@@ -5889,8 +5888,7 @@  static void scalar_min_max_or(struct bpf_reg_state *dst_reg,
 	u64 umin_val = src_reg->umin_value;
 
 	if (src_known && dst_known) {
-		__mark_reg_known(dst_reg, dst_reg->var_off.value |
-					  src_reg->var_off.value);
+		__mark_reg_known(dst_reg, dst_reg->var_off.value);
 		return;
 	}