diff mbox series

Allow non-wi <op> wi

Message ID 87vajwdqrk.fsf@linaro.org
State New
Series Allow non-wi <op> wi

Commit Message

Richard Sandiford Oct. 3, 2017, 6:34 p.m. UTC
This patch uses global rather than member operators for wide-int.h,
so that the first operand can be a non-wide-int type.

The patch also removes the and_not and or_not member functions.
It was already inconsistent to have member functions for these
two operations (one of which was never used) and not other wi::
ones like udiv.  After the operator change, we'd have the additional
inconsistency that "non-wi & wi" would work but "non-wi.and_not (wi)"
wouldn't.

Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu.
Also tested by comparing the testsuite assembly output on at least one
target per CPU directory.  OK to install?

Richard


2017-10-03  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
	* wide-int.h (WI_BINARY_OPERATOR_RESULT): New macro.
	(WI_BINARY_PREDICATE_RESULT): Likewise.
	(wi::binary_traits::operator_result): New type.
	(wi::binary_traits::predicate_result): Likewise.
	(generic_wide_int::operator~, unary generic_wide_int::operator-)
	(generic_wide_int::operator==, generic_wide_int::operator!=)
	(generic_wide_int::operator&, generic_wide_int::and_not)
	(generic_wide_int::operator|, generic_wide_int::or_not)
	(generic_wide_int::operator^, generic_wide_int::operator+)
	(binary generic_wide_int::operator-, generic_wide_int::operator*):
	Delete.
	(operator~, unary operator-, operator==, operator!=, operator&)
	(operator|, operator^, operator+, binary operator-, operator*): New
	functions.
	* expr.c (get_inner_reference): Use wi::bit_and_not.
	* fold-const.c (fold_binary_loc): Likewise.
	* ipa-prop.c (ipa_compute_jump_functions_for_edge): Likewise.
	* tree-ssa-ccp.c (get_value_from_alignment): Likewise.
	(bit_value_binop): Likewise.
	* tree-ssa-math-opts.c (find_bswap_or_nop_load): Likewise.
	* tree-vrp.c (zero_nonzero_bits_from_vr): Likewise.
	(extract_range_from_binary_expr_1): Likewise.
	(masked_increment): Likewise.
	(simplify_bit_ops_using_ranges): Likewise.

Comments

Richard Biener Oct. 5, 2017, 8:53 a.m. UTC | #1
On Tue, Oct 3, 2017 at 8:34 PM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> This patch uses global rather than member operators for wide-int.h,
> so that the first operand can be a non-wide-int type.

Not sure why we had the in-class ones.  If we had some good arguments
they'd still stand.  Do you remember?

> The patch also removes the and_not and or_not member functions.
> It was already inconsistent to have member functions for these
> two operations (one of which was never used) and not other wi::
> ones like udiv.  After the operator change, we'd have the additional
> inconsistency that "non-wi & wi" would work but "non-wi.and_not (wi)"
> wouldn't.
>
> Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu.
> Also tested by comparing the testsuite assembly output on at least one
> target per CPU directory.  OK to install?

Ok.

Thanks,
Richard.

Richard Sandiford Oct. 6, 2017, 9:35 a.m. UTC | #2
Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Oct 3, 2017 at 8:34 PM, Richard Sandiford
> <richard.sandiford@linaro.org> wrote:
>> This patch uses global rather than member operators for wide-int.h,
>> so that the first operand can be a non-wide-int type.
>
> Not sure why we had the in-class ones.  If we had some good arguments
> they'd still stand.  Do you remember?

Not really, sorry.  This might not have been discussed specifically.
It looks like Kenny and Mike's initial commit to the wide-int branch
had member operators, so it could just have been carried over by
default.  And using member operators in the initial commit might have
been influenced by double_int (which has them too), since at that time
wide_int was very much a direct replacement for double_int.

Thanks,
Richard
Richard Biener Oct. 6, 2017, 11:51 a.m. UTC | #3
On Fri, Oct 6, 2017 at 11:35 AM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Tue, Oct 3, 2017 at 8:34 PM, Richard Sandiford
>> <richard.sandiford@linaro.org> wrote:
>>> This patch uses global rather than member operators for wide-int.h,
>>> so that the first operand can be a non-wide-int type.
>>
>> Not sure why we had the in-class ones.  If we had some good arguments
>> they'd still stand.  Do you remember?
>
> Not really, sorry.  This might not have been discussed specifically.
> It looks like Kenny and Mike's initial commit to the wide-int branch
> had member operators, so it could just have been carried over by
> default.  And using member operators in the initial commit might have
> been influenced by double_int (which has them too), since at that time
> wide_int was very much a direct replacement for double_int.

Ah, yeah...

> Thanks,
> Richard
Mike Stump Oct. 6, 2017, 2:56 p.m. UTC | #4
> On Oct 6, 2017, at 2:35 AM, Richard Sandiford <richard.sandiford@linaro.org> wrote:
>
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Tue, Oct 3, 2017 at 8:34 PM, Richard Sandiford
>> <richard.sandiford@linaro.org> wrote:
>>> This patch uses global rather than member operators for wide-int.h,
>>> so that the first operand can be a non-wide-int type.
>>
>> Not sure why we had the in-class ones.  If we had some good arguments
>> they'd still stand.  Do you remember?
>
> Not really, sorry.

No real good reason.  Copying double_int's style is most of the reason.  We just wanted to support the API the clients used, and at the time they didn't require non-member versions.  If they had, we'd have done it outside the class.

Patch

Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	2017-09-11 17:10:58.656085547 +0100
+++ gcc/wide-int.h	2017-10-03 19:32:39.077055063 +0100
@@ -262,11 +262,22 @@  #define OFFSET_INT_ELTS (ADDR_MAX_PRECIS
 #define WI_BINARY_RESULT(T1, T2) \
   typename wi::binary_traits <T1, T2>::result_type
 
+/* Likewise for binary operators, which excludes the case in which neither
+   T1 nor T2 is a wide-int-based type.  */
+#define WI_BINARY_OPERATOR_RESULT(T1, T2) \
+  typename wi::binary_traits <T1, T2>::operator_result
+
 /* The type of result produced by T1 << T2.  Leads to substitution failure
    if the operation isn't supported.  Defined purely for brevity.  */
 #define WI_SIGNED_SHIFT_RESULT(T1, T2) \
   typename wi::binary_traits <T1, T2>::signed_shift_result_type
 
+/* The type of result produced by a sign-agnostic binary predicate on
+   types T1 and T2.  This is bool if wide-int operations make sense for
+   T1 and T2 and leads to substitution failure otherwise.  */
+#define WI_BINARY_PREDICATE_RESULT(T1, T2) \
+  typename wi::binary_traits <T1, T2>::predicate_result
+
 /* The type of result produced by a signed binary predicate on types T1 and T2.
    This is bool if signed comparisons make sense for T1 and T2 and leads to
    substitution failure otherwise.  */
@@ -382,12 +393,15 @@  #define WIDE_INT_REF_FOR(T) \
   struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
   {
     typedef widest_int result_type;
+    /* Don't define operators for this combination.  */
   };
 
   template <typename T1, typename T2>
   struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
   {
     typedef wide_int result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
   };
 
   template <typename T1, typename T2>
@@ -397,6 +411,8 @@  #define WIDE_INT_REF_FOR(T) \
        so as not to confuse gengtype.  */
     typedef generic_wide_int < fixed_wide_int_storage
 			       <int_traits <T2>::precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
     typedef bool signed_predicate_result;
   };
 
@@ -404,6 +420,8 @@  #define WIDE_INT_REF_FOR(T) \
   struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
   {
     typedef wide_int result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
   };
 
   template <typename T1, typename T2>
@@ -413,6 +431,8 @@  #define WIDE_INT_REF_FOR(T) \
        so as not to confuse gengtype.  */
     typedef generic_wide_int < fixed_wide_int_storage
 			       <int_traits <T1>::precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
     typedef result_type signed_shift_result_type;
     typedef bool signed_predicate_result;
   };
@@ -420,11 +440,13 @@  #define WIDE_INT_REF_FOR(T) \
   template <typename T1, typename T2>
   struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
   {
+    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
     /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
        so as not to confuse gengtype.  */
-    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
     typedef generic_wide_int < fixed_wide_int_storage
 			       <int_traits <T1>::precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
     typedef result_type signed_shift_result_type;
     typedef bool signed_predicate_result;
   };
@@ -433,6 +455,8 @@  #define WIDE_INT_REF_FOR(T) \
   struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
   {
     typedef wide_int result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
   };
 }
 
@@ -675,18 +699,6 @@  class GTY(()) generic_wide_int : public
   template <typename T>
   generic_wide_int &operator = (const T &);
 
-#define BINARY_PREDICATE(OP, F) \
-  template <typename T> \
-  bool OP (const T &c) const { return wi::F (*this, c); }
-
-#define UNARY_OPERATOR(OP, F) \
-  WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }
-
-#define BINARY_OPERATOR(OP, F) \
-  template <typename T> \
-    WI_BINARY_RESULT (generic_wide_int, T) \
-    OP (const T &c) const { return wi::F (*this, c); }
-
 #define ASSIGNMENT_OPERATOR(OP, F) \
   template <typename T> \
     generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
@@ -699,18 +711,6 @@  #define SHIFT_ASSIGNMENT_OPERATOR(OP, OP
 #define INCDEC_OPERATOR(OP, DELTA) \
   generic_wide_int &OP () { *this += DELTA; return *this; }
 
-  UNARY_OPERATOR (operator ~, bit_not)
-  UNARY_OPERATOR (operator -, neg)
-  BINARY_PREDICATE (operator ==, eq_p)
-  BINARY_PREDICATE (operator !=, ne_p)
-  BINARY_OPERATOR (operator &, bit_and)
-  BINARY_OPERATOR (and_not, bit_and_not)
-  BINARY_OPERATOR (operator |, bit_or)
-  BINARY_OPERATOR (or_not, bit_or_not)
-  BINARY_OPERATOR (operator ^, bit_xor)
-  BINARY_OPERATOR (operator +, add)
-  BINARY_OPERATOR (operator -, sub)
-  BINARY_OPERATOR (operator *, mul)
   ASSIGNMENT_OPERATOR (operator &=, bit_and)
   ASSIGNMENT_OPERATOR (operator |=, bit_or)
   ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
@@ -722,9 +722,6 @@  #define INCDEC_OPERATOR(OP, DELTA) \
   INCDEC_OPERATOR (operator ++, 1)
   INCDEC_OPERATOR (operator --, -1)
 
-#undef BINARY_PREDICATE
-#undef UNARY_OPERATOR
-#undef BINARY_OPERATOR
 #undef SHIFT_ASSIGNMENT_OPERATOR
 #undef ASSIGNMENT_OPERATOR
 #undef INCDEC_OPERATOR
@@ -3123,6 +3120,45 @@  SIGNED_BINARY_PREDICATE (operator >=, ge
 
 #undef SIGNED_BINARY_PREDICATE
 
+#define UNARY_OPERATOR(OP, F) \
+  template<typename T> \
+  WI_UNARY_RESULT (generic_wide_int<T>) \
+  OP (const generic_wide_int<T> &x) \
+  { \
+    return wi::F (x); \
+  }
+
+#define BINARY_PREDICATE(OP, F) \
+  template<typename T1, typename T2> \
+  WI_BINARY_PREDICATE_RESULT (T1, T2) \
+  OP (const T1 &x, const T2 &y) \
+  { \
+    return wi::F (x, y); \
+  }
+
+#define BINARY_OPERATOR(OP, F) \
+  template<typename T1, typename T2> \
+  WI_BINARY_OPERATOR_RESULT (T1, T2) \
+  OP (const T1 &x, const T2 &y) \
+  { \
+    return wi::F (x, y); \
+  }
+
+UNARY_OPERATOR (operator ~, bit_not)
+UNARY_OPERATOR (operator -, neg)
+BINARY_PREDICATE (operator ==, eq_p)
+BINARY_PREDICATE (operator !=, ne_p)
+BINARY_OPERATOR (operator &, bit_and)
+BINARY_OPERATOR (operator |, bit_or)
+BINARY_OPERATOR (operator ^, bit_xor)
+BINARY_OPERATOR (operator +, add)
+BINARY_OPERATOR (operator -, sub)
+BINARY_OPERATOR (operator *, mul)
+
+#undef UNARY_OPERATOR
+#undef BINARY_PREDICATE
+#undef BINARY_OPERATOR
+
 template <typename T1, typename T2>
 inline WI_SIGNED_SHIFT_RESULT (T1, T2)
 operator << (const T1 &x, const T2 &y)
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2017-09-25 13:33:39.989814299 +0100
+++ gcc/expr.c	2017-10-03 19:32:39.071055063 +0100
@@ -7153,7 +7153,7 @@  get_inner_reference (tree exp, HOST_WIDE
       if (wi::neg_p (bit_offset) || !wi::fits_shwi_p (bit_offset))
         {
 	  offset_int mask = wi::mask <offset_int> (LOG2_BITS_PER_UNIT, false);
-	  offset_int tem = bit_offset.and_not (mask);
+	  offset_int tem = wi::bit_and_not (bit_offset, mask);
 	  /* TEM is the bitpos rounded to BITS_PER_UNIT towards -Inf.
 	     Subtract it to BIT_OFFSET and add it (scaled) to OFFSET.  */
 	  bit_offset -= tem;
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2017-09-25 13:33:39.989814299 +0100
+++ gcc/fold-const.c	2017-10-03 19:32:39.073055063 +0100
@@ -9911,7 +9911,7 @@  fold_binary_loc (location_t loc,
 				   TYPE_PRECISION (TREE_TYPE (arg1)));
 
 	  /* If (C1|C2) == ~0 then (X&C1)|C2 becomes X|C2.  */
-	  if (msk.and_not (c1 | c2) == 0)
+	  if (wi::bit_and_not (msk, c1 | c2) == 0)
 	    {
 	      tem = fold_convert_loc (loc, type, TREE_OPERAND (arg0, 0));
 	      return fold_build2_loc (loc, BIT_IOR_EXPR, type, tem, arg1);
@@ -9922,12 +9922,13 @@  fold_binary_loc (location_t loc,
 	     mode which allows further optimizations.  */
 	  c1 &= msk;
 	  c2 &= msk;
-	  wide_int c3 = c1.and_not (c2);
+	  wide_int c3 = wi::bit_and_not (c1, c2);
 	  for (w = BITS_PER_UNIT; w <= width; w <<= 1)
 	    {
 	      wide_int mask = wi::mask (w, false,
 					TYPE_PRECISION (type));
-	      if (((c1 | c2) & mask) == mask && c1.and_not (mask) == 0)
+	      if (((c1 | c2) & mask) == mask
+		  && wi::bit_and_not (c1, mask) == 0)
 		{
 		  c3 = mask;
 		  break;
Index: gcc/ipa-prop.c
===================================================================
--- gcc/ipa-prop.c	2017-06-22 12:22:57.746312683 +0100
+++ gcc/ipa-prop.c	2017-10-03 19:32:39.074055063 +0100
@@ -1931,9 +1931,9 @@  ipa_compute_jump_functions_for_edge (str
 	  unsigned align;
 
 	  get_pointer_alignment_1 (arg, &align, &bitpos);
-	  widest_int mask
-	    = wi::mask<widest_int>(TYPE_PRECISION (TREE_TYPE (arg)), false)
-	    .and_not (align / BITS_PER_UNIT - 1);
+	  widest_int mask = wi::bit_and_not
+	    (wi::mask<widest_int> (TYPE_PRECISION (TREE_TYPE (arg)), false),
+	     align / BITS_PER_UNIT - 1);
 	  widest_int value = bitpos / BITS_PER_UNIT;
 	  ipa_set_jfunc_bits (jfunc, value, mask);
 	}
Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	2017-09-21 12:00:35.101846471 +0100
+++ gcc/tree-ssa-ccp.c	2017-10-03 19:32:39.075055063 +0100
@@ -569,9 +569,11 @@  get_value_from_alignment (tree expr)
   gcc_assert (TREE_CODE (expr) == ADDR_EXPR);
 
   get_pointer_alignment_1 (expr, &align, &bitpos);
-  val.mask = (POINTER_TYPE_P (type) || TYPE_UNSIGNED (type)
-	      ? wi::mask <widest_int> (TYPE_PRECISION (type), false)
-	      : -1).and_not (align / BITS_PER_UNIT - 1);
+  val.mask = wi::bit_and_not
+    (POINTER_TYPE_P (type) || TYPE_UNSIGNED (type)
+     ? wi::mask <widest_int> (TYPE_PRECISION (type), false)
+     : -1,
+     align / BITS_PER_UNIT - 1);
   val.lattice_val
     = wi::sext (val.mask, TYPE_PRECISION (type)) == -1 ? VARYING : CONSTANT;
   if (val.lattice_val == CONSTANT)
@@ -1308,8 +1310,9 @@  bit_value_binop (enum tree_code code, si
     case BIT_IOR_EXPR:
       /* The mask is constant where there is a known
 	 set bit, (m1 | m2) & ~((v1 & ~m1) | (v2 & ~m2)).  */
-      *mask = (r1mask | r2mask)
-	      .and_not (r1val.and_not (r1mask) | r2val.and_not (r2mask));
+      *mask = wi::bit_and_not (r1mask | r2mask,
+			       wi::bit_and_not (r1val, r1mask)
+			       | wi::bit_and_not (r2val, r2mask));
       *val = r1val | r2val;
       break;
 
@@ -1395,7 +1398,8 @@  bit_value_binop (enum tree_code code, si
       {
 	/* Do the addition with unknown bits set to zero, to give carry-ins of
 	   zero wherever possible.  */
-	widest_int lo = r1val.and_not (r1mask) + r2val.and_not (r2mask);
+	widest_int lo = (wi::bit_and_not (r1val, r1mask)
+			 + wi::bit_and_not (r2val, r2mask));
 	lo = wi::ext (lo, width, sgn);
 	/* Do the addition with unknown bits set to one, to give carry-ins of
 	   one wherever possible.  */
@@ -1447,7 +1451,7 @@  bit_value_binop (enum tree_code code, si
     case NE_EXPR:
       {
 	widest_int m = r1mask | r2mask;
-	if (r1val.and_not (m) != r2val.and_not (m))
+	if (wi::bit_and_not (r1val, m) != wi::bit_and_not (r2val, m))
 	  {
 	    *mask = 0;
 	    *val = ((code == EQ_EXPR) ? 0 : 1);
@@ -1486,8 +1490,10 @@  bit_value_binop (enum tree_code code, si
 	/* If we know the most significant bits we know the values
 	   value ranges by means of treating varying bits as zero
 	   or one.  Do a cross comparison of the max/min pairs.  */
-	maxmin = wi::cmp (o1val | o1mask, o2val.and_not (o2mask), sgn);
-	minmax = wi::cmp (o1val.and_not (o1mask), o2val | o2mask, sgn);
+	maxmin = wi::cmp (o1val | o1mask,
+			  wi::bit_and_not (o2val, o2mask), sgn);
+	minmax = wi::cmp (wi::bit_and_not (o1val, o1mask),
+			  o2val | o2mask, sgn);
 	if (maxmin < 0)  /* o1 is less than o2.  */
 	  {
 	    *mask = 0;
Index: gcc/tree-ssa-math-opts.c
===================================================================
--- gcc/tree-ssa-math-opts.c	2017-08-30 12:19:19.718220029 +0100
+++ gcc/tree-ssa-math-opts.c	2017-10-03 19:32:39.075055063 +0100
@@ -2138,7 +2138,7 @@  find_bswap_or_nop_load (gimple *stmt, tr
       if (wi::neg_p (bit_offset))
 	{
 	  offset_int mask = wi::mask <offset_int> (LOG2_BITS_PER_UNIT, false);
-	  offset_int tem = bit_offset.and_not (mask);
+	  offset_int tem = wi::bit_and_not (bit_offset, mask);
 	  /* TEM is the bitpos rounded to BITS_PER_UNIT towards -Inf.
 	     Subtract it to BIT_OFFSET and add it (scaled) to OFFSET.  */
 	  bit_offset -= tem;
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	2017-09-22 18:00:33.560168917 +0100
+++ gcc/tree-vrp.c	2017-10-03 19:32:39.077055063 +0100
@@ -1769,7 +1769,7 @@  zero_nonzero_bits_from_vr (const tree ex
 	  wide_int mask = wi::mask (wi::floor_log2 (xor_mask), false,
 				    may_be_nonzero->get_precision ());
 	  *may_be_nonzero = *may_be_nonzero | mask;
-	  *must_be_nonzero = must_be_nonzero->and_not (mask);
+	  *must_be_nonzero = wi::bit_and_not (*must_be_nonzero, mask);
 	}
     }
 
@@ -2975,8 +2975,8 @@  extract_range_from_binary_expr_1 (value_
 	  wide_int result_zero_bits = ((must_be_nonzero0 & must_be_nonzero1)
 				       | ~(may_be_nonzero0 | may_be_nonzero1));
 	  wide_int result_one_bits
-	    = (must_be_nonzero0.and_not (may_be_nonzero1)
-	       | must_be_nonzero1.and_not (may_be_nonzero0));
+	    = (wi::bit_and_not (must_be_nonzero0, may_be_nonzero1)
+	       | wi::bit_and_not (must_be_nonzero1, may_be_nonzero0));
 	  max = wide_int_to_tree (expr_type, ~result_zero_bits);
 	  min = wide_int_to_tree (expr_type, result_one_bits);
 	  /* If the range has all positive or all negative values the
@@ -4877,7 +4877,7 @@  masked_increment (const wide_int &val_in
       if ((res & bit) == 0)
 	continue;
       res = bit - 1;
-      res = (val + bit).and_not (res);
+      res = wi::bit_and_not (val + bit, res);
       res &= mask;
       if (wi::gtu_p (res, val))
 	return res ^ sgnbit;
@@ -9538,13 +9538,13 @@  simplify_bit_ops_using_ranges (gimple_st
   switch (gimple_assign_rhs_code (stmt))
     {
     case BIT_AND_EXPR:
-      mask = may_be_nonzero0.and_not (must_be_nonzero1);
+      mask = wi::bit_and_not (may_be_nonzero0, must_be_nonzero1);
       if (mask == 0)
 	{
 	  op = op0;
 	  break;
 	}
-      mask = may_be_nonzero1.and_not (must_be_nonzero0);
+      mask = wi::bit_and_not (may_be_nonzero1, must_be_nonzero0);
       if (mask == 0)
 	{
 	  op = op1;
@@ -9552,13 +9552,13 @@  simplify_bit_ops_using_ranges (gimple_st
 	}
       break;
     case BIT_IOR_EXPR:
-      mask = may_be_nonzero0.and_not (must_be_nonzero1);
+      mask = wi::bit_and_not (may_be_nonzero0, must_be_nonzero1);
       if (mask == 0)
 	{
 	  op = op1;
 	  break;
 	}
-      mask = may_be_nonzero1.and_not (must_be_nonzero0);
+      mask = wi::bit_and_not (may_be_nonzero1, must_be_nonzero0);
       if (mask == 0)
 	{
 	  op = op0;