Add a full_integral_type_p helper function

Message ID 87mv6xmh3b.fsf@linaro.org
State New

Commit Message

Richard Sandiford Aug. 18, 2017, 8:10 a.m. UTC
There are several places that test whether:

    TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t))

for some integer type T.  With SVE variable-length modes, this would
need to become:

    TYPE_PRECISION (t) == GET_MODE_PRECISION (SCALAR_TYPE_MODE (t))

(or SCALAR_INT_TYPE_MODE, it doesn't matter which in this case).
But rather than add the "SCALAR_" everywhere, it seemed neater to
introduce a new helper function that tests whether T is an integral
type that has the same number of bits as its underlying mode.  This
patch does that, calling it full_integral_type_p.

It isn't possible to use TYPE_MODE in tree.h because vector_type_mode
is defined in stor-layout.h, so for now the function accesses the mode
field directly.  After the 77-patch machine_mode series (thanks again
Jeff for the reviews) it would use SCALAR_TYPE_MODE instead.
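
For reference, the new helpers as defined by the tree.h hunk of the
patch (quoted in full in the review below):

/* Return true if the mode underlying scalar type T has the same number
   of bits as T does.  Examples of when this is false include bitfields
   that are narrower than the mode that contains them.  */

inline bool
scalar_type_is_full_p (const_tree t)
{
  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)
	  == TYPE_PRECISION (t));
}

/* Return true if T is an integral type that has the same number of bits
   as its underlying mode.  */

inline bool
full_integral_type_p (const_tree t)
{
  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
}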

Of the changes that didn't previously have an INTEGRAL_TYPE_P check:

- for fold_single_bit_test_into_sign_test it is obvious from the
  integer_foop tests that this is restricted to integral types.

- vect_recog_vector_vector_shift_pattern is inherently restricted
  to integral types.

- the register_edge_assert_for_2 hunk is dominated by:

      TREE_CODE (val) == INTEGER_CST

- the ubsan_instrument_shift hunk is preceded by an early exit:

      if (!INTEGRAL_TYPE_P (type0))
	return NULL_TREE;

- the second and third match.pd hunks are from:

    /* Fold (X << C1) & C2 into (X << C1) & (C2 | ((1 << C1) - 1))
            (X >> C1) & C2 into (X >> C1) & (C2 | ~((type) -1 >> C1))
       if the new mask might be further optimized.  */
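
To make the bit-field case concrete (an illustrative sketch, not part
of the patch): given

    struct s { unsigned int x : 3; };

the type of s.x has TYPE_PRECISION 3 but an underlying mode of, say,
QImode with precision 8, so full_integral_type_p would return false
for it, while returning true for plain "unsigned int".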

I'm a bit confused about:

/* Try to fold (type) X op CST -> (type) (X op ((type-x) CST))
   when profitable.
   For bitwise binary operations apply operand conversions to the
   binary operation result instead of to the operands.  This allows
   to combine successive conversions and bitwise binary operations.
   We combine the above two cases by using a conditional convert.  */
(for bitop (bit_and bit_ior bit_xor)
 (simplify
  (bitop (convert @0) (convert? @1))
  (if (((TREE_CODE (@1) == INTEGER_CST
	 && INTEGRAL_TYPE_P (TREE_TYPE (@0))
	 && int_fits_type_p (@1, TREE_TYPE (@0)))
	|| types_match (@0, @1))
       /* ???  This transform conflicts with fold-const.c doing
	  Convert (T)(x & c) into (T)x & (T)c, if c is an integer
	  constants (if x has signed type, the sign bit cannot be set
	  in c).  This folds extension into the BIT_AND_EXPR.
	  Restrict it to GIMPLE to avoid endless recursions.  */
       && (bitop != BIT_AND_EXPR || GIMPLE)
       && (/* That's a good idea if the conversion widens the operand, thus
	      after hoisting the conversion the operation will be narrower.  */
	   TYPE_PRECISION (TREE_TYPE (@0)) < TYPE_PRECISION (type)
	   /* It's also a good idea if the conversion is to a non-integer
	      mode.  */
	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
	   /* Or if the precision of TO is not the same as the precision
	      of its mode.  */
	   || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
   (convert (bitop @0 (convert @1))))))

though.  The "INTEGRAL_TYPE_P (TREE_TYPE (@0))" suggests that we can't
rely on @0 and @1 being integral (although conversions from float would
use FLOAT_EXPR), but then what is:

	   /* It's also a good idea if the conversion is to a non-integer
	      mode.  */
	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT

letting through?  MODE_PARTIAL_INT maybe, but that's a sort of integer
mode too.  MODE_COMPLEX_INT or MODE_VECTOR_INT?  I thought for those
it would be better to apply the scalar rules to the element type.

Either way, having allowed all non-INT modes, using full_integral_type_p
for the remaining condition seems correct.

If the feeling is that this isn't a useful abstraction, I can just update
each site individually to cope with variable-sized modes.

Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard


2017-08-18  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree.h (scalar_type_is_full_p): New function.
	(full_integral_type_p): Likewise.
	* fold-const.c (fold_single_bit_test_into_sign_test): Likewise.
	* match.pd: Likewise.
	* tree-ssa-forwprop.c (simplify_rotate): Likewise.
	* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern)
	(vect_recog_mult_pattern, vect_recog_divmod_pattern): Likewise.
	(adjust_bool_pattern): Likewise.
	* tree-vrp.c (register_edge_assert_for_2): Likewise.
	* ubsan.c (instrument_si_overflow): Likewise.

gcc/c-family/
	* c-ubsan.c (ubsan_instrument_shift): Use full_integral_type_p.

Comments

Richard Biener Aug. 18, 2017, 10:45 a.m. UTC | #1
On Fri, Aug 18, 2017 at 10:10 AM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> [...]
>
> I'm a bit confused about:
>
> [...]
>
> though.  The "INTEGRAL_TYPE_P (TREE_TYPE (@0))" suggests that we can't
> rely on @0 and @1 being integral (although conversions from float would
> use FLOAT_EXPR), but then what is:

bit_and is valid on POINTER_TYPE and vector integer types
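
(An illustrative sketch, not from the thread: with the GCC vector
extension,

    typedef int v4si __attribute__ ((vector_size (16)));

    v4si
    mask_lanes (v4si x, v4si m)
    {
      return x & m;  /* BIT_AND_EXPR on a vector integer type.  */
    }

the pattern sees bit_and on a vector integer type, and GIMPLE likewise
permits BIT_AND_EXPR on POINTER_TYPE operands, so the INTEGRAL_TYPE_P
test is doing real work in the INTEGER_CST arm.)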

>            /* It's also a good idea if the conversion is to a non-integer
>               mode.  */
>            || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
>
> letting through?  MODE_PARTIAL_INT maybe, but that's a sort of integer
> mode too.  MODE_COMPLEX_INT or MODE_VECTOR_INT?  I thought for those
> it would be better to apply the scalar rules to the element type.

I suppose extra caution ;)  I think I have seen BLKmode for not naturally
aligned integer types at least on strict-align targets?  The code is
a copy from original code in tree-ssa-forwprop.c.

> Either way, having allowed all non-INT modes, using full_integral_type_p
> for the remaining condition seems correct.
>
> If the feeling is that this isn't a useful abstraction, I can just update
> each site individually to cope with variable-sized modes.

I think "full_integral_type_p" is a name from which I cannot infer
its meaning.  Maybe type_has_mode_precision_p?  Or
type_matches_mode_p?  Does TYPE_PRECISION == GET_MODE_PRECISION
imply TYPE_SIZE == GET_MODE_BITSIZE btw?

Richard.

> Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?
>
> Richard
>
>
> 2017-08-18  Richard Sandiford  <richard.sandiford@linaro.org>
>             Alan Hayward  <alan.hayward@arm.com>
>             David Sherwood  <david.sherwood@arm.com>
>
> gcc/
>         * tree.h (scalar_type_is_full_p): New function.
>         (full_integral_type_p): Likewise.
>         * fold-const.c (fold_single_bit_test_into_sign_test): Likewise.
>         * match.pd: Likewise.
>         * tree-ssa-forwprop.c (simplify_rotate): Likewise.
>         * tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern)
>         (vect_recog_mult_pattern, vect_recog_divmod_pattern): Likewise.
>         (adjust_bool_pattern): Likewise.
>         * tree-vrp.c (register_edge_assert_for_2): Likewise.
>         * ubsan.c (instrument_si_overflow): Likewise.
>
> gcc/c-family/
>         * c-ubsan.c (ubsan_instrument_shift): Use full_integral_type_p.
>
> diff --git a/gcc/c-family/c-ubsan.c b/gcc/c-family/c-ubsan.c
> index b1386db..20f78e7 100644
> --- a/gcc/c-family/c-ubsan.c
> +++ b/gcc/c-family/c-ubsan.c
> @@ -131,8 +131,8 @@ ubsan_instrument_shift (location_t loc, enum tree_code code,
>
>    /* If this is not a signed operation, don't perform overflow checks.
>       Also punt on bit-fields.  */
> -  if (TYPE_OVERFLOW_WRAPS (type0)
> -      || GET_MODE_BITSIZE (TYPE_MODE (type0)) != TYPE_PRECISION (type0)
> +  if (!full_integral_type_p (type0)
> +      || TYPE_OVERFLOW_WRAPS (type0)
>        || !sanitize_flags_p (SANITIZE_SHIFT_BASE))
>      ;
>
> diff --git a/gcc/fold-const.c b/gcc/fold-const.c
> index 0a5b168..1985a14 100644
> --- a/gcc/fold-const.c
> +++ b/gcc/fold-const.c
> @@ -6672,8 +6672,7 @@ fold_single_bit_test_into_sign_test (location_t loc,
>        if (arg00 != NULL_TREE
>           /* This is only a win if casting to a signed type is cheap,
>              i.e. when arg00's type is not a partial mode.  */
> -         && TYPE_PRECISION (TREE_TYPE (arg00))
> -            == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (arg00))))
> +         && full_integral_type_p (TREE_TYPE (arg00)))
>         {
>           tree stype = signed_type_for (TREE_TYPE (arg00));
>           return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
> diff --git a/gcc/match.pd b/gcc/match.pd
> index 0e36f46..9ad9930 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -992,7 +992,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>            || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
>            /* Or if the precision of TO is not the same as the precision
>               of its mode.  */
> -          || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
> +          || !full_integral_type_p (type)))
>     (convert (bitop @0 (convert @1))))))
>
>  (for bitop (bit_and bit_ior)
> @@ -1920,8 +1920,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>         if (shift == LSHIFT_EXPR)
>          zerobits = ((HOST_WIDE_INT_1U << shiftc) - 1);
>         else if (shift == RSHIFT_EXPR
> -               && (TYPE_PRECISION (shift_type)
> -                   == GET_MODE_PRECISION (TYPE_MODE (shift_type))))
> +               && full_integral_type_p (shift_type))
>          {
>            prec = TYPE_PRECISION (TREE_TYPE (@3));
>            tree arg00 = @0;
> @@ -1931,8 +1930,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>                && TYPE_UNSIGNED (TREE_TYPE (@0)))
>              {
>                tree inner_type = TREE_TYPE (@0);
> -              if ((TYPE_PRECISION (inner_type)
> -                   == GET_MODE_PRECISION (TYPE_MODE (inner_type)))
> +              if (full_integral_type_p (inner_type)
>                    && TYPE_PRECISION (inner_type) < prec)
>                  {
>                    prec = TYPE_PRECISION (inner_type);
> @@ -3225,9 +3223,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>       ncmp (ge lt)
>   (simplify
>    (cmp (bit_and (convert?@2 @0) integer_pow2p@1) integer_zerop)
> -  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -       && (TYPE_PRECISION (TREE_TYPE (@0))
> -          == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
> +  (if (full_integral_type_p (TREE_TYPE (@0))
>         && element_precision (@2) >= element_precision (@0)
>         && wi::only_sign_bit_p (@1, element_precision (@0)))
>     (with { tree stype = signed_type_for (TREE_TYPE (@0)); }
> @@ -4021,19 +4017,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (for op (plus minus)
>    (simplify
>      (convert (op:s (convert@2 @0) (convert?@3 @1)))
> -    (if (INTEGRAL_TYPE_P (type)
> -        /* We check for type compatibility between @0 and @1 below,
> -           so there's no need to check that @1/@3 are integral types.  */
> -        && INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -        && INTEGRAL_TYPE_P (TREE_TYPE (@2))
> +    (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
>          /* The precision of the type of each operand must match the
>             precision of the mode of each operand, similarly for the
>             result.  */
> -        && (TYPE_PRECISION (TREE_TYPE (@0))
> -            == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
> -        && (TYPE_PRECISION (TREE_TYPE (@1))
> -            == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
> -        && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
> +        && full_integral_type_p (TREE_TYPE (@0))
> +        && full_integral_type_p (TREE_TYPE (@1))
> +        && full_integral_type_p (type)
>          /* The inner conversion must be a widening conversion.  */
>          && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
>          && types_match (@0, type)
> @@ -4055,19 +4045,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (for op (minus plus)
>   (simplify
>    (bit_and (op:s (convert@2 @0) (convert@3 @1)) INTEGER_CST@4)
> -  (if (INTEGRAL_TYPE_P (type)
> -       /* We check for type compatibility between @0 and @1 below,
> -         so there's no need to check that @1/@3 are integral types.  */
> -       && INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -       && INTEGRAL_TYPE_P (TREE_TYPE (@2))
> +  (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
>         /* The precision of the type of each operand must match the
>           precision of the mode of each operand, similarly for the
>           result.  */
> -       && (TYPE_PRECISION (TREE_TYPE (@0))
> -          == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
> -       && (TYPE_PRECISION (TREE_TYPE (@1))
> -          == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
> -       && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
> +       && full_integral_type_p (TREE_TYPE (@0))
> +       && full_integral_type_p (TREE_TYPE (@1))
> +       && full_integral_type_p (type)
>         /* The inner conversion must be a widening conversion.  */
>         && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
>         && types_match (@0, @1)
> diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
> index 5719b99..20d5c86 100644
> --- a/gcc/tree-ssa-forwprop.c
> +++ b/gcc/tree-ssa-forwprop.c
> @@ -1528,8 +1528,7 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>
>    /* Only create rotates in complete modes.  Other cases are not
>       expanded properly.  */
> -  if (!INTEGRAL_TYPE_P (rtype)
> -      || TYPE_PRECISION (rtype) != GET_MODE_PRECISION (TYPE_MODE (rtype)))
> +  if (!full_integral_type_p (rtype))
>      return false;
>
>    for (i = 0; i < 2; i++)
> @@ -1606,11 +1605,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>           defcodefor_name (def_arg2[i], &cdef_code[i],
>                            &cdef_arg1[i], &cdef_arg2[i]);
>           if (CONVERT_EXPR_CODE_P (cdef_code[i])
> -             && INTEGRAL_TYPE_P (TREE_TYPE (cdef_arg1[i]))
> +             && full_integral_type_p (TREE_TYPE (cdef_arg1[i]))
>               && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
> -                > floor_log2 (TYPE_PRECISION (rtype))
> -             && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
> -                == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (cdef_arg1[i]))))
> +                > floor_log2 (TYPE_PRECISION (rtype)))
>             {
>               def_arg2_alt[i] = cdef_arg1[i];
>               defcodefor_name (def_arg2_alt[i], &cdef_code[i],
> @@ -1636,11 +1633,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>               }
>             defcodefor_name (cdef_arg2[i], &code, &tem, NULL);
>             if (CONVERT_EXPR_CODE_P (code)
> -               && INTEGRAL_TYPE_P (TREE_TYPE (tem))
> +               && full_integral_type_p (TREE_TYPE (tem))
>                 && TYPE_PRECISION (TREE_TYPE (tem))
>                  > floor_log2 (TYPE_PRECISION (rtype))
> -               && TYPE_PRECISION (TREE_TYPE (tem))
> -                == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
>                 && (tem == def_arg2[1 - i]
>                     || tem == def_arg2_alt[1 - i]))
>               {
> @@ -1664,11 +1659,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>
>             defcodefor_name (cdef_arg1[i], &code, &tem, NULL);
>             if (CONVERT_EXPR_CODE_P (code)
> -               && INTEGRAL_TYPE_P (TREE_TYPE (tem))
> +               && full_integral_type_p (TREE_TYPE (tem))
>                 && TYPE_PRECISION (TREE_TYPE (tem))
> -                > floor_log2 (TYPE_PRECISION (rtype))
> -               && TYPE_PRECISION (TREE_TYPE (tem))
> -                  == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem))))
> +                > floor_log2 (TYPE_PRECISION (rtype)))
>               defcodefor_name (tem, &code, &tem, NULL);
>
>             if (code == NEGATE_EXPR)
> @@ -1680,11 +1673,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>                   }
>                 defcodefor_name (tem, &code, &tem, NULL);
>                 if (CONVERT_EXPR_CODE_P (code)
> -                   && INTEGRAL_TYPE_P (TREE_TYPE (tem))
> +                   && full_integral_type_p (TREE_TYPE (tem))
>                     && TYPE_PRECISION (TREE_TYPE (tem))
>                        > floor_log2 (TYPE_PRECISION (rtype))
> -                   && TYPE_PRECISION (TREE_TYPE (tem))
> -                      == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
>                     && (tem == def_arg2[1 - i]
>                         || tem == def_arg2_alt[1 - i]))
>                   {
> diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c
> index 17d1083..a96f784 100644
> --- a/gcc/tree-vect-patterns.c
> +++ b/gcc/tree-vect-patterns.c
> @@ -2067,8 +2067,7 @@ vect_recog_vector_vector_shift_pattern (vec<gimple *> *stmts,
>    if (TREE_CODE (oprnd0) != SSA_NAME
>        || TREE_CODE (oprnd1) != SSA_NAME
>        || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))
> -      || TYPE_PRECISION (TREE_TYPE (oprnd1))
> -        != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))
> +      || !full_integral_type_p (TREE_TYPE (oprnd1))
>        || TYPE_PRECISION (TREE_TYPE (lhs))
>          != TYPE_PRECISION (TREE_TYPE (oprnd0)))
>      return NULL;
> @@ -2469,8 +2468,7 @@ vect_recog_mult_pattern (vec<gimple *> *stmts,
>
>    if (TREE_CODE (oprnd0) != SSA_NAME
>        || TREE_CODE (oprnd1) != INTEGER_CST
> -      || !INTEGRAL_TYPE_P (itype)
> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
> +      || !full_integral_type_p (itype))
>      return NULL;
>
>    vectype = get_vectype_for_scalar_type (itype);
> @@ -2584,8 +2582,7 @@ vect_recog_divmod_pattern (vec<gimple *> *stmts,
>    itype = TREE_TYPE (oprnd0);
>    if (TREE_CODE (oprnd0) != SSA_NAME
>        || TREE_CODE (oprnd1) != INTEGER_CST
> -      || TREE_CODE (itype) != INTEGER_TYPE
> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
> +      || !full_integral_type_p (itype))
>      return NULL;
>
>    vectype = get_vectype_for_scalar_type (itype);
> @@ -3385,9 +3382,8 @@ adjust_bool_pattern (tree var, tree out_type,
>      do_compare:
>        gcc_assert (TREE_CODE_CLASS (rhs_code) == tcc_comparison);
>        if (TREE_CODE (TREE_TYPE (rhs1)) != INTEGER_TYPE
> -         || !TYPE_UNSIGNED (TREE_TYPE (rhs1))
> -         || (TYPE_PRECISION (TREE_TYPE (rhs1))
> -             != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1)))))
> +         || !full_integral_type_p (TREE_TYPE (rhs1))
> +         || !TYPE_UNSIGNED (TREE_TYPE (rhs1)))
>         {
>           machine_mode mode = TYPE_MODE (TREE_TYPE (rhs1));
>           itype
> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
> index 657a8d1..cfa9a97 100644
> --- a/gcc/tree-vrp.c
> +++ b/gcc/tree-vrp.c
> @@ -5245,7 +5245,7 @@ register_edge_assert_for_2 (tree name, edge e,
>               && tree_fits_uhwi_p (cst2)
>               && INTEGRAL_TYPE_P (TREE_TYPE (name2))
>               && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
> -             && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))
> +             && full_integral_type_p (TREE_TYPE (val)))
>             {
>               mask = wi::mask (tree_to_uhwi (cst2), false, prec);
>               val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
> diff --git a/gcc/tree.h b/gcc/tree.h
> index 46debc1..237f234 100644
> --- a/gcc/tree.h
> +++ b/gcc/tree.h
> @@ -5394,4 +5394,25 @@ struct builtin_structptr_type
>    const char *str;
>  };
>  extern const builtin_structptr_type builtin_structptr_types[6];
> +
> +/* Return true if the mode underlying scalar type T has the same number
> +   of bits as T does.  Examples of when this is false include bitfields
> +   that are narrower than the mode that contains them.  */
> +
> +inline bool
> +scalar_type_is_full_p (const_tree t)
> +{
> +  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)
> +         == TYPE_PRECISION (t));
> +}
> +
> +/* Return true if T is an integral type that has the same number of bits
> +   as its underlying mode.  */
> +
> +inline bool
> +full_integral_type_p (const_tree t)
> +{
> +  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
> +}
> +
>  #endif  /* GCC_TREE_H  */
> diff --git a/gcc/ubsan.c b/gcc/ubsan.c
> index 49e38fa..40f5f3e 100644
> --- a/gcc/ubsan.c
> +++ b/gcc/ubsan.c
> @@ -1582,9 +1582,8 @@ instrument_si_overflow (gimple_stmt_iterator gsi)
>
>    /* If this is not a signed operation, don't instrument anything here.
>       Also punt on bit-fields.  */
> -  if (!INTEGRAL_TYPE_P (lhsinner)
> -      || TYPE_OVERFLOW_WRAPS (lhsinner)
> -      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))
> +  if (!full_integral_type_p (lhsinner)
> +      || TYPE_OVERFLOW_WRAPS (lhsinner))
>      return;
>
>    switch (code)
Richard Sandiford Aug. 18, 2017, 11:04 a.m. UTC | #2
Richard Biener <richard.guenther@gmail.com> writes:
> On Fri, Aug 18, 2017 at 10:10 AM, Richard Sandiford
> <richard.sandiford@linaro.org> wrote:
>> [...]
>>
>> Either way, having allowed all non-INT modes, using full_integral_type_p
>> for the remaining condition seems correct.
>>
>> If the feeling is that this isn't a useful abstraction, I can just update
>> each site individually to cope with variable-sized modes.
>
> I think "full_integral_type_p" is a name from which I cannot infer
> its meaning.  Maybe type_has_mode_precision_p?  Or
> type_matches_mode_p?

type_has_mode_precision_p sounds good.  With that name I guess it
should be written to cope with all types (even those with variable-
width modes), so I think we'd need to continue using TYPE_MODE.
The VECTOR_MODE_P check should get optimised away in most of
the cases touched by the patch though.

Would it be OK to move the declaration of vector_type_mode to tree.h
so that type_has_mode_precision_p can be inline?
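
A minimal sketch of the rename under that assumption (i.e. with
vector_type_mode's implementation moved so that TYPE_MODE is usable
from tree.h; not a committed implementation):

inline bool
type_has_mode_precision_p (const_tree t)
{
  /* TYPE_MODE copes with vector types too, at the cost of going
     through vector_type_mode for them.  */
  return TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t));
}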

> Does TYPE_PRECISION == GET_MODE_PRECISION
> imply TYPE_SIZE == GET_MODE_BITSIZE btw?

Good question :-)  You'd know better than me.  The only case I can
think of is if we ever tried to use BImode for Fortran logicals
(regardless of kind).  But I guess we wouldn't do that?

Thanks,
Richard

Richard Biener Aug. 18, 2017, 12:18 p.m. UTC | #3
On Fri, Aug 18, 2017 at 1:04 PM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Fri, Aug 18, 2017 at 10:10 AM, Richard Sandiford
>> <richard.sandiford@linaro.org> wrote:
>>> [...]
>>
>> I think "full_integral_type_p" is a name from which I cannot infer
>> its meaning.  Maybe type_has_mode_precision_p?  Or
>> type_matches_mode_p?
>
> type_has_mode_precision_p sounds good.  With that name I guess it
> should be written to cope with all types (even those with variable-
> width modes), so I think we'd need to continue using TYPE_MODE.
> The VECTOR_MODE_P check should get optimised away in most of
> the cases touched by the patch though.
>
> Would it be OK to move the declaration of vector_type_mode to tree.h
> so that type_has_mode_precision_p can be inline?

Just move the implementation to tree.c then.

>> Does TYPE_PRECISION == GET_MODE_PRECISION
>> imply TYPE_SIZE == GET_MODE_BITSIZE btw?
>
> Good question :-)  You'd know better than me.  The only case I can
> think of is if we ever tried to use BImode for Fortran logicals
> (regardless of kind).  But I guess we wouldn't do that?

OTOH TYPE_SIZE should always be == GET_MODE_BITSIZE and if
not the type should get BLKmode.

Might grep for such checks and replace them with an assert ... ;)
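
A hypothetical shape for such an assert (an editorial sketch, not from
the thread):

  /* TYPE_SIZE should agree with the mode size whenever the type has
     a non-BLK mode; otherwise the type should have been given
     BLKmode.  */
  gcc_checking_assert (TYPE_MODE (t) == BLKmode
		       || !TYPE_SIZE (t)
		       || !tree_fits_uhwi_p (TYPE_SIZE (t))
		       || (tree_to_uhwi (TYPE_SIZE (t))
			   == GET_MODE_BITSIZE (TYPE_MODE (t))));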

Richard.


>>>             defcodefor_name (cdef_arg2[i], &code, &tem, NULL);

>>>             if (CONVERT_EXPR_CODE_P (code)

>>> -               && INTEGRAL_TYPE_P (TREE_TYPE (tem))

>>> +               && full_integral_type_p (TREE_TYPE (tem))

>>>                 && TYPE_PRECISION (TREE_TYPE (tem))

>>>                  > floor_log2 (TYPE_PRECISION (rtype))

>>> -               && TYPE_PRECISION (TREE_TYPE (tem))

>>> -                == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))

>>>                 && (tem == def_arg2[1 - i]

>>>                     || tem == def_arg2_alt[1 - i]))

>>>               {

>>> @@ -1664,11 +1659,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)

>>>

>>>             defcodefor_name (cdef_arg1[i], &code, &tem, NULL);

>>>             if (CONVERT_EXPR_CODE_P (code)

>>> -               && INTEGRAL_TYPE_P (TREE_TYPE (tem))

>>> +               && full_integral_type_p (TREE_TYPE (tem))

>>>                 && TYPE_PRECISION (TREE_TYPE (tem))

>>> -                > floor_log2 (TYPE_PRECISION (rtype))

>>> -               && TYPE_PRECISION (TREE_TYPE (tem))

>>> -                == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem))))

>>> +                > floor_log2 (TYPE_PRECISION (rtype)))

>>>               defcodefor_name (tem, &code, &tem, NULL);

>>>

>>>             if (code == NEGATE_EXPR)

>>> @@ -1680,11 +1673,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)

>>>                   }

>>>                 defcodefor_name (tem, &code, &tem, NULL);

>>>                 if (CONVERT_EXPR_CODE_P (code)

>>> -                   && INTEGRAL_TYPE_P (TREE_TYPE (tem))

>>> +                   && full_integral_type_p (TREE_TYPE (tem))

>>>                     && TYPE_PRECISION (TREE_TYPE (tem))

>>>                        > floor_log2 (TYPE_PRECISION (rtype))

>>> -                   && TYPE_PRECISION (TREE_TYPE (tem))

>>> -                      == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))

>>>                     && (tem == def_arg2[1 - i]

>>>                         || tem == def_arg2_alt[1 - i]))

>>>                   {

>>> diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c

>>> index 17d1083..a96f784 100644

>>> --- a/gcc/tree-vect-patterns.c

>>> +++ b/gcc/tree-vect-patterns.c

>>> @@ -2067,8 +2067,7 @@ vect_recog_vector_vector_shift_pattern (vec<gimple *> *stmts,

>>>    if (TREE_CODE (oprnd0) != SSA_NAME

>>>        || TREE_CODE (oprnd1) != SSA_NAME

>>>        || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))

>>> -      || TYPE_PRECISION (TREE_TYPE (oprnd1))

>>> -        != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))

>>> +      || !full_integral_type_p (TREE_TYPE (oprnd1))

>>>        || TYPE_PRECISION (TREE_TYPE (lhs))

>>>          != TYPE_PRECISION (TREE_TYPE (oprnd0)))

>>>      return NULL;

>>> @@ -2469,8 +2468,7 @@ vect_recog_mult_pattern (vec<gimple *> *stmts,

>>>

>>>    if (TREE_CODE (oprnd0) != SSA_NAME

>>>        || TREE_CODE (oprnd1) != INTEGER_CST

>>> -      || !INTEGRAL_TYPE_P (itype)

>>> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))

>>> +      || !full_integral_type_p (itype))

>>>      return NULL;

>>>

>>>    vectype = get_vectype_for_scalar_type (itype);

>>> @@ -2584,8 +2582,7 @@ vect_recog_divmod_pattern (vec<gimple *> *stmts,

>>>    itype = TREE_TYPE (oprnd0);

>>>    if (TREE_CODE (oprnd0) != SSA_NAME

>>>        || TREE_CODE (oprnd1) != INTEGER_CST

>>> -      || TREE_CODE (itype) != INTEGER_TYPE

>>> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))

>>> +      || !full_integral_type_p (itype))

>>>      return NULL;

>>>

>>>    vectype = get_vectype_for_scalar_type (itype);

>>> @@ -3385,9 +3382,8 @@ adjust_bool_pattern (tree var, tree out_type,

>>>      do_compare:

>>>        gcc_assert (TREE_CODE_CLASS (rhs_code) == tcc_comparison);

>>>        if (TREE_CODE (TREE_TYPE (rhs1)) != INTEGER_TYPE

>>> -         || !TYPE_UNSIGNED (TREE_TYPE (rhs1))

>>> -         || (TYPE_PRECISION (TREE_TYPE (rhs1))

>>> -             != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1)))))

>>> +         || !full_integral_type_p (TREE_TYPE (rhs1))

>>> +         || !TYPE_UNSIGNED (TREE_TYPE (rhs1)))

>>>         {

>>>           machine_mode mode = TYPE_MODE (TREE_TYPE (rhs1));

>>>           itype

>>> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c

>>> index 657a8d1..cfa9a97 100644

>>> --- a/gcc/tree-vrp.c

>>> +++ b/gcc/tree-vrp.c

>>> @@ -5245,7 +5245,7 @@ register_edge_assert_for_2 (tree name, edge e,

>>>               && tree_fits_uhwi_p (cst2)

>>>               && INTEGRAL_TYPE_P (TREE_TYPE (name2))

>>>               && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)

>>> -             && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))

>>> +             && full_integral_type_p (TREE_TYPE (val)))

>>>             {

>>>               mask = wi::mask (tree_to_uhwi (cst2), false, prec);

>>>               val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);

>>> diff --git a/gcc/tree.h b/gcc/tree.h

>>> index 46debc1..237f234 100644

>>> --- a/gcc/tree.h

>>> +++ b/gcc/tree.h

>>> @@ -5394,4 +5394,25 @@ struct builtin_structptr_type

>>>    const char *str;

>>>  };

>>>  extern const builtin_structptr_type builtin_structptr_types[6];

>>> +

>>> +/* Return true if the mode underlying scalar type T has the same number

>>> +   of bits as T does.  Examples of when this is false include bitfields

>>> +   that are narrower than the mode that contains them.  */

>>> +

>>> +inline bool

>>> +scalar_type_is_full_p (const_tree t)

>>> +{

>>> +  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)

>>> +         == TYPE_PRECISION (t));

>>> +}

>>> +

>>> +/* Return true if T is an integral type that has the same number of bits

>>> +   as its underlying mode.  */

>>> +

>>> +inline bool

>>> +full_integral_type_p (const_tree t)

>>> +{

>>> +  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);

>>> +}

>>> +

>>>  #endif  /* GCC_TREE_H  */

>>> diff --git a/gcc/ubsan.c b/gcc/ubsan.c

>>> index 49e38fa..40f5f3e 100644

>>> --- a/gcc/ubsan.c

>>> +++ b/gcc/ubsan.c

>>> @@ -1582,9 +1582,8 @@ instrument_si_overflow (gimple_stmt_iterator gsi)

>>>

>>>    /* If this is not a signed operation, don't instrument anything here.

>>>       Also punt on bit-fields.  */

>>> -  if (!INTEGRAL_TYPE_P (lhsinner)

>>> -      || TYPE_OVERFLOW_WRAPS (lhsinner)

>>> -      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))

>>> +  if (!full_integral_type_p (lhsinner)

>>> +      || TYPE_OVERFLOW_WRAPS (lhsinner))

>>>      return;

>>>

>>>    switch (code)
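
The bit-field case mentioned in the comment on the new tree.h helper is
easy to reproduce from C.  A minimal sketch (an illustration only, not
part of the patch; the QImode choice is a target-dependent assumption):

/* The C front end gives the bit-field below an integer type with
   TYPE_PRECISION == 5, while its TYPE_MODE is the smallest containing
   integer mode -- typically QImode, with GET_MODE_PRECISION == 8 --
   so full_integral_type_p would be false for that type.  */
struct narrow
{
  unsigned int field : 5;
};

unsigned int
use_field (struct narrow n)
{
  /* Arithmetic on n.field happens in the 8-bit (or wider) mode and
     then needs truncating back to 5 bits; the new predicate lets
     callers bail out of exactly this case.  */
  return n.field + 1;
}
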
Richard Sandiford Aug. 21, 2017, 9:58 a.m. UTC | #4
Richard Biener <richard.guenther@gmail.com> writes:
> On Fri, Aug 18, 2017 at 1:04 PM, Richard Sandiford
> <richard.sandiford@linaro.org> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> On Fri, Aug 18, 2017 at 10:10 AM, Richard Sandiford
>>> <richard.sandiford@linaro.org> wrote:
>>>> [...]
>>>>
>>>> I'm a bit confused about:
>>>>
>>>> /* Try to fold (type) X op CST -> (type) (X op ((type-x) CST))
>>>>    when profitable.
>>>>    For bitwise binary operations apply operand conversions to the
>>>>    binary operation result instead of to the operands.  This allows
>>>>    to combine successive conversions and bitwise binary operations.
>>>>    We combine the above two cases by using a conditional convert.  */
>>>> (for bitop (bit_and bit_ior bit_xor)
>>>>  (simplify
>>>>   (bitop (convert @0) (convert? @1))
>>>>   (if (((TREE_CODE (@1) == INTEGER_CST
>>>>          && INTEGRAL_TYPE_P (TREE_TYPE (@0))
>>>>          && int_fits_type_p (@1, TREE_TYPE (@0)))
>>>>         || types_match (@0, @1))
>>>>        /* ???  This transform conflicts with fold-const.c doing
>>>>           Convert (T)(x & c) into (T)x & (T)c, if c is an integer
>>>>           constants (if x has signed type, the sign bit cannot be set
>>>>           in c).  This folds extension into the BIT_AND_EXPR.
>>>>           Restrict it to GIMPLE to avoid endless recursions.  */
>>>>        && (bitop != BIT_AND_EXPR || GIMPLE)
>>>>        && (/* That's a good idea if the conversion widens the operand, thus
>>>>               after hoisting the conversion the operation will be narrower.  */
>>>>            TYPE_PRECISION (TREE_TYPE (@0)) < TYPE_PRECISION (type)
>>>>            /* It's also a good idea if the conversion is to a non-integer
>>>>               mode.  */
>>>>            || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
>>>>            /* Or if the precision of TO is not the same as the precision
>>>>               of its mode.  */
>>>>            || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
>>>>    (convert (bitop @0 (convert @1))))))
>>>>
>>>> though.  The "INTEGRAL_TYPE_P (TREE_TYPE (@0))" suggests that we can't
>>>> rely on @0 and @1 being integral (although conversions from float would
>>>> use FLOAT_EXPR), but then what is:
>>>
>>> bit_and is valid on POINTER_TYPE and vector integer types
>>>
>>>>
>>>>            /* It's also a good idea if the conversion is to a non-integer
>>>>               mode.  */
>>>>            || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
>>>>
>>>> letting through?  MODE_PARTIAL_INT maybe, but that's a sort of integer
>>>> mode too.  MODE_COMPLEX_INT or MODE_VECTOR_INT?  I thought for those
>>>> it would be better to apply the scalar rules to the element type.
>>>
>>> I suppose extra caution ;)  I think I have seen BLKmode for not naturally
>>> aligned integer types at least on strict-align targets?  The code is
>>> a copy from original code in tree-ssa-forwprop.c.
>>>
>>>> Either way, having allowed all non-INT modes, using full_integral_type_p
>>>> for the remaining condition seems correct.
>>>>
>>>> If the feeling is that this isn't a useful abstraction, I can just update
>>>> each site individually to cope with variable-sized modes.
>>>
>>> I think "full_integral_type_p" is a name from which I cannot infer
>>> its meaning.  Maybe type_has_mode_precision_p?  Or
>>> type_matches_mode_p?
>>
>> type_has_mode_precision_p sounds good.  With that name I guess it
>> should be written to cope with all types (even those with variable-
>> width modes), so I think we'd need to continue using TYPE_MODE.
>> The VECTOR_MODE_P check should get optimised away in most of
>> the cases touched by the patch though.
>>
>> Would it be OK to move the declaration of vector_type_mode to tree.h
>> so that type_has_mode_precision_p can be inline?
>
> Just move the implementation to tree.c then.

OK, I posted that as: https://gcc.gnu.org/ml/gcc-patches/2017-08/msg01184.html

Here's the patch to add type_has_mode_precision_p.  Tested on
aarch64-linux-gnu and x86_64-linux-gnu, and by diffing the before and
after testsuite assembly for one target per CPU (there were no differences).
OK to install?

Thanks,
Richard


2017-08-21  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
	* tree.h (type_has_mode_precision_p): New function.
	* convert.c (convert_to_integer_1): Use it.
	* expr.c (expand_expr_real_2): Likewise.
	(expand_expr_real_1): Likewise.
	* fold-const.c (fold_single_bit_test_into_sign_test): Likewise.
	* match.pd: Likewise.
	* tree-ssa-forwprop.c (simplify_rotate): Likewise.
	* tree-ssa-math-opts.c (convert_mult_to_fma): Likewise.
	* tree-tailcall.c (process_assignment): Likewise.
	* tree-vect-loop.c (vectorizable_reduction): Likewise.
	* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern)
	(vect_recog_mult_pattern, vect_recog_divmod_pattern): Likewise.
	* tree-vect-stmts.c (vectorizable_conversion): Likewise.
	(vectorizable_assignment): Likewise.
	(vectorizable_shift): Likewise.
	(vectorizable_operation): Likewise.
	* tree-vrp.c (register_edge_assert_for_2): Likewise.

Index: gcc/tree.h
===================================================================
--- gcc/tree.h	2017-08-21 10:52:43.717019857 +0100
+++ gcc/tree.h	2017-08-21 10:55:12.419951940 +0100
@@ -5414,4 +5414,13 @@ struct builtin_structptr_type
   const char *str;
 };
 extern const builtin_structptr_type builtin_structptr_types[6];
+
+/* Return true if type T has the same precision as its underlying mode.  */
+
+inline bool
+type_has_mode_precision_p (const_tree t)
+{
+  return TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t));
+}
+
 #endif  /* GCC_TREE_H  */
Index: gcc/convert.c
===================================================================
--- gcc/convert.c	2017-08-21 10:41:51.158103275 +0100
+++ gcc/convert.c	2017-08-21 10:55:12.412951940 +0100
@@ -711,8 +711,7 @@ convert_to_integer_1 (tree type, tree ex
 	     the signed-to-unsigned case the high-order bits have to
 	     be cleared.  */
 	  if (TYPE_UNSIGNED (type) != TYPE_UNSIGNED (TREE_TYPE (expr))
-	      && (TYPE_PRECISION (TREE_TYPE (expr))
-		  != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (expr)))))
+	      && !type_has_mode_precision_p (TREE_TYPE (expr)))
 	    code = CONVERT_EXPR;
 	  else
 	    code = NOP_EXPR;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2017-08-21 10:41:51.158103275 +0100
+++ gcc/expr.c	2017-08-21 10:55:12.413951940 +0100
@@ -8244,7 +8244,7 @@ #define REDUCE_BIT_FIELD(expr)	(reduce_b
      result to be reduced to the precision of the bit-field type,
      which is narrower than that of the type's mode.  */
   reduce_bit_field = (INTEGRAL_TYPE_P (type)
-		      && GET_MODE_PRECISION (mode) > TYPE_PRECISION (type));
+		      && !type_has_mode_precision_p (type));
 
   if (reduce_bit_field && modifier == EXPAND_STACK_PARM)
     target = 0;
@@ -9097,8 +9097,7 @@ #define REDUCE_BIT_FIELD(expr)	(reduce_b
     case LROTATE_EXPR:
     case RROTATE_EXPR:
       gcc_assert (VECTOR_MODE_P (TYPE_MODE (type))
-		  || (GET_MODE_PRECISION (TYPE_MODE (type))
-		      == TYPE_PRECISION (type)));
+		  || type_has_mode_precision_p (type));
       /* fall through */
 
     case LSHIFT_EXPR:
@@ -9671,7 +9670,7 @@ expand_expr_real_1 (tree exp, rtx target
      which is narrower than that of the type's mode.  */
   reduce_bit_field = (!ignore
 		      && INTEGRAL_TYPE_P (type)
-		      && GET_MODE_PRECISION (mode) > TYPE_PRECISION (type));
+		      && !type_has_mode_precision_p (type));
 
   /* If we are going to ignore this result, we need only do something
      if there is a side-effect somewhere in the expression.  If there
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2017-08-21 10:41:51.265103275 +0100
+++ gcc/fold-const.c	2017-08-21 10:55:12.414951940 +0100
@@ -6638,8 +6638,7 @@ fold_single_bit_test_into_sign_test (loc
       if (arg00 != NULL_TREE
 	  /* This is only a win if casting to a signed type is cheap,
 	     i.e. when arg00's type is not a partial mode.  */
-	  && TYPE_PRECISION (TREE_TYPE (arg00))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (arg00))))
+	  && type_has_mode_precision_p (TREE_TYPE (arg00)))
 	{
 	  tree stype = signed_type_for (TREE_TYPE (arg00));
 	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
Index: gcc/match.pd
===================================================================
--- gcc/match.pd	2017-08-21 10:41:51.265103275 +0100
+++ gcc/match.pd	2017-08-21 10:55:12.414951940 +0100
@@ -992,7 +992,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
 	   /* Or if the precision of TO is not the same as the precision
 	      of its mode.  */
-	   || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
+	   || !type_has_mode_precision_p (type)))
    (convert (bitop @0 (convert @1))))))
 
 (for bitop (bit_and bit_ior)
@@ -1920,8 +1920,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
        if (shift == LSHIFT_EXPR)
 	 zerobits = ((HOST_WIDE_INT_1U << shiftc) - 1);
        else if (shift == RSHIFT_EXPR
-		&& (TYPE_PRECISION (shift_type)
-		    == GET_MODE_PRECISION (TYPE_MODE (shift_type))))
+		&& type_has_mode_precision_p (shift_type))
 	 {
 	   prec = TYPE_PRECISION (TREE_TYPE (@3));
 	   tree arg00 = @0;
@@ -1931,8 +1930,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 	       && TYPE_UNSIGNED (TREE_TYPE (@0)))
 	     {
 	       tree inner_type = TREE_TYPE (@0);
-	       if ((TYPE_PRECISION (inner_type)
-		    == GET_MODE_PRECISION (TYPE_MODE (inner_type)))
+	       if (type_has_mode_precision_p (inner_type)
 		   && TYPE_PRECISION (inner_type) < prec)
 		 {
 		   prec = TYPE_PRECISION (inner_type);
@@ -3226,8 +3224,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
  (simplify
   (cmp (bit_and (convert?@2 @0) integer_pow2p@1) integer_zerop)
   (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
+       && type_has_mode_precision_p (TREE_TYPE (@0))
        && element_precision (@2) >= element_precision (@0)
        && wi::only_sign_bit_p (@1, element_precision (@0)))
    (with { tree stype = signed_type_for (TREE_TYPE (@0)); }
@@ -4029,11 +4026,9 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 	 /* The precision of the type of each operand must match the
 	    precision of the mode of each operand, similarly for the
 	    result.  */
-	 && (TYPE_PRECISION (TREE_TYPE (@0))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-	 && (TYPE_PRECISION (TREE_TYPE (@1))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-	 && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+	 && type_has_mode_precision_p (TREE_TYPE (@0))
+	 && type_has_mode_precision_p (TREE_TYPE (@1))
+	 && type_has_mode_precision_p (type)
 	 /* The inner conversion must be a widening conversion.  */
 	 && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
 	 && types_match (@0, type)
@@ -4063,11 +4058,9 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
        /* The precision of the type of each operand must match the
 	  precision of the mode of each operand, similarly for the
 	  result.  */
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-       && (TYPE_PRECISION (TREE_TYPE (@1))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-       && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+       && type_has_mode_precision_p (TREE_TYPE (@0))
+       && type_has_mode_precision_p (TREE_TYPE (@1))
+       && type_has_mode_precision_p (type)
        /* The inner conversion must be a widening conversion.  */
        && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
        && types_match (@0, @1)
Index: gcc/tree-ssa-forwprop.c
===================================================================
--- gcc/tree-ssa-forwprop.c	2017-08-21 10:41:51.265103275 +0100
+++ gcc/tree-ssa-forwprop.c	2017-08-21 10:55:12.415951940 +0100
@@ -1529,7 +1529,7 @@ simplify_rotate (gimple_stmt_iterator *g
   /* Only create rotates in complete modes.  Other cases are not
      expanded properly.  */
   if (!INTEGRAL_TYPE_P (rtype)
-      || TYPE_PRECISION (rtype) != GET_MODE_PRECISION (TYPE_MODE (rtype)))
+      || !type_has_mode_precision_p (rtype))
     return false;
 
   for (i = 0; i < 2; i++)
@@ -1609,8 +1609,7 @@ simplify_rotate (gimple_stmt_iterator *g
 	      && INTEGRAL_TYPE_P (TREE_TYPE (cdef_arg1[i]))
 	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
 		 > floor_log2 (TYPE_PRECISION (rtype))
-	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (cdef_arg1[i]))))
+	      && type_has_mode_precision_p (TREE_TYPE (cdef_arg1[i])))
 	    {
 	      def_arg2_alt[i] = cdef_arg1[i];
 	      defcodefor_name (def_arg2_alt[i], &cdef_code[i],
@@ -1639,8 +1638,7 @@ simplify_rotate (gimple_stmt_iterator *g
 		&& INTEGRAL_TYPE_P (TREE_TYPE (tem))
 		&& TYPE_PRECISION (TREE_TYPE (tem))
 		 > floor_log2 (TYPE_PRECISION (rtype))
-		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
+		&& type_has_mode_precision_p (TREE_TYPE (tem))
 		&& (tem == def_arg2[1 - i]
 		    || tem == def_arg2_alt[1 - i]))
 	      {
@@ -1667,8 +1665,7 @@ simplify_rotate (gimple_stmt_iterator *g
 		&& INTEGRAL_TYPE_P (TREE_TYPE (tem))
 		&& TYPE_PRECISION (TREE_TYPE (tem))
 		 > floor_log2 (TYPE_PRECISION (rtype))
-		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem))))
+		&& type_has_mode_precision_p (TREE_TYPE (tem)))
 	      defcodefor_name (tem, &code, &tem, NULL);
 
 	    if (code == NEGATE_EXPR)
@@ -1683,8 +1680,7 @@ simplify_rotate (gimple_stmt_iterator *g
 		    && INTEGRAL_TYPE_P (TREE_TYPE (tem))
 		    && TYPE_PRECISION (TREE_TYPE (tem))
 		       > floor_log2 (TYPE_PRECISION (rtype))
-		    && TYPE_PRECISION (TREE_TYPE (tem))
-		       == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
+		    && type_has_mode_precision_p (TREE_TYPE (tem))
 		    && (tem == def_arg2[1 - i]
 			|| tem == def_arg2_alt[1 - i]))
 		  {
Index: gcc/tree-ssa-math-opts.c
===================================================================
--- gcc/tree-ssa-math-opts.c	2017-08-21 10:41:51.158103275 +0100
+++ gcc/tree-ssa-math-opts.c	2017-08-21 10:55:12.415951940 +0100
@@ -3564,8 +3564,7 @@ convert_mult_to_fma (gimple *mul_stmt, t
 
   /* We don't want to do bitfield reduction ops.  */
   if (INTEGRAL_TYPE_P (type)
-      && (TYPE_PRECISION (type)
-	  != GET_MODE_PRECISION (TYPE_MODE (type))))
+      && !type_has_mode_precision_p (type))
     return false;
 
   /* If the target doesn't support it, don't generate it.  We assume that
Index: gcc/tree-tailcall.c
===================================================================
--- gcc/tree-tailcall.c	2017-08-21 10:41:51.158103275 +0100
+++ gcc/tree-tailcall.c	2017-08-21 10:55:12.415951940 +0100
@@ -289,8 +289,7 @@ process_assignment (gassign *stmt,
 	     type is smaller than mode's precision,
 	     reduce_to_bit_field_precision would generate additional code.  */
 	  if (INTEGRAL_TYPE_P (TREE_TYPE (dest))
-	      && (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (dest)))
-		  > TYPE_PRECISION (TREE_TYPE (dest))))
+	      && !type_has_mode_precision_p (TREE_TYPE (dest)))
 	    return FAIL;
 	}
 
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2017-08-21 10:41:51.158103275 +0100
+++ gcc/tree-vect-loop.c	2017-08-21 10:55:12.416951940 +0100
@@ -5848,8 +5848,7 @@ vectorizable_reduction (gimple *stmt, gi
     return false;
 
   /* Do not try to vectorize bit-precision reductions.  */
-  if ((TYPE_PRECISION (scalar_type)
-       != GET_MODE_PRECISION (TYPE_MODE (scalar_type))))
+  if (!type_has_mode_precision_p (scalar_type))
     return false;
 
   /* All uses but the last are expected to be defined in the loop.
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2017-08-21 10:41:51.265103275 +0100
+++ gcc/tree-vect-patterns.c	2017-08-21 10:55:12.416951940 +0100
@@ -2067,8 +2067,7 @@ vect_recog_vector_vector_shift_pattern (
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != SSA_NAME
       || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))
-      || TYPE_PRECISION (TREE_TYPE (oprnd1))
-	 != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))
+      || !type_has_mode_precision_p (TREE_TYPE (oprnd1))
       || TYPE_PRECISION (TREE_TYPE (lhs))
 	 != TYPE_PRECISION (TREE_TYPE (oprnd0)))
     return NULL;
@@ -2470,7 +2469,7 @@ vect_recog_mult_pattern (vec<gimple *> *
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
       || !INTEGRAL_TYPE_P (itype)
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !type_has_mode_precision_p (itype))
     return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
@@ -2585,7 +2584,7 @@ vect_recog_divmod_pattern (vec<gimple *>
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
       || TREE_CODE (itype) != INTEGER_TYPE
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !type_has_mode_precision_p (itype))
     return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2017-08-21 10:42:51.088530428 +0100
+++ gcc/tree-vect-stmts.c	2017-08-21 10:55:12.417951940 +0100
@@ -4098,11 +4098,9 @@ vectorizable_conversion (gimple *stmt, g
 
   if (!VECTOR_BOOLEAN_TYPE_P (vectype_out)
       && ((INTEGRAL_TYPE_P (lhs_type)
-	   && (TYPE_PRECISION (lhs_type)
-	       != GET_MODE_PRECISION (TYPE_MODE (lhs_type))))
+	   && !type_has_mode_precision_p (lhs_type))
 	  || (INTEGRAL_TYPE_P (rhs_type)
-	      && (TYPE_PRECISION (rhs_type)
-		  != GET_MODE_PRECISION (TYPE_MODE (rhs_type))))))
+	      && !type_has_mode_precision_p (rhs_type))))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -4696,10 +4694,8 @@ vectorizable_assignment (gimple *stmt, g
   if ((CONVERT_EXPR_CODE_P (code)
        || code == VIEW_CONVERT_EXPR)
       && INTEGRAL_TYPE_P (TREE_TYPE (scalar_dest))
-      && ((TYPE_PRECISION (TREE_TYPE (scalar_dest))
-	   != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (scalar_dest))))
-	  || ((TYPE_PRECISION (TREE_TYPE (op))
-	       != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (op))))))
+      && (!type_has_mode_precision_p (TREE_TYPE (scalar_dest))
+	  || !type_has_mode_precision_p (TREE_TYPE (op)))
       /* But a conversion that does not change the bit-pattern is ok.  */
       && !((TYPE_PRECISION (TREE_TYPE (scalar_dest))
 	    > TYPE_PRECISION (TREE_TYPE (op)))
@@ -4875,8 +4871,7 @@ vectorizable_shift (gimple *stmt, gimple
 
   scalar_dest = gimple_assign_lhs (stmt);
   vectype_out = STMT_VINFO_VECTYPE (stmt_info);
-  if (TYPE_PRECISION (TREE_TYPE (scalar_dest))
-      != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (scalar_dest))))
+  if (!type_has_mode_precision_p (TREE_TYPE (scalar_dest)))
     {
       if (dump_enabled_p ())
         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -5264,8 +5259,7 @@ vectorizable_operation (gimple *stmt, gi
   /* Most operations cannot handle bit-precision types without extra
      truncations.  */
   if (!VECTOR_BOOLEAN_TYPE_P (vectype_out)
-      && (TYPE_PRECISION (TREE_TYPE (scalar_dest))
-	  != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (scalar_dest))))
+      && !type_has_mode_precision_p (TREE_TYPE (scalar_dest))
       /* Exception are bitwise binary operations.  */
       && code != BIT_IOR_EXPR
       && code != BIT_XOR_EXPR
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	2017-08-21 10:41:51.265103275 +0100
+++ gcc/tree-vrp.c	2017-08-21 10:55:12.418951940 +0100
@@ -5247,7 +5247,7 @@ register_edge_assert_for_2 (tree name, e
 	      && tree_fits_uhwi_p (cst2)
 	      && INTEGRAL_TYPE_P (TREE_TYPE (name2))
 	      && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
-	      && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))
+	      && type_has_mode_precision_p (TREE_TYPE (val)))
 	    {
 	      mask = wi::mask (tree_to_uhwi (cst2), false, prec);
 	      val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
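
One subtlety worth noting about the expr.c and tree-tailcall.c hunks:
they replace one-sided tests ("mode precision greater than type
precision") with the two-sided predicate.  That is equivalent under the
invariant -- implicit in the patch -- that an integral type is never
wider than its mode.  A sketch of the reasoning (an illustration only;
reduce_bit_field_needed is a hypothetical name, not something the patch
adds):

/* Illustration only.  For any INTEGRAL_TYPE_P type,
   TYPE_PRECISION (t) <= GET_MODE_PRECISION (TYPE_MODE (t)), so
   "precision differs from the mode's" and "mode is wider" coincide.  */
static inline bool
reduce_bit_field_needed (const_tree type)
{
  gcc_checking_assert (TYPE_PRECISION (type)
		       <= GET_MODE_PRECISION (TYPE_MODE (type)));
  return !type_has_mode_precision_p (type);	/* same as '<' here.  */
}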

Richard Biener Aug. 21, 2017, 12:55 p.m. UTC | #5
On Mon, Aug 21, 2017 at 11:58 AM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> [...]
>
> Here's the patch to add type_has_mode_precision_p.  Tested on
> aarch64-linux-gnu and x86_64-linux-gnu, and by diffing the before and
> after testsuite assembly for one target per CPU (there were no differences).
> OK to install?

Ok.

Richard.


Patch

diff --git a/gcc/c-family/c-ubsan.c b/gcc/c-family/c-ubsan.c
index b1386db..20f78e7 100644
--- a/gcc/c-family/c-ubsan.c
+++ b/gcc/c-family/c-ubsan.c
@@ -131,8 +131,8 @@  ubsan_instrument_shift (location_t loc, enum tree_code code,
 
   /* If this is not a signed operation, don't perform overflow checks.
      Also punt on bit-fields.  */
-  if (TYPE_OVERFLOW_WRAPS (type0)
-      || GET_MODE_BITSIZE (TYPE_MODE (type0)) != TYPE_PRECISION (type0)
+  if (!full_integral_type_p (type0)
+      || TYPE_OVERFLOW_WRAPS (type0)
       || !sanitize_flags_p (SANITIZE_SHIFT_BASE))
     ;
 
diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index 0a5b168..1985a14 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -6672,8 +6672,7 @@  fold_single_bit_test_into_sign_test (location_t loc,
       if (arg00 != NULL_TREE
 	  /* This is only a win if casting to a signed type is cheap,
 	     i.e. when arg00's type is not a partial mode.  */
-	  && TYPE_PRECISION (TREE_TYPE (arg00))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (arg00))))
+	  && full_integral_type_p (TREE_TYPE (arg00)))
 	{
 	  tree stype = signed_type_for (TREE_TYPE (arg00));
 	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
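
[For reference, the equivalence fold_single_bit_test_into_sign_test relies on, as a standalone illustration (not part of the patch), assuming a 32-bit int whose precision fills its mode:

#include <assert.h>
#include <stdint.h>

int
main (void)
{
  uint32_t x = 0x80000001u;
  /* Testing the sign bit is the same as a signed "< 0" comparison,
     provided the type really occupies all of its mode's bits.  */
  assert (((x & 0x80000000u) != 0) == ((int32_t) x < 0));
  return 0;
}
]
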
diff --git a/gcc/match.pd b/gcc/match.pd
index 0e36f46..9ad9930 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -992,7 +992,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
 	   /* Or if the precision of TO is not the same as the precision
 	      of its mode.  */
-	   || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
+	   || !full_integral_type_p (type)))
    (convert (bitop @0 (convert @1))))))
 
 (for bitop (bit_and bit_ior)
@@ -1920,8 +1920,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
        if (shift == LSHIFT_EXPR)
 	 zerobits = ((HOST_WIDE_INT_1U << shiftc) - 1);
        else if (shift == RSHIFT_EXPR
-		&& (TYPE_PRECISION (shift_type)
-		    == GET_MODE_PRECISION (TYPE_MODE (shift_type))))
+		&& full_integral_type_p (shift_type))
 	 {
 	   prec = TYPE_PRECISION (TREE_TYPE (@3));
 	   tree arg00 = @0;
@@ -1931,8 +1930,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 	       && TYPE_UNSIGNED (TREE_TYPE (@0)))
 	     {
 	       tree inner_type = TREE_TYPE (@0);
-	       if ((TYPE_PRECISION (inner_type)
-		    == GET_MODE_PRECISION (TYPE_MODE (inner_type)))
+	       if (full_integral_type_p (inner_type)
 		   && TYPE_PRECISION (inner_type) < prec)
 		 {
 		   prec = TYPE_PRECISION (inner_type);
@@ -3225,9 +3223,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
      ncmp (ge lt)
  (simplify
   (cmp (bit_and (convert?@2 @0) integer_pow2p@1) integer_zerop)
-  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
+  (if (full_integral_type_p (TREE_TYPE (@0))
        && element_precision (@2) >= element_precision (@0)
        && wi::only_sign_bit_p (@1, element_precision (@0)))
    (with { tree stype = signed_type_for (TREE_TYPE (@0)); }
@@ -4021,19 +4017,13 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (for op (plus minus)
   (simplify
     (convert (op:s (convert@2 @0) (convert?@3 @1)))
-    (if (INTEGRAL_TYPE_P (type)
-	 /* We check for type compatibility between @0 and @1 below,
-	    so there's no need to check that @1/@3 are integral types.  */
-	 && INTEGRAL_TYPE_P (TREE_TYPE (@0))
-	 && INTEGRAL_TYPE_P (TREE_TYPE (@2))
+    (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
 	 /* The precision of the type of each operand must match the
 	    precision of the mode of each operand, similarly for the
 	    result.  */
-	 && (TYPE_PRECISION (TREE_TYPE (@0))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-	 && (TYPE_PRECISION (TREE_TYPE (@1))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-	 && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+	 && full_integral_type_p (TREE_TYPE (@0))
+	 && full_integral_type_p (TREE_TYPE (@1))
+	 && full_integral_type_p (type)
 	 /* The inner conversion must be a widening conversion.  */
 	 && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
 	 && types_match (@0, type)
@@ -4055,19 +4045,13 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (for op (minus plus)
  (simplify
   (bit_and (op:s (convert@2 @0) (convert@3 @1)) INTEGER_CST@4)
-  (if (INTEGRAL_TYPE_P (type)
-       /* We check for type compatibility between @0 and @1 below,
-	  so there's no need to check that @1/@3 are integral types.  */
-       && INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && INTEGRAL_TYPE_P (TREE_TYPE (@2))
+  (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
        /* The precision of the type of each operand must match the
 	  precision of the mode of each operand, similarly for the
 	  result.  */
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-       && (TYPE_PRECISION (TREE_TYPE (@1))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-       && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+       && full_integral_type_p (TREE_TYPE (@0))
+       && full_integral_type_p (TREE_TYPE (@1))
+       && full_integral_type_p (type)
        /* The inner conversion must be a widening conversion.  */
        && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
        && types_match (@0, @1)
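
[The two plus/minus hunks above guard the narrowing of widened arithmetic.  A rough source-level illustration of the transform, assuming 32-bit int and 64-bit long (both with full mode precision) and GCC's wrapping behaviour for out-of-range conversions:

#include <assert.h>

int
main (void)
{
  int a = 0x7fffffff, b = 1;
  /* (int) ((long) a + (long) b) can be computed directly in the
     narrow type: both routes yield the same wrapped bit-pattern.  */
  long wide = (long) a + (long) b;
  unsigned narrow = (unsigned) a + (unsigned) b;
  assert ((int) wide == (int) narrow);
  return 0;
}
]
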
diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
index 5719b99..20d5c86 100644
--- a/gcc/tree-ssa-forwprop.c
+++ b/gcc/tree-ssa-forwprop.c
@@ -1528,8 +1528,7 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 
   /* Only create rotates in complete modes.  Other cases are not
      expanded properly.  */
-  if (!INTEGRAL_TYPE_P (rtype)
-      || TYPE_PRECISION (rtype) != GET_MODE_PRECISION (TYPE_MODE (rtype)))
+  if (!full_integral_type_p (rtype))
     return false;
 
   for (i = 0; i < 2; i++)
@@ -1606,11 +1605,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 	  defcodefor_name (def_arg2[i], &cdef_code[i],
 			   &cdef_arg1[i], &cdef_arg2[i]);
 	  if (CONVERT_EXPR_CODE_P (cdef_code[i])
-	      && INTEGRAL_TYPE_P (TREE_TYPE (cdef_arg1[i]))
+	      && full_integral_type_p (TREE_TYPE (cdef_arg1[i]))
 	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
-		 > floor_log2 (TYPE_PRECISION (rtype))
-	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (cdef_arg1[i]))))
+		 > floor_log2 (TYPE_PRECISION (rtype)))
 	    {
 	      def_arg2_alt[i] = cdef_arg1[i];
 	      defcodefor_name (def_arg2_alt[i], &cdef_code[i],
@@ -1636,11 +1633,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 	      }
 	    defcodefor_name (cdef_arg2[i], &code, &tem, NULL);
 	    if (CONVERT_EXPR_CODE_P (code)
-		&& INTEGRAL_TYPE_P (TREE_TYPE (tem))
+		&& full_integral_type_p (TREE_TYPE (tem))
 		&& TYPE_PRECISION (TREE_TYPE (tem))
 		 > floor_log2 (TYPE_PRECISION (rtype))
-		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
 		&& (tem == def_arg2[1 - i]
 		    || tem == def_arg2_alt[1 - i]))
 	      {
@@ -1664,11 +1659,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 
 	    defcodefor_name (cdef_arg1[i], &code, &tem, NULL);
 	    if (CONVERT_EXPR_CODE_P (code)
-		&& INTEGRAL_TYPE_P (TREE_TYPE (tem))
+		&& full_integral_type_p (TREE_TYPE (tem))
 		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 > floor_log2 (TYPE_PRECISION (rtype))
-		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem))))
+		 > floor_log2 (TYPE_PRECISION (rtype)))
 	      defcodefor_name (tem, &code, &tem, NULL);
 
 	    if (code == NEGATE_EXPR)
@@ -1680,11 +1673,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 		  }
 		defcodefor_name (tem, &code, &tem, NULL);
 		if (CONVERT_EXPR_CODE_P (code)
-		    && INTEGRAL_TYPE_P (TREE_TYPE (tem))
+		    && full_integral_type_p (TREE_TYPE (tem))
 		    && TYPE_PRECISION (TREE_TYPE (tem))
 		       > floor_log2 (TYPE_PRECISION (rtype))
-		    && TYPE_PRECISION (TREE_TYPE (tem))
-		       == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
 		    && (tem == def_arg2[1 - i]
 			|| tem == def_arg2_alt[1 - i]))
 		  {
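
[For context on the simplify_rotate hunks: the idiom being matched is the usual shift-pair rotate, which is only safe to map to a hardware rotate when the type's precision equals its mode's.  Illustrative only:

#include <stdint.h>

/* The canonical rotate-left idiom simplify_rotate recognizes.  */
static inline uint32_t
rotl32 (uint32_t x, unsigned int n)
{
  return (x << (n & 31)) | (x >> ((32 - n) & 31));
}
]
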
diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c
index 17d1083..a96f784 100644
--- a/gcc/tree-vect-patterns.c
+++ b/gcc/tree-vect-patterns.c
@@ -2067,8 +2067,7 @@  vect_recog_vector_vector_shift_pattern (vec<gimple *> *stmts,
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != SSA_NAME
       || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))
-      || TYPE_PRECISION (TREE_TYPE (oprnd1))
-	 != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))
+      || !full_integral_type_p (TREE_TYPE (oprnd1))
       || TYPE_PRECISION (TREE_TYPE (lhs))
 	 != TYPE_PRECISION (TREE_TYPE (oprnd0)))
     return NULL;
@@ -2469,8 +2468,7 @@  vect_recog_mult_pattern (vec<gimple *> *stmts,
 
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
-      || !INTEGRAL_TYPE_P (itype)
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !full_integral_type_p (itype))
     return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
@@ -2584,8 +2582,7 @@  vect_recog_divmod_pattern (vec<gimple *> *stmts,
   itype = TREE_TYPE (oprnd0);
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
-      || TREE_CODE (itype) != INTEGER_TYPE
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !full_integral_type_p (itype))
     return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
@@ -3385,9 +3382,8 @@  adjust_bool_pattern (tree var, tree out_type,
     do_compare:
       gcc_assert (TREE_CODE_CLASS (rhs_code) == tcc_comparison);
       if (TREE_CODE (TREE_TYPE (rhs1)) != INTEGER_TYPE
-	  || !TYPE_UNSIGNED (TREE_TYPE (rhs1))
-	  || (TYPE_PRECISION (TREE_TYPE (rhs1))
-	      != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1)))))
+	  || !full_integral_type_p (TREE_TYPE (rhs1))
+	  || !TYPE_UNSIGNED (TREE_TYPE (rhs1)))
 	{
 	  machine_mode mode = TYPE_MODE (TREE_TYPE (rhs1));
 	  itype
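
[On the divmod hunk: the pattern rewrites division by a power of two into shifts, which presumes the operand occupies all of its mode's bits.  A scalar illustration (not part of the patch):

#include <assert.h>
#include <stdint.h>

int
main (void)
{
  uint16_t x = 0xbeef;
  /* For unsigned full-precision types, dividing by 16 is a shift.  */
  assert (x / 16 == x >> 4);
  return 0;
}
]
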
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 657a8d1..cfa9a97 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -5245,7 +5245,7 @@  register_edge_assert_for_2 (tree name, edge e,
 	      && tree_fits_uhwi_p (cst2)
 	      && INTEGRAL_TYPE_P (TREE_TYPE (name2))
 	      && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
-	      && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))
+	      && full_integral_type_p (TREE_TYPE (val)))
 	    {
 	      mask = wi::mask (tree_to_uhwi (cst2), false, prec);
 	      val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
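
[The tree-vrp.c hunk derives a mask assertion from a right shift: informally, knowing (x >> 3) == 5 pins x to [40, 47] when the shift uses the type's full precision.  A quick sanity check of that reasoning, illustrative only:

#include <assert.h>
#include <stdint.h>

int
main (void)
{
  for (uint32_t x = 0; x < 100; x++)
    if ((x >> 3) == 5)
      assert ((x & ~7u) == (5u << 3) && x >= 40 && x <= 47);
  return 0;
}
]
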
diff --git a/gcc/tree.h b/gcc/tree.h
index 46debc1..237f234 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -5394,4 +5394,25 @@  struct builtin_structptr_type
   const char *str;
 };
 extern const builtin_structptr_type builtin_structptr_types[6];
+
+/* Return true if the mode underlying scalar type T has the same number
+   of bits as T does.  Examples of when this is false include bitfields
+   that are narrower than the mode that contains them.  */
+
+inline bool
+scalar_type_is_full_p (const_tree t)
+{
+  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)
+	  == TYPE_PRECISION (t));
+}
+
+/* Return true if T is an integral type that has the same number of bits
+   as its underlying mode.  */
+
+inline bool
+full_integral_type_p (const_tree t)
+{
+  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
+}
+
 #endif  /* GCC_TREE_H  */
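
[To make the semantics of the two new predicates concrete, here is a sketch in GCC-internal terms (illustrative only; it assumes the usual tree globals and would only compile inside GCC itself):

/* build_nonstandard_integer_type (3, 1) yields an unsigned type with
   TYPE_PRECISION 3 carried in QImode (8 bits), so it fails the check;
   integer_type_node fills its mode and passes.  */
static void
check_full_integral_type_p (void)
{
  tree narrow = build_nonstandard_integer_type (3, 1);
  gcc_checking_assert (full_integral_type_p (integer_type_node));
  gcc_checking_assert (scalar_type_is_full_p (integer_type_node));
  gcc_checking_assert (!full_integral_type_p (narrow));
}
]
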
diff --git a/gcc/ubsan.c b/gcc/ubsan.c
index 49e38fa..40f5f3e 100644
--- a/gcc/ubsan.c
+++ b/gcc/ubsan.c
@@ -1582,9 +1582,8 @@  instrument_si_overflow (gimple_stmt_iterator gsi)
 
   /* If this is not a signed operation, don't instrument anything here.
      Also punt on bit-fields.  */
-  if (!INTEGRAL_TYPE_P (lhsinner)
-      || TYPE_OVERFLOW_WRAPS (lhsinner)
-      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))
+  if (!full_integral_type_p (lhsinner)
+      || TYPE_OVERFLOW_WRAPS (lhsinner))
     return;
 
   switch (code)
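
[Finally, on why instrument_si_overflow (and ubsan_instrument_shift above) punt on bit-fields: a field narrower than its mode overflows at its own precision, so a mode-based check would fire at the wrong point.  A standalone illustration, relying on GCC's implementation-defined wrapping on assignment:

#include <stdio.h>

struct s { signed int f : 3; };	/* range [-4, 3], stored in a wider mode */

int
main (void)
{
  struct s v = { 3 };
  v.f = v.f + 1;		/* wraps to -4 at 3-bit precision */
  printf ("%d\n", v.f);		/* prints -4 with GCC */
  return 0;
}
]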