[103/nnn] poly_int: TYPE_VECTOR_SUBPARTS

Message ID 87fua9damy.fsf@linaro.org
State New
Series [103/nnn] poly_int: TYPE_VECTOR_SUBPARTS

Commit Message

Richard Sandiford Oct. 23, 2017, 5:41 p.m. UTC
This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64.  The value is
encoded in the 10-bit precision field and was previously always stored
as a simple log2 value.  The challenge was to use these 10 bits to
encode the number of elements in variable-length vectors, so that
we didn't need to increase the size of the tree.

In practice the number of vector elements should always have the form
N + N * X (where X is the runtime value), and as for constant-length
vectors, N must be a power of 2 (even though X itself might not be).
The patch therefore uses the low bit to select between constant-length
and variable-length and uses the upper 9 bits to encode log2(N).
Targets without variable-length vectors continue to use the old scheme.
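
As a concrete illustration, here is a minimal standalone sketch of the
encoding just described, mirroring the new TYPE_VECTOR_SUBPARTS and
SET_TYPE_VECTOR_SUBPARTS logic for a target with NUM_POLY_INT_COEFFS == 2
(the struct and function names below are illustrative, not part of the patch):

  /* N + N * X is represented as {coeff0 = N, coeff1 = N or 0}.  */
  struct subparts { unsigned long coeff0, coeff1; };

  /* Pack into the 10-bit precision field: the upper 9 bits hold log2 (N)
     and the low bit says whether the length is variable.  */
  static unsigned int
  encode_subparts (struct subparts s)
  {
    /* coeff0 is N, a nonzero power of 2, so ctz gives log2 (N).  */
    unsigned int log2n = __builtin_ctzl (s.coeff0);
    return log2n * 2 + (s.coeff1 != 0);
  }

  /* Unpack again, as TYPE_VECTOR_SUBPARTS does.  */
  static struct subparts
  decode_subparts (unsigned int precision)
  {
    struct subparts s;
    s.coeff0 = 1UL << (precision / 2);
    s.coeff1 = (precision & 1) ? s.coeff0 : 0;
    return s;
  }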

A new valid_vector_subparts_p function tests whether a given number
of elements can be encoded.  This is false for the vector modes that
represent an LD3 or ST3 vector triple (which we want to treat as arrays
of vectors rather than single vectors).
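
For example (restating valid_vector_subparts_p under the same two-coefficient
assumption; the concrete element counts assume an SVE-like 128-bit granule and
are only for illustration):

  #include <stdbool.h>

  static bool
  encodable_subparts_p (unsigned long coeff0, unsigned long coeff1)
  {
    bool pow2_p = coeff0 != 0 && (coeff0 & (coeff0 - 1)) == 0;
    return pow2_p && (coeff1 == 0 || coeff1 == coeff0);
  }

  /* encodable_subparts_p (16, 0) -> true   (fixed-length, e.g. V16QI)
     encodable_subparts_p (2, 2)  -> true   (2 + 2X doubles)
     encodable_subparts_p (6, 6)  -> false  (LD3/ST3 triple of the above)  */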

Most of the patch is mechanical; previous patches handled the changes
that weren't entirely straightforward.


2017-10-23  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree.h (TYPE_VECTOR_SUBPARTS): Turn into a function and handle
	polynomial numbers of units.
	(SET_TYPE_VECTOR_SUBPARTS): Likewise.
	(valid_vector_subparts_p): New function.
	(build_vector_type): Remove temporary shim and take the number
	of units as a poly_uint64 rather than an int.
	(build_opaque_vector_type): Take the number of units as a
	poly_uint64 rather than an int.
	* tree.c (build_vector): Handle polynomial TYPE_VECTOR_SUBPARTS.
	(build_vector_from_ctor, type_hash_canon_hash): Likewise.
	(type_cache_hasher::equal, uniform_vector_p): Likewise.
	(vector_type_mode): Likewise.
	(build_vector_from_val): If the number of units isn't constant,
	use build_vec_duplicate_cst for constant operands and
	VEC_DUPLICATE_EXPR otherwise.
	(make_vector_type): Remove temporary is_constant ().
	(build_vector_type, build_opaque_vector_type): Take the number of
	units as a poly_uint64 rather than an int.
	* cfgexpand.c (expand_debug_expr): Handle polynomial
	TYPE_VECTOR_SUBPARTS.
	* expr.c (count_type_elements, store_constructor): Likewise.
	* fold-const.c (const_binop, const_unop, fold_convert_const)
	(operand_equal_p, fold_view_convert_expr, fold_vec_perm)
	(fold_ternary_loc, fold_relational_const): Likewise.
	(native_interpret_vector): Likewise.  Change the size from an
	int to an unsigned int.
	* gimple-fold.c (gimple_fold_stmt_to_constant_1): Handle polynomial
	TYPE_VECTOR_SUBPARTS.
	(gimple_fold_indirect_ref, gimple_build_vector): Likewise.
	(gimple_build_vector_from_val): Use VEC_DUPLICATE_EXPR when
	duplicating a non-constant operand into a variable-length vector.
	* match.pd: Handle polynomial TYPE_VECTOR_SUBPARTS.
	* omp-simd-clone.c (simd_clone_subparts): Likewise.
	* print-tree.c (print_node): Likewise.
	* stor-layout.c (layout_type): Likewise.
	* targhooks.c (default_builtin_vectorization_cost): Likewise.
	* tree-cfg.c (verify_gimple_comparison): Likewise.
	(verify_gimple_assign_binary): Likewise.
	(verify_gimple_assign_ternary): Likewise.
	(verify_gimple_assign_single): Likewise.
	* tree-ssa-forwprop.c (simplify_vector_constructor): Likewise.
	* tree-vect-data-refs.c (vect_permute_store_chain): Likewise.
	(vect_grouped_load_supported, vect_permute_load_chain): Likewise.
	(vect_shift_permute_load_chain): Likewise.
	* tree-vect-generic.c (nunits_for_known_piecewise_op): Likewise.
	(expand_vector_condition, optimize_vector_constructor): Likewise.
	(lower_vec_perm, get_compute_type): Likewise.
	* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
	(get_initial_defs_for_reduction, vect_transform_loop): Likewise.
	* tree-vect-patterns.c (vect_recog_bool_pattern): Likewise.
	(vect_recog_mask_conversion_pattern): Likewise.
	* tree-vect-slp.c (vect_supported_load_permutation_p): Likewise.
	(vect_get_constant_vectors, vect_transform_slp_perm_load): Likewise.
	* tree-vect-stmts.c (perm_mask_for_reverse): Likewise.
	(get_group_load_store_type, vectorizable_mask_load_store): Likewise.
	(vectorizable_bswap, simd_clone_subparts, vectorizable_assignment)
	(vectorizable_shift, vectorizable_operation, vectorizable_store)
	(vect_gen_perm_mask_any, vectorizable_load, vect_is_simple_cond)
	(vectorizable_comparison, supportable_widening_operation): Likewise.
	(supportable_narrowing_operation): Likewise.

gcc/ada/
	* gcc-interface/utils.c (gnat_types_compatible_p): Handle
	polynomial TYPE_VECTOR_SUBPARTS.

gcc/brig/
	* brigfrontend/brig-to-generic.cc (get_unsigned_int_type): Handle
	polynomial TYPE_VECTOR_SUBPARTS.
	* brigfrontend/brig-util.h (gccbrig_type_vector_subparts): Likewise.

gcc/c-family/
	* c-common.c (vector_types_convertible_p, c_build_vec_perm_expr)
	(convert_vector_to_array_for_subscript): Handle polynomial
	TYPE_VECTOR_SUBPARTS.
	(c_common_type_for_mode): Check valid_vector_subparts_p.

gcc/c/
	* c-typeck.c (comptypes_internal, build_binary_op): Handle polynomial
	TYPE_VECTOR_SUBPARTS.

gcc/cp/
	* call.c (build_conditional_expr_1): Handle polynomial
	TYPE_VECTOR_SUBPARTS.
	* constexpr.c (cxx_fold_indirect_ref): Likewise.
	* decl.c (cp_finish_decomp): Likewise.
	* mangle.c (write_type): Likewise.
	* typeck.c (structural_comptypes): Likewise.
	(cp_build_binary_op): Likewise.
	* typeck2.c (process_init_constructor_array): Likewise.

gcc/fortran/
	* trans-types.c (gfc_type_for_mode): Check valid_vector_subparts_p.

gcc/lto/
	* lto-lang.c (lto_type_for_mode): Check valid_vector_subparts_p.
	* lto.c (hash_canonical_type): Handle polynomial TYPE_VECTOR_SUBPARTS.

gcc/go/
	* go-lang.c (go_langhook_type_for_mode): Check valid_vector_subparts_p.

Comments

Richard Biener Oct. 24, 2017, 8:49 a.m. UTC | #1
On Mon, Oct 23, 2017 at 7:41 PM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64.  The value is
> encoded in the 10-bit precision field and was previously always stored
> as a simple log2 value.  The challenge was to use these 10 bits to
> encode the number of elements in variable-length vectors, so that
> we didn't need to increase the size of the tree.
>
> In practice the number of vector elements should always have the form
> N + N * X (where X is the runtime value), and as for constant-length
> vectors, N must be a power of 2 (even though X itself might not be).
> The patch therefore uses the low bit to select between constant-length
> and variable-length and uses the upper 9 bits to encode log2(N).
> Targets without variable-length vectors continue to use the old scheme.
>
> A new valid_vector_subparts_p function tests whether a given number
> of elements can be encoded.  This is false for the vector modes that
> represent an LD3 or ST3 vector triple (which we want to treat as arrays
> of vectors rather than single vectors).
>
> Most of the patch is mechanical; previous patches handled the changes
> that weren't entirely straightforward.

One comment, w/o actually reviewing may/must stuff (will comment on that
elsewhere).

You split the 10 bits into 9 and 1; wouldn't it be more efficient to use the
lower 8 bits for the log2 value of N and either of the two remaining bits
for the flag?  That way the 8 bits for the shift amount could eventually be
accessed more efficiently.

Guess you'd need to compare code-generation of the TYPE_VECTOR_SUBPARTS
accessor on aarch64 / x86_64.
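
For concreteness, the two decodings being compared would look roughly like
this (assuming NUM_POLY_INT_COEFFS == 2; whether the masked form really
generates better code is target-dependent, hence the suggestion to compare):

  /* Patch layout: bit 0 = variable-length flag, bits 1-9 = log2 (N).  */
  static unsigned int
  log2_nunits_patch (unsigned int precision)
  {
    return precision >> 1;      /* shift before the 1 << ... in the accessor */
  }

  /* Suggested layout: bits 0-7 = log2 (N), one of bits 8-9 = flag.  */
  static unsigned int
  log2_nunits_suggested (unsigned int precision)
  {
    return precision & 0xff;    /* mask instead of shift */
  }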

Am I correct that NUM_POLY_INT_COEFFS is 1 for targets that do not
have variable-length vector modes?

Richard.

>

> 2017-10-23  Richard Sandiford  <richard.sandiford@linaro.org>

>             Alan Hayward  <alan.hayward@arm.com>

>             David Sherwood  <david.sherwood@arm.com>

>

> gcc/

>         * tree.h (TYPE_VECTOR_SUBPARTS): Turn into a function and handle

>         polynomial numbers of units.

>         (SET_TYPE_VECTOR_SUBPARTS): Likewise.

>         (valid_vector_subparts_p): New function.

>         (build_vector_type): Remove temporary shim and take the number

>         of units as a poly_uint64 rather than an int.

>         (build_opaque_vector_type): Take the number of units as a

>         poly_uint64 rather than an int.

>         * tree.c (build_vector): Handle polynomial TYPE_VECTOR_SUBPARTS.

>         (build_vector_from_ctor, type_hash_canon_hash): Likewise.

>         (type_cache_hasher::equal, uniform_vector_p): Likewise.

>         (vector_type_mode): Likewise.

>         (build_vector_from_val): If the number of units isn't constant,

>         use build_vec_duplicate_cst for constant operands and

>         VEC_DUPLICATE_EXPR otherwise.

>         (make_vector_type): Remove temporary is_constant ().

>         (build_vector_type, build_opaque_vector_type): Take the number of

>         units as a poly_uint64 rather than an int.

>         * cfgexpand.c (expand_debug_expr): Handle polynomial

>         TYPE_VECTOR_SUBPARTS.

>         * expr.c (count_type_elements, store_constructor): Likewise.

>         * fold-const.c (const_binop, const_unop, fold_convert_const)

>         (operand_equal_p, fold_view_convert_expr, fold_vec_perm)

>         (fold_ternary_loc, fold_relational_const): Likewise.

>         (native_interpret_vector): Likewise.  Change the size from an

>         int to an unsigned int.

>         * gimple-fold.c (gimple_fold_stmt_to_constant_1): Handle polynomial

>         TYPE_VECTOR_SUBPARTS.

>         (gimple_fold_indirect_ref, gimple_build_vector): Likewise.

>         (gimple_build_vector_from_val): Use VEC_DUPLICATE_EXPR when

>         duplicating a non-constant operand into a variable-length vector.

>         * match.pd: Handle polynomial TYPE_VECTOR_SUBPARTS.

>         * omp-simd-clone.c (simd_clone_subparts): Likewise.

>         * print-tree.c (print_node): Likewise.

>         * stor-layout.c (layout_type): Likewise.

>         * targhooks.c (default_builtin_vectorization_cost): Likewise.

>         * tree-cfg.c (verify_gimple_comparison): Likewise.

>         (verify_gimple_assign_binary): Likewise.

>         (verify_gimple_assign_ternary): Likewise.

>         (verify_gimple_assign_single): Likewise.

>         * tree-ssa-forwprop.c (simplify_vector_constructor): Likewise.

>         * tree-vect-data-refs.c (vect_permute_store_chain): Likewise.

>         (vect_grouped_load_supported, vect_permute_load_chain): Likewise.

>         (vect_shift_permute_load_chain): Likewise.

>         * tree-vect-generic.c (nunits_for_known_piecewise_op): Likewise.

>         (expand_vector_condition, optimize_vector_constructor): Likewise.

>         (lower_vec_perm, get_compute_type): Likewise.

>         * tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.

>         (get_initial_defs_for_reduction, vect_transform_loop): Likewise.

>         * tree-vect-patterns.c (vect_recog_bool_pattern): Likewise.

>         (vect_recog_mask_conversion_pattern): Likewise.

>         * tree-vect-slp.c (vect_supported_load_permutation_p): Likewise.

>         (vect_get_constant_vectors, vect_transform_slp_perm_load): Likewise.

>         * tree-vect-stmts.c (perm_mask_for_reverse): Likewise.

>         (get_group_load_store_type, vectorizable_mask_load_store): Likewise.

>         (vectorizable_bswap, simd_clone_subparts, vectorizable_assignment)

>         (vectorizable_shift, vectorizable_operation, vectorizable_store)

>         (vect_gen_perm_mask_any, vectorizable_load, vect_is_simple_cond)

>         (vectorizable_comparison, supportable_widening_operation): Likewise.

>         (supportable_narrowing_operation): Likewise.

>

> gcc/ada/

>         * gcc-interface/utils.c (gnat_types_compatible_p): Handle

>         polynomial TYPE_VECTOR_SUBPARTS.

>

> gcc/brig/

>         * brigfrontend/brig-to-generic.cc (get_unsigned_int_type): Handle

>         polynomial TYPE_VECTOR_SUBPARTS.

>         * brigfrontend/brig-util.h (gccbrig_type_vector_subparts): Likewise.

>

> gcc/c-family/

>         * c-common.c (vector_types_convertible_p, c_build_vec_perm_expr)

>         (convert_vector_to_array_for_subscript): Handle polynomial

>         TYPE_VECTOR_SUBPARTS.

>         (c_common_type_for_mode): Check valid_vector_subparts_p.

>

> gcc/c/

>         * c-typeck.c (comptypes_internal, build_binary_op): Handle polynomial

>         TYPE_VECTOR_SUBPARTS.

>

> gcc/cp/

>         * call.c (build_conditional_expr_1): Handle polynomial

>         TYPE_VECTOR_SUBPARTS.

>         * constexpr.c (cxx_fold_indirect_ref): Likewise.

>         * decl.c (cp_finish_decomp): Likewise.

>         * mangle.c (write_type): Likewise.

>         * typeck.c (structural_comptypes): Likewise.

>         (cp_build_binary_op): Likewise.

>         * typeck2.c (process_init_constructor_array): Likewise.

>

> gcc/fortran/

>         * trans-types.c (gfc_type_for_mode): Check valid_vector_subparts_p.

>

> gcc/lto/

>         * lto-lang.c (lto_type_for_mode): Check valid_vector_subparts_p.

>         * lto.c (hash_canonical_type): Handle polynomial TYPE_VECTOR_SUBPARTS.

>

> gcc/go/

>         * go-lang.c (go_langhook_type_for_mode): Check valid_vector_subparts_p.

>

> Index: gcc/tree.h

> ===================================================================

> --- gcc/tree.h  2017-10-23 17:22:35.831905077 +0100

> +++ gcc/tree.h  2017-10-23 17:25:51.773378674 +0100

> @@ -2041,15 +2041,6 @@ #define TREE_VISITED(NODE) ((NODE)->base

>     If set in a INTEGER_TYPE, indicates a character type.  */

>  #define TYPE_STRING_FLAG(NODE) (TYPE_CHECK (NODE)->type_common.string_flag)

>

> -/* For a VECTOR_TYPE, this is the number of sub-parts of the vector.  */

> -#define TYPE_VECTOR_SUBPARTS(VECTOR_TYPE) \

> -  (HOST_WIDE_INT_1U \

> -   << VECTOR_TYPE_CHECK (VECTOR_TYPE)->type_common.precision)

> -

> -/* Set precision to n when we have 2^n sub-parts of the vector.  */

> -#define SET_TYPE_VECTOR_SUBPARTS(VECTOR_TYPE, X) \

> -  (VECTOR_TYPE_CHECK (VECTOR_TYPE)->type_common.precision = exact_log2 (X))

> -

>  /* Nonzero in a VECTOR_TYPE if the frontends should not emit warnings

>     about missing conversions to other vector types of the same size.  */

>  #define TYPE_VECTOR_OPAQUE(NODE) \

> @@ -3671,6 +3662,64 @@ id_equal (const char *str, const_tree id

>    return !strcmp (str, IDENTIFIER_POINTER (id));

>  }

>

> +/* Return the number of elements in the VECTOR_TYPE given by NODE.  */

> +

> +inline poly_uint64

> +TYPE_VECTOR_SUBPARTS (const_tree node)

> +{

> +  STATIC_ASSERT (NUM_POLY_INT_COEFFS <= 2);

> +  unsigned int precision = VECTOR_TYPE_CHECK (node)->type_common.precision;

> +  if (NUM_POLY_INT_COEFFS == 2)

> +    {

> +      poly_uint64 res = 0;

> +      res.coeffs[0] = 1 << (precision / 2);

> +      if (precision & 1)

> +       res.coeffs[1] = 1 << (precision / 2);

> +      return res;

> +    }

> +  else

> +    return 1 << precision;

> +}

> +

> +/* Set the number of elements in VECTOR_TYPE NODE to SUBPARTS, which must

> +   satisfy valid_vector_subparts_p.  */

> +

> +inline void

> +SET_TYPE_VECTOR_SUBPARTS (tree node, poly_uint64 subparts)

> +{

> +  STATIC_ASSERT (NUM_POLY_INT_COEFFS <= 2);

> +  unsigned HOST_WIDE_INT coeff0 = subparts.coeffs[0];

> +  int index = exact_log2 (coeff0);

> +  gcc_assert (index >= 0);

> +  if (NUM_POLY_INT_COEFFS == 2)

> +    {

> +      unsigned HOST_WIDE_INT coeff1 = subparts.coeffs[1];

> +      gcc_assert (coeff1 == 0 || coeff1 == coeff0);

> +      VECTOR_TYPE_CHECK (node)->type_common.precision

> +       = index * 2 + (coeff1 != 0);

> +    }

> +  else

> +    VECTOR_TYPE_CHECK (node)->type_common.precision = index;

> +}

> +

> +/* Return true if we can construct vector types with the given number

> +   of subparts.  */

> +

> +static inline bool

> +valid_vector_subparts_p (poly_uint64 subparts)

> +{

> +  unsigned HOST_WIDE_INT coeff0 = subparts.coeffs[0];

> +  if (!pow2p_hwi (coeff0))

> +    return false;

> +  if (NUM_POLY_INT_COEFFS == 2)

> +    {

> +      unsigned HOST_WIDE_INT coeff1 = subparts.coeffs[1];

> +      if (coeff1 != 0 && coeff1 != coeff0)

> +       return false;

> +    }

> +  return true;

> +}

> +

>  #define error_mark_node                        global_trees[TI_ERROR_MARK]

>

>  #define intQI_type_node                        global_trees[TI_INTQI_TYPE]

> @@ -4108,16 +4157,10 @@ extern tree build_pointer_type (tree);

>  extern tree build_reference_type_for_mode (tree, machine_mode, bool);

>  extern tree build_reference_type (tree);

>  extern tree build_vector_type_for_mode (tree, machine_mode);

> -extern tree build_vector_type (tree innertype, int nunits);

> -/* Temporary.  */

> -inline tree

> -build_vector_type (tree innertype, poly_uint64 nunits)

> -{

> -  return build_vector_type (innertype, (int) nunits.to_constant ());

> -}

> +extern tree build_vector_type (tree, poly_int64);

>  extern tree build_truth_vector_type (poly_uint64, poly_uint64);

>  extern tree build_same_sized_truth_vector_type (tree vectype);

> -extern tree build_opaque_vector_type (tree innertype, int nunits);

> +extern tree build_opaque_vector_type (tree, poly_int64);

>  extern tree build_index_type (tree);

>  extern tree build_array_type (tree, tree, bool = false);

>  extern tree build_nonshared_array_type (tree, tree);

> Index: gcc/tree.c

> ===================================================================

> --- gcc/tree.c  2017-10-23 17:25:48.625491825 +0100

> +++ gcc/tree.c  2017-10-23 17:25:51.771378746 +0100

> @@ -1877,7 +1877,7 @@ make_vector (unsigned len MEM_STAT_DECL)

>  build_vector (tree type, vec<tree> vals MEM_STAT_DECL)

>  {

>    unsigned int nelts = vals.length ();

> -  gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));

> +  gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));

>    int over = 0;

>    unsigned cnt = 0;

>    tree v = make_vector (nelts);

> @@ -1907,10 +1907,11 @@ build_vector (tree type, vec<tree> vals

>  tree

>  build_vector_from_ctor (tree type, vec<constructor_elt, va_gc> *v)

>  {

> -  unsigned int nelts = TYPE_VECTOR_SUBPARTS (type);

> -  unsigned HOST_WIDE_INT idx;

> +  unsigned HOST_WIDE_INT idx, nelts;

>    tree value;

>

> +  /* We can't construct a VECTOR_CST for a variable number of elements.  */

> +  nelts = TYPE_VECTOR_SUBPARTS (type).to_constant ();

>    auto_vec<tree, 32> vec (nelts);

>    FOR_EACH_CONSTRUCTOR_VALUE (v, idx, value)

>      {

> @@ -1928,9 +1929,9 @@ build_vector_from_ctor (tree type, vec<c

>

>  /* Build a vector of type VECTYPE where all the elements are SCs.  */

>  tree

> -build_vector_from_val (tree vectype, tree sc)

> +build_vector_from_val (tree vectype, tree sc)

>  {

> -  int i, nunits = TYPE_VECTOR_SUBPARTS (vectype);

> +  unsigned HOST_WIDE_INT i, nunits;

>

>    if (sc == error_mark_node)

>      return sc;

> @@ -1944,6 +1945,13 @@ build_vector_from_val (tree vectype, tre

>    gcc_checking_assert (types_compatible_p (TYPE_MAIN_VARIANT (TREE_TYPE (sc)),

>                                            TREE_TYPE (vectype)));

>

> +  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits))

> +    {

> +      if (CONSTANT_CLASS_P (sc))

> +       return build_vec_duplicate_cst (vectype, sc);

> +      return fold_build1 (VEC_DUPLICATE_EXPR, vectype, sc);

> +    }

> +

>    if (CONSTANT_CLASS_P (sc))

>      {

>        auto_vec<tree, 32> v (nunits);

> @@ -6575,11 +6583,8 @@ type_hash_canon_hash (tree type)

>        }

>

>      case VECTOR_TYPE:

> -      {

> -       unsigned nunits = TYPE_VECTOR_SUBPARTS (type);

> -       hstate.add_object (nunits);

> -       break;

> -      }

> +      hstate.add_poly_int (TYPE_VECTOR_SUBPARTS (type));

> +      break;

>

>      default:

>        break;

> @@ -6623,7 +6628,8 @@ type_cache_hasher::equal (type_hash *a,

>        return 1;

>

>      case VECTOR_TYPE:

> -      return TYPE_VECTOR_SUBPARTS (a->type) == TYPE_VECTOR_SUBPARTS (b->type);

> +      return must_eq (TYPE_VECTOR_SUBPARTS (a->type),

> +                     TYPE_VECTOR_SUBPARTS (b->type));

>

>      case ENUMERAL_TYPE:

>        if (TYPE_VALUES (a->type) != TYPE_VALUES (b->type)

> @@ -9666,7 +9672,7 @@ make_vector_type (tree innertype, poly_i

>

>    t = make_node (VECTOR_TYPE);

>    TREE_TYPE (t) = mv_innertype;

> -  SET_TYPE_VECTOR_SUBPARTS (t, nunits.to_constant ()); /* Temporary */

> +  SET_TYPE_VECTOR_SUBPARTS (t, nunits);

>    SET_TYPE_MODE (t, mode);

>

>    if (TYPE_STRUCTURAL_EQUALITY_P (mv_innertype) || in_lto_p)

> @@ -10582,7 +10588,7 @@ build_vector_type_for_mode (tree innerty

>     a power of two.  */

>

>  tree

> -build_vector_type (tree innertype, int nunits)

> +build_vector_type (tree innertype, poly_int64 nunits)

>  {

>    return make_vector_type (innertype, nunits, VOIDmode);

>  }

> @@ -10627,7 +10633,7 @@ build_same_sized_truth_vector_type (tree

>  /* Similarly, but builds a variant type with TYPE_VECTOR_OPAQUE set.  */

>

>  tree

> -build_opaque_vector_type (tree innertype, int nunits)

> +build_opaque_vector_type (tree innertype, poly_int64 nunits)

>  {

>    tree t = make_vector_type (innertype, nunits, VOIDmode);

>    tree cand;

> @@ -10730,7 +10736,7 @@ initializer_zerop (const_tree init)

>  uniform_vector_p (const_tree vec)

>  {

>    tree first, t;

> -  unsigned i;

> +  unsigned HOST_WIDE_INT i, nelts;

>

>    if (vec == NULL_TREE)

>      return NULL_TREE;

> @@ -10753,7 +10759,8 @@ uniform_vector_p (const_tree vec)

>        return first;

>      }

>

> -  else if (TREE_CODE (vec) == CONSTRUCTOR)

> +  else if (TREE_CODE (vec) == CONSTRUCTOR

> +          && TYPE_VECTOR_SUBPARTS (TREE_TYPE (vec)).is_constant (&nelts))

>      {

>        first = error_mark_node;

>

> @@ -10767,7 +10774,7 @@ uniform_vector_p (const_tree vec)

>           if (!operand_equal_p (first, t, 0))

>             return NULL_TREE;

>          }

> -      if (i != TYPE_VECTOR_SUBPARTS (TREE_TYPE (vec)))

> +      if (i != nelts)

>         return NULL_TREE;

>

>        return first;

> @@ -13011,8 +13018,8 @@ vector_type_mode (const_tree t)

>        /* For integers, try mapping it to a same-sized scalar mode.  */

>        if (is_int_mode (TREE_TYPE (t)->type_common.mode, &innermode))

>         {

> -         unsigned int size = (TYPE_VECTOR_SUBPARTS (t)

> -                              * GET_MODE_BITSIZE (innermode));

> +         poly_int64 size = (TYPE_VECTOR_SUBPARTS (t)

> +                            * GET_MODE_BITSIZE (innermode));

>           scalar_int_mode mode;

>           if (int_mode_for_size (size, 0).exists (&mode)

>               && have_regs_of_mode[mode])

> Index: gcc/cfgexpand.c

> ===================================================================

> --- gcc/cfgexpand.c     2017-10-23 17:19:04.559212322 +0100

> +++ gcc/cfgexpand.c     2017-10-23 17:25:51.727380328 +0100

> @@ -4961,10 +4961,13 @@ expand_debug_expr (tree exp)

>        else if (TREE_CODE (TREE_TYPE (exp)) == VECTOR_TYPE)

>         {

>           unsigned i;

> +         unsigned HOST_WIDE_INT nelts;

>           tree val;

>

> -         op0 = gen_rtx_CONCATN

> -           (mode, rtvec_alloc (TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp))));

> +         if (!TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)).is_constant (&nelts))

> +           goto flag_unsupported;

> +

> +         op0 = gen_rtx_CONCATN (mode, rtvec_alloc (nelts));

>

>           FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (exp), i, val)

>             {

> @@ -4974,7 +4977,7 @@ expand_debug_expr (tree exp)

>               XVECEXP (op0, 0, i) = op1;

>             }

>

> -         if (i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)))

> +         if (i < nelts)

>             {

>               op1 = expand_debug_expr

>                 (build_zero_cst (TREE_TYPE (TREE_TYPE (exp))));

> @@ -4982,7 +4985,7 @@ expand_debug_expr (tree exp)

>               if (!op1)

>                 return NULL;

>

> -             for (; i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)); i++)

> +             for (; i < nelts; i++)

>                 XVECEXP (op0, 0, i) = op1;

>             }

>

> Index: gcc/expr.c

> ===================================================================

> --- gcc/expr.c  2017-10-23 17:25:38.241865064 +0100

> +++ gcc/expr.c  2017-10-23 17:25:51.740379860 +0100

> @@ -5847,7 +5847,13 @@ count_type_elements (const_tree type, bo

>        return 2;

>

>      case VECTOR_TYPE:

> -      return TYPE_VECTOR_SUBPARTS (type);

> +      {

> +       unsigned HOST_WIDE_INT nelts;

> +       if (TYPE_VECTOR_SUBPARTS (type).is_constant (&nelts))

> +         return nelts;

> +       else

> +         return -1;

> +      }

>

>      case INTEGER_TYPE:

>      case REAL_TYPE:

> @@ -6594,7 +6600,8 @@ store_constructor (tree exp, rtx target,

>         HOST_WIDE_INT bitsize;

>         HOST_WIDE_INT bitpos;

>         rtvec vector = NULL;

> -       unsigned n_elts;

> +       poly_uint64 n_elts;

> +       unsigned HOST_WIDE_INT const_n_elts;

>         alias_set_type alias;

>         bool vec_vec_init_p = false;

>         machine_mode mode = GET_MODE (target);

> @@ -6619,7 +6626,9 @@ store_constructor (tree exp, rtx target,

>           }

>

>         n_elts = TYPE_VECTOR_SUBPARTS (type);

> -       if (REG_P (target) && VECTOR_MODE_P (mode))

> +       if (REG_P (target)

> +           && VECTOR_MODE_P (mode)

> +           && n_elts.is_constant (&const_n_elts))

>           {

>             machine_mode emode = eltmode;

>

> @@ -6628,14 +6637,15 @@ store_constructor (tree exp, rtx target,

>                     == VECTOR_TYPE))

>               {

>                 tree etype = TREE_TYPE (CONSTRUCTOR_ELT (exp, 0)->value);

> -               gcc_assert (CONSTRUCTOR_NELTS (exp) * TYPE_VECTOR_SUBPARTS (etype)

> -                           == n_elts);

> +               gcc_assert (must_eq (CONSTRUCTOR_NELTS (exp)

> +                                    * TYPE_VECTOR_SUBPARTS (etype),

> +                                    n_elts));

>                 emode = TYPE_MODE (etype);

>               }

>             icode = convert_optab_handler (vec_init_optab, mode, emode);

>             if (icode != CODE_FOR_nothing)

>               {

> -               unsigned int i, n = n_elts;

> +               unsigned int i, n = const_n_elts;

>

>                 if (emode != eltmode)

>                   {

> @@ -6674,7 +6684,8 @@ store_constructor (tree exp, rtx target,

>

>             /* Clear the entire vector first if there are any missing elements,

>                or if the incidence of zero elements is >= 75%.  */

> -           need_to_clear = (count < n_elts || 4 * zero_count >= 3 * count);

> +           need_to_clear = (may_lt (count, n_elts)

> +                            || 4 * zero_count >= 3 * count);

>           }

>

>         if (need_to_clear && may_gt (size, 0) && !vector)

> Index: gcc/fold-const.c

> ===================================================================

> --- gcc/fold-const.c    2017-10-23 17:22:48.984540760 +0100

> +++ gcc/fold-const.c    2017-10-23 17:25:51.744379717 +0100

> @@ -1645,7 +1645,7 @@ const_binop (enum tree_code code, tree t

>         in_nelts = VECTOR_CST_NELTS (arg1);

>         out_nelts = in_nelts * 2;

>         gcc_assert (in_nelts == VECTOR_CST_NELTS (arg2)

> -                   && out_nelts == TYPE_VECTOR_SUBPARTS (type));

> +                   && must_eq (out_nelts, TYPE_VECTOR_SUBPARTS (type)));

>

>         auto_vec<tree, 32> elts (out_nelts);

>         for (i = 0; i < out_nelts; i++)

> @@ -1677,7 +1677,7 @@ const_binop (enum tree_code code, tree t

>         in_nelts = VECTOR_CST_NELTS (arg1);

>         out_nelts = in_nelts / 2;

>         gcc_assert (in_nelts == VECTOR_CST_NELTS (arg2)

> -                   && out_nelts == TYPE_VECTOR_SUBPARTS (type));

> +                   && must_eq (out_nelts, TYPE_VECTOR_SUBPARTS (type)));

>

>         if (code == VEC_WIDEN_MULT_LO_EXPR)

>           scale = 0, ofs = BYTES_BIG_ENDIAN ? out_nelts : 0;

> @@ -1841,7 +1841,7 @@ const_unop (enum tree_code code, tree ty

>

>         in_nelts = VECTOR_CST_NELTS (arg0);

>         out_nelts = in_nelts / 2;

> -       gcc_assert (out_nelts == TYPE_VECTOR_SUBPARTS (type));

> +       gcc_assert (must_eq (out_nelts, TYPE_VECTOR_SUBPARTS (type)));

>

>         unsigned int offset = 0;

>         if ((!BYTES_BIG_ENDIAN) ^ (code == VEC_UNPACK_LO_EXPR

> @@ -2329,7 +2329,7 @@ fold_convert_const (enum tree_code code,

>    else if (TREE_CODE (type) == VECTOR_TYPE)

>      {

>        if (TREE_CODE (arg1) == VECTOR_CST

> -         && TYPE_VECTOR_SUBPARTS (type) == VECTOR_CST_NELTS (arg1))

> +         && must_eq (TYPE_VECTOR_SUBPARTS (type), VECTOR_CST_NELTS (arg1)))

>         {

>           int len = VECTOR_CST_NELTS (arg1);

>           tree elttype = TREE_TYPE (type);

> @@ -2345,8 +2345,8 @@ fold_convert_const (enum tree_code code,

>           return build_vector (type, v);

>         }

>        if (TREE_CODE (arg1) == VEC_DUPLICATE_CST

> -         && (TYPE_VECTOR_SUBPARTS (type)

> -             == TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1))))

> +         && must_eq (TYPE_VECTOR_SUBPARTS (type),

> +                     TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1))))

>         {

>           tree sub = fold_convert_const (code, TREE_TYPE (type),

>                                          VEC_DUPLICATE_CST_ELT (arg1));

> @@ -3491,8 +3491,8 @@ #define OP_SAME_WITH_NULL(N)                              \

>              We only tested element precision and modes to match.

>              Vectors may be BLKmode and thus also check that the number of

>              parts match.  */

> -         if (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0))

> -             != TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)))

> +         if (may_ne (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)),

> +                     TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1))))

>             return 0;

>

>           vec<constructor_elt, va_gc> *v0 = CONSTRUCTOR_ELTS (arg0);

> @@ -7613,15 +7613,16 @@ native_interpret_complex (tree type, con

>     If the buffer cannot be interpreted, return NULL_TREE.  */

>

>  static tree

> -native_interpret_vector (tree type, const unsigned char *ptr, int len)

> +native_interpret_vector (tree type, const unsigned char *ptr, unsigned int len)

>  {

>    tree etype, elem;

> -  int i, size, count;

> +  unsigned int i, size;

> +  unsigned HOST_WIDE_INT count;

>

>    etype = TREE_TYPE (type);

>    size = GET_MODE_SIZE (SCALAR_TYPE_MODE (etype));

> -  count = TYPE_VECTOR_SUBPARTS (type);

> -  if (size * count > len)

> +  if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&count)

> +      || size * count > len)

>      return NULL_TREE;

>

>    auto_vec<tree, 32> elements (count);

> @@ -7707,7 +7708,8 @@ fold_view_convert_expr (tree type, tree

>    tree expr_type = TREE_TYPE (expr);

>    if (TREE_CODE (expr) == VEC_DUPLICATE_CST

>        && VECTOR_TYPE_P (type)

> -      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (expr_type)

> +      && must_eq (TYPE_VECTOR_SUBPARTS (type),

> +                 TYPE_VECTOR_SUBPARTS (expr_type))

>        && TYPE_SIZE (TREE_TYPE (type)) == TYPE_SIZE (TREE_TYPE (expr_type)))

>      {

>        tree sub = fold_view_convert_expr (TREE_TYPE (type),

> @@ -9025,9 +9027,9 @@ fold_vec_perm (tree type, tree arg0, tre

>    bool need_ctor = false;

>

>    unsigned int nelts = sel.length ();

> -  gcc_assert (TYPE_VECTOR_SUBPARTS (type) == nelts

> -             && TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)) == nelts

> -             && TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)) == nelts);

> +  gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (type), nelts)

> +             && must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)), nelts)

> +             && must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)), nelts));

>    if (TREE_TYPE (TREE_TYPE (arg0)) != TREE_TYPE (type)

>        || TREE_TYPE (TREE_TYPE (arg1)) != TREE_TYPE (type))

>      return NULL_TREE;

> @@ -11440,7 +11442,7 @@ fold_ternary_loc (location_t loc, enum t

>                   || TREE_CODE (arg2) == CONSTRUCTOR))

>             {

>               unsigned int nelts = VECTOR_CST_NELTS (arg0), i;

> -             gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));

> +             gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));

>               auto_vec_perm_indices sel (nelts);

>               for (i = 0; i < nelts; i++)

>                 {

> @@ -11706,7 +11708,8 @@ fold_ternary_loc (location_t loc, enum t

>           if (n != 0

>               && (idx % width) == 0

>               && (n % width) == 0

> -             && ((idx + n) / width) <= TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)))

> +             && must_le ((idx + n) / width,

> +                         TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0))))

>             {

>               idx = idx / width;

>               n = n / width;

> @@ -11783,7 +11786,7 @@ fold_ternary_loc (location_t loc, enum t

>

>           mask2 = 2 * nelts - 1;

>           mask = single_arg ? (nelts - 1) : mask2;

> -         gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));

> +         gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));

>           auto_vec_perm_indices sel (nelts);

>           auto_vec_perm_indices sel2 (nelts);

>           for (i = 0; i < nelts; i++)

> @@ -14034,7 +14037,7 @@ fold_relational_const (enum tree_code co

>         }

>        unsigned count = VECTOR_CST_NELTS (op0);

>        gcc_assert (VECTOR_CST_NELTS (op1) == count

> -                 && TYPE_VECTOR_SUBPARTS (type) == count);

> +                 && must_eq (TYPE_VECTOR_SUBPARTS (type), count));

>

>        auto_vec<tree, 32> elts (count);

>        for (unsigned i = 0; i < count; i++)

> Index: gcc/gimple-fold.c

> ===================================================================

> --- gcc/gimple-fold.c   2017-10-23 17:22:18.228825053 +0100

> +++ gcc/gimple-fold.c   2017-10-23 17:25:51.747379609 +0100

> @@ -5909,13 +5909,13 @@ gimple_fold_stmt_to_constant_1 (gimple *

>                 }

>               else if (TREE_CODE (rhs) == CONSTRUCTOR

>                        && TREE_CODE (TREE_TYPE (rhs)) == VECTOR_TYPE

> -                      && (CONSTRUCTOR_NELTS (rhs)

> -                          == TYPE_VECTOR_SUBPARTS (TREE_TYPE (rhs))))

> +                      && must_eq (CONSTRUCTOR_NELTS (rhs),

> +                                  TYPE_VECTOR_SUBPARTS (TREE_TYPE (rhs))))

>                 {

>                   unsigned i, nelts;

>                   tree val;

>

> -                 nelts = TYPE_VECTOR_SUBPARTS (TREE_TYPE (rhs));

> +                 nelts = CONSTRUCTOR_NELTS (rhs);

>                   auto_vec<tree, 32> vec (nelts);

>                   FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (rhs), i, val)

>                     {

> @@ -6761,8 +6761,8 @@ gimple_fold_indirect_ref (tree t)

>              = tree_to_shwi (part_width) / BITS_PER_UNIT;

>            unsigned HOST_WIDE_INT indexi = offset * BITS_PER_UNIT;

>            tree index = bitsize_int (indexi);

> -          if (offset / part_widthi

> -             < TYPE_VECTOR_SUBPARTS (TREE_TYPE (addrtype)))

> +         if (must_lt (offset / part_widthi,

> +                      TYPE_VECTOR_SUBPARTS (TREE_TYPE (addrtype))))

>              return fold_build3 (BIT_FIELD_REF, type, TREE_OPERAND (addr, 0),

>                                  part_width, index);

>         }

> @@ -7064,6 +7064,10 @@ gimple_convert_to_ptrofftype (gimple_seq

>  gimple_build_vector_from_val (gimple_seq *seq, location_t loc, tree type,

>                               tree op)

>  {

> +  if (!TYPE_VECTOR_SUBPARTS (type).is_constant ()

> +      && !CONSTANT_CLASS_P (op))

> +    return gimple_build (seq, loc, VEC_DUPLICATE_EXPR, type, op);

> +

>    tree res, vec = build_vector_from_val (type, op);

>    if (is_gimple_val (vec))

>      return vec;

> @@ -7086,7 +7090,7 @@ gimple_build_vector (gimple_seq *seq, lo

>                      vec<tree> elts)

>  {

>    unsigned int nelts = elts.length ();

> -  gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));

> +  gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));

>    for (unsigned int i = 0; i < nelts; ++i)

>      if (!TREE_CONSTANT (elts[i]))

>        {

> Index: gcc/match.pd

> ===================================================================

> --- gcc/match.pd        2017-10-23 17:22:50.031432167 +0100

> +++ gcc/match.pd        2017-10-23 17:25:51.750379501 +0100

> @@ -83,7 +83,8 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)

>  (match (nop_convert @0)

>   (view_convert @0)

>   (if (VECTOR_TYPE_P (type) && VECTOR_TYPE_P (TREE_TYPE (@0))

> -      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (TREE_TYPE (@0))

> +      && must_eq (TYPE_VECTOR_SUBPARTS (type),

> +                 TYPE_VECTOR_SUBPARTS (TREE_TYPE (@0)))

>        && tree_nop_conversion_p (TREE_TYPE (type), TREE_TYPE (TREE_TYPE (@0))))))

>  /* This one has to be last, or it shadows the others.  */

>  (match (nop_convert @0)

> @@ -2628,7 +2629,8 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)

>  (simplify

>   (plus:c @3 (view_convert? (vec_cond:s @0 integer_each_onep@1 integer_zerop@2)))

>   (if (VECTOR_TYPE_P (type)

> -      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1))

> +      && must_eq (TYPE_VECTOR_SUBPARTS (type),

> +                 TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1)))

>        && (TYPE_MODE (TREE_TYPE (type))

>            == TYPE_MODE (TREE_TYPE (TREE_TYPE (@1)))))

>    (minus @3 (view_convert (vec_cond @0 (negate @1) @2)))))

> @@ -2637,7 +2639,8 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)

>  (simplify

>   (minus @3 (view_convert? (vec_cond:s @0 integer_each_onep@1 integer_zerop@2)))

>   (if (VECTOR_TYPE_P (type)

> -      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1))

> +      && must_eq (TYPE_VECTOR_SUBPARTS (type),

> +                 TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1)))

>        && (TYPE_MODE (TREE_TYPE (type))

>            == TYPE_MODE (TREE_TYPE (TREE_TYPE (@1)))))

>    (plus @3 (view_convert (vec_cond @0 (negate @1) @2)))))

> @@ -4301,7 +4304,8 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)

>     (if (n != 0

>         && (idx % width) == 0

>         && (n % width) == 0

> -       && ((idx + n) / width) <= TYPE_VECTOR_SUBPARTS (TREE_TYPE (ctor)))

> +       && must_le ((idx + n) / width,

> +                   TYPE_VECTOR_SUBPARTS (TREE_TYPE (ctor))))

>      (with

>       {

>         idx = idx / width;

> Index: gcc/omp-simd-clone.c

> ===================================================================

> --- gcc/omp-simd-clone.c        2017-10-23 17:22:47.947648317 +0100

> +++ gcc/omp-simd-clone.c        2017-10-23 17:25:51.751379465 +0100

> @@ -57,7 +57,7 @@ Software Foundation; either version 3, o

>  static unsigned HOST_WIDE_INT

>  simd_clone_subparts (tree vectype)

>  {

> -  return TYPE_VECTOR_SUBPARTS (vectype);

> +  return TYPE_VECTOR_SUBPARTS (vectype).to_constant ();

>  }

>

>  /* Allocate a fresh `simd_clone' and return it.  NARGS is the number

> Index: gcc/print-tree.c

> ===================================================================

> --- gcc/print-tree.c    2017-10-23 17:11:40.246949037 +0100

> +++ gcc/print-tree.c    2017-10-23 17:25:51.751379465 +0100

> @@ -630,7 +630,10 @@ print_node (FILE *file, const char *pref

>        else if (code == ARRAY_TYPE)

>         print_node (file, "domain", TYPE_DOMAIN (node), indent + 4);

>        else if (code == VECTOR_TYPE)

> -       fprintf (file, " nunits:%d", (int) TYPE_VECTOR_SUBPARTS (node));

> +       {

> +         fprintf (file, " nunits:");

> +         print_dec (TYPE_VECTOR_SUBPARTS (node), file);

> +       }

>        else if (code == RECORD_TYPE

>                || code == UNION_TYPE

>                || code == QUAL_UNION_TYPE)

> Index: gcc/stor-layout.c

> ===================================================================

> --- gcc/stor-layout.c   2017-10-23 17:11:54.535862371 +0100

> +++ gcc/stor-layout.c   2017-10-23 17:25:51.753379393 +0100

> @@ -2267,11 +2267,9 @@ layout_type (tree type)

>

>      case VECTOR_TYPE:

>        {

> -       int nunits = TYPE_VECTOR_SUBPARTS (type);

> +       poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type);

>         tree innertype = TREE_TYPE (type);

>

> -       gcc_assert (!(nunits & (nunits - 1)));

> -

>         /* Find an appropriate mode for the vector type.  */

>         if (TYPE_MODE (type) == VOIDmode)

>           SET_TYPE_MODE (type,

> Index: gcc/targhooks.c

> ===================================================================

> --- gcc/targhooks.c     2017-10-23 17:22:32.725227332 +0100

> +++ gcc/targhooks.c     2017-10-23 17:25:51.753379393 +0100

> @@ -683,7 +683,7 @@ default_builtin_vectorization_cost (enum

>          return 3;

>

>        case vec_construct:

> -       return TYPE_VECTOR_SUBPARTS (vectype) - 1;

> +       return estimated_poly_value (TYPE_VECTOR_SUBPARTS (vectype)) - 1;

>

>        default:

>          gcc_unreachable ();

> Index: gcc/tree-cfg.c

> ===================================================================

> --- gcc/tree-cfg.c      2017-10-23 17:20:50.883679845 +0100

> +++ gcc/tree-cfg.c      2017-10-23 17:25:51.756379285 +0100

> @@ -3640,7 +3640,8 @@ verify_gimple_comparison (tree type, tre

>            return true;

>          }

>

> -      if (TYPE_VECTOR_SUBPARTS (type) != TYPE_VECTOR_SUBPARTS (op0_type))

> +      if (may_ne (TYPE_VECTOR_SUBPARTS (type),

> +                 TYPE_VECTOR_SUBPARTS (op0_type)))

>          {

>            error ("invalid vector comparison resulting type");

>            debug_generic_expr (type);

> @@ -4070,8 +4071,8 @@ verify_gimple_assign_binary (gassign *st

>        if (VECTOR_BOOLEAN_TYPE_P (lhs_type)

>           && VECTOR_BOOLEAN_TYPE_P (rhs1_type)

>           && types_compatible_p (rhs1_type, rhs2_type)

> -         && (TYPE_VECTOR_SUBPARTS (lhs_type)

> -             == 2 * TYPE_VECTOR_SUBPARTS (rhs1_type)))

> +         && must_eq (TYPE_VECTOR_SUBPARTS (lhs_type),

> +                     2 * TYPE_VECTOR_SUBPARTS (rhs1_type)))

>         return false;

>

>        /* Fallthru.  */

> @@ -4221,8 +4222,8 @@ verify_gimple_assign_ternary (gassign *s

>

>      case VEC_COND_EXPR:

>        if (!VECTOR_BOOLEAN_TYPE_P (rhs1_type)

> -         || TYPE_VECTOR_SUBPARTS (rhs1_type)

> -            != TYPE_VECTOR_SUBPARTS (lhs_type))

> +         || may_ne (TYPE_VECTOR_SUBPARTS (rhs1_type),

> +                    TYPE_VECTOR_SUBPARTS (lhs_type)))

>         {

>           error ("the first argument of a VEC_COND_EXPR must be of a "

>                  "boolean vector type of the same number of elements "

> @@ -4268,11 +4269,12 @@ verify_gimple_assign_ternary (gassign *s

>           return true;

>         }

>

> -      if (TYPE_VECTOR_SUBPARTS (rhs1_type) != TYPE_VECTOR_SUBPARTS (rhs2_type)

> -         || TYPE_VECTOR_SUBPARTS (rhs2_type)

> -            != TYPE_VECTOR_SUBPARTS (rhs3_type)

> -         || TYPE_VECTOR_SUBPARTS (rhs3_type)

> -            != TYPE_VECTOR_SUBPARTS (lhs_type))

> +      if (may_ne (TYPE_VECTOR_SUBPARTS (rhs1_type),

> +                 TYPE_VECTOR_SUBPARTS (rhs2_type))

> +         || may_ne (TYPE_VECTOR_SUBPARTS (rhs2_type),

> +                    TYPE_VECTOR_SUBPARTS (rhs3_type))

> +         || may_ne (TYPE_VECTOR_SUBPARTS (rhs3_type),

> +                    TYPE_VECTOR_SUBPARTS (lhs_type)))

>         {

>           error ("vectors with different element number found "

>                  "in vector permute expression");

> @@ -4554,9 +4556,9 @@ verify_gimple_assign_single (gassign *st

>                           debug_generic_stmt (rhs1);

>                           return true;

>                         }

> -                     else if (CONSTRUCTOR_NELTS (rhs1)

> -                              * TYPE_VECTOR_SUBPARTS (elt_t)

> -                              != TYPE_VECTOR_SUBPARTS (rhs1_type))

> +                     else if (may_ne (CONSTRUCTOR_NELTS (rhs1)

> +                                      * TYPE_VECTOR_SUBPARTS (elt_t),

> +                                      TYPE_VECTOR_SUBPARTS (rhs1_type)))

>                         {

>                           error ("incorrect number of vector CONSTRUCTOR"

>                                  " elements");

> @@ -4571,8 +4573,8 @@ verify_gimple_assign_single (gassign *st

>                       debug_generic_stmt (rhs1);

>                       return true;

>                     }

> -                 else if (CONSTRUCTOR_NELTS (rhs1)

> -                          > TYPE_VECTOR_SUBPARTS (rhs1_type))

> +                 else if (may_gt (CONSTRUCTOR_NELTS (rhs1),

> +                                  TYPE_VECTOR_SUBPARTS (rhs1_type)))

>                     {

>                       error ("incorrect number of vector CONSTRUCTOR elements");

>                       debug_generic_stmt (rhs1);

> Index: gcc/tree-ssa-forwprop.c

> ===================================================================

> --- gcc/tree-ssa-forwprop.c     2017-10-23 17:20:50.883679845 +0100

> +++ gcc/tree-ssa-forwprop.c     2017-10-23 17:25:51.756379285 +0100

> @@ -1948,7 +1948,8 @@ simplify_vector_constructor (gimple_stmt

>    gimple *stmt = gsi_stmt (*gsi);

>    gimple *def_stmt;

>    tree op, op2, orig, type, elem_type;

> -  unsigned elem_size, nelts, i;

> +  unsigned elem_size, i;

> +  unsigned HOST_WIDE_INT nelts;

>    enum tree_code code, conv_code;

>    constructor_elt *elt;

>    bool maybe_ident;

> @@ -1959,7 +1960,8 @@ simplify_vector_constructor (gimple_stmt

>    type = TREE_TYPE (op);

>    gcc_checking_assert (TREE_CODE (type) == VECTOR_TYPE);

>

> -  nelts = TYPE_VECTOR_SUBPARTS (type);

> +  if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&nelts))

> +    return false;

>    elem_type = TREE_TYPE (type);

>    elem_size = TREE_INT_CST_LOW (TYPE_SIZE (elem_type));

>

> @@ -2031,8 +2033,8 @@ simplify_vector_constructor (gimple_stmt

>      return false;

>

>    if (! VECTOR_TYPE_P (TREE_TYPE (orig))

> -      || (TYPE_VECTOR_SUBPARTS (type)

> -         != TYPE_VECTOR_SUBPARTS (TREE_TYPE (orig))))

> +      || may_ne (TYPE_VECTOR_SUBPARTS (type),

> +                TYPE_VECTOR_SUBPARTS (TREE_TYPE (orig))))

>      return false;

>

>    tree tem;

> Index: gcc/tree-vect-data-refs.c

> ===================================================================

> --- gcc/tree-vect-data-refs.c   2017-10-23 17:25:50.361429427 +0100

> +++ gcc/tree-vect-data-refs.c   2017-10-23 17:25:51.758379213 +0100

> @@ -4743,7 +4743,7 @@ vect_permute_store_chain (vec<tree> dr_c

>    if (length == 3)

>      {

>        /* vect_grouped_store_supported ensures that this is constant.  */

> -      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype);

> +      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();

>        unsigned int j0 = 0, j1 = 0, j2 = 0;

>

>        auto_vec_perm_indices sel (nelt);

> @@ -4807,7 +4807,7 @@ vect_permute_store_chain (vec<tree> dr_c

>        gcc_assert (pow2p_hwi (length));

>

>        /* vect_grouped_store_supported ensures that this is constant.  */

> -      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype);

> +      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();

>        auto_vec_perm_indices sel (nelt);

>        sel.quick_grow (nelt);

>        for (i = 0, n = nelt / 2; i < n; i++)

> @@ -5140,7 +5140,7 @@ vect_grouped_load_supported (tree vectyp

>       that leaves unused vector loads around punt - we at least create

>       very sub-optimal code in that case (and blow up memory,

>       see PR65518).  */

> -  if (single_element_p && count > TYPE_VECTOR_SUBPARTS (vectype))

> +  if (single_element_p && may_gt (count, TYPE_VECTOR_SUBPARTS (vectype)))

>      {

>        if (dump_enabled_p ())

>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,

> @@ -5333,7 +5333,7 @@ vect_permute_load_chain (vec<tree> dr_ch

>    if (length == 3)

>      {

>        /* vect_grouped_load_supported ensures that this is constant.  */

> -      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype);

> +      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();

>        unsigned int k;

>

>        auto_vec_perm_indices sel (nelt);

> @@ -5384,7 +5384,7 @@ vect_permute_load_chain (vec<tree> dr_ch

>        gcc_assert (pow2p_hwi (length));

>

>        /* vect_grouped_load_supported ensures that this is constant.  */

> -      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype);

> +      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();

>        auto_vec_perm_indices sel (nelt);

>        sel.quick_grow (nelt);

>        for (i = 0; i < nelt; ++i)

> @@ -5525,12 +5525,12 @@ vect_shift_permute_load_chain (vec<tree>

>

>    tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));

>    unsigned int i;

> -  unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype);

>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);

>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);

>

> -  unsigned HOST_WIDE_INT vf;

> -  if (!LOOP_VINFO_VECT_FACTOR (loop_vinfo).is_constant (&vf))

> +  unsigned HOST_WIDE_INT nelt, vf;

> +  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nelt)

> +      || !LOOP_VINFO_VECT_FACTOR (loop_vinfo).is_constant (&vf))

>      /* Not supported for variable-length vectors.  */

>      return false;

>

> Index: gcc/tree-vect-generic.c

> ===================================================================

> --- gcc/tree-vect-generic.c     2017-10-23 17:25:48.623491897 +0100

> +++ gcc/tree-vect-generic.c     2017-10-23 17:25:51.759379177 +0100

> @@ -48,7 +48,7 @@ static void expand_vector_operations_1 (

>  static unsigned int

>  nunits_for_known_piecewise_op (const_tree type)

>  {

> -  return TYPE_VECTOR_SUBPARTS (type);

> +  return TYPE_VECTOR_SUBPARTS (type).to_constant ();

>  }

>

>  /* Return true if TYPE1 has more elements than TYPE2, where either

> @@ -916,9 +916,9 @@ expand_vector_condition (gimple_stmt_ite

>       Similarly for vbfld_10 instead of x_2 < y_3.  */

>    if (VECTOR_BOOLEAN_TYPE_P (type)

>        && SCALAR_INT_MODE_P (TYPE_MODE (type))

> -      && (GET_MODE_BITSIZE (TYPE_MODE (type))

> -         < (TYPE_VECTOR_SUBPARTS (type)

> -            * GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (type)))))

> +      && must_lt (GET_MODE_BITSIZE (TYPE_MODE (type)),

> +                 TYPE_VECTOR_SUBPARTS (type)

> +                 * GET_MODE_BITSIZE (SCALAR_TYPE_MODE (TREE_TYPE (type))))

>        && (a_is_comparison

>           ? useless_type_conversion_p (type, TREE_TYPE (a))

>           : expand_vec_cmp_expr_p (TREE_TYPE (a1), type, TREE_CODE (a))))

> @@ -1083,14 +1083,17 @@ optimize_vector_constructor (gimple_stmt

>    tree lhs = gimple_assign_lhs (stmt);

>    tree rhs = gimple_assign_rhs1 (stmt);

>    tree type = TREE_TYPE (rhs);

> -  unsigned int i, j, nelts = TYPE_VECTOR_SUBPARTS (type);

> +  unsigned int i, j;

> +  unsigned HOST_WIDE_INT nelts;

>    bool all_same = true;

>    constructor_elt *elt;

>    gimple *g;

>    tree base = NULL_TREE;

>    optab op;

>

> -  if (nelts <= 2 || CONSTRUCTOR_NELTS (rhs) != nelts)

> +  if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&nelts)

> +      || nelts <= 2

> +      || CONSTRUCTOR_NELTS (rhs) != nelts)

>      return;

>    op = optab_for_tree_code (PLUS_EXPR, type, optab_default);

>    if (op == unknown_optab

> @@ -1302,7 +1305,7 @@ lower_vec_perm (gimple_stmt_iterator *gs

>    tree mask_type = TREE_TYPE (mask);

>    tree vect_elt_type = TREE_TYPE (vect_type);

>    tree mask_elt_type = TREE_TYPE (mask_type);

> -  unsigned int elements = TYPE_VECTOR_SUBPARTS (vect_type);

> +  unsigned HOST_WIDE_INT elements;

>    vec<constructor_elt, va_gc> *v;

>    tree constr, t, si, i_val;

>    tree vec0tmp = NULL_TREE, vec1tmp = NULL_TREE, masktmp = NULL_TREE;

> @@ -1310,6 +1313,9 @@ lower_vec_perm (gimple_stmt_iterator *gs

>    location_t loc = gimple_location (gsi_stmt (*gsi));

>    unsigned i;

>

> +  if (!TYPE_VECTOR_SUBPARTS (vect_type).is_constant (&elements))

> +    return;

> +

>    if (TREE_CODE (mask) == SSA_NAME)

>      {

>        gimple *def_stmt = SSA_NAME_DEF_STMT (mask);

> @@ -1467,7 +1473,7 @@ get_compute_type (enum tree_code code, o

>         = type_for_widest_vector_mode (TREE_TYPE (type), op);

>        if (vector_compute_type != NULL_TREE

>           && subparts_gt (compute_type, vector_compute_type)

> -         && TYPE_VECTOR_SUBPARTS (vector_compute_type) > 1

> +         && may_ne (TYPE_VECTOR_SUBPARTS (vector_compute_type), 1U)

>           && (optab_handler (op, TYPE_MODE (vector_compute_type))

>               != CODE_FOR_nothing))

>         compute_type = vector_compute_type;

> Index: gcc/tree-vect-loop.c

> ===================================================================

> --- gcc/tree-vect-loop.c        2017-10-23 17:25:48.624491861 +0100

> +++ gcc/tree-vect-loop.c        2017-10-23 17:25:51.761379105 +0100

> @@ -255,9 +255,11 @@ vect_determine_vectorization_factor (loo

>                 }

>

>               if (dump_enabled_p ())

> -               dump_printf_loc (MSG_NOTE, vect_location,

> -                                "nunits = " HOST_WIDE_INT_PRINT_DEC "\n",

> -                                 TYPE_VECTOR_SUBPARTS (vectype));

> +               {

> +                 dump_printf_loc (MSG_NOTE, vect_location, "nunits = ");

> +                 dump_dec (MSG_NOTE, TYPE_VECTOR_SUBPARTS (vectype));

> +                 dump_printf (MSG_NOTE, "\n");

> +               }

>

>               vect_update_max_nunits (&vectorization_factor, vectype);

>             }

> @@ -548,9 +550,11 @@ vect_determine_vectorization_factor (loo

>             }

>

>           if (dump_enabled_p ())

> -           dump_printf_loc (MSG_NOTE, vect_location,

> -                            "nunits = " HOST_WIDE_INT_PRINT_DEC "\n",

> -                            TYPE_VECTOR_SUBPARTS (vf_vectype));

> +           {

> +             dump_printf_loc (MSG_NOTE, vect_location, "nunits = ");

> +             dump_dec (MSG_NOTE, TYPE_VECTOR_SUBPARTS (vf_vectype));

> +             dump_printf (MSG_NOTE, "\n");

> +           }

>

>           vect_update_max_nunits (&vectorization_factor, vf_vectype);

>

> @@ -632,8 +636,8 @@ vect_determine_vectorization_factor (loo

>

>               if (!mask_type)

>                 mask_type = vectype;

> -             else if (TYPE_VECTOR_SUBPARTS (mask_type)

> -                      != TYPE_VECTOR_SUBPARTS (vectype))

> +             else if (may_ne (TYPE_VECTOR_SUBPARTS (mask_type),

> +                              TYPE_VECTOR_SUBPARTS (vectype)))

>                 {

>                   if (dump_enabled_p ())

>                     {

> @@ -4152,7 +4156,7 @@ get_initial_defs_for_reduction (slp_tree

>    scalar_type = TREE_TYPE (vector_type);

>    /* vectorizable_reduction has already rejected SLP reductions on

>       variable-length vectors.  */

> -  nunits = TYPE_VECTOR_SUBPARTS (vector_type);

> +  nunits = TYPE_VECTOR_SUBPARTS (vector_type).to_constant ();

>

>    gcc_assert (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def);

>

> @@ -7672,9 +7676,8 @@ vect_transform_loop (loop_vec_info loop_

>

>           if (STMT_VINFO_VECTYPE (stmt_info))

>             {

> -             unsigned int nunits

> -               = (unsigned int)

> -                 TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (stmt_info));

> +             poly_uint64 nunits

> +               = TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (stmt_info));

>               if (!STMT_SLP_TYPE (stmt_info)

>                   && may_ne (nunits, vf)

>                   && dump_enabled_p ())

> Index: gcc/tree-vect-patterns.c

> ===================================================================

> --- gcc/tree-vect-patterns.c    2017-10-10 17:55:22.109175458 +0100

> +++ gcc/tree-vect-patterns.c    2017-10-23 17:25:51.763379034 +0100

> @@ -3714,8 +3714,9 @@ vect_recog_bool_pattern (vec<gimple *> *

>           vectorized matches the vector type of the result in

>          size and number of elements.  */

>        unsigned prec

> -       = wi::udiv_trunc (wi::to_wide (TYPE_SIZE (vectype)),

> -                         TYPE_VECTOR_SUBPARTS (vectype)).to_uhwi ();

> +       = vector_element_size (tree_to_poly_uint64 (TYPE_SIZE (vectype)),

> +                              TYPE_VECTOR_SUBPARTS (vectype));

> +

>        tree type

>         = build_nonstandard_integer_type (prec,

>                                           TYPE_UNSIGNED (TREE_TYPE (var)));

> @@ -3898,7 +3899,8 @@ vect_recog_mask_conversion_pattern (vec<

>        vectype2 = get_mask_type_for_scalar_type (rhs1_type);

>

>        if (!vectype1 || !vectype2

> -         || TYPE_VECTOR_SUBPARTS (vectype1) == TYPE_VECTOR_SUBPARTS (vectype2))

> +         || must_eq (TYPE_VECTOR_SUBPARTS (vectype1),

> +                     TYPE_VECTOR_SUBPARTS (vectype2)))

>         return NULL;

>

>        tmp = build_mask_conversion (rhs1, vectype1, stmt_vinfo, vinfo);

> @@ -3973,7 +3975,8 @@ vect_recog_mask_conversion_pattern (vec<

>        vectype2 = get_mask_type_for_scalar_type (rhs1_type);

>

>        if (!vectype1 || !vectype2

> -         || TYPE_VECTOR_SUBPARTS (vectype1) == TYPE_VECTOR_SUBPARTS (vectype2))

> +         || must_eq (TYPE_VECTOR_SUBPARTS (vectype1),

> +                     TYPE_VECTOR_SUBPARTS (vectype2)))

>         return NULL;

>

>        /* If rhs1 is a comparison we need to move it into a

> Index: gcc/tree-vect-slp.c

> ===================================================================

> --- gcc/tree-vect-slp.c 2017-10-23 17:22:43.865071801 +0100

> +++ gcc/tree-vect-slp.c 2017-10-23 17:25:51.764378998 +0100

> @@ -1621,15 +1621,16 @@ vect_supported_load_permutation_p (slp_i

>               stmt_vec_info group_info

>                 = vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);

>               group_info = vinfo_for_stmt (GROUP_FIRST_ELEMENT (group_info));

> -             unsigned nunits

> -               = TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (group_info));

> +             unsigned HOST_WIDE_INT nunits;

>               unsigned k, maxk = 0;

>               FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)

>                 if (k > maxk)

>                   maxk = k;

>               /* In BB vectorization we may not actually use a loaded vector

>                  accessing elements in excess of GROUP_SIZE.  */

> -             if (maxk >= (GROUP_SIZE (group_info) & ~(nunits - 1)))

> +             tree vectype = STMT_VINFO_VECTYPE (group_info);

> +             if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits)

> +                 || maxk >= (GROUP_SIZE (group_info) & ~(nunits - 1)))

>                 {

>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,

>                                    "BB vectorization with gaps at the end of "

> @@ -3243,7 +3244,7 @@ vect_get_constant_vectors (tree op, slp_

>    else

>      vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));

>    /* Enforced by vect_get_and_check_slp_defs.  */

> -  nunits = TYPE_VECTOR_SUBPARTS (vector_type);

> +  nunits = TYPE_VECTOR_SUBPARTS (vector_type).to_constant ();

>

>    if (STMT_VINFO_DATA_REF (stmt_vinfo))

>      {

> @@ -3600,12 +3601,12 @@ vect_transform_slp_perm_load (slp_tree n

>    gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];

>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);

>    tree mask_element_type = NULL_TREE, mask_type;

> -  int nunits, vec_index = 0;

> +  int vec_index = 0;

>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);

>    int group_size = SLP_INSTANCE_GROUP_SIZE (slp_node_instance);

> -  int mask_element;

> +  unsigned int mask_element;

>    machine_mode mode;

> -  unsigned HOST_WIDE_INT const_vf;

> +  unsigned HOST_WIDE_INT nunits, const_vf;

>

>    if (!STMT_VINFO_GROUPED_ACCESS (stmt_info))

>      return false;

> @@ -3615,8 +3616,10 @@ vect_transform_slp_perm_load (slp_tree n

>    mode = TYPE_MODE (vectype);

>

>    /* At the moment, all permutations are represented using per-element

> -     indices, so we can't cope with variable vectorization factors.  */

> -  if (!vf.is_constant (&const_vf))

> +     indices, so we can't cope with variable vector lengths or

> +     vectorization factors.  */

> +  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits)

> +      || !vf.is_constant (&const_vf))

>      return false;

>

>    /* The generic VEC_PERM_EXPR code always uses an integral type of the

> @@ -3624,7 +3627,6 @@ vect_transform_slp_perm_load (slp_tree n

>    mask_element_type = lang_hooks.types.type_for_mode

>      (int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))).require (), 1);

>    mask_type = get_vectype_for_scalar_type (mask_element_type);

> -  nunits = TYPE_VECTOR_SUBPARTS (vectype);

>    auto_vec_perm_indices mask (nunits);

>    mask.quick_grow (nunits);

>

> @@ -3654,7 +3656,7 @@ vect_transform_slp_perm_load (slp_tree n

>       {c2,a3,b3,c3}.  */

>

>    int vect_stmts_counter = 0;

> -  int index = 0;

> +  unsigned int index = 0;

>    int first_vec_index = -1;

>    int second_vec_index = -1;

>    bool noop_p = true;

> @@ -3664,8 +3666,8 @@ vect_transform_slp_perm_load (slp_tree n

>      {

>        for (int k = 0; k < group_size; k++)

>         {

> -         int i = (SLP_TREE_LOAD_PERMUTATION (node)[k]

> -                  + j * STMT_VINFO_GROUP_SIZE (stmt_info));

> +         unsigned int i = (SLP_TREE_LOAD_PERMUTATION (node)[k]

> +                           + j * STMT_VINFO_GROUP_SIZE (stmt_info));

>           vec_index = i / nunits;

>           mask_element = i % nunits;

>           if (vec_index == first_vec_index

> @@ -3693,8 +3695,7 @@ vect_transform_slp_perm_load (slp_tree n

>               return false;

>             }

>

> -         gcc_assert (mask_element >= 0

> -                     && mask_element < 2 * nunits);

> +         gcc_assert (mask_element < 2 * nunits);

>           if (mask_element != index)

>             noop_p = false;

>           mask[index++] = mask_element;

> @@ -3727,7 +3728,7 @@ vect_transform_slp_perm_load (slp_tree n

>                   if (! noop_p)

>                     {

>                       auto_vec<tree, 32> mask_elts (nunits);

> -                     for (int l = 0; l < nunits; ++l)

> +                     for (unsigned int l = 0; l < nunits; ++l)

>                         mask_elts.quick_push (build_int_cst (mask_element_type,

>                                                              mask[l]));

>                       mask_vec = build_vector (mask_type, mask_elts);

> Index: gcc/tree-vect-stmts.c

> ===================================================================

> --- gcc/tree-vect-stmts.c       2017-10-23 17:22:41.879277786 +0100

> +++ gcc/tree-vect-stmts.c       2017-10-23 17:25:51.767378890 +0100

> @@ -1713,9 +1713,10 @@ compare_step_with_zero (gimple *stmt)

>  static tree

>  perm_mask_for_reverse (tree vectype)

>  {

> -  int i, nunits;

> +  unsigned HOST_WIDE_INT i, nunits;

>

> -  nunits = TYPE_VECTOR_SUBPARTS (vectype);

> +  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits))

> +    return NULL_TREE;

>

>    auto_vec_perm_indices sel (nunits);

>    for (i = 0; i < nunits; ++i)

> @@ -1750,7 +1751,7 @@ get_group_load_store_type (gimple *stmt,

>    bool single_element_p = (stmt == first_stmt

>                            && !GROUP_NEXT_ELEMENT (stmt_info));

>    unsigned HOST_WIDE_INT gap = GROUP_GAP (vinfo_for_stmt (first_stmt));

> -  unsigned nunits = TYPE_VECTOR_SUBPARTS (vectype);

> +  poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);

>

>    /* True if the vectorized statements would access beyond the last

>       statement in the group.  */

> @@ -1774,7 +1775,7 @@ get_group_load_store_type (gimple *stmt,

>           /* Try to use consecutive accesses of GROUP_SIZE elements,

>              separated by the stride, until we have a complete vector.

>              Fall back to scalar accesses if that isn't possible.  */

> -         if (nunits % group_size == 0)

> +         if (multiple_p (nunits, group_size))

>             *memory_access_type = VMAT_STRIDED_SLP;

>           else

>             *memory_access_type = VMAT_ELEMENTWISE;

> @@ -2102,7 +2103,8 @@ vectorizable_mask_load_store (gimple *st

>      mask_vectype = get_mask_type_for_scalar_type (TREE_TYPE (vectype));

>

>    if (!mask_vectype || !VECTOR_BOOLEAN_TYPE_P (mask_vectype)

> -      || TYPE_VECTOR_SUBPARTS (mask_vectype) != TYPE_VECTOR_SUBPARTS (vectype))

> +      || may_ne (TYPE_VECTOR_SUBPARTS (mask_vectype),

> +                TYPE_VECTOR_SUBPARTS (vectype)))

>      return false;

>

>    if (gimple_call_internal_fn (stmt) == IFN_MASK_STORE)

> @@ -2255,8 +2257,8 @@ vectorizable_mask_load_store (gimple *st

>

>           if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))

>             {

> -             gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op))

> -                         == TYPE_VECTOR_SUBPARTS (idxtype));

> +             gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op)),

> +                                  TYPE_VECTOR_SUBPARTS (idxtype)));

>               var = vect_get_new_ssa_name (idxtype, vect_simple_var);

>               op = build1 (VIEW_CONVERT_EXPR, idxtype, op);

>               new_stmt

> @@ -2281,8 +2283,9 @@ vectorizable_mask_load_store (gimple *st

>               mask_op = vec_mask;

>               if (!useless_type_conversion_p (masktype, TREE_TYPE (vec_mask)))

>                 {

> -                 gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask_op))

> -                             == TYPE_VECTOR_SUBPARTS (masktype));

> +                 gcc_assert

> +                   (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask_op)),

> +                             TYPE_VECTOR_SUBPARTS (masktype)));

>                   var = vect_get_new_ssa_name (masktype, vect_simple_var);

>                   mask_op = build1 (VIEW_CONVERT_EXPR, masktype, mask_op);

>                   new_stmt

> @@ -2298,8 +2301,8 @@ vectorizable_mask_load_store (gimple *st

>

>           if (!useless_type_conversion_p (vectype, rettype))

>             {

> -             gcc_assert (TYPE_VECTOR_SUBPARTS (vectype)

> -                         == TYPE_VECTOR_SUBPARTS (rettype));

> +             gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (vectype),

> +                                  TYPE_VECTOR_SUBPARTS (rettype)));

>               op = vect_get_new_ssa_name (rettype, vect_simple_var);

>               gimple_call_set_lhs (new_stmt, op);

>               vect_finish_stmt_generation (stmt, new_stmt, gsi);

> @@ -2493,11 +2496,14 @@ vectorizable_bswap (gimple *stmt, gimple

>    tree op, vectype;

>    stmt_vec_info stmt_info = vinfo_for_stmt (stmt);

>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);

> -  unsigned ncopies, nunits;

> +  unsigned ncopies;

> +  unsigned HOST_WIDE_INT nunits, num_bytes;

>

>    op = gimple_call_arg (stmt, 0);

>    vectype = STMT_VINFO_VECTYPE (stmt_info);

> -  nunits = TYPE_VECTOR_SUBPARTS (vectype);

> +

> +  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits))

> +    return false;

>

>    /* Multiple types in SLP are handled by creating the appropriate number of

>       vectorized stmts for each SLP node.  Hence, NCOPIES is always 1 in

> @@ -2513,7 +2519,9 @@ vectorizable_bswap (gimple *stmt, gimple

>    if (! char_vectype)

>      return false;

>

> -  unsigned int num_bytes = TYPE_VECTOR_SUBPARTS (char_vectype);

> +  if (!TYPE_VECTOR_SUBPARTS (char_vectype).is_constant (&num_bytes))

> +    return false;

> +

>    unsigned word_bytes = num_bytes / nunits;

>

>    auto_vec_perm_indices elts (num_bytes);

> @@ -3213,7 +3221,7 @@ vect_simd_lane_linear (tree op, struct l

>  static unsigned HOST_WIDE_INT

>  simd_clone_subparts (tree vectype)

>  {

> -  return TYPE_VECTOR_SUBPARTS (vectype);

> +  return TYPE_VECTOR_SUBPARTS (vectype).to_constant ();

>  }

>

>  /* Function vectorizable_simd_clone_call.

> @@ -4732,7 +4740,7 @@ vectorizable_assignment (gimple *stmt, g

>      op = TREE_OPERAND (op, 0);

>

>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);

> -  unsigned int nunits = TYPE_VECTOR_SUBPARTS (vectype);

> +  poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);

>

>    /* Multiple types in SLP are handled by creating the appropriate number of

>       vectorized stmts for each SLP node.  Hence, NCOPIES is always 1 in

> @@ -4757,7 +4765,7 @@ vectorizable_assignment (gimple *stmt, g

>    if ((CONVERT_EXPR_CODE_P (code)

>         || code == VIEW_CONVERT_EXPR)

>        && (!vectype_in

> -         || TYPE_VECTOR_SUBPARTS (vectype_in) != nunits

> +         || may_ne (TYPE_VECTOR_SUBPARTS (vectype_in), nunits)

>           || (GET_MODE_SIZE (TYPE_MODE (vectype))

>               != GET_MODE_SIZE (TYPE_MODE (vectype_in)))))

>      return false;

> @@ -4906,8 +4914,8 @@ vectorizable_shift (gimple *stmt, gimple

>    int ndts = 2;

>    gimple *new_stmt = NULL;

>    stmt_vec_info prev_stmt_info;

> -  int nunits_in;

> -  int nunits_out;

> +  poly_uint64 nunits_in;

> +  poly_uint64 nunits_out;

>    tree vectype_out;

>    tree op1_vectype;

>    int ncopies;

> @@ -4974,7 +4982,7 @@ vectorizable_shift (gimple *stmt, gimple

>

>    nunits_out = TYPE_VECTOR_SUBPARTS (vectype_out);

>    nunits_in = TYPE_VECTOR_SUBPARTS (vectype);

> -  if (nunits_out != nunits_in)

> +  if (may_ne (nunits_out, nunits_in))

>      return false;

>

>    op1 = gimple_assign_rhs2 (stmt);

> @@ -5274,8 +5282,8 @@ vectorizable_operation (gimple *stmt, gi

>    int ndts = 3;

>    gimple *new_stmt = NULL;

>    stmt_vec_info prev_stmt_info;

> -  int nunits_in;

> -  int nunits_out;

> +  poly_uint64 nunits_in;

> +  poly_uint64 nunits_out;

>    tree vectype_out;

>    int ncopies;

>    int j, i;

> @@ -5385,7 +5393,7 @@ vectorizable_operation (gimple *stmt, gi

>

>    nunits_out = TYPE_VECTOR_SUBPARTS (vectype_out);

>    nunits_in = TYPE_VECTOR_SUBPARTS (vectype);

> -  if (nunits_out != nunits_in)

> +  if (may_ne (nunits_out, nunits_in))

>      return false;

>

>    if (op_type == binary_op || op_type == ternary_op)

> @@ -5937,8 +5945,8 @@ vectorizable_store (gimple *stmt, gimple

>

>           if (!useless_type_conversion_p (srctype, TREE_TYPE (src)))

>             {

> -             gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (src))

> -                         == TYPE_VECTOR_SUBPARTS (srctype));

> +             gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (src)),

> +                                  TYPE_VECTOR_SUBPARTS (srctype)));

>               var = vect_get_new_ssa_name (srctype, vect_simple_var);

>               src = build1 (VIEW_CONVERT_EXPR, srctype, src);

>               new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, src);

> @@ -5948,8 +5956,8 @@ vectorizable_store (gimple *stmt, gimple

>

>           if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))

>             {

> -             gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op))

> -                         == TYPE_VECTOR_SUBPARTS (idxtype));

> +             gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op)),

> +                                  TYPE_VECTOR_SUBPARTS (idxtype)));

>               var = vect_get_new_ssa_name (idxtype, vect_simple_var);

>               op = build1 (VIEW_CONVERT_EXPR, idxtype, op);

>               new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);

> @@ -6554,7 +6562,7 @@ vect_gen_perm_mask_any (tree vectype, ve

>    tree mask_elt_type, mask_type, mask_vec;

>

>    unsigned int nunits = sel.length ();

> -  gcc_checking_assert (nunits == TYPE_VECTOR_SUBPARTS (vectype));

> +  gcc_checking_assert (must_eq (nunits, TYPE_VECTOR_SUBPARTS (vectype)));

>

>    mask_elt_type = lang_hooks.types.type_for_mode

>      (int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))).require (), 1);

> @@ -6993,8 +7001,8 @@ vectorizable_load (gimple *stmt, gimple_

>

>           if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))

>             {

> -             gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op))

> -                         == TYPE_VECTOR_SUBPARTS (idxtype));

> +             gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op)),

> +                                  TYPE_VECTOR_SUBPARTS (idxtype)));

>               var = vect_get_new_ssa_name (idxtype, vect_simple_var);

>               op = build1 (VIEW_CONVERT_EXPR, idxtype, op);

>               new_stmt

> @@ -7008,8 +7016,8 @@ vectorizable_load (gimple *stmt, gimple_

>

>           if (!useless_type_conversion_p (vectype, rettype))

>             {

> -             gcc_assert (TYPE_VECTOR_SUBPARTS (vectype)

> -                         == TYPE_VECTOR_SUBPARTS (rettype));

> +             gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (vectype),

> +                                  TYPE_VECTOR_SUBPARTS (rettype)));

>               op = vect_get_new_ssa_name (rettype, vect_simple_var);

>               gimple_call_set_lhs (new_stmt, op);

>               vect_finish_stmt_generation (stmt, new_stmt, gsi);

> @@ -7905,7 +7913,8 @@ vect_is_simple_cond (tree cond, vec_info

>      return false;

>

>    if (vectype1 && vectype2

> -      && TYPE_VECTOR_SUBPARTS (vectype1) != TYPE_VECTOR_SUBPARTS (vectype2))

> +      && may_ne (TYPE_VECTOR_SUBPARTS (vectype1),

> +                TYPE_VECTOR_SUBPARTS (vectype2)))

>      return false;

>

>    *comp_vectype = vectype1 ? vectype1 : vectype2;

> @@ -8308,7 +8317,7 @@ vectorizable_comparison (gimple *stmt, g

>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);

>    enum vect_def_type dts[2] = {vect_unknown_def_type, vect_unknown_def_type};

>    int ndts = 2;

> -  unsigned nunits;

> +  poly_uint64 nunits;

>    int ncopies;

>    enum tree_code code, bitop1 = NOP_EXPR, bitop2 = NOP_EXPR;

>    stmt_vec_info prev_stmt_info = NULL;

> @@ -8368,7 +8377,8 @@ vectorizable_comparison (gimple *stmt, g

>      return false;

>

>    if (vectype1 && vectype2

> -      && TYPE_VECTOR_SUBPARTS (vectype1) != TYPE_VECTOR_SUBPARTS (vectype2))

> +      && may_ne (TYPE_VECTOR_SUBPARTS (vectype1),

> +                TYPE_VECTOR_SUBPARTS (vectype2)))

>      return false;

>

>    vectype = vectype1 ? vectype1 : vectype2;

> @@ -8377,10 +8387,10 @@ vectorizable_comparison (gimple *stmt, g

>    if (!vectype)

>      {

>        vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));

> -      if (TYPE_VECTOR_SUBPARTS (vectype) != nunits)

> +      if (may_ne (TYPE_VECTOR_SUBPARTS (vectype), nunits))

>         return false;

>      }

> -  else if (nunits != TYPE_VECTOR_SUBPARTS (vectype))

> +  else if (may_ne (nunits, TYPE_VECTOR_SUBPARTS (vectype)))

>      return false;

>

>    /* Can't compare mask and non-mask types.  */

> @@ -9611,8 +9621,8 @@ supportable_widening_operation (enum tre

>          vector types having the same QImode.  Thus we

>          add additional check for elements number.  */

>      return (!VECTOR_BOOLEAN_TYPE_P (vectype)

> -           || (TYPE_VECTOR_SUBPARTS (vectype) / 2

> -               == TYPE_VECTOR_SUBPARTS (wide_vectype)));

> +           || must_eq (TYPE_VECTOR_SUBPARTS (vectype),

> +                       TYPE_VECTOR_SUBPARTS (wide_vectype) * 2));

>

>    /* Check if it's a multi-step conversion that can be done using intermediate

>       types.  */

> @@ -9633,8 +9643,10 @@ supportable_widening_operation (enum tre

>        intermediate_mode = insn_data[icode1].operand[0].mode;

>        if (VECTOR_BOOLEAN_TYPE_P (prev_type))

>         {

> +         poly_uint64 intermediate_nelts

> +           = exact_div (TYPE_VECTOR_SUBPARTS (prev_type), 2);

>           intermediate_type

> -           = build_truth_vector_type (TYPE_VECTOR_SUBPARTS (prev_type) / 2,

> +           = build_truth_vector_type (intermediate_nelts,

>                                        current_vector_size);

>           if (intermediate_mode != TYPE_MODE (intermediate_type))

>             return false;

> @@ -9664,8 +9676,8 @@ supportable_widening_operation (enum tre

>        if (insn_data[icode1].operand[0].mode == TYPE_MODE (wide_vectype)

>           && insn_data[icode2].operand[0].mode == TYPE_MODE (wide_vectype))

>         return (!VECTOR_BOOLEAN_TYPE_P (vectype)

> -               || (TYPE_VECTOR_SUBPARTS (intermediate_type) / 2

> -                   == TYPE_VECTOR_SUBPARTS (wide_vectype)));

> +               || must_eq (TYPE_VECTOR_SUBPARTS (intermediate_type),

> +                           TYPE_VECTOR_SUBPARTS (wide_vectype) * 2));

>

>        prev_type = intermediate_type;

>        prev_mode = intermediate_mode;

> @@ -9753,8 +9765,8 @@ supportable_narrowing_operation (enum tr

>         vector types having the same QImode.  Thus we

>         add additional check for elements number.  */

>      return (!VECTOR_BOOLEAN_TYPE_P (vectype)

> -           || (TYPE_VECTOR_SUBPARTS (vectype) * 2

> -               == TYPE_VECTOR_SUBPARTS (narrow_vectype)));

> +           || must_eq (TYPE_VECTOR_SUBPARTS (vectype) * 2,

> +                       TYPE_VECTOR_SUBPARTS (narrow_vectype)));

>

>    /* Check if it's a multi-step conversion that can be done using intermediate

>       types.  */

> @@ -9820,8 +9832,8 @@ supportable_narrowing_operation (enum tr

>

>        if (insn_data[icode1].operand[0].mode == TYPE_MODE (narrow_vectype))

>         return (!VECTOR_BOOLEAN_TYPE_P (vectype)

> -               || (TYPE_VECTOR_SUBPARTS (intermediate_type) * 2

> -                   == TYPE_VECTOR_SUBPARTS (narrow_vectype)));

> +               || must_eq (TYPE_VECTOR_SUBPARTS (intermediate_type) * 2,

> +                           TYPE_VECTOR_SUBPARTS (narrow_vectype)));

>

>        prev_mode = intermediate_mode;

>        prev_type = intermediate_type;

> Index: gcc/ada/gcc-interface/utils.c

> ===================================================================

> --- gcc/ada/gcc-interface/utils.c       2017-10-23 11:41:24.988650286 +0100

> +++ gcc/ada/gcc-interface/utils.c       2017-10-23 17:25:51.723380471 +0100

> @@ -3528,7 +3528,7 @@ gnat_types_compatible_p (tree t1, tree t

>    /* Vector types are also compatible if they have the same number of subparts

>       and the same form of (scalar) element type.  */

>    if (code == VECTOR_TYPE

> -      && TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2)

> +      && must_eq (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))

>        && TREE_CODE (TREE_TYPE (t1)) == TREE_CODE (TREE_TYPE (t2))

>        && TYPE_PRECISION (TREE_TYPE (t1)) == TYPE_PRECISION (TREE_TYPE (t2)))

>      return 1;

> Index: gcc/brig/brigfrontend/brig-to-generic.cc

> ===================================================================

> --- gcc/brig/brigfrontend/brig-to-generic.cc    2017-10-10 16:57:41.296192291 +0100

> +++ gcc/brig/brigfrontend/brig-to-generic.cc    2017-10-23 17:25:51.724380435 +0100

> @@ -869,7 +869,7 @@ get_unsigned_int_type (tree original_typ

>      {

>        size_t esize

>         = int_size_in_bytes (TREE_TYPE (original_type)) * BITS_PER_UNIT;

> -      size_t ecount = TYPE_VECTOR_SUBPARTS (original_type);

> +      poly_uint64 ecount = TYPE_VECTOR_SUBPARTS (original_type);

>        return build_vector_type (build_nonstandard_integer_type (esize, true),

>                                 ecount);

>      }

> Index: gcc/brig/brigfrontend/brig-util.h

> ===================================================================

> --- gcc/brig/brigfrontend/brig-util.h   2017-10-23 17:22:46.882758777 +0100

> +++ gcc/brig/brigfrontend/brig-util.h   2017-10-23 17:25:51.724380435 +0100

> @@ -81,7 +81,7 @@ bool hsa_type_packed_p (BrigType16_t typ

>  inline unsigned HOST_WIDE_INT

>  gccbrig_type_vector_subparts (const_tree type)

>  {

> -  return TYPE_VECTOR_SUBPARTS (type);

> +  return TYPE_VECTOR_SUBPARTS (type).to_constant ();

>  }

>

>  #endif

> Index: gcc/c-family/c-common.c

> ===================================================================

> --- gcc/c-family/c-common.c     2017-10-23 11:41:23.219573771 +0100

> +++ gcc/c-family/c-common.c     2017-10-23 17:25:51.725380399 +0100

> @@ -942,15 +942,16 @@ vector_types_convertible_p (const_tree t

>

>    convertible_lax =

>      (tree_int_cst_equal (TYPE_SIZE (t1), TYPE_SIZE (t2))

> -     && (TREE_CODE (TREE_TYPE (t1)) != REAL_TYPE ||

> -        TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2))

> +     && (TREE_CODE (TREE_TYPE (t1)) != REAL_TYPE

> +        || must_eq (TYPE_VECTOR_SUBPARTS (t1),

> +                    TYPE_VECTOR_SUBPARTS (t2)))

>       && (INTEGRAL_TYPE_P (TREE_TYPE (t1))

>          == INTEGRAL_TYPE_P (TREE_TYPE (t2))));

>

>    if (!convertible_lax || flag_lax_vector_conversions)

>      return convertible_lax;

>

> -  if (TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2)

> +  if (must_eq (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))

>        && lang_hooks.types_compatible_p (TREE_TYPE (t1), TREE_TYPE (t2)))

>      return true;

>

> @@ -1018,10 +1019,10 @@ c_build_vec_perm_expr (location_t loc, t

>        return error_mark_node;

>      }

>

> -  if (TYPE_VECTOR_SUBPARTS (TREE_TYPE (v0))

> -      != TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask))

> -      && TYPE_VECTOR_SUBPARTS (TREE_TYPE (v1))

> -        != TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask)))

> +  if (may_ne (TYPE_VECTOR_SUBPARTS (TREE_TYPE (v0)),

> +             TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask)))

> +      && may_ne (TYPE_VECTOR_SUBPARTS (TREE_TYPE (v1)),

> +                TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask))))

>      {

>        if (complain)

>         error_at (loc, "__builtin_shuffle number of elements of the "

> @@ -2280,7 +2281,8 @@ c_common_type_for_mode (machine_mode mod

>        if (inner_type != NULL_TREE)

>         return build_complex_type (inner_type);

>      }

> -  else if (VECTOR_MODE_P (mode))

> +  else if (VECTOR_MODE_P (mode)

> +          && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))

>      {

>        machine_mode inner_mode = GET_MODE_INNER (mode);

>        tree inner_type = c_common_type_for_mode (inner_mode, unsignedp);

> @@ -7591,7 +7593,7 @@ convert_vector_to_array_for_subscript (l

>

>        if (TREE_CODE (index) == INTEGER_CST)

>          if (!tree_fits_uhwi_p (index)

> -            || tree_to_uhwi (index) >= TYPE_VECTOR_SUBPARTS (type))

> +           || may_ge (tree_to_uhwi (index), TYPE_VECTOR_SUBPARTS (type)))

>            warning_at (loc, OPT_Warray_bounds, "index value is out of bound");

>

>        /* We are building an ARRAY_REF so mark the vector as addressable

> Index: gcc/c/c-typeck.c

> ===================================================================

> --- gcc/c/c-typeck.c    2017-10-10 17:55:22.067175462 +0100

> +++ gcc/c/c-typeck.c    2017-10-23 17:25:51.726380364 +0100

> @@ -1238,7 +1238,7 @@ comptypes_internal (const_tree type1, co

>        break;

>

>      case VECTOR_TYPE:

> -      val = (TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2)

> +      val = (must_eq (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))

>              && comptypes_internal (TREE_TYPE (t1), TREE_TYPE (t2),

>                                     enum_and_int_p, different_types_p));

>        break;

> @@ -11343,7 +11343,8 @@ build_binary_op (location_t location, en

>        if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE

>           && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE

>           && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE

> -         && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))

> +         && must_eq (TYPE_VECTOR_SUBPARTS (type0),

> +                     TYPE_VECTOR_SUBPARTS (type1)))

>         {

>           result_type = type0;

>           converted = 1;

> @@ -11400,7 +11401,8 @@ build_binary_op (location_t location, en

>        if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE

>           && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE

>           && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE

> -         && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))

> +         && must_eq (TYPE_VECTOR_SUBPARTS (type0),

> +                     TYPE_VECTOR_SUBPARTS (type1)))

>         {

>           result_type = type0;

>           converted = 1;

> @@ -11474,7 +11476,8 @@ build_binary_op (location_t location, en

>                return error_mark_node;

>              }

>

> -          if (TYPE_VECTOR_SUBPARTS (type0) != TYPE_VECTOR_SUBPARTS (type1))

> +         if (may_ne (TYPE_VECTOR_SUBPARTS (type0),

> +                     TYPE_VECTOR_SUBPARTS (type1)))

>              {

>                error_at (location, "comparing vectors with different "

>                                    "number of elements");

> @@ -11634,7 +11637,8 @@ build_binary_op (location_t location, en

>                return error_mark_node;

>              }

>

> -          if (TYPE_VECTOR_SUBPARTS (type0) != TYPE_VECTOR_SUBPARTS (type1))

> +         if (may_ne (TYPE_VECTOR_SUBPARTS (type0),

> +                     TYPE_VECTOR_SUBPARTS (type1)))

>              {

>                error_at (location, "comparing vectors with different "

>                                    "number of elements");

> Index: gcc/cp/call.c

> ===================================================================

> --- gcc/cp/call.c       2017-10-23 11:41:24.251615675 +0100

> +++ gcc/cp/call.c       2017-10-23 17:25:51.728380292 +0100

> @@ -4928,8 +4928,8 @@ build_conditional_expr_1 (location_t loc

>         }

>

>        if (!same_type_p (arg2_type, arg3_type)

> -         || TYPE_VECTOR_SUBPARTS (arg1_type)

> -            != TYPE_VECTOR_SUBPARTS (arg2_type)

> +         || may_ne (TYPE_VECTOR_SUBPARTS (arg1_type),

> +                    TYPE_VECTOR_SUBPARTS (arg2_type))

>           || TYPE_SIZE (arg1_type) != TYPE_SIZE (arg2_type))

>         {

>           if (complain & tf_error)

> Index: gcc/cp/constexpr.c

> ===================================================================

> --- gcc/cp/constexpr.c  2017-10-23 17:18:47.657057799 +0100

> +++ gcc/cp/constexpr.c  2017-10-23 17:25:51.728380292 +0100

> @@ -3059,7 +3059,8 @@ cxx_fold_indirect_ref (location_t loc, t

>               unsigned HOST_WIDE_INT indexi = offset * BITS_PER_UNIT;

>               tree index = bitsize_int (indexi);

>

> -             if (offset / part_widthi < TYPE_VECTOR_SUBPARTS (op00type))

> +             if (must_lt (offset / part_widthi,

> +                          TYPE_VECTOR_SUBPARTS (op00type)))

>                 return fold_build3_loc (loc,

>                                         BIT_FIELD_REF, type, op00,

>                                         part_width, index);

> Index: gcc/cp/decl.c

> ===================================================================

> --- gcc/cp/decl.c       2017-10-23 11:41:24.223565801 +0100

> +++ gcc/cp/decl.c       2017-10-23 17:25:51.732380148 +0100

> @@ -7454,7 +7454,11 @@ cp_finish_decomp (tree decl, tree first,

>      }

>    else if (TREE_CODE (type) == VECTOR_TYPE)

>      {

> -      eltscnt = TYPE_VECTOR_SUBPARTS (type);

> +      if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&eltscnt))

> +       {

> +         error_at (loc, "cannot decompose variable length vector %qT", type);

> +         goto error_out;

> +       }

>        if (count != eltscnt)

>         goto cnt_mismatch;

>        eltype = cp_build_qualified_type (TREE_TYPE (type), TYPE_QUALS (type));

> Index: gcc/cp/mangle.c

> ===================================================================

> --- gcc/cp/mangle.c     2017-10-10 17:55:22.087175461 +0100

> +++ gcc/cp/mangle.c     2017-10-23 17:25:51.733380112 +0100

> @@ -2260,7 +2260,8 @@ write_type (tree type)

>                   write_string ("Dv");

>                   /* Non-constant vector size would be encoded with

>                      _ expression, but we don't support that yet.  */

> -                 write_unsigned_number (TYPE_VECTOR_SUBPARTS (type));

> +                 write_unsigned_number (TYPE_VECTOR_SUBPARTS (type)

> +                                        .to_constant ());

>                   write_char ('_');

>                 }

>               else

> Index: gcc/cp/typeck.c

> ===================================================================

> --- gcc/cp/typeck.c     2017-10-23 11:41:24.212926194 +0100

> +++ gcc/cp/typeck.c     2017-10-23 17:25:51.735380040 +0100

> @@ -1359,7 +1359,7 @@ structural_comptypes (tree t1, tree t2,

>        break;

>

>      case VECTOR_TYPE:

> -      if (TYPE_VECTOR_SUBPARTS (t1) != TYPE_VECTOR_SUBPARTS (t2)

> +      if (may_ne (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))

>           || !same_type_p (TREE_TYPE (t1), TREE_TYPE (t2)))

>         return false;

>        break;

> @@ -4513,9 +4513,10 @@ cp_build_binary_op (location_t location,

>            converted = 1;

>          }

>        else if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE

> -         && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE

> -         && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE

> -         && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))

> +              && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE

> +              && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE

> +              && must_eq (TYPE_VECTOR_SUBPARTS (type0),

> +                          TYPE_VECTOR_SUBPARTS (type1)))

>         {

>           result_type = type0;

>           converted = 1;

> @@ -4558,9 +4559,10 @@ cp_build_binary_op (location_t location,

>            converted = 1;

>          }

>        else if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE

> -         && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE

> -         && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE

> -         && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))

> +              && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE

> +              && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE

> +              && must_eq (TYPE_VECTOR_SUBPARTS (type0),

> +                          TYPE_VECTOR_SUBPARTS (type1)))

>         {

>           result_type = type0;

>           converted = 1;

> @@ -4925,7 +4927,8 @@ cp_build_binary_op (location_t location,

>               return error_mark_node;

>             }

>

> -         if (TYPE_VECTOR_SUBPARTS (type0) != TYPE_VECTOR_SUBPARTS (type1))

> +         if (may_ne (TYPE_VECTOR_SUBPARTS (type0),

> +                     TYPE_VECTOR_SUBPARTS (type1)))

>             {

>               if (complain & tf_error)

>                 {

> Index: gcc/cp/typeck2.c

> ===================================================================

> --- gcc/cp/typeck2.c    2017-10-09 11:50:52.214211104 +0100

> +++ gcc/cp/typeck2.c    2017-10-23 17:25:51.736380004 +0100

> @@ -1276,7 +1276,7 @@ process_init_constructor_array (tree typ

>      }

>    else

>      /* Vectors are like simple fixed-size arrays.  */

> -    len = TYPE_VECTOR_SUBPARTS (type);

> +    unbounded = !TYPE_VECTOR_SUBPARTS (type).is_constant (&len);

>

>    /* There must not be more initializers than needed.  */

>    if (!unbounded && vec_safe_length (v) > len)

> Index: gcc/fortran/trans-types.c

> ===================================================================

> --- gcc/fortran/trans-types.c   2017-09-25 13:57:12.591118003 +0100

> +++ gcc/fortran/trans-types.c   2017-10-23 17:25:51.745379681 +0100

> @@ -3159,7 +3159,8 @@ gfc_type_for_mode (machine_mode mode, in

>        tree type = gfc_type_for_size (GET_MODE_PRECISION (int_mode), unsignedp);

>        return type != NULL_TREE && mode == TYPE_MODE (type) ? type : NULL_TREE;

>      }

> -  else if (VECTOR_MODE_P (mode))

> +  else if (VECTOR_MODE_P (mode)

> +          && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))

>      {

>        machine_mode inner_mode = GET_MODE_INNER (mode);

>        tree inner_type = gfc_type_for_mode (inner_mode, unsignedp);

> Index: gcc/lto/lto-lang.c

> ===================================================================

> --- gcc/lto/lto-lang.c  2017-10-23 11:41:25.563189078 +0100

> +++ gcc/lto/lto-lang.c  2017-10-23 17:25:51.748379573 +0100

> @@ -971,7 +971,8 @@ lto_type_for_mode (machine_mode mode, in

>        if (inner_type != NULL_TREE)

>         return build_complex_type (inner_type);

>      }

> -  else if (VECTOR_MODE_P (mode))

> +  else if (VECTOR_MODE_P (mode)

> +          && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))

>      {

>        machine_mode inner_mode = GET_MODE_INNER (mode);

>        tree inner_type = lto_type_for_mode (inner_mode, unsigned_p);

> Index: gcc/lto/lto.c

> ===================================================================

> --- gcc/lto/lto.c       2017-10-13 10:23:39.776947828 +0100

> +++ gcc/lto/lto.c       2017-10-23 17:25:51.749379537 +0100

> @@ -316,7 +316,7 @@ hash_canonical_type (tree type)

>

>    if (VECTOR_TYPE_P (type))

>      {

> -      hstate.add_int (TYPE_VECTOR_SUBPARTS (type));

> +      hstate.add_poly_int (TYPE_VECTOR_SUBPARTS (type));

>        hstate.add_int (TYPE_UNSIGNED (type));

>      }

>

> Index: gcc/go/go-lang.c

> ===================================================================

> --- gcc/go/go-lang.c    2017-08-30 12:20:57.010045759 +0100

> +++ gcc/go/go-lang.c    2017-10-23 17:25:51.747379609 +0100

> @@ -372,7 +372,8 @@ go_langhook_type_for_mode (machine_mode

>       make sense for the middle-end to ask the frontend for a type

>       which the frontend does not support.  However, at least for now

>       it is required.  See PR 46805.  */

> -  if (VECTOR_MODE_P (mode))

> +  if (VECTOR_MODE_P (mode)

> +      && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))

>      {

>        tree inner;

>
Richard Sandiford Oct. 24, 2017, 9:40 a.m. UTC | #2
Richard Biener <richard.guenther@gmail.com> writes:
> On Mon, Oct 23, 2017 at 7:41 PM, Richard Sandiford

> <richard.sandiford@linaro.org> wrote:

>> This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64.  The value is

>> encoded in the 10-bit precision field and was previously always stored

>> as a simple log2 value.  The challenge was to use this 10 bits to

>> encode the number of elements in variable-length vectors, so that

>> we didn't need to increase the size of the tree.

>>

>> In practice the number of vector elements should always have the form

>> N + N * X (where X is the runtime value), and as for constant-length

>> vectors, N must be a power of 2 (even though X itself might not be).

>> The patch therefore uses the low bit to select between constant-length

>> and variable-length and uses the upper 9 bits to encode log2(N).

>> Targets without variable-length vectors continue to use the old scheme.

>>

>> A new valid_vector_subparts_p function tests whether a given number

>> of elements can be encoded.  This is false for the vector modes that

>> represent an LD3 or ST3 vector triple (which we want to treat as arrays

>> of vectors rather than single vectors).

>>

>> Most of the patch is mechanical; previous patches handled the changes

>> that weren't entirely straightforward.

>

> One comment, w/o actually reviewing may/must stuff (will comment on that

> elsewhere).

>

> You split 10 bits into 9 and 1, wouldn't it be more efficient to use the

> lower 8 bits for the log2 value of N and either of the two remaining bits

> for the flag?  That way the 8 bits for the shift amount can be eventually

> accessed in a more efficient way.

>

> Guess you'd need to compare code-generation of the TYPE_VECTOR_SUBPARTS

> accessor on aarch64 / x86_64.


Ah, yeah.  I'll give that a go.
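To make the comparison concrete, here is a rough standalone sketch of the
two decodings.  decode_current mirrors what the new TYPE_VECTOR_SUBPARTS
accessor in the patch does when NUM_POLY_INT_COEFFS == 2; decode_alternative,
together with the runtime_x parameter and the function names, is purely
illustrative of the suggested 8-bit log2 + flag layout and isn't anything
that exists in the patch:

#include <stdint.h>

/* Layout the patch uses when NUM_POLY_INT_COEFFS == 2: bit 0 selects
   variable length, the upper bits hold log2(N).  RUNTIME_X stands for
   the unknown runtime multiplier.  */
static uint64_t
decode_current (unsigned int precision, uint64_t runtime_x)
{
  uint64_t n = UINT64_C (1) << (precision / 2);
  return (precision & 1) ? n + n * runtime_x : n;
}

/* Suggested alternative (illustration only): log2(N) in the low 8 bits,
   the flag in bit 8, so the shift amount is a simple mask.  */
static uint64_t
decode_alternative (unsigned int precision, uint64_t runtime_x)
{
  uint64_t n = UINT64_C (1) << (precision & 0xff);
  return (precision & 0x100) ? n + n * runtime_x : n;
}

The second form lets the common constant-length case extract the shift
amount with a single AND, which I take to be the more efficient access
you mean.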

> Am I correct that NUM_POLY_INT_COEFFS is 1 for targets that do not

> have variable length vector modes?


Right.  1 is the default and only AArch64 defines it to anything else (2).
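(Concretely, that amounts to a single definition in the aarch64 port --
exact file elided here:

#define NUM_POLY_INT_COEFFS 2

with every other target picking up the default of 1.)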

Thanks,
Richard
Richard Biener Oct. 24, 2017, 9:58 a.m. UTC | #3
On Tue, Oct 24, 2017 at 11:40 AM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:

>> On Mon, Oct 23, 2017 at 7:41 PM, Richard Sandiford

>> <richard.sandiford@linaro.org> wrote:

>>> This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64.  The value is

>>> encoded in the 10-bit precision field and was previously always stored

>>> as a simple log2 value.  The challenge was to use this 10 bits to

>>> encode the number of elements in variable-length vectors, so that

>>> we didn't need to increase the size of the tree.

>>>

>>> In practice the number of vector elements should always have the form

>>> N + N * X (where X is the runtime value), and as for constant-length

>>> vectors, N must be a power of 2 (even though X itself might not be).

>>> The patch therefore uses the low bit to select between constant-length

>>> and variable-length and uses the upper 9 bits to encode log2(N).

>>> Targets without variable-length vectors continue to use the old scheme.

>>>

>>> A new valid_vector_subparts_p function tests whether a given number

>>> of elements can be encoded.  This is false for the vector modes that

>>> represent an LD3 or ST3 vector triple (which we want to treat as arrays

>>> of vectors rather than single vectors).

>>>

>>> Most of the patch is mechanical; previous patches handled the changes

>>> that weren't entirely straightforward.

>>

>> One comment, w/o actually reviewing may/must stuff (will comment on that

>> elsewhere).

>>

>> You split 10 bits into 9 and 1, wouldn't it be more efficient to use the

>> lower 8 bits for the log2 value of N and either of the two remaining bits

>> for the flag?  That way the 8 bits for the shift amount can be eventually

>> accessed in a more efficient way.

>>

>> Guess you'd need to compare code-generation of the TYPE_VECTOR_SUBPARTS

>> accessor on aarch64 / x86_64.

>

> Ah, yeah.  I'll give that a go.

>

>> Am I correct that NUM_POLY_INT_COEFFS is 1 for targets that do not

>> have variable length vector modes?

>

> Right.  1 is the default and only AArch64 defines it to anything else (2).


Going to be interesting (bitrot) times then?  I wonder if it makes sense
to initially define it to 2 globally and only change it to 1 later?

Do you have any numbers on the effect of poly-int on compile-times?
Esp. for example on stage2 build times when stage1 is -O0 -g "optimized"?

Thanks,
Richard.

> Thanks,

> Richard
Richard Sandiford Oct. 24, 2017, 11:18 a.m. UTC | #4
Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Oct 24, 2017 at 11:40 AM, Richard Sandiford

> <richard.sandiford@linaro.org> wrote:

>> Richard Biener <richard.guenther@gmail.com> writes:

>>> On Mon, Oct 23, 2017 at 7:41 PM, Richard Sandiford

>>> <richard.sandiford@linaro.org> wrote:

>>>> This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64.  The value is

>>>> encoded in the 10-bit precision field and was previously always stored

>>>> as a simple log2 value.  The challenge was to use this 10 bits to

>>>> encode the number of elements in variable-length vectors, so that

>>>> we didn't need to increase the size of the tree.

>>>>

>>>> In practice the number of vector elements should always have the form

>>>> N + N * X (where X is the runtime value), and as for constant-length

>>>> vectors, N must be a power of 2 (even though X itself might not be).

>>>> The patch therefore uses the low bit to select between constant-length

>>>> and variable-length and uses the upper 9 bits to encode log2(N).

>>>> Targets without variable-length vectors continue to use the old scheme.

>>>>

>>>> A new valid_vector_subparts_p function tests whether a given number

>>>> of elements can be encoded.  This is false for the vector modes that

>>>> represent an LD3 or ST3 vector triple (which we want to treat as arrays

>>>> of vectors rather than single vectors).

>>>>

>>>> Most of the patch is mechanical; previous patches handled the changes

>>>> that weren't entirely straightforward.

>>>

>>> One comment, w/o actually reviewing may/must stuff (will comment on that

>>> elsewhere).

>>>

>>> You split 10 bits into 9 and 1, wouldn't it be more efficient to use the

>>> lower 8 bits for the log2 value of N and either of the two remaining bits

>>> for the flag?  That way the 8 bits for the shift amount can be eventually

>>> accessed in a more efficient way.

>>>

>>> Guess you'd need to compare code-generation of the TYPE_VECTOR_SUBPARTS

>>> accessor on aarch64 / x86_64.

>>

>> Ah, yeah.  I'll give that a go.

>>

>>> Am I correct that NUM_POLY_INT_COEFFS is 1 for targets that do not

>>> have variable length vector modes?

>>

>> Right.  1 is the default and only AArch64 defines it to anything else (2).

>

> Going to be interesting (bitrot) times then?  I wonder if it makes sense

> to initially define it to 2 globally and only change it to 1 later?


Well, the target-independent code doesn't have the implicit conversion
from poly_int<1, C> to C, so it can't e.g. do:

  poly_int64 x = ...;
  HOST_WIDE_INT y = x;

even when NUM_POLY_INT_COEFFS==1.  Only target-specific code (identified
by IN_TARGET_CODE) can do that.

So to target-independent code it doesn't really matter what
NUM_POLY_INT_COEFFS is.  Even if we bumped it to 2, the extra coefficient
would always be zero.
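
In case a concrete example helps, this is roughly what that means for
code that reads TYPE_VECTOR_SUBPARTS.  It's only a sketch: vectype,
const_nunits, n and m are placeholder names and it obviously isn't
compilable outside the GCC tree, but the accessors are the ones the
patch already uses:

  poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);

  /* Target-independent code has to be explicit about constness.  */
  unsigned HOST_WIDE_INT const_nunits;
  if (nunits.is_constant (&const_nunits))
    {
      /* ...use const_nunits as an ordinary integer...  */
    }
  else
    {
      /* ...variable-length path, e.g. compare with must_eq/may_ne...  */
    }

  /* Or, where variable-length vectors have already been rejected:  */
  unsigned HOST_WIDE_INT n = nunits.to_constant ();

  /* Implicit conversion, as in the snippet above, is only available to
     files built as IN_TARGET_CODE (and then only with one coefficient).  */
  unsigned HOST_WIDE_INT m = nunits;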

FWIW, the poly_int tests in [001/nnn] cover N == 1, 2 and (as far as
supported) 3 for all targets, so that part isn't sensitive to
NUM_POLY_INT_COEFFS.

> Do you have any numbers on the effect of poly-int on compile-times?

> Esp. for example on stage2 build times when stage1 is -O0 -g "optimized"?


I've just tried that for an x86_64 -j24 build and got:

real: +7%
user: +8.6%

I don't know how noisy the results are though.

It's compile-time neutral in terms of running a gcc built with
--enable-checking=release, within a margin of about [-0.1%, 0.1%].

Thanks,
Richard
Richard Biener Oct. 24, 2017, 11:25 a.m. UTC | #5
On Tue, Oct 24, 2017 at 1:18 PM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:

>> On Tue, Oct 24, 2017 at 11:40 AM, Richard Sandiford

>> <richard.sandiford@linaro.org> wrote:

>>> Richard Biener <richard.guenther@gmail.com> writes:

>>>> On Mon, Oct 23, 2017 at 7:41 PM, Richard Sandiford

>>>> <richard.sandiford@linaro.org> wrote:

>>>>> This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64.  The value is

>>>>> encoded in the 10-bit precision field and was previously always stored

>>>>> as a simple log2 value.  The challenge was to use this 10 bits to

>>>>> encode the number of elements in variable-length vectors, so that

>>>>> we didn't need to increase the size of the tree.

>>>>>

>>>>> In practice the number of vector elements should always have the form

>>>>> N + N * X (where X is the runtime value), and as for constant-length

>>>>> vectors, N must be a power of 2 (even though X itself might not be).

>>>>> The patch therefore uses the low bit to select between constant-length

>>>>> and variable-length and uses the upper 9 bits to encode log2(N).

>>>>> Targets without variable-length vectors continue to use the old scheme.

>>>>>

>>>>> A new valid_vector_subparts_p function tests whether a given number

>>>>> of elements can be encoded.  This is false for the vector modes that

>>>>> represent an LD3 or ST3 vector triple (which we want to treat as arrays

>>>>> of vectors rather than single vectors).

>>>>>

>>>>> Most of the patch is mechanical; previous patches handled the changes

>>>>> that weren't entirely straightforward.

>>>>

>>>> One comment, w/o actually reviewing may/must stuff (will comment on that

>>>> elsewhere).

>>>>

>>>> You split 10 bits into 9 and 1, wouldn't it be more efficient to use the

>>>> lower 8 bits for the log2 value of N and either of the two remaining bits

>>>> for the flag?  That way the 8 bits for the shift amount can be eventually

>>>> accessed in a more efficient way.

>>>>

>>>> Guess you'd need to compare code-generation of the TYPE_VECTOR_SUBPARTS

>>>> accessor on aarch64 / x86_64.

>>>

>>> Ah, yeah.  I'll give that a go.

>>>

>>>> Am I correct that NUM_POLY_INT_COEFFS is 1 for targets that do not

>>>> have variable length vector modes?

>>>

>>> Right.  1 is the default and only AArch64 defines it to anything else (2).

>>

>> Going to be interesting (bitrot) times then?  I wonder if it makes sense

>> to initially define it to 2 globally and only change it to 1 later?

>

> Well, the target-independent code doesn't have the implicit conversion

> from poly_int<1, C> to C, so it can't e.g. do:

>

>   poly_int64 x = ...;

>   HOST_WIDE_INT y = x;

>

> even when NUM_POLY_INT_COEFFS==1.  Only target-specific code (identified

> by IN_TARGET_CODE) can do that.

>

> So to target-independent code it doesn't really matter what

> NUM_POLY_INT_COEFFS is.  Even if we bumped it to 2, the extra coefficient

> would always be zero.

>

> FWIW, the poly_int tests in [001/nnn] cover N == 1, 2 and (as far as

> supported) 3 for all targets, so that part isn't sensitive to

> NUM_POLY_INT_COEFFS.

>

>> Do you have any numbers on the effect of poly-int on compile-times?

>> Esp. for example on stage2 build times when stage1 is -O0 -g "optimized"?

>

> I've just tried that for an x86_64 -j24 build and got:

>

> real: +7%

> user: +8.6%

>

> I don't know how noisy the results are though.


What are the same numbers on AARCH64, where NUM_POLY_INT_COEFFS is 2?

> It's compile-time neutral in terms of running a gcc built with

> --enable-checking=release, within a margin of about [-0.1%, 0.1%].


I would have expected that (on x86_64).  Well, hoped (you basically
stated that in 000/nnn).  The question is what is the effect on AARCH64.
As you know we build openSUSE for AARCH64 and build power is limited ;)

Richard.

> Thanks,

> Richard
Richard Sandiford Oct. 24, 2017, 4:23 p.m. UTC | #6
Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Oct 24, 2017 at 1:18 PM, Richard Sandiford

> <richard.sandiford@linaro.org> wrote:

>> Richard Biener <richard.guenther@gmail.com> writes:

>>> Do you have any numbers on the effect of poly-int on compile-times?

>>> Esp. for example on stage2 build times when stage1 is -O0 -g "optimized"?

>>

>> I've just tried that for an x86_64 -j24 build and got:

>>

>> real: +7%

>> user: +8.6%

>>

>> I don't know how noisy the results are though.

>

> What are the same numbers on AARCH64, where NUM_POLY_INT_COEFFS is 2?

>

>> It's compile-time neutral in terms of running a gcc built with

>> --enable-checking=release, within a margin of about [-0.1%, 0.1%].

>

> I would have expected that (on x86_64).  Well, hoped (you basically

> stated that in 000/nnn).


Sorry, wasn't sure how much of the series you'd had a chance to read.

> The question is what is the effect on AARCH64.

> As you know we build openSUSE for AARCH64 and build power is limited ;)


The timings for an AArch64 stage2-bubble with an -O0 -g stage1, for
NUM_POLY_INT_COEFFS==2 are:

real: +17%
user: +20%

A gcc built with --enable-checking=release runs ~1% slower when
using -g and ~2% slower with -O2 -g.

Thanks,
Richard
Jeff Law Dec. 6, 2017, 2:31 a.m. UTC | #7
On 10/23/2017 11:41 AM, Richard Sandiford wrote:
> This patch changes TYPE_VECTOR_SUBPARTS to a poly_uint64.  The value is

> encoded in the 10-bit precision field and was previously always stored

> as a simple log2 value.  The challenge was to use this 10 bits to

> encode the number of elements in variable-length vectors, so that

> we didn't need to increase the size of the tree.

> 

> In practice the number of vector elements should always have the form

> N + N * X (where X is the runtime value), and as for constant-length

> vectors, N must be a power of 2 (even though X itself might not be).

> The patch therefore uses the low bit to select between constant-length

> and variable-length and uses the upper 9 bits to encode log2(N).

> Targets without variable-length vectors continue to use the old scheme.

> 

> A new valid_vector_subparts_p function tests whether a given number

> of elements can be encoded.  This is false for the vector modes that

> represent an LD3 or ST3 vector triple (which we want to treat as arrays

> of vectors rather than single vectors).

> 

> Most of the patch is mechanical; previous patches handled the changes

> that weren't entirely straightforward.

> 

> 

> 2017-10-23  Richard Sandiford  <richard.sandiford@linaro.org>

> 	    Alan Hayward  <alan.hayward@arm.com>

> 	    David Sherwood  <david.sherwood@arm.com>

> 

> gcc/

> 	* tree.h (TYPE_VECTOR_SUBPARTS): Turn into a function and handle

> 	polynomial numbers of units.

> 	(SET_TYPE_VECTOR_SUBPARTS): Likewise.

> 	(valid_vector_subparts_p): New function.

> 	(build_vector_type): Remove temporary shim and take the number

> 	of units as a poly_uint64 rather than an int.

> 	(build_opaque_vector_type): Take the number of units as a

> 	poly_uint64 rather than an int.

> 	* tree.c (build_vector): Handle polynomial TYPE_VECTOR_SUBPARTS.

> 	(build_vector_from_ctor, type_hash_canon_hash): Likewise.

> 	(type_cache_hasher::equal, uniform_vector_p): Likewise.

> 	(vector_type_mode): Likewise.

> 	(build_vector_from_val): If the number of units isn't constant,

> 	use build_vec_duplicate_cst for constant operands and

> 	VEC_DUPLICATE_EXPR otherwise.

> 	(make_vector_type): Remove temporary is_constant ().

> 	(build_vector_type, build_opaque_vector_type): Take the number of

> 	units as a poly_uint64 rather than an int.

> 	* cfgexpand.c (expand_debug_expr): Handle polynomial

> 	TYPE_VECTOR_SUBPARTS.
> 	* expr.c (count_type_elements, store_constructor): Likewise.
> 	* fold-const.c (const_binop, const_unop, fold_convert_const)
> 	(operand_equal_p, fold_view_convert_expr, fold_vec_perm)
> 	(fold_ternary_loc, fold_relational_const): Likewise.
> 	(native_interpret_vector): Likewise.  Change the size from an
> 	int to an unsigned int.
> 	* gimple-fold.c (gimple_fold_stmt_to_constant_1): Handle polynomial
> 	TYPE_VECTOR_SUBPARTS.
> 	(gimple_fold_indirect_ref, gimple_build_vector): Likewise.
> 	(gimple_build_vector_from_val): Use VEC_DUPLICATE_EXPR when
> 	duplicating a non-constant operand into a variable-length vector.
> 	* match.pd: Handle polynomial TYPE_VECTOR_SUBPARTS.
> 	* omp-simd-clone.c (simd_clone_subparts): Likewise.
> 	* print-tree.c (print_node): Likewise.
> 	* stor-layout.c (layout_type): Likewise.
> 	* targhooks.c (default_builtin_vectorization_cost): Likewise.
> 	* tree-cfg.c (verify_gimple_comparison): Likewise.
> 	(verify_gimple_assign_binary): Likewise.
> 	(verify_gimple_assign_ternary): Likewise.
> 	(verify_gimple_assign_single): Likewise.
> 	* tree-ssa-forwprop.c (simplify_vector_constructor): Likewise.
> 	* tree-vect-data-refs.c (vect_permute_store_chain): Likewise.
> 	(vect_grouped_load_supported, vect_permute_load_chain): Likewise.
> 	(vect_shift_permute_load_chain): Likewise.
> 	* tree-vect-generic.c (nunits_for_known_piecewise_op): Likewise.
> 	(expand_vector_condition, optimize_vector_constructor): Likewise.
> 	(lower_vec_perm, get_compute_type): Likewise.
> 	* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
> 	(get_initial_defs_for_reduction, vect_transform_loop): Likewise.
> 	* tree-vect-patterns.c (vect_recog_bool_pattern): Likewise.
> 	(vect_recog_mask_conversion_pattern): Likewise.
> 	* tree-vect-slp.c (vect_supported_load_permutation_p): Likewise.
> 	(vect_get_constant_vectors, vect_transform_slp_perm_load): Likewise.
> 	* tree-vect-stmts.c (perm_mask_for_reverse): Likewise.
> 	(get_group_load_store_type, vectorizable_mask_load_store): Likewise.
> 	(vectorizable_bswap, simd_clone_subparts, vectorizable_assignment)
> 	(vectorizable_shift, vectorizable_operation, vectorizable_store)
> 	(vect_gen_perm_mask_any, vectorizable_load, vect_is_simple_cond)
> 	(vectorizable_comparison, supportable_widening_operation): Likewise.
> 	(supportable_narrowing_operation): Likewise.
>
> gcc/ada/
> 	* gcc-interface/utils.c (gnat_types_compatible_p): Handle
> 	polynomial TYPE_VECTOR_SUBPARTS.
>
> gcc/brig/
> 	* brigfrontend/brig-to-generic.cc (get_unsigned_int_type): Handle
> 	polynomial TYPE_VECTOR_SUBPARTS.
> 	* brigfrontend/brig-util.h (gccbrig_type_vector_subparts): Likewise.
>
> gcc/c-family/
> 	* c-common.c (vector_types_convertible_p, c_build_vec_perm_expr)
> 	(convert_vector_to_array_for_subscript): Handle polynomial
> 	TYPE_VECTOR_SUBPARTS.
> 	(c_common_type_for_mode): Check valid_vector_subparts_p.
>
> gcc/c/
> 	* c-typeck.c (comptypes_internal, build_binary_op): Handle polynomial
> 	TYPE_VECTOR_SUBPARTS.
>
> gcc/cp/
> 	* call.c (build_conditional_expr_1): Handle polynomial
> 	TYPE_VECTOR_SUBPARTS.
> 	* constexpr.c (cxx_fold_indirect_ref): Likewise.
> 	* decl.c (cp_finish_decomp): Likewise.
> 	* mangle.c (write_type): Likewise.
> 	* typeck.c (structural_comptypes): Likewise.
> 	(cp_build_binary_op): Likewise.
> 	* typeck2.c (process_init_constructor_array): Likewise.
>
> gcc/fortran/
> 	* trans-types.c (gfc_type_for_mode): Check valid_vector_subparts_p.
>
> gcc/lto/
> 	* lto-lang.c (lto_type_for_mode): Check valid_vector_subparts_p.
> 	* lto.c (hash_canonical_type): Handle polynomial TYPE_VECTOR_SUBPARTS.
>
> gcc/go/
> 	* go-lang.c (go_langhook_type_for_mode): Check valid_vector_subparts_p.

My recollection is that the encoding was going to change on this one,
but that shouldn't affect the bulk of this patch.

OK.  I'll trust you'll adjust the encoding per the discussion with Richi.

jeff
Patch
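[Editorial note: before the diff itself, here is a minimal standalone sketch of the precision encoding implemented by the new TYPE_VECTOR_SUBPARTS / SET_TYPE_VECTOR_SUBPARTS accessors in gcc/tree.h below, assuming the two-coefficient (NUM_POLY_INT_COEFFS == 2) configuration.  It is an illustration only: the struct and helper names are made up, it uses no GCC internals, and Jeff's comment above means the exact layout may still change.]

#include <cassert>
#include <cstdint>

/* A vector with N + N*X elements (X being the runtime term) is modelled
   here as coeff0 = N and coeff1 = N, or coeff1 = 0 for fixed-length
   vectors, mirroring the poly_uint64 coefficients used in the patch.  */
struct subparts { uint64_t coeff0, coeff1; };

/* Encode into the precision field: the low bit records whether the
   length is variable, the upper bits hold log2 (N).  */
static unsigned int
encode_subparts (subparts s)
{
  assert (s.coeff0 != 0 && (s.coeff0 & (s.coeff0 - 1)) == 0); /* N is a power of 2.  */
  assert (s.coeff1 == 0 || s.coeff1 == s.coeff0);	      /* Only N + N*X is representable.  */
  unsigned int log2_n = __builtin_ctzll (s.coeff0);
  return log2_n * 2 + (s.coeff1 != 0);
}

/* Decode the precision field back into the pair of coefficients.  */
static subparts
decode_subparts (unsigned int precision)
{
  uint64_t n = uint64_t (1) << (precision / 2);
  return { n, (precision & 1) ? n : uint64_t (0) };
}

int
main ()
{
  /* A fixed-length 4-element vector round-trips with no runtime term...  */
  subparts v4 = decode_subparts (encode_subparts ({ 4, 0 }));
  assert (v4.coeff0 == 4 && v4.coeff1 == 0);

  /* ...and an SVE-style vector with 2 + 2*X elements keeps both coefficients.  */
  subparts vnx2 = decode_subparts (encode_subparts ({ 2, 2 }));
  assert (vnx2.coeff0 == 2 && vnx2.coeff1 == 2);
  return 0;
}

[The single-coefficient configuration in the patch simply stores log2 (N) directly, so targets without variable-length vectors keep the old behaviour.]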

Index: gcc/tree.h
===================================================================
--- gcc/tree.h	2017-10-23 17:22:35.831905077 +0100
+++ gcc/tree.h	2017-10-23 17:25:51.773378674 +0100
@@ -2041,15 +2041,6 @@  #define TREE_VISITED(NODE) ((NODE)->base
    If set in a INTEGER_TYPE, indicates a character type.  */
 #define TYPE_STRING_FLAG(NODE) (TYPE_CHECK (NODE)->type_common.string_flag)
 
-/* For a VECTOR_TYPE, this is the number of sub-parts of the vector.  */
-#define TYPE_VECTOR_SUBPARTS(VECTOR_TYPE) \
-  (HOST_WIDE_INT_1U \
-   << VECTOR_TYPE_CHECK (VECTOR_TYPE)->type_common.precision)
-
-/* Set precision to n when we have 2^n sub-parts of the vector.  */
-#define SET_TYPE_VECTOR_SUBPARTS(VECTOR_TYPE, X) \
-  (VECTOR_TYPE_CHECK (VECTOR_TYPE)->type_common.precision = exact_log2 (X))
-
 /* Nonzero in a VECTOR_TYPE if the frontends should not emit warnings
    about missing conversions to other vector types of the same size.  */
 #define TYPE_VECTOR_OPAQUE(NODE) \
@@ -3671,6 +3662,64 @@  id_equal (const char *str, const_tree id
   return !strcmp (str, IDENTIFIER_POINTER (id));
 }
 
+/* Return the number of elements in the VECTOR_TYPE given by NODE.  */
+
+inline poly_uint64
+TYPE_VECTOR_SUBPARTS (const_tree node)
+{
+  STATIC_ASSERT (NUM_POLY_INT_COEFFS <= 2);
+  unsigned int precision = VECTOR_TYPE_CHECK (node)->type_common.precision;
+  if (NUM_POLY_INT_COEFFS == 2)
+    {
+      poly_uint64 res = 0;
+      res.coeffs[0] = 1 << (precision / 2);
+      if (precision & 1)
+	res.coeffs[1] = 1 << (precision / 2);
+      return res;
+    }
+  else
+    return 1 << precision;
+}
+
+/* Set the number of elements in VECTOR_TYPE NODE to SUBPARTS, which must
+   satisfy valid_vector_subparts_p.  */
+
+inline void
+SET_TYPE_VECTOR_SUBPARTS (tree node, poly_uint64 subparts)
+{
+  STATIC_ASSERT (NUM_POLY_INT_COEFFS <= 2);
+  unsigned HOST_WIDE_INT coeff0 = subparts.coeffs[0];
+  int index = exact_log2 (coeff0);
+  gcc_assert (index >= 0);
+  if (NUM_POLY_INT_COEFFS == 2)
+    {
+      unsigned HOST_WIDE_INT coeff1 = subparts.coeffs[1];
+      gcc_assert (coeff1 == 0 || coeff1 == coeff0);
+      VECTOR_TYPE_CHECK (node)->type_common.precision
+	= index * 2 + (coeff1 != 0);
+    }
+  else
+    VECTOR_TYPE_CHECK (node)->type_common.precision = index;
+}
+
+/* Return true if we can construct vector types with the given number
+   of subparts.  */
+
+static inline bool
+valid_vector_subparts_p (poly_uint64 subparts)
+{
+  unsigned HOST_WIDE_INT coeff0 = subparts.coeffs[0];
+  if (!pow2p_hwi (coeff0))
+    return false;
+  if (NUM_POLY_INT_COEFFS == 2)
+    {
+      unsigned HOST_WIDE_INT coeff1 = subparts.coeffs[1];
+      if (coeff1 != 0 && coeff1 != coeff0)
+	return false;
+    }
+  return true;
+}
+
 #define error_mark_node			global_trees[TI_ERROR_MARK]
 
 #define intQI_type_node			global_trees[TI_INTQI_TYPE]
@@ -4108,16 +4157,10 @@  extern tree build_pointer_type (tree);
 extern tree build_reference_type_for_mode (tree, machine_mode, bool);
 extern tree build_reference_type (tree);
 extern tree build_vector_type_for_mode (tree, machine_mode);
-extern tree build_vector_type (tree innertype, int nunits);
-/* Temporary.  */
-inline tree
-build_vector_type (tree innertype, poly_uint64 nunits)
-{
-  return build_vector_type (innertype, (int) nunits.to_constant ());
-}
+extern tree build_vector_type (tree, poly_int64);
 extern tree build_truth_vector_type (poly_uint64, poly_uint64);
 extern tree build_same_sized_truth_vector_type (tree vectype);
-extern tree build_opaque_vector_type (tree innertype, int nunits);
+extern tree build_opaque_vector_type (tree, poly_int64);
 extern tree build_index_type (tree);
 extern tree build_array_type (tree, tree, bool = false);
 extern tree build_nonshared_array_type (tree, tree);
Index: gcc/tree.c
===================================================================
--- gcc/tree.c	2017-10-23 17:25:48.625491825 +0100
+++ gcc/tree.c	2017-10-23 17:25:51.771378746 +0100
@@ -1877,7 +1877,7 @@  make_vector (unsigned len MEM_STAT_DECL)
 build_vector (tree type, vec<tree> vals MEM_STAT_DECL)
 {
   unsigned int nelts = vals.length ();
-  gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));
+  gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));
   int over = 0;
   unsigned cnt = 0;
   tree v = make_vector (nelts);
@@ -1907,10 +1907,11 @@  build_vector (tree type, vec<tree> vals
 tree
 build_vector_from_ctor (tree type, vec<constructor_elt, va_gc> *v)
 {
-  unsigned int nelts = TYPE_VECTOR_SUBPARTS (type);
-  unsigned HOST_WIDE_INT idx;
+  unsigned HOST_WIDE_INT idx, nelts;
   tree value;
 
+  /* We can't construct a VECTOR_CST for a variable number of elements.  */
+  nelts = TYPE_VECTOR_SUBPARTS (type).to_constant ();
   auto_vec<tree, 32> vec (nelts);
   FOR_EACH_CONSTRUCTOR_VALUE (v, idx, value)
     {
@@ -1928,9 +1929,9 @@  build_vector_from_ctor (tree type, vec<c
 
 /* Build a vector of type VECTYPE where all the elements are SCs.  */
 tree
-build_vector_from_val (tree vectype, tree sc) 
+build_vector_from_val (tree vectype, tree sc)
 {
-  int i, nunits = TYPE_VECTOR_SUBPARTS (vectype);
+  unsigned HOST_WIDE_INT i, nunits;
 
   if (sc == error_mark_node)
     return sc;
@@ -1944,6 +1945,13 @@  build_vector_from_val (tree vectype, tre
   gcc_checking_assert (types_compatible_p (TYPE_MAIN_VARIANT (TREE_TYPE (sc)),
 					   TREE_TYPE (vectype)));
 
+  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits))
+    {
+      if (CONSTANT_CLASS_P (sc))
+	return build_vec_duplicate_cst (vectype, sc);
+      return fold_build1 (VEC_DUPLICATE_EXPR, vectype, sc);
+    }
+
   if (CONSTANT_CLASS_P (sc))
     {
       auto_vec<tree, 32> v (nunits);
@@ -6575,11 +6583,8 @@  type_hash_canon_hash (tree type)
       }
 
     case VECTOR_TYPE:
-      {
-	unsigned nunits = TYPE_VECTOR_SUBPARTS (type);
-	hstate.add_object (nunits);
-	break;
-      }
+      hstate.add_poly_int (TYPE_VECTOR_SUBPARTS (type));
+      break;
 
     default:
       break;
@@ -6623,7 +6628,8 @@  type_cache_hasher::equal (type_hash *a,
       return 1;
 
     case VECTOR_TYPE:
-      return TYPE_VECTOR_SUBPARTS (a->type) == TYPE_VECTOR_SUBPARTS (b->type);
+      return must_eq (TYPE_VECTOR_SUBPARTS (a->type),
+		      TYPE_VECTOR_SUBPARTS (b->type));
 
     case ENUMERAL_TYPE:
       if (TYPE_VALUES (a->type) != TYPE_VALUES (b->type)
@@ -9666,7 +9672,7 @@  make_vector_type (tree innertype, poly_i
 
   t = make_node (VECTOR_TYPE);
   TREE_TYPE (t) = mv_innertype;
-  SET_TYPE_VECTOR_SUBPARTS (t, nunits.to_constant ()); /* Temporary */
+  SET_TYPE_VECTOR_SUBPARTS (t, nunits);
   SET_TYPE_MODE (t, mode);
 
   if (TYPE_STRUCTURAL_EQUALITY_P (mv_innertype) || in_lto_p)
@@ -10582,7 +10588,7 @@  build_vector_type_for_mode (tree innerty
    a power of two.  */
 
 tree
-build_vector_type (tree innertype, int nunits)
+build_vector_type (tree innertype, poly_int64 nunits)
 {
   return make_vector_type (innertype, nunits, VOIDmode);
 }
@@ -10627,7 +10633,7 @@  build_same_sized_truth_vector_type (tree
 /* Similarly, but builds a variant type with TYPE_VECTOR_OPAQUE set.  */
 
 tree
-build_opaque_vector_type (tree innertype, int nunits)
+build_opaque_vector_type (tree innertype, poly_int64 nunits)
 {
   tree t = make_vector_type (innertype, nunits, VOIDmode);
   tree cand;
@@ -10730,7 +10736,7 @@  initializer_zerop (const_tree init)
 uniform_vector_p (const_tree vec)
 {
   tree first, t;
-  unsigned i;
+  unsigned HOST_WIDE_INT i, nelts;
 
   if (vec == NULL_TREE)
     return NULL_TREE;
@@ -10753,7 +10759,8 @@  uniform_vector_p (const_tree vec)
       return first;
     }
 
-  else if (TREE_CODE (vec) == CONSTRUCTOR)
+  else if (TREE_CODE (vec) == CONSTRUCTOR
+	   && TYPE_VECTOR_SUBPARTS (TREE_TYPE (vec)).is_constant (&nelts))
     {
       first = error_mark_node;
 
@@ -10767,7 +10774,7 @@  uniform_vector_p (const_tree vec)
 	  if (!operand_equal_p (first, t, 0))
 	    return NULL_TREE;
         }
-      if (i != TYPE_VECTOR_SUBPARTS (TREE_TYPE (vec)))
+      if (i != nelts)
 	return NULL_TREE;
 
       return first;
@@ -13011,8 +13018,8 @@  vector_type_mode (const_tree t)
       /* For integers, try mapping it to a same-sized scalar mode.  */
       if (is_int_mode (TREE_TYPE (t)->type_common.mode, &innermode))
 	{
-	  unsigned int size = (TYPE_VECTOR_SUBPARTS (t)
-			       * GET_MODE_BITSIZE (innermode));
+	  poly_int64 size = (TYPE_VECTOR_SUBPARTS (t)
+			     * GET_MODE_BITSIZE (innermode));
 	  scalar_int_mode mode;
 	  if (int_mode_for_size (size, 0).exists (&mode)
 	      && have_regs_of_mode[mode])
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c	2017-10-23 17:19:04.559212322 +0100
+++ gcc/cfgexpand.c	2017-10-23 17:25:51.727380328 +0100
@@ -4961,10 +4961,13 @@  expand_debug_expr (tree exp)
       else if (TREE_CODE (TREE_TYPE (exp)) == VECTOR_TYPE)
 	{
 	  unsigned i;
+	  unsigned HOST_WIDE_INT nelts;
 	  tree val;
 
-	  op0 = gen_rtx_CONCATN
-	    (mode, rtvec_alloc (TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp))));
+	  if (!TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)).is_constant (&nelts))
+	    goto flag_unsupported;
+
+	  op0 = gen_rtx_CONCATN (mode, rtvec_alloc (nelts));
 
 	  FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (exp), i, val)
 	    {
@@ -4974,7 +4977,7 @@  expand_debug_expr (tree exp)
 	      XVECEXP (op0, 0, i) = op1;
 	    }
 
-	  if (i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)))
+	  if (i < nelts)
 	    {
 	      op1 = expand_debug_expr
 		(build_zero_cst (TREE_TYPE (TREE_TYPE (exp))));
@@ -4982,7 +4985,7 @@  expand_debug_expr (tree exp)
 	      if (!op1)
 		return NULL;
 
-	      for (; i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)); i++)
+	      for (; i < nelts; i++)
 		XVECEXP (op0, 0, i) = op1;
 	    }
 
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2017-10-23 17:25:38.241865064 +0100
+++ gcc/expr.c	2017-10-23 17:25:51.740379860 +0100
@@ -5847,7 +5847,13 @@  count_type_elements (const_tree type, bo
       return 2;
 
     case VECTOR_TYPE:
-      return TYPE_VECTOR_SUBPARTS (type);
+      {
+	unsigned HOST_WIDE_INT nelts;
+	if (TYPE_VECTOR_SUBPARTS (type).is_constant (&nelts))
+	  return nelts;
+	else
+	  return -1;
+      }
 
     case INTEGER_TYPE:
     case REAL_TYPE:
@@ -6594,7 +6600,8 @@  store_constructor (tree exp, rtx target,
 	HOST_WIDE_INT bitsize;
 	HOST_WIDE_INT bitpos;
 	rtvec vector = NULL;
-	unsigned n_elts;
+	poly_uint64 n_elts;
+	unsigned HOST_WIDE_INT const_n_elts;
 	alias_set_type alias;
 	bool vec_vec_init_p = false;
 	machine_mode mode = GET_MODE (target);
@@ -6619,7 +6626,9 @@  store_constructor (tree exp, rtx target,
 	  }
 
 	n_elts = TYPE_VECTOR_SUBPARTS (type);
-	if (REG_P (target) && VECTOR_MODE_P (mode))
+	if (REG_P (target)
+	    && VECTOR_MODE_P (mode)
+	    && n_elts.is_constant (&const_n_elts))
 	  {
 	    machine_mode emode = eltmode;
 
@@ -6628,14 +6637,15 @@  store_constructor (tree exp, rtx target,
 		    == VECTOR_TYPE))
 	      {
 		tree etype = TREE_TYPE (CONSTRUCTOR_ELT (exp, 0)->value);
-		gcc_assert (CONSTRUCTOR_NELTS (exp) * TYPE_VECTOR_SUBPARTS (etype)
-			    == n_elts);
+		gcc_assert (must_eq (CONSTRUCTOR_NELTS (exp)
+				     * TYPE_VECTOR_SUBPARTS (etype),
+				     n_elts));
 		emode = TYPE_MODE (etype);
 	      }
 	    icode = convert_optab_handler (vec_init_optab, mode, emode);
 	    if (icode != CODE_FOR_nothing)
 	      {
-		unsigned int i, n = n_elts;
+		unsigned int i, n = const_n_elts;
 
 		if (emode != eltmode)
 		  {
@@ -6674,7 +6684,8 @@  store_constructor (tree exp, rtx target,
 
 	    /* Clear the entire vector first if there are any missing elements,
 	       or if the incidence of zero elements is >= 75%.  */
-	    need_to_clear = (count < n_elts || 4 * zero_count >= 3 * count);
+	    need_to_clear = (may_lt (count, n_elts)
+			     || 4 * zero_count >= 3 * count);
 	  }
 
 	if (need_to_clear && may_gt (size, 0) && !vector)
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2017-10-23 17:22:48.984540760 +0100
+++ gcc/fold-const.c	2017-10-23 17:25:51.744379717 +0100
@@ -1645,7 +1645,7 @@  const_binop (enum tree_code code, tree t
 	in_nelts = VECTOR_CST_NELTS (arg1);
 	out_nelts = in_nelts * 2;
 	gcc_assert (in_nelts == VECTOR_CST_NELTS (arg2)
-		    && out_nelts == TYPE_VECTOR_SUBPARTS (type));
+		    && must_eq (out_nelts, TYPE_VECTOR_SUBPARTS (type)));
 
 	auto_vec<tree, 32> elts (out_nelts);
 	for (i = 0; i < out_nelts; i++)
@@ -1677,7 +1677,7 @@  const_binop (enum tree_code code, tree t
 	in_nelts = VECTOR_CST_NELTS (arg1);
 	out_nelts = in_nelts / 2;
 	gcc_assert (in_nelts == VECTOR_CST_NELTS (arg2)
-		    && out_nelts == TYPE_VECTOR_SUBPARTS (type));
+		    && must_eq (out_nelts, TYPE_VECTOR_SUBPARTS (type)));
 
 	if (code == VEC_WIDEN_MULT_LO_EXPR)
 	  scale = 0, ofs = BYTES_BIG_ENDIAN ? out_nelts : 0;
@@ -1841,7 +1841,7 @@  const_unop (enum tree_code code, tree ty
 
 	in_nelts = VECTOR_CST_NELTS (arg0);
 	out_nelts = in_nelts / 2;
-	gcc_assert (out_nelts == TYPE_VECTOR_SUBPARTS (type));
+	gcc_assert (must_eq (out_nelts, TYPE_VECTOR_SUBPARTS (type)));
 
 	unsigned int offset = 0;
 	if ((!BYTES_BIG_ENDIAN) ^ (code == VEC_UNPACK_LO_EXPR
@@ -2329,7 +2329,7 @@  fold_convert_const (enum tree_code code,
   else if (TREE_CODE (type) == VECTOR_TYPE)
     {
       if (TREE_CODE (arg1) == VECTOR_CST
-	  && TYPE_VECTOR_SUBPARTS (type) == VECTOR_CST_NELTS (arg1))
+	  && must_eq (TYPE_VECTOR_SUBPARTS (type), VECTOR_CST_NELTS (arg1)))
 	{
 	  int len = VECTOR_CST_NELTS (arg1);
 	  tree elttype = TREE_TYPE (type);
@@ -2345,8 +2345,8 @@  fold_convert_const (enum tree_code code,
 	  return build_vector (type, v);
 	}
       if (TREE_CODE (arg1) == VEC_DUPLICATE_CST
-	  && (TYPE_VECTOR_SUBPARTS (type)
-	      == TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1))))
+	  && must_eq (TYPE_VECTOR_SUBPARTS (type),
+		      TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1))))
 	{
 	  tree sub = fold_convert_const (code, TREE_TYPE (type),
 					 VEC_DUPLICATE_CST_ELT (arg1));
@@ -3491,8 +3491,8 @@  #define OP_SAME_WITH_NULL(N)				\
 	     We only tested element precision and modes to match.
 	     Vectors may be BLKmode and thus also check that the number of
 	     parts match.  */
-	  if (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0))
-	      != TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)))
+	  if (may_ne (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)),
+		      TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1))))
 	    return 0;
 
 	  vec<constructor_elt, va_gc> *v0 = CONSTRUCTOR_ELTS (arg0);
@@ -7613,15 +7613,16 @@  native_interpret_complex (tree type, con
    If the buffer cannot be interpreted, return NULL_TREE.  */
 
 static tree
-native_interpret_vector (tree type, const unsigned char *ptr, int len)
+native_interpret_vector (tree type, const unsigned char *ptr, unsigned int len)
 {
   tree etype, elem;
-  int i, size, count;
+  unsigned int i, size;
+  unsigned HOST_WIDE_INT count;
 
   etype = TREE_TYPE (type);
   size = GET_MODE_SIZE (SCALAR_TYPE_MODE (etype));
-  count = TYPE_VECTOR_SUBPARTS (type);
-  if (size * count > len)
+  if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&count)
+      || size * count > len)
     return NULL_TREE;
 
   auto_vec<tree, 32> elements (count);
@@ -7707,7 +7708,8 @@  fold_view_convert_expr (tree type, tree
   tree expr_type = TREE_TYPE (expr);
   if (TREE_CODE (expr) == VEC_DUPLICATE_CST
       && VECTOR_TYPE_P (type)
-      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (expr_type)
+      && must_eq (TYPE_VECTOR_SUBPARTS (type),
+		  TYPE_VECTOR_SUBPARTS (expr_type))
       && TYPE_SIZE (TREE_TYPE (type)) == TYPE_SIZE (TREE_TYPE (expr_type)))
     {
       tree sub = fold_view_convert_expr (TREE_TYPE (type),
@@ -9025,9 +9027,9 @@  fold_vec_perm (tree type, tree arg0, tre
   bool need_ctor = false;
 
   unsigned int nelts = sel.length ();
-  gcc_assert (TYPE_VECTOR_SUBPARTS (type) == nelts
-	      && TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)) == nelts
-	      && TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)) == nelts);
+  gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (type), nelts)
+	      && must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)), nelts)
+	      && must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)), nelts));
   if (TREE_TYPE (TREE_TYPE (arg0)) != TREE_TYPE (type)
       || TREE_TYPE (TREE_TYPE (arg1)) != TREE_TYPE (type))
     return NULL_TREE;
@@ -11440,7 +11442,7 @@  fold_ternary_loc (location_t loc, enum t
 		  || TREE_CODE (arg2) == CONSTRUCTOR))
 	    {
 	      unsigned int nelts = VECTOR_CST_NELTS (arg0), i;
-	      gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));
+	      gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));
 	      auto_vec_perm_indices sel (nelts);
 	      for (i = 0; i < nelts; i++)
 		{
@@ -11706,7 +11708,8 @@  fold_ternary_loc (location_t loc, enum t
 	  if (n != 0
 	      && (idx % width) == 0
 	      && (n % width) == 0
-	      && ((idx + n) / width) <= TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)))
+	      && must_le ((idx + n) / width,
+			  TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0))))
 	    {
 	      idx = idx / width;
 	      n = n / width;
@@ -11783,7 +11786,7 @@  fold_ternary_loc (location_t loc, enum t
 
 	  mask2 = 2 * nelts - 1;
 	  mask = single_arg ? (nelts - 1) : mask2;
-	  gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));
+	  gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));
 	  auto_vec_perm_indices sel (nelts);
 	  auto_vec_perm_indices sel2 (nelts);
 	  for (i = 0; i < nelts; i++)
@@ -14034,7 +14037,7 @@  fold_relational_const (enum tree_code co
 	}
       unsigned count = VECTOR_CST_NELTS (op0);
       gcc_assert (VECTOR_CST_NELTS (op1) == count
-		  && TYPE_VECTOR_SUBPARTS (type) == count);
+		  && must_eq (TYPE_VECTOR_SUBPARTS (type), count));
 
       auto_vec<tree, 32> elts (count);
       for (unsigned i = 0; i < count; i++)
Index: gcc/gimple-fold.c
===================================================================
--- gcc/gimple-fold.c	2017-10-23 17:22:18.228825053 +0100
+++ gcc/gimple-fold.c	2017-10-23 17:25:51.747379609 +0100
@@ -5909,13 +5909,13 @@  gimple_fold_stmt_to_constant_1 (gimple *
 		}
 	      else if (TREE_CODE (rhs) == CONSTRUCTOR
 		       && TREE_CODE (TREE_TYPE (rhs)) == VECTOR_TYPE
-		       && (CONSTRUCTOR_NELTS (rhs)
-			   == TYPE_VECTOR_SUBPARTS (TREE_TYPE (rhs))))
+		       && must_eq (CONSTRUCTOR_NELTS (rhs),
+				   TYPE_VECTOR_SUBPARTS (TREE_TYPE (rhs))))
 		{
 		  unsigned i, nelts;
 		  tree val;
 
-		  nelts = TYPE_VECTOR_SUBPARTS (TREE_TYPE (rhs));
+		  nelts = CONSTRUCTOR_NELTS (rhs);
 		  auto_vec<tree, 32> vec (nelts);
 		  FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (rhs), i, val)
 		    {
@@ -6761,8 +6761,8 @@  gimple_fold_indirect_ref (tree t)
             = tree_to_shwi (part_width) / BITS_PER_UNIT;
           unsigned HOST_WIDE_INT indexi = offset * BITS_PER_UNIT;
           tree index = bitsize_int (indexi);
-          if (offset / part_widthi
-	      < TYPE_VECTOR_SUBPARTS (TREE_TYPE (addrtype)))
+	  if (must_lt (offset / part_widthi,
+		       TYPE_VECTOR_SUBPARTS (TREE_TYPE (addrtype))))
             return fold_build3 (BIT_FIELD_REF, type, TREE_OPERAND (addr, 0),
                                 part_width, index);
 	}
@@ -7064,6 +7064,10 @@  gimple_convert_to_ptrofftype (gimple_seq
 gimple_build_vector_from_val (gimple_seq *seq, location_t loc, tree type,
 			      tree op)
 {
+  if (!TYPE_VECTOR_SUBPARTS (type).is_constant ()
+      && !CONSTANT_CLASS_P (op))
+    return gimple_build (seq, loc, VEC_DUPLICATE_EXPR, type, op);
+
   tree res, vec = build_vector_from_val (type, op);
   if (is_gimple_val (vec))
     return vec;
@@ -7086,7 +7090,7 @@  gimple_build_vector (gimple_seq *seq, lo
 		     vec<tree> elts)
 {
   unsigned int nelts = elts.length ();
-  gcc_assert (nelts == TYPE_VECTOR_SUBPARTS (type));
+  gcc_assert (must_eq (nelts, TYPE_VECTOR_SUBPARTS (type)));
   for (unsigned int i = 0; i < nelts; ++i)
     if (!TREE_CONSTANT (elts[i]))
       {
Index: gcc/match.pd
===================================================================
--- gcc/match.pd	2017-10-23 17:22:50.031432167 +0100
+++ gcc/match.pd	2017-10-23 17:25:51.750379501 +0100
@@ -83,7 +83,8 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (match (nop_convert @0)
  (view_convert @0)
  (if (VECTOR_TYPE_P (type) && VECTOR_TYPE_P (TREE_TYPE (@0))
-      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (TREE_TYPE (@0))
+      && must_eq (TYPE_VECTOR_SUBPARTS (type),
+		  TYPE_VECTOR_SUBPARTS (TREE_TYPE (@0)))
       && tree_nop_conversion_p (TREE_TYPE (type), TREE_TYPE (TREE_TYPE (@0))))))
 /* This one has to be last, or it shadows the others.  */
 (match (nop_convert @0)
@@ -2628,7 +2629,8 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (simplify
  (plus:c @3 (view_convert? (vec_cond:s @0 integer_each_onep@1 integer_zerop@2)))
  (if (VECTOR_TYPE_P (type)
-      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1))
+      && must_eq (TYPE_VECTOR_SUBPARTS (type),
+		  TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1)))
       && (TYPE_MODE (TREE_TYPE (type))
           == TYPE_MODE (TREE_TYPE (TREE_TYPE (@1)))))
   (minus @3 (view_convert (vec_cond @0 (negate @1) @2)))))
@@ -2637,7 +2639,8 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (simplify
  (minus @3 (view_convert? (vec_cond:s @0 integer_each_onep@1 integer_zerop@2)))
  (if (VECTOR_TYPE_P (type)
-      && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1))
+      && must_eq (TYPE_VECTOR_SUBPARTS (type),
+		  TYPE_VECTOR_SUBPARTS (TREE_TYPE (@1)))
       && (TYPE_MODE (TREE_TYPE (type))
           == TYPE_MODE (TREE_TYPE (TREE_TYPE (@1)))))
   (plus @3 (view_convert (vec_cond @0 (negate @1) @2)))))
@@ -4301,7 +4304,8 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
    (if (n != 0
 	&& (idx % width) == 0
 	&& (n % width) == 0
-	&& ((idx + n) / width) <= TYPE_VECTOR_SUBPARTS (TREE_TYPE (ctor)))
+	&& must_le ((idx + n) / width,
+		    TYPE_VECTOR_SUBPARTS (TREE_TYPE (ctor))))
     (with
      {
        idx = idx / width;
Index: gcc/omp-simd-clone.c
===================================================================
--- gcc/omp-simd-clone.c	2017-10-23 17:22:47.947648317 +0100
+++ gcc/omp-simd-clone.c	2017-10-23 17:25:51.751379465 +0100
@@ -57,7 +57,7 @@  Software Foundation; either version 3, o
 static unsigned HOST_WIDE_INT
 simd_clone_subparts (tree vectype)
 {
-  return TYPE_VECTOR_SUBPARTS (vectype);
+  return TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
 }
 
 /* Allocate a fresh `simd_clone' and return it.  NARGS is the number
Index: gcc/print-tree.c
===================================================================
--- gcc/print-tree.c	2017-10-23 17:11:40.246949037 +0100
+++ gcc/print-tree.c	2017-10-23 17:25:51.751379465 +0100
@@ -630,7 +630,10 @@  print_node (FILE *file, const char *pref
       else if (code == ARRAY_TYPE)
 	print_node (file, "domain", TYPE_DOMAIN (node), indent + 4);
       else if (code == VECTOR_TYPE)
-	fprintf (file, " nunits:%d", (int) TYPE_VECTOR_SUBPARTS (node));
+	{
+	  fprintf (file, " nunits:");
+	  print_dec (TYPE_VECTOR_SUBPARTS (node), file);
+	}
       else if (code == RECORD_TYPE
 	       || code == UNION_TYPE
 	       || code == QUAL_UNION_TYPE)
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c	2017-10-23 17:11:54.535862371 +0100
+++ gcc/stor-layout.c	2017-10-23 17:25:51.753379393 +0100
@@ -2267,11 +2267,9 @@  layout_type (tree type)
 
     case VECTOR_TYPE:
       {
-	int nunits = TYPE_VECTOR_SUBPARTS (type);
+	poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type);
 	tree innertype = TREE_TYPE (type);
 
-	gcc_assert (!(nunits & (nunits - 1)));
-
 	/* Find an appropriate mode for the vector type.  */
 	if (TYPE_MODE (type) == VOIDmode)
 	  SET_TYPE_MODE (type,
Index: gcc/targhooks.c
===================================================================
--- gcc/targhooks.c	2017-10-23 17:22:32.725227332 +0100
+++ gcc/targhooks.c	2017-10-23 17:25:51.753379393 +0100
@@ -683,7 +683,7 @@  default_builtin_vectorization_cost (enum
         return 3;
 
       case vec_construct:
-	return TYPE_VECTOR_SUBPARTS (vectype) - 1;
+	return estimated_poly_value (TYPE_VECTOR_SUBPARTS (vectype)) - 1;
 
       default:
         gcc_unreachable ();
Index: gcc/tree-cfg.c
===================================================================
--- gcc/tree-cfg.c	2017-10-23 17:20:50.883679845 +0100
+++ gcc/tree-cfg.c	2017-10-23 17:25:51.756379285 +0100
@@ -3640,7 +3640,8 @@  verify_gimple_comparison (tree type, tre
           return true;
         }
 
-      if (TYPE_VECTOR_SUBPARTS (type) != TYPE_VECTOR_SUBPARTS (op0_type))
+      if (may_ne (TYPE_VECTOR_SUBPARTS (type),
+		  TYPE_VECTOR_SUBPARTS (op0_type)))
         {
           error ("invalid vector comparison resulting type");
           debug_generic_expr (type);
@@ -4070,8 +4071,8 @@  verify_gimple_assign_binary (gassign *st
       if (VECTOR_BOOLEAN_TYPE_P (lhs_type)
 	  && VECTOR_BOOLEAN_TYPE_P (rhs1_type)
 	  && types_compatible_p (rhs1_type, rhs2_type)
-	  && (TYPE_VECTOR_SUBPARTS (lhs_type)
-	      == 2 * TYPE_VECTOR_SUBPARTS (rhs1_type)))
+	  && must_eq (TYPE_VECTOR_SUBPARTS (lhs_type),
+		      2 * TYPE_VECTOR_SUBPARTS (rhs1_type)))
 	return false;
 
       /* Fallthru.  */
@@ -4221,8 +4222,8 @@  verify_gimple_assign_ternary (gassign *s
 
     case VEC_COND_EXPR:
       if (!VECTOR_BOOLEAN_TYPE_P (rhs1_type)
-	  || TYPE_VECTOR_SUBPARTS (rhs1_type)
-	     != TYPE_VECTOR_SUBPARTS (lhs_type))
+	  || may_ne (TYPE_VECTOR_SUBPARTS (rhs1_type),
+		     TYPE_VECTOR_SUBPARTS (lhs_type)))
 	{
 	  error ("the first argument of a VEC_COND_EXPR must be of a "
 		 "boolean vector type of the same number of elements "
@@ -4268,11 +4269,12 @@  verify_gimple_assign_ternary (gassign *s
 	  return true;
 	}
 
-      if (TYPE_VECTOR_SUBPARTS (rhs1_type) != TYPE_VECTOR_SUBPARTS (rhs2_type)
-	  || TYPE_VECTOR_SUBPARTS (rhs2_type)
-	     != TYPE_VECTOR_SUBPARTS (rhs3_type)
-	  || TYPE_VECTOR_SUBPARTS (rhs3_type)
-	     != TYPE_VECTOR_SUBPARTS (lhs_type))
+      if (may_ne (TYPE_VECTOR_SUBPARTS (rhs1_type),
+		  TYPE_VECTOR_SUBPARTS (rhs2_type))
+	  || may_ne (TYPE_VECTOR_SUBPARTS (rhs2_type),
+		     TYPE_VECTOR_SUBPARTS (rhs3_type))
+	  || may_ne (TYPE_VECTOR_SUBPARTS (rhs3_type),
+		     TYPE_VECTOR_SUBPARTS (lhs_type)))
 	{
 	  error ("vectors with different element number found "
 		 "in vector permute expression");
@@ -4554,9 +4556,9 @@  verify_gimple_assign_single (gassign *st
 			  debug_generic_stmt (rhs1);
 			  return true;
 			}
-		      else if (CONSTRUCTOR_NELTS (rhs1)
-			       * TYPE_VECTOR_SUBPARTS (elt_t)
-			       != TYPE_VECTOR_SUBPARTS (rhs1_type))
+		      else if (may_ne (CONSTRUCTOR_NELTS (rhs1)
+				       * TYPE_VECTOR_SUBPARTS (elt_t),
+				       TYPE_VECTOR_SUBPARTS (rhs1_type)))
 			{
 			  error ("incorrect number of vector CONSTRUCTOR"
 				 " elements");
@@ -4571,8 +4573,8 @@  verify_gimple_assign_single (gassign *st
 		      debug_generic_stmt (rhs1);
 		      return true;
 		    }
-		  else if (CONSTRUCTOR_NELTS (rhs1)
-			   > TYPE_VECTOR_SUBPARTS (rhs1_type))
+		  else if (may_gt (CONSTRUCTOR_NELTS (rhs1),
+				   TYPE_VECTOR_SUBPARTS (rhs1_type)))
 		    {
 		      error ("incorrect number of vector CONSTRUCTOR elements");
 		      debug_generic_stmt (rhs1);
Index: gcc/tree-ssa-forwprop.c
===================================================================
--- gcc/tree-ssa-forwprop.c	2017-10-23 17:20:50.883679845 +0100
+++ gcc/tree-ssa-forwprop.c	2017-10-23 17:25:51.756379285 +0100
@@ -1948,7 +1948,8 @@  simplify_vector_constructor (gimple_stmt
   gimple *stmt = gsi_stmt (*gsi);
   gimple *def_stmt;
   tree op, op2, orig, type, elem_type;
-  unsigned elem_size, nelts, i;
+  unsigned elem_size, i;
+  unsigned HOST_WIDE_INT nelts;
   enum tree_code code, conv_code;
   constructor_elt *elt;
   bool maybe_ident;
@@ -1959,7 +1960,8 @@  simplify_vector_constructor (gimple_stmt
   type = TREE_TYPE (op);
   gcc_checking_assert (TREE_CODE (type) == VECTOR_TYPE);
 
-  nelts = TYPE_VECTOR_SUBPARTS (type);
+  if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&nelts))
+    return false;
   elem_type = TREE_TYPE (type);
   elem_size = TREE_INT_CST_LOW (TYPE_SIZE (elem_type));
 
@@ -2031,8 +2033,8 @@  simplify_vector_constructor (gimple_stmt
     return false;
 
   if (! VECTOR_TYPE_P (TREE_TYPE (orig))
-      || (TYPE_VECTOR_SUBPARTS (type)
-	  != TYPE_VECTOR_SUBPARTS (TREE_TYPE (orig))))
+      || may_ne (TYPE_VECTOR_SUBPARTS (type),
+		 TYPE_VECTOR_SUBPARTS (TREE_TYPE (orig))))
     return false;
 
   tree tem;
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2017-10-23 17:25:50.361429427 +0100
+++ gcc/tree-vect-data-refs.c	2017-10-23 17:25:51.758379213 +0100
@@ -4743,7 +4743,7 @@  vect_permute_store_chain (vec<tree> dr_c
   if (length == 3)
     {
       /* vect_grouped_store_supported ensures that this is constant.  */
-      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype);
+      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
       unsigned int j0 = 0, j1 = 0, j2 = 0;
 
       auto_vec_perm_indices sel (nelt);
@@ -4807,7 +4807,7 @@  vect_permute_store_chain (vec<tree> dr_c
       gcc_assert (pow2p_hwi (length));
 
       /* vect_grouped_store_supported ensures that this is constant.  */
-      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype);
+      unsigned int nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
       auto_vec_perm_indices sel (nelt);
       sel.quick_grow (nelt);
       for (i = 0, n = nelt / 2; i < n; i++)
@@ -5140,7 +5140,7 @@  vect_grouped_load_supported (tree vectyp
      that leaves unused vector loads around punt - we at least create
      very sub-optimal code in that case (and blow up memory,
      see PR65518).  */
-  if (single_element_p && count > TYPE_VECTOR_SUBPARTS (vectype))
+  if (single_element_p && may_gt (count, TYPE_VECTOR_SUBPARTS (vectype)))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -5333,7 +5333,7 @@  vect_permute_load_chain (vec<tree> dr_ch
   if (length == 3)
     {
       /* vect_grouped_load_supported ensures that this is constant.  */
-      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype);
+      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
       unsigned int k;
 
       auto_vec_perm_indices sel (nelt);
@@ -5384,7 +5384,7 @@  vect_permute_load_chain (vec<tree> dr_ch
       gcc_assert (pow2p_hwi (length));
 
       /* vect_grouped_load_supported ensures that this is constant.  */
-      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype);
+      unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
       auto_vec_perm_indices sel (nelt);
       sel.quick_grow (nelt);
       for (i = 0; i < nelt; ++i)
@@ -5525,12 +5525,12 @@  vect_shift_permute_load_chain (vec<tree>
 
   tree vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
   unsigned int i;
-  unsigned nelt = TYPE_VECTOR_SUBPARTS (vectype);
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
 
-  unsigned HOST_WIDE_INT vf;
-  if (!LOOP_VINFO_VECT_FACTOR (loop_vinfo).is_constant (&vf))
+  unsigned HOST_WIDE_INT nelt, vf;
+  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nelt)
+      || !LOOP_VINFO_VECT_FACTOR (loop_vinfo).is_constant (&vf))
     /* Not supported for variable-length vectors.  */
     return false;
 
Index: gcc/tree-vect-generic.c
===================================================================
--- gcc/tree-vect-generic.c	2017-10-23 17:25:48.623491897 +0100
+++ gcc/tree-vect-generic.c	2017-10-23 17:25:51.759379177 +0100
@@ -48,7 +48,7 @@  static void expand_vector_operations_1 (
 static unsigned int
 nunits_for_known_piecewise_op (const_tree type)
 {
-  return TYPE_VECTOR_SUBPARTS (type);
+  return TYPE_VECTOR_SUBPARTS (type).to_constant ();
 }
 
 /* Return true if TYPE1 has more elements than TYPE2, where either
@@ -916,9 +916,9 @@  expand_vector_condition (gimple_stmt_ite
      Similarly for vbfld_10 instead of x_2 < y_3.  */
   if (VECTOR_BOOLEAN_TYPE_P (type)
       && SCALAR_INT_MODE_P (TYPE_MODE (type))
-      && (GET_MODE_BITSIZE (TYPE_MODE (type))
-	  < (TYPE_VECTOR_SUBPARTS (type)
-	     * GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (type)))))
+      && must_lt (GET_MODE_BITSIZE (TYPE_MODE (type)),
+		  TYPE_VECTOR_SUBPARTS (type)
+		  * GET_MODE_BITSIZE (SCALAR_TYPE_MODE (TREE_TYPE (type))))
       && (a_is_comparison
 	  ? useless_type_conversion_p (type, TREE_TYPE (a))
 	  : expand_vec_cmp_expr_p (TREE_TYPE (a1), type, TREE_CODE (a))))
@@ -1083,14 +1083,17 @@  optimize_vector_constructor (gimple_stmt
   tree lhs = gimple_assign_lhs (stmt);
   tree rhs = gimple_assign_rhs1 (stmt);
   tree type = TREE_TYPE (rhs);
-  unsigned int i, j, nelts = TYPE_VECTOR_SUBPARTS (type);
+  unsigned int i, j;
+  unsigned HOST_WIDE_INT nelts;
   bool all_same = true;
   constructor_elt *elt;
   gimple *g;
   tree base = NULL_TREE;
   optab op;
 
-  if (nelts <= 2 || CONSTRUCTOR_NELTS (rhs) != nelts)
+  if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&nelts)
+      || nelts <= 2
+      || CONSTRUCTOR_NELTS (rhs) != nelts)
     return;
   op = optab_for_tree_code (PLUS_EXPR, type, optab_default);
   if (op == unknown_optab
@@ -1302,7 +1305,7 @@  lower_vec_perm (gimple_stmt_iterator *gs
   tree mask_type = TREE_TYPE (mask);
   tree vect_elt_type = TREE_TYPE (vect_type);
   tree mask_elt_type = TREE_TYPE (mask_type);
-  unsigned int elements = TYPE_VECTOR_SUBPARTS (vect_type);
+  unsigned HOST_WIDE_INT elements;
   vec<constructor_elt, va_gc> *v;
   tree constr, t, si, i_val;
   tree vec0tmp = NULL_TREE, vec1tmp = NULL_TREE, masktmp = NULL_TREE;
@@ -1310,6 +1313,9 @@  lower_vec_perm (gimple_stmt_iterator *gs
   location_t loc = gimple_location (gsi_stmt (*gsi));
   unsigned i;
 
+  if (!TYPE_VECTOR_SUBPARTS (vect_type).is_constant (&elements))
+    return;
+
   if (TREE_CODE (mask) == SSA_NAME)
     {
       gimple *def_stmt = SSA_NAME_DEF_STMT (mask);
@@ -1467,7 +1473,7 @@  get_compute_type (enum tree_code code, o
 	= type_for_widest_vector_mode (TREE_TYPE (type), op);
       if (vector_compute_type != NULL_TREE
 	  && subparts_gt (compute_type, vector_compute_type)
-	  && TYPE_VECTOR_SUBPARTS (vector_compute_type) > 1
+	  && may_ne (TYPE_VECTOR_SUBPARTS (vector_compute_type), 1U)
 	  && (optab_handler (op, TYPE_MODE (vector_compute_type))
 	      != CODE_FOR_nothing))
 	compute_type = vector_compute_type;
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2017-10-23 17:25:48.624491861 +0100
+++ gcc/tree-vect-loop.c	2017-10-23 17:25:51.761379105 +0100
@@ -255,9 +255,11 @@  vect_determine_vectorization_factor (loo
 		}
 
 	      if (dump_enabled_p ())
-		dump_printf_loc (MSG_NOTE, vect_location,
-				 "nunits = " HOST_WIDE_INT_PRINT_DEC "\n",
-                                 TYPE_VECTOR_SUBPARTS (vectype));
+		{
+		  dump_printf_loc (MSG_NOTE, vect_location, "nunits = ");
+		  dump_dec (MSG_NOTE, TYPE_VECTOR_SUBPARTS (vectype));
+		  dump_printf (MSG_NOTE, "\n");
+		}
 
 	      vect_update_max_nunits (&vectorization_factor, vectype);
 	    }
@@ -548,9 +550,11 @@  vect_determine_vectorization_factor (loo
 	    }
 
 	  if (dump_enabled_p ())
-	    dump_printf_loc (MSG_NOTE, vect_location,
-			     "nunits = " HOST_WIDE_INT_PRINT_DEC "\n",
-			     TYPE_VECTOR_SUBPARTS (vf_vectype));
+	    {
+	      dump_printf_loc (MSG_NOTE, vect_location, "nunits = ");
+	      dump_dec (MSG_NOTE, TYPE_VECTOR_SUBPARTS (vf_vectype));
+	      dump_printf (MSG_NOTE, "\n");
+	    }
 
 	  vect_update_max_nunits (&vectorization_factor, vf_vectype);
 
@@ -632,8 +636,8 @@  vect_determine_vectorization_factor (loo
 
 	      if (!mask_type)
 		mask_type = vectype;
-	      else if (TYPE_VECTOR_SUBPARTS (mask_type)
-		       != TYPE_VECTOR_SUBPARTS (vectype))
+	      else if (may_ne (TYPE_VECTOR_SUBPARTS (mask_type),
+			       TYPE_VECTOR_SUBPARTS (vectype)))
 		{
 		  if (dump_enabled_p ())
 		    {
@@ -4152,7 +4156,7 @@  get_initial_defs_for_reduction (slp_tree
   scalar_type = TREE_TYPE (vector_type);
   /* vectorizable_reduction has already rejected SLP reductions on
      variable-length vectors.  */
-  nunits = TYPE_VECTOR_SUBPARTS (vector_type);
+  nunits = TYPE_VECTOR_SUBPARTS (vector_type).to_constant ();
 
   gcc_assert (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def);
 
@@ -7672,9 +7676,8 @@  vect_transform_loop (loop_vec_info loop_
 
 	  if (STMT_VINFO_VECTYPE (stmt_info))
 	    {
-	      unsigned int nunits
-		= (unsigned int)
-		  TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (stmt_info));
+	      poly_uint64 nunits
+		= TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (stmt_info));
 	      if (!STMT_SLP_TYPE (stmt_info)
 		  && may_ne (nunits, vf)
 		  && dump_enabled_p ())
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c	2017-10-10 17:55:22.109175458 +0100
+++ gcc/tree-vect-patterns.c	2017-10-23 17:25:51.763379034 +0100
@@ -3714,8 +3714,9 @@  vect_recog_bool_pattern (vec<gimple *> *
          vectorized matches the vector type of the result in
 	 size and number of elements.  */
       unsigned prec
-	= wi::udiv_trunc (wi::to_wide (TYPE_SIZE (vectype)),
-			  TYPE_VECTOR_SUBPARTS (vectype)).to_uhwi ();
+	= vector_element_size (tree_to_poly_uint64 (TYPE_SIZE (vectype)),
+			       TYPE_VECTOR_SUBPARTS (vectype));
+
       tree type
 	= build_nonstandard_integer_type (prec,
 					  TYPE_UNSIGNED (TREE_TYPE (var)));
@@ -3898,7 +3899,8 @@  vect_recog_mask_conversion_pattern (vec<
       vectype2 = get_mask_type_for_scalar_type (rhs1_type);
 
       if (!vectype1 || !vectype2
-	  || TYPE_VECTOR_SUBPARTS (vectype1) == TYPE_VECTOR_SUBPARTS (vectype2))
+	  || must_eq (TYPE_VECTOR_SUBPARTS (vectype1),
+		      TYPE_VECTOR_SUBPARTS (vectype2)))
 	return NULL;
 
       tmp = build_mask_conversion (rhs1, vectype1, stmt_vinfo, vinfo);
@@ -3973,7 +3975,8 @@  vect_recog_mask_conversion_pattern (vec<
       vectype2 = get_mask_type_for_scalar_type (rhs1_type);
 
       if (!vectype1 || !vectype2
-	  || TYPE_VECTOR_SUBPARTS (vectype1) == TYPE_VECTOR_SUBPARTS (vectype2))
+	  || must_eq (TYPE_VECTOR_SUBPARTS (vectype1),
+		      TYPE_VECTOR_SUBPARTS (vectype2)))
 	return NULL;
 
       /* If rhs1 is a comparison we need to move it into a
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2017-10-23 17:22:43.865071801 +0100
+++ gcc/tree-vect-slp.c	2017-10-23 17:25:51.764378998 +0100
@@ -1621,15 +1621,16 @@  vect_supported_load_permutation_p (slp_i
 	      stmt_vec_info group_info
 		= vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
 	      group_info = vinfo_for_stmt (GROUP_FIRST_ELEMENT (group_info));
-	      unsigned nunits
-		= TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (group_info));
+	      unsigned HOST_WIDE_INT nunits;
 	      unsigned k, maxk = 0;
 	      FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
 		if (k > maxk)
 		  maxk = k;
 	      /* In BB vectorization we may not actually use a loaded vector
 		 accessing elements in excess of GROUP_SIZE.  */
-	      if (maxk >= (GROUP_SIZE (group_info) & ~(nunits - 1)))
+	      tree vectype = STMT_VINFO_VECTYPE (group_info);
+	      if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits)
+		  || maxk >= (GROUP_SIZE (group_info) & ~(nunits - 1)))
 		{
 		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				   "BB vectorization with gaps at the end of "
@@ -3243,7 +3244,7 @@  vect_get_constant_vectors (tree op, slp_
   else
     vector_type = get_vectype_for_scalar_type (TREE_TYPE (op));
   /* Enforced by vect_get_and_check_slp_defs.  */
-  nunits = TYPE_VECTOR_SUBPARTS (vector_type);
+  nunits = TYPE_VECTOR_SUBPARTS (vector_type).to_constant ();
 
   if (STMT_VINFO_DATA_REF (stmt_vinfo))
     {
@@ -3600,12 +3601,12 @@  vect_transform_slp_perm_load (slp_tree n
   gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   tree mask_element_type = NULL_TREE, mask_type;
-  int nunits, vec_index = 0;
+  int vec_index = 0;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   int group_size = SLP_INSTANCE_GROUP_SIZE (slp_node_instance);
-  int mask_element;
+  unsigned int mask_element;
   machine_mode mode;
-  unsigned HOST_WIDE_INT const_vf;
+  unsigned HOST_WIDE_INT nunits, const_vf;
 
   if (!STMT_VINFO_GROUPED_ACCESS (stmt_info))
     return false;
@@ -3615,8 +3616,10 @@  vect_transform_slp_perm_load (slp_tree n
   mode = TYPE_MODE (vectype);
 
   /* At the moment, all permutations are represented using per-element
-     indices, so we can't cope with variable vectorization factors.  */
-  if (!vf.is_constant (&const_vf))
+     indices, so we can't cope with variable vector lengths or
+     vectorization factors.  */
+  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits)
+      || !vf.is_constant (&const_vf))
     return false;
 
   /* The generic VEC_PERM_EXPR code always uses an integral type of the
@@ -3624,7 +3627,6 @@  vect_transform_slp_perm_load (slp_tree n
   mask_element_type = lang_hooks.types.type_for_mode
     (int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))).require (), 1);
   mask_type = get_vectype_for_scalar_type (mask_element_type);
-  nunits = TYPE_VECTOR_SUBPARTS (vectype);
   auto_vec_perm_indices mask (nunits);
   mask.quick_grow (nunits);
 
@@ -3654,7 +3656,7 @@  vect_transform_slp_perm_load (slp_tree n
      {c2,a3,b3,c3}.  */
 
   int vect_stmts_counter = 0;
-  int index = 0;
+  unsigned int index = 0;
   int first_vec_index = -1;
   int second_vec_index = -1;
   bool noop_p = true;
@@ -3664,8 +3666,8 @@  vect_transform_slp_perm_load (slp_tree n
     {
       for (int k = 0; k < group_size; k++)
 	{
-	  int i = (SLP_TREE_LOAD_PERMUTATION (node)[k]
-		   + j * STMT_VINFO_GROUP_SIZE (stmt_info));
+	  unsigned int i = (SLP_TREE_LOAD_PERMUTATION (node)[k]
+			    + j * STMT_VINFO_GROUP_SIZE (stmt_info));
 	  vec_index = i / nunits;
 	  mask_element = i % nunits;
 	  if (vec_index == first_vec_index
@@ -3693,8 +3695,7 @@  vect_transform_slp_perm_load (slp_tree n
 	      return false;
 	    }
 
-	  gcc_assert (mask_element >= 0
-		      && mask_element < 2 * nunits);
+	  gcc_assert (mask_element < 2 * nunits);
 	  if (mask_element != index)
 	    noop_p = false;
 	  mask[index++] = mask_element;
@@ -3727,7 +3728,7 @@  vect_transform_slp_perm_load (slp_tree n
 		  if (! noop_p)
 		    {
 		      auto_vec<tree, 32> mask_elts (nunits);
-		      for (int l = 0; l < nunits; ++l)
+		      for (unsigned int l = 0; l < nunits; ++l)
 			mask_elts.quick_push (build_int_cst (mask_element_type,
 							     mask[l]));
 		      mask_vec = build_vector (mask_type, mask_elts);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2017-10-23 17:22:41.879277786 +0100
+++ gcc/tree-vect-stmts.c	2017-10-23 17:25:51.767378890 +0100
@@ -1713,9 +1713,10 @@  compare_step_with_zero (gimple *stmt)
 static tree
 perm_mask_for_reverse (tree vectype)
 {
-  int i, nunits;
+  unsigned HOST_WIDE_INT i, nunits;
 
-  nunits = TYPE_VECTOR_SUBPARTS (vectype);
+  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits))
+    return NULL_TREE;
 
   auto_vec_perm_indices sel (nunits);
   for (i = 0; i < nunits; ++i)
@@ -1750,7 +1751,7 @@  get_group_load_store_type (gimple *stmt,
   bool single_element_p = (stmt == first_stmt
 			   && !GROUP_NEXT_ELEMENT (stmt_info));
   unsigned HOST_WIDE_INT gap = GROUP_GAP (vinfo_for_stmt (first_stmt));
-  unsigned nunits = TYPE_VECTOR_SUBPARTS (vectype);
+  poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
 
   /* True if the vectorized statements would access beyond the last
      statement in the group.  */
@@ -1774,7 +1775,7 @@  get_group_load_store_type (gimple *stmt,
 	  /* Try to use consecutive accesses of GROUP_SIZE elements,
 	     separated by the stride, until we have a complete vector.
 	     Fall back to scalar accesses if that isn't possible.  */
-	  if (nunits % group_size == 0)
+	  if (multiple_p (nunits, group_size))
 	    *memory_access_type = VMAT_STRIDED_SLP;
 	  else
 	    *memory_access_type = VMAT_ELEMENTWISE;
@@ -2102,7 +2103,8 @@  vectorizable_mask_load_store (gimple *st
     mask_vectype = get_mask_type_for_scalar_type (TREE_TYPE (vectype));
 
   if (!mask_vectype || !VECTOR_BOOLEAN_TYPE_P (mask_vectype)
-      || TYPE_VECTOR_SUBPARTS (mask_vectype) != TYPE_VECTOR_SUBPARTS (vectype))
+      || may_ne (TYPE_VECTOR_SUBPARTS (mask_vectype),
+		 TYPE_VECTOR_SUBPARTS (vectype)))
     return false;
 
   if (gimple_call_internal_fn (stmt) == IFN_MASK_STORE)
@@ -2255,8 +2257,8 @@  vectorizable_mask_load_store (gimple *st
 
 	  if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))
 	    {
-	      gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op))
-			  == TYPE_VECTOR_SUBPARTS (idxtype));
+	      gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op)),
+				   TYPE_VECTOR_SUBPARTS (idxtype)));
 	      var = vect_get_new_ssa_name (idxtype, vect_simple_var);
 	      op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
 	      new_stmt
@@ -2281,8 +2283,9 @@  vectorizable_mask_load_store (gimple *st
 	      mask_op = vec_mask;
 	      if (!useless_type_conversion_p (masktype, TREE_TYPE (vec_mask)))
 		{
-		  gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask_op))
-			      == TYPE_VECTOR_SUBPARTS (masktype));
+		  gcc_assert
+		    (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask_op)),
+			      TYPE_VECTOR_SUBPARTS (masktype)));
 		  var = vect_get_new_ssa_name (masktype, vect_simple_var);
 		  mask_op = build1 (VIEW_CONVERT_EXPR, masktype, mask_op);
 		  new_stmt
@@ -2298,8 +2301,8 @@  vectorizable_mask_load_store (gimple *st
 
 	  if (!useless_type_conversion_p (vectype, rettype))
 	    {
-	      gcc_assert (TYPE_VECTOR_SUBPARTS (vectype)
-			  == TYPE_VECTOR_SUBPARTS (rettype));
+	      gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (vectype),
+				   TYPE_VECTOR_SUBPARTS (rettype)));
 	      op = vect_get_new_ssa_name (rettype, vect_simple_var);
 	      gimple_call_set_lhs (new_stmt, op);
 	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
@@ -2493,11 +2496,14 @@  vectorizable_bswap (gimple *stmt, gimple
   tree op, vectype;
   stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
-  unsigned ncopies, nunits;
+  unsigned ncopies;
+  unsigned HOST_WIDE_INT nunits, num_bytes;
 
   op = gimple_call_arg (stmt, 0);
   vectype = STMT_VINFO_VECTYPE (stmt_info);
-  nunits = TYPE_VECTOR_SUBPARTS (vectype);
+
+  if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&nunits))
+    return false;
 
   /* Multiple types in SLP are handled by creating the appropriate number of
      vectorized stmts for each SLP node.  Hence, NCOPIES is always 1 in
@@ -2513,7 +2519,9 @@  vectorizable_bswap (gimple *stmt, gimple
   if (! char_vectype)
     return false;
 
-  unsigned int num_bytes = TYPE_VECTOR_SUBPARTS (char_vectype);
+  if (!TYPE_VECTOR_SUBPARTS (char_vectype).is_constant (&num_bytes))
+    return false;
+
   unsigned word_bytes = num_bytes / nunits;
 
   auto_vec_perm_indices elts (num_bytes);
@@ -3213,7 +3221,7 @@  vect_simd_lane_linear (tree op, struct l
 static unsigned HOST_WIDE_INT
 simd_clone_subparts (tree vectype)
 {
-  return TYPE_VECTOR_SUBPARTS (vectype);
+  return TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
 }
 
 /* Function vectorizable_simd_clone_call.
@@ -4732,7 +4740,7 @@  vectorizable_assignment (gimple *stmt, g
     op = TREE_OPERAND (op, 0);
 
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
-  unsigned int nunits = TYPE_VECTOR_SUBPARTS (vectype);
+  poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (vectype);
 
   /* Multiple types in SLP are handled by creating the appropriate number of
      vectorized stmts for each SLP node.  Hence, NCOPIES is always 1 in
@@ -4757,7 +4765,7 @@  vectorizable_assignment (gimple *stmt, g
   if ((CONVERT_EXPR_CODE_P (code)
        || code == VIEW_CONVERT_EXPR)
       && (!vectype_in
-	  || TYPE_VECTOR_SUBPARTS (vectype_in) != nunits
+	  || may_ne (TYPE_VECTOR_SUBPARTS (vectype_in), nunits)
 	  || (GET_MODE_SIZE (TYPE_MODE (vectype))
 	      != GET_MODE_SIZE (TYPE_MODE (vectype_in)))))
     return false;
@@ -4906,8 +4914,8 @@  vectorizable_shift (gimple *stmt, gimple
   int ndts = 2;
   gimple *new_stmt = NULL;
   stmt_vec_info prev_stmt_info;
-  int nunits_in;
-  int nunits_out;
+  poly_uint64 nunits_in;
+  poly_uint64 nunits_out;
   tree vectype_out;
   tree op1_vectype;
   int ncopies;
@@ -4974,7 +4982,7 @@  vectorizable_shift (gimple *stmt, gimple
 
   nunits_out = TYPE_VECTOR_SUBPARTS (vectype_out);
   nunits_in = TYPE_VECTOR_SUBPARTS (vectype);
-  if (nunits_out != nunits_in)
+  if (may_ne (nunits_out, nunits_in))
     return false;
 
   op1 = gimple_assign_rhs2 (stmt);
@@ -5274,8 +5282,8 @@  vectorizable_operation (gimple *stmt, gi
   int ndts = 3;
   gimple *new_stmt = NULL;
   stmt_vec_info prev_stmt_info;
-  int nunits_in;
-  int nunits_out;
+  poly_uint64 nunits_in;
+  poly_uint64 nunits_out;
   tree vectype_out;
   int ncopies;
   int j, i;
@@ -5385,7 +5393,7 @@  vectorizable_operation (gimple *stmt, gi
 
   nunits_out = TYPE_VECTOR_SUBPARTS (vectype_out);
   nunits_in = TYPE_VECTOR_SUBPARTS (vectype);
-  if (nunits_out != nunits_in)
+  if (may_ne (nunits_out, nunits_in))
     return false;
 
   if (op_type == binary_op || op_type == ternary_op)
@@ -5937,8 +5945,8 @@  vectorizable_store (gimple *stmt, gimple
 
 	  if (!useless_type_conversion_p (srctype, TREE_TYPE (src)))
 	    {
-	      gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (src))
-			  == TYPE_VECTOR_SUBPARTS (srctype));
+	      gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (src)),
+				   TYPE_VECTOR_SUBPARTS (srctype)));
 	      var = vect_get_new_ssa_name (srctype, vect_simple_var);
 	      src = build1 (VIEW_CONVERT_EXPR, srctype, src);
 	      new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, src);
@@ -5948,8 +5956,8 @@  vectorizable_store (gimple *stmt, gimple
 
 	  if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))
 	    {
-	      gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op))
-			  == TYPE_VECTOR_SUBPARTS (idxtype));
+	      gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op)),
+				   TYPE_VECTOR_SUBPARTS (idxtype)));
 	      var = vect_get_new_ssa_name (idxtype, vect_simple_var);
 	      op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
 	      new_stmt = gimple_build_assign (var, VIEW_CONVERT_EXPR, op);
@@ -6554,7 +6562,7 @@  vect_gen_perm_mask_any (tree vectype, ve
   tree mask_elt_type, mask_type, mask_vec;
 
   unsigned int nunits = sel.length ();
-  gcc_checking_assert (nunits == TYPE_VECTOR_SUBPARTS (vectype));
+  gcc_checking_assert (must_eq (nunits, TYPE_VECTOR_SUBPARTS (vectype)));
 
   mask_elt_type = lang_hooks.types.type_for_mode
     (int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))).require (), 1);
@@ -6993,8 +7001,8 @@  vectorizable_load (gimple *stmt, gimple_
 
 	  if (!useless_type_conversion_p (idxtype, TREE_TYPE (op)))
 	    {
-	      gcc_assert (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op))
-			  == TYPE_VECTOR_SUBPARTS (idxtype));
+	      gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (op)),
+				   TYPE_VECTOR_SUBPARTS (idxtype)));
 	      var = vect_get_new_ssa_name (idxtype, vect_simple_var);
 	      op = build1 (VIEW_CONVERT_EXPR, idxtype, op);
 	      new_stmt
@@ -7008,8 +7016,8 @@  vectorizable_load (gimple *stmt, gimple_
 
 	  if (!useless_type_conversion_p (vectype, rettype))
 	    {
-	      gcc_assert (TYPE_VECTOR_SUBPARTS (vectype)
-			  == TYPE_VECTOR_SUBPARTS (rettype));
+	      gcc_assert (must_eq (TYPE_VECTOR_SUBPARTS (vectype),
+				   TYPE_VECTOR_SUBPARTS (rettype)));
 	      op = vect_get_new_ssa_name (rettype, vect_simple_var);
 	      gimple_call_set_lhs (new_stmt, op);
 	      vect_finish_stmt_generation (stmt, new_stmt, gsi);
@@ -7905,7 +7913,8 @@  vect_is_simple_cond (tree cond, vec_info
     return false;
 
   if (vectype1 && vectype2
-      && TYPE_VECTOR_SUBPARTS (vectype1) != TYPE_VECTOR_SUBPARTS (vectype2))
+      && may_ne (TYPE_VECTOR_SUBPARTS (vectype1),
+		 TYPE_VECTOR_SUBPARTS (vectype2)))
     return false;
 
   *comp_vectype = vectype1 ? vectype1 : vectype2;
@@ -8308,7 +8317,7 @@  vectorizable_comparison (gimple *stmt, g
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   enum vect_def_type dts[2] = {vect_unknown_def_type, vect_unknown_def_type};
   int ndts = 2;
-  unsigned nunits;
+  poly_uint64 nunits;
   int ncopies;
   enum tree_code code, bitop1 = NOP_EXPR, bitop2 = NOP_EXPR;
   stmt_vec_info prev_stmt_info = NULL;
@@ -8368,7 +8377,8 @@  vectorizable_comparison (gimple *stmt, g
     return false;
 
   if (vectype1 && vectype2
-      && TYPE_VECTOR_SUBPARTS (vectype1) != TYPE_VECTOR_SUBPARTS (vectype2))
+      && may_ne (TYPE_VECTOR_SUBPARTS (vectype1),
+		 TYPE_VECTOR_SUBPARTS (vectype2)))
     return false;
 
   vectype = vectype1 ? vectype1 : vectype2;
@@ -8377,10 +8387,10 @@  vectorizable_comparison (gimple *stmt, g
   if (!vectype)
     {
       vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
-      if (TYPE_VECTOR_SUBPARTS (vectype) != nunits)
+      if (may_ne (TYPE_VECTOR_SUBPARTS (vectype), nunits))
 	return false;
     }
-  else if (nunits != TYPE_VECTOR_SUBPARTS (vectype))
+  else if (may_ne (nunits, TYPE_VECTOR_SUBPARTS (vectype)))
     return false;
 
   /* Can't compare mask and non-mask types.  */
@@ -9611,8 +9621,8 @@  supportable_widening_operation (enum tre
 	 vector types having the same QImode.  Thus we
 	 add additional check for elements number.  */
     return (!VECTOR_BOOLEAN_TYPE_P (vectype)
-	    || (TYPE_VECTOR_SUBPARTS (vectype) / 2
-		== TYPE_VECTOR_SUBPARTS (wide_vectype)));
+	    || must_eq (TYPE_VECTOR_SUBPARTS (vectype),
+			TYPE_VECTOR_SUBPARTS (wide_vectype) * 2));
 
   /* Check if it's a multi-step conversion that can be done using intermediate
      types.  */
@@ -9633,8 +9643,10 @@  supportable_widening_operation (enum tre
       intermediate_mode = insn_data[icode1].operand[0].mode;
       if (VECTOR_BOOLEAN_TYPE_P (prev_type))
 	{
+	  poly_uint64 intermediate_nelts
+	    = exact_div (TYPE_VECTOR_SUBPARTS (prev_type), 2);
 	  intermediate_type
-	    = build_truth_vector_type (TYPE_VECTOR_SUBPARTS (prev_type) / 2,
+	    = build_truth_vector_type (intermediate_nelts,
 				       current_vector_size);
 	  if (intermediate_mode != TYPE_MODE (intermediate_type))
 	    return false;
@@ -9664,8 +9676,8 @@  supportable_widening_operation (enum tre
       if (insn_data[icode1].operand[0].mode == TYPE_MODE (wide_vectype)
 	  && insn_data[icode2].operand[0].mode == TYPE_MODE (wide_vectype))
 	return (!VECTOR_BOOLEAN_TYPE_P (vectype)
-		|| (TYPE_VECTOR_SUBPARTS (intermediate_type) / 2
-		    == TYPE_VECTOR_SUBPARTS (wide_vectype)));
+		|| must_eq (TYPE_VECTOR_SUBPARTS (intermediate_type),
+			    TYPE_VECTOR_SUBPARTS (wide_vectype) * 2));
 
       prev_type = intermediate_type;
       prev_mode = intermediate_mode;
@@ -9753,8 +9765,8 @@  supportable_narrowing_operation (enum tr
        vector types having the same QImode.  Thus we
        add additional check for elements number.  */
     return (!VECTOR_BOOLEAN_TYPE_P (vectype)
-	    || (TYPE_VECTOR_SUBPARTS (vectype) * 2
-		== TYPE_VECTOR_SUBPARTS (narrow_vectype)));
+	    || must_eq (TYPE_VECTOR_SUBPARTS (vectype) * 2,
+			TYPE_VECTOR_SUBPARTS (narrow_vectype)));
 
   /* Check if it's a multi-step conversion that can be done using intermediate
      types.  */
@@ -9820,8 +9832,8 @@  supportable_narrowing_operation (enum tr
 
       if (insn_data[icode1].operand[0].mode == TYPE_MODE (narrow_vectype))
 	return (!VECTOR_BOOLEAN_TYPE_P (vectype)
-		|| (TYPE_VECTOR_SUBPARTS (intermediate_type) * 2
-		    == TYPE_VECTOR_SUBPARTS (narrow_vectype)));
+		|| must_eq (TYPE_VECTOR_SUBPARTS (intermediate_type) * 2,
+			    TYPE_VECTOR_SUBPARTS (narrow_vectype)));
 
       prev_mode = intermediate_mode;
       prev_type = intermediate_type;
Index: gcc/ada/gcc-interface/utils.c
===================================================================
--- gcc/ada/gcc-interface/utils.c	2017-10-23 11:41:24.988650286 +0100
+++ gcc/ada/gcc-interface/utils.c	2017-10-23 17:25:51.723380471 +0100
@@ -3528,7 +3528,7 @@  gnat_types_compatible_p (tree t1, tree t
   /* Vector types are also compatible if they have the same number of subparts
      and the same form of (scalar) element type.  */
   if (code == VECTOR_TYPE
-      && TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2)
+      && must_eq (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))
       && TREE_CODE (TREE_TYPE (t1)) == TREE_CODE (TREE_TYPE (t2))
       && TYPE_PRECISION (TREE_TYPE (t1)) == TYPE_PRECISION (TREE_TYPE (t2)))
     return 1;
Index: gcc/brig/brigfrontend/brig-to-generic.cc
===================================================================
--- gcc/brig/brigfrontend/brig-to-generic.cc	2017-10-10 16:57:41.296192291 +0100
+++ gcc/brig/brigfrontend/brig-to-generic.cc	2017-10-23 17:25:51.724380435 +0100
@@ -869,7 +869,7 @@  get_unsigned_int_type (tree original_typ
     {
       size_t esize
 	= int_size_in_bytes (TREE_TYPE (original_type)) * BITS_PER_UNIT;
-      size_t ecount = TYPE_VECTOR_SUBPARTS (original_type);
+      poly_uint64 ecount = TYPE_VECTOR_SUBPARTS (original_type);
       return build_vector_type (build_nonstandard_integer_type (esize, true),
 				ecount);
     }
Index: gcc/brig/brigfrontend/brig-util.h
===================================================================
--- gcc/brig/brigfrontend/brig-util.h	2017-10-23 17:22:46.882758777 +0100
+++ gcc/brig/brigfrontend/brig-util.h	2017-10-23 17:25:51.724380435 +0100
@@ -81,7 +81,7 @@  bool hsa_type_packed_p (BrigType16_t typ
 inline unsigned HOST_WIDE_INT
 gccbrig_type_vector_subparts (const_tree type)
 {
-  return TYPE_VECTOR_SUBPARTS (type);
+  return TYPE_VECTOR_SUBPARTS (type).to_constant ();
 }
 
 #endif
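
The BRIG wrapper above shows the other idiom used in this patch: a
front end that never creates variable-length vectors flattens the
poly_uint64 back to a plain integer with to_constant (), which is only
valid when the count is known at compile time, whereas code with a
graceful fallback (such as the cp/decl.c and cp/typeck2.c hunks further
down) calls is_constant (&x) and branches on the result.  The following
sketch restates that distinction in terms of the toy_poly stand-in from
the previous note; again, these are illustrative helpers, not GCC's
real interface.

    #include <cassert>

    struct toy_poly
    {
      unsigned long coeffs[2];  /* value = coeffs[0] + coeffs[1] * X,
				   as in the previous sketch */
    };

    /* If the value does not depend on X, store it in *CONST_VALUE and
       return true; otherwise return false and leave *CONST_VALUE alone.  */
    static bool
    toy_is_constant (toy_poly a, unsigned long *const_value)
    {
      if (a.coeffs[1] != 0)
	return false;
      *const_value = a.coeffs[0];
      return true;
    }

    /* Like the above, but the caller promises the value is constant.  */
    static unsigned long
    toy_to_constant (toy_poly a)
    {
      assert (a.coeffs[1] == 0);
      return a.coeffs[0];
    }

    int
    main ()
    {
      toy_poly vla = { { 4, 4 } };
      toy_poly four = { { 4, 0 } };
      unsigned long n;
      assert (!toy_is_constant (vla, &n));  /* variable length: no constant */
      assert (toy_to_constant (four) == 4);
      return 0;
    }

Using to_constant () therefore documents an invariant ("this front end
only ever sees constant-length vectors"), while is_constant () keeps
the variable-length case reachable and lets the caller report an error
or fall back, as the cp_finish_decomp and process_init_constructor_array
hunks below do.
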
Index: gcc/c-family/c-common.c
===================================================================
--- gcc/c-family/c-common.c	2017-10-23 11:41:23.219573771 +0100
+++ gcc/c-family/c-common.c	2017-10-23 17:25:51.725380399 +0100
@@ -942,15 +942,16 @@  vector_types_convertible_p (const_tree t
 
   convertible_lax =
     (tree_int_cst_equal (TYPE_SIZE (t1), TYPE_SIZE (t2))
-     && (TREE_CODE (TREE_TYPE (t1)) != REAL_TYPE ||
-	 TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2))
+     && (TREE_CODE (TREE_TYPE (t1)) != REAL_TYPE
+	 || must_eq (TYPE_VECTOR_SUBPARTS (t1),
+		     TYPE_VECTOR_SUBPARTS (t2)))
      && (INTEGRAL_TYPE_P (TREE_TYPE (t1))
 	 == INTEGRAL_TYPE_P (TREE_TYPE (t2))));
 
   if (!convertible_lax || flag_lax_vector_conversions)
     return convertible_lax;
 
-  if (TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2)
+  if (must_eq (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))
       && lang_hooks.types_compatible_p (TREE_TYPE (t1), TREE_TYPE (t2)))
     return true;
 
@@ -1018,10 +1019,10 @@  c_build_vec_perm_expr (location_t loc, t
       return error_mark_node;
     }
 
-  if (TYPE_VECTOR_SUBPARTS (TREE_TYPE (v0))
-      != TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask))
-      && TYPE_VECTOR_SUBPARTS (TREE_TYPE (v1))
-	 != TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask)))
+  if (may_ne (TYPE_VECTOR_SUBPARTS (TREE_TYPE (v0)),
+	      TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask)))
+      && may_ne (TYPE_VECTOR_SUBPARTS (TREE_TYPE (v1)),
+		 TYPE_VECTOR_SUBPARTS (TREE_TYPE (mask))))
     {
       if (complain)
 	error_at (loc, "__builtin_shuffle number of elements of the "
@@ -2280,7 +2281,8 @@  c_common_type_for_mode (machine_mode mod
       if (inner_type != NULL_TREE)
 	return build_complex_type (inner_type);
     }
-  else if (VECTOR_MODE_P (mode))
+  else if (VECTOR_MODE_P (mode)
+	   && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))
     {
       machine_mode inner_mode = GET_MODE_INNER (mode);
       tree inner_type = c_common_type_for_mode (inner_mode, unsignedp);
@@ -7591,7 +7593,7 @@  convert_vector_to_array_for_subscript (l
 
       if (TREE_CODE (index) == INTEGER_CST)
         if (!tree_fits_uhwi_p (index)
-            || tree_to_uhwi (index) >= TYPE_VECTOR_SUBPARTS (type))
+	    || may_ge (tree_to_uhwi (index), TYPE_VECTOR_SUBPARTS (type)))
           warning_at (loc, OPT_Warray_bounds, "index value is out of bound");
 
       /* We are building an ARRAY_REF so mark the vector as addressable
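
The c-common.c hunk just above is the "warn if it might happen"
counterpart of the bail-out rule: "index >= TYPE_VECTOR_SUBPARTS (type)"
becomes may_ge, so the out-of-bounds warning now fires whenever the
index could be out of range for some runtime vector length.  In the
toy_poly stand-in (still assuming one nonnegative indeterminate X, and
still not GCC's implementation) the predicate reduces to the negation
of a provable less-than:

    /* Toy element count N0 + N1 * X, X >= 0, as in the earlier sketches.  */
    struct toy_poly
    {
      unsigned long coeffs[2];
    };

    /* Can A be >= B for some X >= 0?  True unless A is provably smaller
       for every X, i.e. unless A's constant part is smaller and its X
       coefficient is no larger.  */
    static bool
    toy_may_ge (toy_poly a, toy_poly b)
    {
      bool must_lt = (a.coeffs[0] < b.coeffs[0]
		      && a.coeffs[1] <= b.coeffs[1]);
      return !must_lt;
    }

So a constant index of 4 may_ge a count of 4 + 4 * X (they are equal
when X is 0) and the warning triggers, whereas an index of 3 can never
reach that count and the warning stays quiet.
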
Index: gcc/c/c-typeck.c
===================================================================
--- gcc/c/c-typeck.c	2017-10-10 17:55:22.067175462 +0100
+++ gcc/c/c-typeck.c	2017-10-23 17:25:51.726380364 +0100
@@ -1238,7 +1238,7 @@  comptypes_internal (const_tree type1, co
       break;
 
     case VECTOR_TYPE:
-      val = (TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2)
+      val = (must_eq (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))
 	     && comptypes_internal (TREE_TYPE (t1), TREE_TYPE (t2),
 				    enum_and_int_p, different_types_p));
       break;
@@ -11343,7 +11343,8 @@  build_binary_op (location_t location, en
       if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE
 	  && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE
 	  && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE
-	  && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))
+	  && must_eq (TYPE_VECTOR_SUBPARTS (type0),
+		      TYPE_VECTOR_SUBPARTS (type1)))
 	{
 	  result_type = type0;
 	  converted = 1;
@@ -11400,7 +11401,8 @@  build_binary_op (location_t location, en
       if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE
 	  && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE
 	  && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE
-	  && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))
+	  && must_eq (TYPE_VECTOR_SUBPARTS (type0),
+		      TYPE_VECTOR_SUBPARTS (type1)))
 	{
 	  result_type = type0;
 	  converted = 1;
@@ -11474,7 +11476,8 @@  build_binary_op (location_t location, en
               return error_mark_node;
             }
 
-          if (TYPE_VECTOR_SUBPARTS (type0) != TYPE_VECTOR_SUBPARTS (type1))
+	  if (may_ne (TYPE_VECTOR_SUBPARTS (type0),
+		      TYPE_VECTOR_SUBPARTS (type1)))
             {
               error_at (location, "comparing vectors with different "
                                   "number of elements");
@@ -11634,7 +11637,8 @@  build_binary_op (location_t location, en
               return error_mark_node;
             }
 
-          if (TYPE_VECTOR_SUBPARTS (type0) != TYPE_VECTOR_SUBPARTS (type1))
+	  if (may_ne (TYPE_VECTOR_SUBPARTS (type0),
+		      TYPE_VECTOR_SUBPARTS (type1)))
             {
               error_at (location, "comparing vectors with different "
                                   "number of elements");
Index: gcc/cp/call.c
===================================================================
--- gcc/cp/call.c	2017-10-23 11:41:24.251615675 +0100
+++ gcc/cp/call.c	2017-10-23 17:25:51.728380292 +0100
@@ -4928,8 +4928,8 @@  build_conditional_expr_1 (location_t loc
 	}
 
       if (!same_type_p (arg2_type, arg3_type)
-	  || TYPE_VECTOR_SUBPARTS (arg1_type)
-	     != TYPE_VECTOR_SUBPARTS (arg2_type)
+	  || may_ne (TYPE_VECTOR_SUBPARTS (arg1_type),
+		     TYPE_VECTOR_SUBPARTS (arg2_type))
 	  || TYPE_SIZE (arg1_type) != TYPE_SIZE (arg2_type))
 	{
 	  if (complain & tf_error)
Index: gcc/cp/constexpr.c
===================================================================
--- gcc/cp/constexpr.c	2017-10-23 17:18:47.657057799 +0100
+++ gcc/cp/constexpr.c	2017-10-23 17:25:51.728380292 +0100
@@ -3059,7 +3059,8 @@  cxx_fold_indirect_ref (location_t loc, t
 	      unsigned HOST_WIDE_INT indexi = offset * BITS_PER_UNIT;
 	      tree index = bitsize_int (indexi);
 
-	      if (offset / part_widthi < TYPE_VECTOR_SUBPARTS (op00type))
+	      if (must_lt (offset / part_widthi,
+			   TYPE_VECTOR_SUBPARTS (op00type)))
 		return fold_build3_loc (loc,
 					BIT_FIELD_REF, type, op00,
 					part_width, index);
Index: gcc/cp/decl.c
===================================================================
--- gcc/cp/decl.c	2017-10-23 11:41:24.223565801 +0100
+++ gcc/cp/decl.c	2017-10-23 17:25:51.732380148 +0100
@@ -7454,7 +7454,11 @@  cp_finish_decomp (tree decl, tree first,
     }
   else if (TREE_CODE (type) == VECTOR_TYPE)
     {
-      eltscnt = TYPE_VECTOR_SUBPARTS (type);
+      if (!TYPE_VECTOR_SUBPARTS (type).is_constant (&eltscnt))
+	{
+	  error_at (loc, "cannot decompose variable length vector %qT", type);
+	  goto error_out;
+	}
       if (count != eltscnt)
 	goto cnt_mismatch;
       eltype = cp_build_qualified_type (TREE_TYPE (type), TYPE_QUALS (type));
Index: gcc/cp/mangle.c
===================================================================
--- gcc/cp/mangle.c	2017-10-10 17:55:22.087175461 +0100
+++ gcc/cp/mangle.c	2017-10-23 17:25:51.733380112 +0100
@@ -2260,7 +2260,8 @@  write_type (tree type)
 		  write_string ("Dv");
 		  /* Non-constant vector size would be encoded with
 		     _ expression, but we don't support that yet.  */
-		  write_unsigned_number (TYPE_VECTOR_SUBPARTS (type));
+		  write_unsigned_number (TYPE_VECTOR_SUBPARTS (type)
+					 .to_constant ());
 		  write_char ('_');
 		}
 	      else
Index: gcc/cp/typeck.c
===================================================================
--- gcc/cp/typeck.c	2017-10-23 11:41:24.212926194 +0100
+++ gcc/cp/typeck.c	2017-10-23 17:25:51.735380040 +0100
@@ -1359,7 +1359,7 @@  structural_comptypes (tree t1, tree t2,
       break;
 
     case VECTOR_TYPE:
-      if (TYPE_VECTOR_SUBPARTS (t1) != TYPE_VECTOR_SUBPARTS (t2)
+      if (may_ne (TYPE_VECTOR_SUBPARTS (t1), TYPE_VECTOR_SUBPARTS (t2))
 	  || !same_type_p (TREE_TYPE (t1), TREE_TYPE (t2)))
 	return false;
       break;
@@ -4513,9 +4513,10 @@  cp_build_binary_op (location_t location,
           converted = 1;
         }
       else if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE
-	  && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE
-	  && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE
-	  && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))
+	       && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE
+	       && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE
+	       && must_eq (TYPE_VECTOR_SUBPARTS (type0),
+			   TYPE_VECTOR_SUBPARTS (type1)))
 	{
 	  result_type = type0;
 	  converted = 1;
@@ -4558,9 +4559,10 @@  cp_build_binary_op (location_t location,
           converted = 1;
         }
       else if (code0 == VECTOR_TYPE && code1 == VECTOR_TYPE
-	  && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE
-	  && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE
-	  && TYPE_VECTOR_SUBPARTS (type0) == TYPE_VECTOR_SUBPARTS (type1))
+	       && TREE_CODE (TREE_TYPE (type0)) == INTEGER_TYPE
+	       && TREE_CODE (TREE_TYPE (type1)) == INTEGER_TYPE
+	       && must_eq (TYPE_VECTOR_SUBPARTS (type0),
+			   TYPE_VECTOR_SUBPARTS (type1)))
 	{
 	  result_type = type0;
 	  converted = 1;
@@ -4925,7 +4927,8 @@  cp_build_binary_op (location_t location,
 	      return error_mark_node;
 	    }
 
-	  if (TYPE_VECTOR_SUBPARTS (type0) != TYPE_VECTOR_SUBPARTS (type1))
+	  if (may_ne (TYPE_VECTOR_SUBPARTS (type0),
+		      TYPE_VECTOR_SUBPARTS (type1)))
 	    {
 	      if (complain & tf_error)
 		{
Index: gcc/cp/typeck2.c
===================================================================
--- gcc/cp/typeck2.c	2017-10-09 11:50:52.214211104 +0100
+++ gcc/cp/typeck2.c	2017-10-23 17:25:51.736380004 +0100
@@ -1276,7 +1276,7 @@  process_init_constructor_array (tree typ
     }
   else
     /* Vectors are like simple fixed-size arrays.  */
-    len = TYPE_VECTOR_SUBPARTS (type);
+    unbounded = !TYPE_VECTOR_SUBPARTS (type).is_constant (&len);
 
   /* There must not be more initializers than needed.  */
   if (!unbounded && vec_safe_length (v) > len)
Index: gcc/fortran/trans-types.c
===================================================================
--- gcc/fortran/trans-types.c	2017-09-25 13:57:12.591118003 +0100
+++ gcc/fortran/trans-types.c	2017-10-23 17:25:51.745379681 +0100
@@ -3159,7 +3159,8 @@  gfc_type_for_mode (machine_mode mode, in
       tree type = gfc_type_for_size (GET_MODE_PRECISION (int_mode), unsignedp);
       return type != NULL_TREE && mode == TYPE_MODE (type) ? type : NULL_TREE;
     }
-  else if (VECTOR_MODE_P (mode))
+  else if (VECTOR_MODE_P (mode)
+	   && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))
     {
       machine_mode inner_mode = GET_MODE_INNER (mode);
       tree inner_type = gfc_type_for_mode (inner_mode, unsignedp);
Index: gcc/lto/lto-lang.c
===================================================================
--- gcc/lto/lto-lang.c	2017-10-23 11:41:25.563189078 +0100
+++ gcc/lto/lto-lang.c	2017-10-23 17:25:51.748379573 +0100
@@ -971,7 +971,8 @@  lto_type_for_mode (machine_mode mode, in
       if (inner_type != NULL_TREE)
 	return build_complex_type (inner_type);
     }
-  else if (VECTOR_MODE_P (mode))
+  else if (VECTOR_MODE_P (mode)
+	   && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))
     {
       machine_mode inner_mode = GET_MODE_INNER (mode);
       tree inner_type = lto_type_for_mode (inner_mode, unsigned_p);
Index: gcc/lto/lto.c
===================================================================
--- gcc/lto/lto.c	2017-10-13 10:23:39.776947828 +0100
+++ gcc/lto/lto.c	2017-10-23 17:25:51.749379537 +0100
@@ -316,7 +316,7 @@  hash_canonical_type (tree type)
 
   if (VECTOR_TYPE_P (type))
     {
-      hstate.add_int (TYPE_VECTOR_SUBPARTS (type));
+      hstate.add_poly_int (TYPE_VECTOR_SUBPARTS (type));
       hstate.add_int (TYPE_UNSIGNED (type));
     }
 
Index: gcc/go/go-lang.c
===================================================================
--- gcc/go/go-lang.c	2017-08-30 12:20:57.010045759 +0100
+++ gcc/go/go-lang.c	2017-10-23 17:25:51.747379609 +0100
@@ -372,7 +372,8 @@  go_langhook_type_for_mode (machine_mode
      make sense for the middle-end to ask the frontend for a type
      which the frontend does not support.  However, at least for now
      it is required.  See PR 46805.  */
-  if (VECTOR_MODE_P (mode))
+  if (VECTOR_MODE_P (mode)
+      && valid_vector_subparts_p (GET_MODE_NUNITS (mode)))
     {
       tree inner;