diff mbox series

[06/nn,AArch64] Add an endian_lane_rtx helper routine

Message ID 87a80cbu09.fsf@linaro.org
State New
Series [06/nn,AArch64] Add an endian_lane_rtx helper routine

Commit Message

Richard Sandiford Oct. 27, 2017, 1:27 p.m. UTC
Later patches turn the number of vector units into a poly_int.
We deliberately don't support applying GEN_INT to those (except
in target code that doesn't distinguish between poly_ints and normal
constants); gen_int_mode needs to be used instead.

This patch therefore replaces instances of:

  GEN_INT (ENDIAN_LANE_N (builtin_mode, INTVAL (op[opc])))

with uses of a new endian_lane_rtx function.


2017-10-26  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* config/aarch64/aarch64-protos.h (aarch64_endian_lane_rtx): Declare.
	* config/aarch64/aarch64.c (aarch64_endian_lane_rtx): New function.
	* config/aarch64/aarch64.h (ENDIAN_LANE_N): Take the number
	of units rather than the mode.
	* config/aarch64/iterators.md (nunits): New mode attribute.
	* config/aarch64/aarch64-builtins.c (aarch64_simd_expand_args):
	Use aarch64_endian_lane_rtx instead of GEN_INT (ENDIAN_LANE_N ...).
	* config/aarch64/aarch64-simd.md (aarch64_dup_lane<mode>)
	(aarch64_dup_lane_<vswap_width_name><mode>, *aarch64_mul3_elt<mode>)
	(*aarch64_mul3_elt_<vswap_width_name><mode>): Likewise.
	(*aarch64_mul3_elt_to_64v2df, *aarch64_mla_elt<mode>): Likewise.
	(*aarch64_mla_elt_<vswap_width_name><mode>, *aarch64_mls_elt<mode>)
	(*aarch64_mls_elt_<vswap_width_name><mode>, *aarch64_fma4_elt<mode>)
	(*aarch64_fma4_elt_<vswap_width_name><mode>): Likewise.
	(*aarch64_fma4_elt_to_64v2df, *aarch64_fnma4_elt<mode>): Likewise.
	(*aarch64_fnma4_elt_<vswap_width_name><mode>): Likewise.
	(*aarch64_fnma4_elt_to_64v2df, reduc_plus_scal_<mode>): Likewise.
	(reduc_plus_scal_v4sf, reduc_<maxmin_uns>_scal_<mode>): Likewise.
	(reduc_<maxmin_uns>_scal_<mode>): Likewise.
	(*aarch64_get_lane_extend<GPI:mode><VDQQH:mode>): Likewise.
	(*aarch64_get_lane_zero_extendsi<mode>): Likewise.
	(aarch64_get_lane<mode>, *aarch64_mulx_elt_<vswap_width_name><mode>)
	(*aarch64_mulx_elt<mode>, *aarch64_vgetfmulx<mode>): Likewise.
	(aarch64_sq<r>dmulh_lane<mode>, aarch64_sq<r>dmulh_laneq<mode>)
	(aarch64_sqrdml<SQRDMLH_AS:rdma_as>h_lane<mode>): Likewise.
	(aarch64_sqrdml<SQRDMLH_AS:rdma_as>h_laneq<mode>): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l_lane<mode>): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l_laneq<mode>): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l2_lane<mode>_internal): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l2_laneq<mode>_internal): Likewise.
	(aarch64_sqdmull_lane<mode>, aarch64_sqdmull_laneq<mode>): Likewise.
	(aarch64_sqdmull2_lane<mode>_internal): Likewise.
	(aarch64_sqdmull2_laneq<mode>_internal): Likewise.
	(aarch64_vec_load_lanesoi_lane<mode>): Likewise.
	(aarch64_vec_store_lanesoi_lane<mode>): Likewise.
	(aarch64_vec_load_lanesci_lane<mode>): Likewise.
	(aarch64_vec_store_lanesci_lane<mode>): Likewise.
	(aarch64_vec_load_lanesxi_lane<mode>): Likewise.
	(aarch64_vec_store_lanesxi_lane<mode>): Likewise.
	(aarch64_simd_vec_set<mode>): Update use of ENDIAN_LANE_N.
	(aarch64_simd_vec_setv2di): Likewise.

Comments

James Greenhalgh Nov. 2, 2017, 9:55 a.m. UTC | #1
On Fri, Oct 27, 2017 at 02:27:50PM +0100, Richard Sandiford wrote:
> Later patches turn the number of vector units into a poly_int.

> We deliberately don't support applying GEN_INT to those (except

> in target code that doesn't distinguish between poly_ints and normal

> constants); gen_int_mode needs to be used instead.

> 

> This patch therefore replaces instances of:

> 

>   GEN_INT (ENDIAN_LANE_N (builtin_mode, INTVAL (op[opc])))

> 

> with uses of a new endian_lane_rtx function.


OK.

Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>


Thanks,
James

> 

> 

> 2017-10-26  Richard Sandiford  <richard.sandiford@linaro.org>

> 	    Alan Hayward  <alan.hayward@arm.com>

> 	    David Sherwood  <david.sherwood@arm.com>

> 

> gcc/

> 	* config/aarch64/aarch64-protos.h (aarch64_endian_lane_rtx): Declare.

> 	* config/aarch64/aarch64.c (aarch64_endian_lane_rtx): New function.

> 	* config/aarch64/aarch64.h (ENDIAN_LANE_N): Take the number

> 	of units rather than the mode.

> 	* config/aarch64/iterators.md (nunits): New mode attribute.

> 	* config/aarch64/aarch64-builtins.c (aarch64_simd_expand_args):

> 	Use aarch64_endian_lane_rtx instead of GEN_INT (ENDIAN_LANE_N ...).

> 	* config/aarch64/aarch64-simd.md (aarch64_dup_lane<mode>)

> 	(aarch64_dup_lane_<vswap_width_name><mode>, *aarch64_mul3_elt<mode>)

> 	(*aarch64_mul3_elt_<vswap_width_name><mode>): Likewise.

> 	(*aarch64_mul3_elt_to_64v2df, *aarch64_mla_elt<mode>): Likewise.

> 	(*aarch64_mla_elt_<vswap_width_name><mode>, *aarch64_mls_elt<mode>)

> 	(*aarch64_mls_elt_<vswap_width_name><mode>, *aarch64_fma4_elt<mode>)

> 	(*aarch64_fma4_elt_<vswap_width_name><mode>): Likewise.

> 	(*aarch64_fma4_elt_to_64v2df, *aarch64_fnma4_elt<mode>): Likewise.

> 	(*aarch64_fnma4_elt_<vswap_width_name><mode>): Likewise.

> 	(*aarch64_fnma4_elt_to_64v2df, reduc_plus_scal_<mode>): Likewise.

> 	(reduc_plus_scal_v4sf, reduc_<maxmin_uns>_scal_<mode>): Likewise.

> 	(reduc_<maxmin_uns>_scal_<mode>): Likewise.

> 	(*aarch64_get_lane_extend<GPI:mode><VDQQH:mode>): Likewise.

> 	(*aarch64_get_lane_zero_extendsi<mode>): Likewise.

> 	(aarch64_get_lane<mode>, *aarch64_mulx_elt_<vswap_width_name><mode>)

> 	(*aarch64_mulx_elt<mode>, *aarch64_vgetfmulx<mode>): Likewise.

> 	(aarch64_sq<r>dmulh_lane<mode>, aarch64_sq<r>dmulh_laneq<mode>)

> 	(aarch64_sqrdml<SQRDMLH_AS:rdma_as>h_lane<mode>): Likewise.

> 	(aarch64_sqrdml<SQRDMLH_AS:rdma_as>h_laneq<mode>): Likewise.

> 	(aarch64_sqdml<SBINQOPS:as>l_lane<mode>): Likewise.

> 	(aarch64_sqdml<SBINQOPS:as>l_laneq<mode>): Likewise.

> 	(aarch64_sqdml<SBINQOPS:as>l2_lane<mode>_internal): Likewise.

> 	(aarch64_sqdml<SBINQOPS:as>l2_laneq<mode>_internal): Likewise.

> 	(aarch64_sqdmull_lane<mode>, aarch64_sqdmull_laneq<mode>): Likewise.

> 	(aarch64_sqdmull2_lane<mode>_internal): Likewise.

> 	(aarch64_sqdmull2_laneq<mode>_internal): Likewise.

> 	(aarch64_vec_load_lanesoi_lane<mode>): Likewise.

> 	(aarch64_vec_store_lanesoi_lane<mode>): Likewise.

> 	(aarch64_vec_load_lanesci_lane<mode>): Likewise.

> 	(aarch64_vec_store_lanesci_lane<mode>): Likewise.

> 	(aarch64_vec_load_lanesxi_lane<mode>): Likewise.

> 	(aarch64_vec_store_lanesxi_lane<mode>): Likewise.

> 	(aarch64_simd_vec_set<mode>): Update use of ENDIAN_LANE_N.

> 	(aarch64_simd_vec_setv2di): Likewise.

> 

> Index: gcc/config/aarch64/aarch64-protos.h

> ===================================================================

> --- gcc/config/aarch64/aarch64-protos.h	2017-10-27 14:11:56.993658452 +0100

> +++ gcc/config/aarch64/aarch64-protos.h	2017-10-27 14:12:00.601693018 +0100

> @@ -437,6 +437,7 @@ void aarch64_simd_emit_reg_reg_move (rtx

>  rtx aarch64_simd_expand_builtin (int, tree, rtx);

>  

>  void aarch64_simd_lane_bounds (rtx, HOST_WIDE_INT, HOST_WIDE_INT, const_tree);

> +rtx aarch64_endian_lane_rtx (machine_mode, unsigned int);

>  

>  void aarch64_split_128bit_move (rtx, rtx);

>  

> Index: gcc/config/aarch64/aarch64.c

> ===================================================================

> --- gcc/config/aarch64/aarch64.c	2017-10-27 14:11:56.995515870 +0100

> +++ gcc/config/aarch64/aarch64.c	2017-10-27 14:12:00.603550436 +0100

> @@ -12083,6 +12083,15 @@ aarch64_simd_lane_bounds (rtx operand, H

>    }

>  }

>  

> +/* Peform endian correction on lane number N, which indexes a vector

> +   of mode MODE, and return the result as an SImode rtx.  */

> +

> +rtx

> +aarch64_endian_lane_rtx (machine_mode mode, unsigned int n)

> +{

> +  return gen_int_mode (ENDIAN_LANE_N (GET_MODE_NUNITS (mode), n), SImode);

> +}

> +

>  /* Return TRUE if OP is a valid vector addressing mode.  */

>  bool

>  aarch64_simd_mem_operand_p (rtx op)

> Index: gcc/config/aarch64/aarch64.h

> ===================================================================

> --- gcc/config/aarch64/aarch64.h	2017-10-27 14:05:38.132936808 +0100

> +++ gcc/config/aarch64/aarch64.h	2017-10-27 14:12:00.603550436 +0100

> @@ -910,8 +910,8 @@ #define AARCH64_VALID_SIMD_QREG_MODE(MOD

>     || (MODE) == V4SFmode || (MODE) == V8HFmode || (MODE) == V2DImode \

>     || (MODE) == V2DFmode)

>  

> -#define ENDIAN_LANE_N(mode, n)  \

> -  (BYTES_BIG_ENDIAN ? GET_MODE_NUNITS (mode) - 1 - n : n)

> +#define ENDIAN_LANE_N(NUNITS, N) \

> +  (BYTES_BIG_ENDIAN ? NUNITS - 1 - N : N)

>  

>  /* Support for a configure-time default CPU, etc.  We currently support

>     --with-arch and --with-cpu.  Both are ignored if either is specified

> Index: gcc/config/aarch64/iterators.md

> ===================================================================

> --- gcc/config/aarch64/iterators.md	2017-10-27 14:11:56.995515870 +0100

> +++ gcc/config/aarch64/iterators.md	2017-10-27 14:12:00.604479145 +0100

> @@ -438,6 +438,17 @@ (define_mode_attr vw2 [(DI "") (QI "h")

>  (define_mode_attr rtn [(DI "d") (SI "")])

>  (define_mode_attr vas [(DI "") (SI ".2s")])

>  

> +;; Map a vector to the number of units in it, if the size of the mode

> +;; is constant.

> +(define_mode_attr nunits [(V8QI "8") (V16QI "16")

> +			  (V4HI "4") (V8HI "8")

> +			  (V2SI "2") (V4SI "4")

> +				     (V2DI "2")

> +			  (V4HF "4") (V8HF "8")

> +			  (V2SF "2") (V4SF "4")

> +			  (V1DF "1") (V2DF "2")

> +			  (DI "1") (DF "1")])

> +

>  ;; Map a mode to the number of bits in it, if the size of the mode

>  ;; is constant.

>  (define_mode_attr bitsize [(V8QI "64") (V16QI "128")

> Index: gcc/config/aarch64/aarch64-builtins.c

> ===================================================================

> --- gcc/config/aarch64/aarch64-builtins.c	2017-10-27 14:05:38.132936808 +0100

> +++ gcc/config/aarch64/aarch64-builtins.c	2017-10-27 14:12:00.601693018 +0100

> @@ -1069,8 +1069,8 @@ aarch64_simd_expand_args (rtx target, in

>  					    GET_MODE_NUNITS (builtin_mode),

>  					    exp);

>  		  /* Keep to GCC-vector-extension lane indices in the RTL.  */

> -		  op[opc] =

> -		    GEN_INT (ENDIAN_LANE_N (builtin_mode, INTVAL (op[opc])));

> +		  op[opc] = aarch64_endian_lane_rtx (builtin_mode,

> +						     INTVAL (op[opc]));

>  		}

>  	      goto constant_arg;

>  

> @@ -1083,7 +1083,7 @@ aarch64_simd_expand_args (rtx target, in

>  		  aarch64_simd_lane_bounds (op[opc],

>  					    0, GET_MODE_NUNITS (vmode), exp);

>  		  /* Keep to GCC-vector-extension lane indices in the RTL.  */

> -		  op[opc] = GEN_INT (ENDIAN_LANE_N (vmode, INTVAL (op[opc])));

> +		  op[opc] = aarch64_endian_lane_rtx (vmode, INTVAL (op[opc]));

>  		}

>  	      /* Fall through - if the lane index isn't a constant then

>  		 the next case will error.  */

> Index: gcc/config/aarch64/aarch64-simd.md

> ===================================================================

> --- gcc/config/aarch64/aarch64-simd.md	2017-10-27 14:11:56.994587161 +0100

> +++ gcc/config/aarch64/aarch64-simd.md	2017-10-27 14:12:00.602621727 +0100

> @@ -80,7 +80,7 @@ (define_insn "aarch64_dup_lane<mode>"

>            )))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "dup\\t%0.<Vtype>, %1.<Vetype>[%2]";

>    }

>    [(set_attr "type" "neon_dup<q>")]

> @@ -95,8 +95,7 @@ (define_insn "aarch64_dup_lane_<vswap_wi

>            )))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,

> -					  INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));

>      return "dup\\t%0.<Vtype>, %1.<Vetype>[%2]";

>    }

>    [(set_attr "type" "neon_dup<q>")]

> @@ -501,7 +500,7 @@ (define_insn "*aarch64_mul3_elt<mode>"

>        (match_operand:VMUL 3 "register_operand" "w")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "<f>mul\\t%0.<Vtype>, %3.<Vtype>, %1.<Vetype>[%2]";

>    }

>    [(set_attr "type" "neon<fp>_mul_<stype>_scalar<q>")]

> @@ -517,8 +516,7 @@ (define_insn "*aarch64_mul3_elt_<vswap_w

>        (match_operand:VMUL_CHANGE_NLANES 3 "register_operand" "w")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,

> -					  INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));

>      return "<f>mul\\t%0.<Vtype>, %3.<Vtype>, %1.<Vetype>[%2]";

>    }

>    [(set_attr "type" "neon<fp>_mul_<Vetype>_scalar<q>")]

> @@ -571,7 +569,7 @@ (define_insn "*aarch64_mul3_elt_to_64v2d

>         (match_operand:DF 3 "register_operand" "w")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));

>      return "fmul\\t%0.2d, %3.2d, %1.d[%2]";

>    }

>    [(set_attr "type" "neon_fp_mul_d_scalar_q")]

> @@ -706,7 +704,7 @@ (define_insn "aarch64_simd_vec_set<mode>

>  	    (match_operand:SI 2 "immediate_operand" "i,i,i")))]

>    "TARGET_SIMD"

>    {

> -   int elt = ENDIAN_LANE_N (<MODE>mode, exact_log2 (INTVAL (operands[2])));

> +   int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));

>     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);

>     switch (which_alternative)

>       {

> @@ -1072,7 +1070,7 @@ (define_insn "aarch64_simd_vec_setv2di"

>  	    (match_operand:SI 2 "immediate_operand" "i,i")))]

>    "TARGET_SIMD"

>    {

> -    int elt = ENDIAN_LANE_N (V2DImode, exact_log2 (INTVAL (operands[2])));

> +    int elt = ENDIAN_LANE_N (2, exact_log2 (INTVAL (operands[2])));

>      operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);

>      switch (which_alternative)

>        {

> @@ -1109,7 +1107,7 @@ (define_insn "aarch64_simd_vec_set<mode>

>  	    (match_operand:SI 2 "immediate_operand" "i")))]

>    "TARGET_SIMD"

>    {

> -    int elt = ENDIAN_LANE_N (<MODE>mode, exact_log2 (INTVAL (operands[2])));

> +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));

>  

>      operands[2] = GEN_INT ((HOST_WIDE_INT)1 << elt);

>      return "ins\t%0.<Vetype>[%p2], %1.<Vetype>[0]";

> @@ -1154,7 +1152,7 @@ (define_insn "*aarch64_mla_elt<mode>"

>  	 (match_operand:VDQHS 4 "register_operand" "0")))]

>   "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "mla\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]

> @@ -1172,8 +1170,7 @@ (define_insn "*aarch64_mla_elt_<vswap_wi

>  	 (match_operand:VDQHS 4 "register_operand" "0")))]

>   "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,

> -					  INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));

>      return "mla\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]

> @@ -1213,7 +1210,7 @@ (define_insn "*aarch64_mls_elt<mode>"

>  	   (match_operand:VDQHS 3 "register_operand" "w"))))]

>   "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "mls\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]

> @@ -1231,8 +1228,7 @@ (define_insn "*aarch64_mls_elt_<vswap_wi

>  	   (match_operand:VDQHS 3 "register_operand" "w"))))]

>   "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,

> -					  INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));

>      return "mls\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]

> @@ -1802,7 +1798,7 @@ (define_insn "*aarch64_fma4_elt<mode>"

>        (match_operand:VDQF 4 "register_operand" "0")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "fmla\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]

> @@ -1819,8 +1815,7 @@ (define_insn "*aarch64_fma4_elt_<vswap_w

>        (match_operand:VDQSF 4 "register_operand" "0")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,

> -					  INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));

>      return "fmla\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]

> @@ -1848,7 +1843,7 @@ (define_insn "*aarch64_fma4_elt_to_64v2d

>        (match_operand:DF 4 "register_operand" "0")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));

>      return "fmla\\t%0.2d, %3.2d, %1.2d[%2]";

>    }

>    [(set_attr "type" "neon_fp_mla_d_scalar_q")]

> @@ -1878,7 +1873,7 @@ (define_insn "*aarch64_fnma4_elt<mode>"

>        (match_operand:VDQF 4 "register_operand" "0")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "fmls\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]

> @@ -1896,8 +1891,7 @@ (define_insn "*aarch64_fnma4_elt_<vswap_

>        (match_operand:VDQSF 4 "register_operand" "0")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,

> -					  INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));

>      return "fmls\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";

>    }

>    [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]

> @@ -1927,7 +1921,7 @@ (define_insn "*aarch64_fnma4_elt_to_64v2

>        (match_operand:DF 4 "register_operand" "0")))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));

>      return "fmls\\t%0.2d, %3.2d, %1.2d[%2]";

>    }

>    [(set_attr "type" "neon_fp_mla_d_scalar_q")]

> @@ -2260,7 +2254,7 @@ (define_expand "reduc_plus_scal_<mode>"

>  	       UNSPEC_ADDV)]

>    "TARGET_SIMD"

>    {

> -    rtx elt = GEN_INT (ENDIAN_LANE_N (<MODE>mode, 0));

> +    rtx elt = aarch64_endian_lane_rtx (<MODE>mode, 0);

>      rtx scratch = gen_reg_rtx (<MODE>mode);

>      emit_insn (gen_aarch64_reduc_plus_internal<mode> (scratch, operands[1]));

>      emit_insn (gen_aarch64_get_lane<mode> (operands[0], scratch, elt));

> @@ -2311,7 +2305,7 @@ (define_expand "reduc_plus_scal_v4sf"

>  		    UNSPEC_FADDV))]

>   "TARGET_SIMD"

>  {

> -  rtx elt = GEN_INT (ENDIAN_LANE_N (V4SFmode, 0));

> +  rtx elt = aarch64_endian_lane_rtx (V4SFmode, 0);

>    rtx scratch = gen_reg_rtx (V4SFmode);

>    emit_insn (gen_aarch64_faddpv4sf (scratch, operands[1], operands[1]));

>    emit_insn (gen_aarch64_faddpv4sf (scratch, scratch, scratch));

> @@ -2353,7 +2347,7 @@ (define_expand "reduc_<maxmin_uns>_scal_

>  		  FMAXMINV)]

>    "TARGET_SIMD"

>    {

> -    rtx elt = GEN_INT (ENDIAN_LANE_N (<MODE>mode, 0));

> +    rtx elt = aarch64_endian_lane_rtx (<MODE>mode, 0);

>      rtx scratch = gen_reg_rtx (<MODE>mode);

>      emit_insn (gen_aarch64_reduc_<maxmin_uns>_internal<mode> (scratch,

>  							      operands[1]));

> @@ -2369,7 +2363,7 @@ (define_expand "reduc_<maxmin_uns>_scal_

>  		    MAXMINV)]

>    "TARGET_SIMD"

>    {

> -    rtx elt = GEN_INT (ENDIAN_LANE_N (<MODE>mode, 0));

> +    rtx elt = aarch64_endian_lane_rtx (<MODE>mode, 0);

>      rtx scratch = gen_reg_rtx (<MODE>mode);

>      emit_insn (gen_aarch64_reduc_<maxmin_uns>_internal<mode> (scratch,

>  							      operands[1]));

> @@ -2894,7 +2888,7 @@ (define_insn "*aarch64_get_lane_extend<G

>  	    (parallel [(match_operand:SI 2 "immediate_operand" "i")]))))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "smov\\t%<GPI:w>0, %1.<VDQQH:Vetype>[%2]";

>    }

>    [(set_attr "type" "neon_to_gp<q>")]

> @@ -2908,7 +2902,7 @@ (define_insn "*aarch64_get_lane_zero_ext

>  	    (parallel [(match_operand:SI 2 "immediate_operand" "i")]))))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      return "umov\\t%w0, %1.<Vetype>[%2]";

>    }

>    [(set_attr "type" "neon_to_gp<q>")]

> @@ -2924,7 +2918,7 @@ (define_insn "aarch64_get_lane<mode>"

>  	  (parallel [(match_operand:SI 2 "immediate_operand" "i, i, i")])))]

>    "TARGET_SIMD"

>    {

> -    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));

> +    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));

>      switch (which_alternative)

>        {

>  	case 0:

> @@ -3300,8 +3294,7 @@ (define_insn "*aarch64_mulx_elt_<vswap_w

>  	 UNSPEC_FMULX))]

>    "TARGET_SIMD"

>    {

> -    operands[3] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,

> -					  INTVAL (operands[3])));

> +    operands[3] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[3]));

>      return "fmulx\t%<v>0<Vmtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";

>    }

>    [(set_attr "type" "neon_fp_mul_<Vetype>_scalar<q>")]

> @@ -3320,7 +3313,7 @@ (define_insn "*aarch64_mulx_elt<mode>"

>  	 UNSPEC_FMULX))]

>    "TARGET_SIMD"

>    {

> -    operands[3] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[3])));

> +    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));

>      return "fmulx\t%<v>0<Vmtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";

>    }

>    [(set_attr "type" "neon_fp_mul_<Vetype><q>")]

> @@ -3354,7 +3347,7 @@ (define_insn "*aarch64_vgetfmulx<mode>"

>  	 UNSPEC_FMULX))]

>    "TARGET_SIMD"

>    {

> -    operands[3] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[3])));

> +    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));

>      return "fmulx\t%<Vetype>0, %<Vetype>1, %2.<Vetype>[%3]";

>    }

>    [(set_attr "type" "fmul<Vetype>")]

> @@ -3440,7 +3433,7 @@ (define_insn "aarch64_sq<r>dmulh_lane<mo

>  	 VQDMULH))]

>    "TARGET_SIMD"

>    "*

> -   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));

> +   operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));

>     return \"sq<r>dmulh\\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[%3]\";"

>    [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]

>  )

> @@ -3455,7 +3448,7 @@ (define_insn "aarch64_sq<r>dmulh_laneq<m

>  	 VQDMULH))]

>    "TARGET_SIMD"

>    "*

> -   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));

> +   operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));

>     return \"sq<r>dmulh\\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[%3]\";"

>    [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]

>  )

> @@ -3470,7 +3463,7 @@ (define_insn "aarch64_sq<r>dmulh_lane<mo

>  	 VQDMULH))]

>    "TARGET_SIMD"

>    "*

> -   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));

> +   operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));

>     return \"sq<r>dmulh\\t%<v>0, %<v>1, %2.<v>[%3]\";"

>    [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]

>  )

> @@ -3485,7 +3478,7 @@ (define_insn "aarch64_sq<r>dmulh_laneq<m

>  	 VQDMULH))]

>    "TARGET_SIMD"

>    "*

> -   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));

> +   operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));

>     return \"sq<r>dmulh\\t%<v>0, %<v>1, %2.<v>[%3]\";"

>    [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]

>  )

> @@ -3517,7 +3510,7 @@ (define_insn "aarch64_sqrdml<SQRDMLH_AS:

>  	  SQRDMLH_AS))]

>     "TARGET_SIMD_RDMA"

>     {

> -     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));

> +     operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));

>       return

>        "sqrdml<SQRDMLH_AS:rdma_as>h\\t%0.<Vtype>, %2.<Vtype>, %3.<Vetype>[%4]";

>     }

> @@ -3535,7 +3528,7 @@ (define_insn "aarch64_sqrdml<SQRDMLH_AS:

>  	  SQRDMLH_AS))]

>     "TARGET_SIMD_RDMA"

>     {

> -     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));

> +     operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));

>       return

>        "sqrdml<SQRDMLH_AS:rdma_as>h\\t%<v>0, %<v>2, %3.<Vetype>[%4]";

>     }

> @@ -3555,7 +3548,7 @@ (define_insn "aarch64_sqrdml<SQRDMLH_AS:

>  	  SQRDMLH_AS))]

>     "TARGET_SIMD_RDMA"

>     {

> -     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));

> +     operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));

>       return

>        "sqrdml<SQRDMLH_AS:rdma_as>h\\t%0.<Vtype>, %2.<Vtype>, %3.<Vetype>[%4]";

>     }

> @@ -3573,7 +3566,7 @@ (define_insn "aarch64_sqrdml<SQRDMLH_AS:

>  	  SQRDMLH_AS))]

>     "TARGET_SIMD_RDMA"

>     {

> -     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));

> +     operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));

>       return

>        "sqrdml<SQRDMLH_AS:rdma_as>h\\t%<v>0, %<v>2, %3.<v>[%4]";

>     }

> @@ -3617,7 +3610,7 @@ (define_insn "aarch64_sqdml<SBINQOPS:as>

>  	    (const_int 1))))]

>    "TARGET_SIMD"

>    {

> -    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));

> +    operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));

>      return

>        "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";

>    }

> @@ -3641,7 +3634,7 @@ (define_insn "aarch64_sqdml<SBINQOPS:as>

>  	    (const_int 1))))]

>    "TARGET_SIMD"

>    {

> -    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));

> +    operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));

>      return

>        "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";

>    }

> @@ -3664,7 +3657,7 @@ (define_insn "aarch64_sqdml<SBINQOPS:as>

>  	    (const_int 1))))]

>    "TARGET_SIMD"

>    {

> -    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));

> +    operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));

>      return

>        "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";

>    }

> @@ -3687,7 +3680,7 @@ (define_insn "aarch64_sqdml<SBINQOPS:as>

>  	    (const_int 1))))]

>    "TARGET_SIMD"

>    {

> -    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));

> +    operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));

>      return

>        "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";

>    }

> @@ -3782,7 +3775,7 @@ (define_insn "aarch64_sqdml<SBINQOPS:as>

>  	      (const_int 1))))]

>    "TARGET_SIMD"

>    {

> -    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));

> +    operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));

>      return

>       "sqdml<SBINQOPS:as>l2\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";

>    }

> @@ -3808,7 +3801,7 @@ (define_insn "aarch64_sqdml<SBINQOPS:as>

>  	      (const_int 1))))]

>    "TARGET_SIMD"

>    {

> -    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));

> +    operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));

>      return

>       "sqdml<SBINQOPS:as>l2\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";

>    }

> @@ -3955,7 +3948,7 @@ (define_insn "aarch64_sqdmull_lane<mode>

>  	     (const_int 1)))]

>    "TARGET_SIMD"

>    {

> -    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));

> +    operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));

>      return "sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";

>    }

>    [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]

> @@ -3976,7 +3969,7 @@ (define_insn "aarch64_sqdmull_laneq<mode

>  	     (const_int 1)))]

>    "TARGET_SIMD"

>    {

> -    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));

> +    operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));

>      return "sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";

>    }

>    [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]

> @@ -3996,7 +3989,7 @@ (define_insn "aarch64_sqdmull_lane<mode>

>  	     (const_int 1)))]

>    "TARGET_SIMD"

>    {

> -    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));

> +    operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));

>      return "sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";

>    }

>    [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]

Patch

Index: gcc/config/aarch64/aarch64-protos.h
===================================================================
--- gcc/config/aarch64/aarch64-protos.h	2017-10-27 14:11:56.993658452 +0100
+++ gcc/config/aarch64/aarch64-protos.h	2017-10-27 14:12:00.601693018 +0100
@@ -437,6 +437,7 @@  void aarch64_simd_emit_reg_reg_move (rtx
 rtx aarch64_simd_expand_builtin (int, tree, rtx);
 
 void aarch64_simd_lane_bounds (rtx, HOST_WIDE_INT, HOST_WIDE_INT, const_tree);
+rtx aarch64_endian_lane_rtx (machine_mode, unsigned int);
 
 void aarch64_split_128bit_move (rtx, rtx);
 
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c	2017-10-27 14:11:56.995515870 +0100
+++ gcc/config/aarch64/aarch64.c	2017-10-27 14:12:00.603550436 +0100
@@ -12083,6 +12083,15 @@  aarch64_simd_lane_bounds (rtx operand, H
   }
 }
 
+/* Perform endian correction on lane number N, which indexes a vector
+   of mode MODE, and return the result as an SImode rtx.  */
+
+rtx
+aarch64_endian_lane_rtx (machine_mode mode, unsigned int n)
+{
+  return gen_int_mode (ENDIAN_LANE_N (GET_MODE_NUNITS (mode), n), SImode);
+}
+
 /* Return TRUE if OP is a valid vector addressing mode.  */
 bool
 aarch64_simd_mem_operand_p (rtx op)
Index: gcc/config/aarch64/aarch64.h
===================================================================
--- gcc/config/aarch64/aarch64.h	2017-10-27 14:05:38.132936808 +0100
+++ gcc/config/aarch64/aarch64.h	2017-10-27 14:12:00.603550436 +0100
@@ -910,8 +910,8 @@  #define AARCH64_VALID_SIMD_QREG_MODE(MOD
    || (MODE) == V4SFmode || (MODE) == V8HFmode || (MODE) == V2DImode \
    || (MODE) == V2DFmode)
 
-#define ENDIAN_LANE_N(mode, n)  \
-  (BYTES_BIG_ENDIAN ? GET_MODE_NUNITS (mode) - 1 - n : n)
+#define ENDIAN_LANE_N(NUNITS, N) \
+  (BYTES_BIG_ENDIAN ? NUNITS - 1 - N : N)
 
 /* Support for a configure-time default CPU, etc.  We currently support
    --with-arch and --with-cpu.  Both are ignored if either is specified
Index: gcc/config/aarch64/iterators.md
===================================================================
--- gcc/config/aarch64/iterators.md	2017-10-27 14:11:56.995515870 +0100
+++ gcc/config/aarch64/iterators.md	2017-10-27 14:12:00.604479145 +0100
@@ -438,6 +438,17 @@  (define_mode_attr vw2 [(DI "") (QI "h")
 (define_mode_attr rtn [(DI "d") (SI "")])
 (define_mode_attr vas [(DI "") (SI ".2s")])
 
+;; Map a vector to the number of units in it, if the size of the mode
+;; is constant.
+(define_mode_attr nunits [(V8QI "8") (V16QI "16")
+			  (V4HI "4") (V8HI "8")
+			  (V2SI "2") (V4SI "4")
+				     (V2DI "2")
+			  (V4HF "4") (V8HF "8")
+			  (V2SF "2") (V4SF "4")
+			  (V1DF "1") (V2DF "2")
+			  (DI "1") (DF "1")])
+
 ;; Map a mode to the number of bits in it, if the size of the mode
 ;; is constant.
 (define_mode_attr bitsize [(V8QI "64") (V16QI "128")
Index: gcc/config/aarch64/aarch64-builtins.c
===================================================================
--- gcc/config/aarch64/aarch64-builtins.c	2017-10-27 14:05:38.132936808 +0100
+++ gcc/config/aarch64/aarch64-builtins.c	2017-10-27 14:12:00.601693018 +0100
@@ -1069,8 +1069,8 @@  aarch64_simd_expand_args (rtx target, in
 					    GET_MODE_NUNITS (builtin_mode),
 					    exp);
 		  /* Keep to GCC-vector-extension lane indices in the RTL.  */
-		  op[opc] =
-		    GEN_INT (ENDIAN_LANE_N (builtin_mode, INTVAL (op[opc])));
+		  op[opc] = aarch64_endian_lane_rtx (builtin_mode,
+						     INTVAL (op[opc]));
 		}
 	      goto constant_arg;
 
@@ -1083,7 +1083,7 @@  aarch64_simd_expand_args (rtx target, in
 		  aarch64_simd_lane_bounds (op[opc],
 					    0, GET_MODE_NUNITS (vmode), exp);
 		  /* Keep to GCC-vector-extension lane indices in the RTL.  */
-		  op[opc] = GEN_INT (ENDIAN_LANE_N (vmode, INTVAL (op[opc])));
+		  op[opc] = aarch64_endian_lane_rtx (vmode, INTVAL (op[opc]));
 		}
 	      /* Fall through - if the lane index isn't a constant then
 		 the next case will error.  */
Index: gcc/config/aarch64/aarch64-simd.md
===================================================================
--- gcc/config/aarch64/aarch64-simd.md	2017-10-27 14:11:56.994587161 +0100
+++ gcc/config/aarch64/aarch64-simd.md	2017-10-27 14:12:00.602621727 +0100
@@ -80,7 +80,7 @@  (define_insn "aarch64_dup_lane<mode>"
           )))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "dup\\t%0.<Vtype>, %1.<Vetype>[%2]";
   }
   [(set_attr "type" "neon_dup<q>")]
@@ -95,8 +95,7 @@  (define_insn "aarch64_dup_lane_<vswap_wi
           )))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,
-					  INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));
     return "dup\\t%0.<Vtype>, %1.<Vetype>[%2]";
   }
   [(set_attr "type" "neon_dup<q>")]
@@ -501,7 +500,7 @@  (define_insn "*aarch64_mul3_elt<mode>"
       (match_operand:VMUL 3 "register_operand" "w")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "<f>mul\\t%0.<Vtype>, %3.<Vtype>, %1.<Vetype>[%2]";
   }
   [(set_attr "type" "neon<fp>_mul_<stype>_scalar<q>")]
@@ -517,8 +516,7 @@  (define_insn "*aarch64_mul3_elt_<vswap_w
       (match_operand:VMUL_CHANGE_NLANES 3 "register_operand" "w")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,
-					  INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));
     return "<f>mul\\t%0.<Vtype>, %3.<Vtype>, %1.<Vetype>[%2]";
   }
   [(set_attr "type" "neon<fp>_mul_<Vetype>_scalar<q>")]
@@ -571,7 +569,7 @@  (define_insn "*aarch64_mul3_elt_to_64v2d
        (match_operand:DF 3 "register_operand" "w")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));
     return "fmul\\t%0.2d, %3.2d, %1.d[%2]";
   }
   [(set_attr "type" "neon_fp_mul_d_scalar_q")]
@@ -706,7 +704,7 @@  (define_insn "aarch64_simd_vec_set<mode>
 	    (match_operand:SI 2 "immediate_operand" "i,i,i")))]
   "TARGET_SIMD"
   {
-   int elt = ENDIAN_LANE_N (<MODE>mode, exact_log2 (INTVAL (operands[2])));
+   int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
    switch (which_alternative)
      {
@@ -1072,7 +1070,7 @@  (define_insn "aarch64_simd_vec_setv2di"
 	    (match_operand:SI 2 "immediate_operand" "i,i")))]
   "TARGET_SIMD"
   {
-    int elt = ENDIAN_LANE_N (V2DImode, exact_log2 (INTVAL (operands[2])));
+    int elt = ENDIAN_LANE_N (2, exact_log2 (INTVAL (operands[2])));
     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
     switch (which_alternative)
       {
@@ -1109,7 +1107,7 @@  (define_insn "aarch64_simd_vec_set<mode>
 	    (match_operand:SI 2 "immediate_operand" "i")))]
   "TARGET_SIMD"
   {
-    int elt = ENDIAN_LANE_N (<MODE>mode, exact_log2 (INTVAL (operands[2])));
+    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
 
     operands[2] = GEN_INT ((HOST_WIDE_INT)1 << elt);
     return "ins\t%0.<Vetype>[%p2], %1.<Vetype>[0]";
@@ -1154,7 +1152,7 @@  (define_insn "*aarch64_mla_elt<mode>"
 	 (match_operand:VDQHS 4 "register_operand" "0")))]
  "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "mla\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]
@@ -1172,8 +1170,7 @@  (define_insn "*aarch64_mla_elt_<vswap_wi
 	 (match_operand:VDQHS 4 "register_operand" "0")))]
  "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,
-					  INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));
     return "mla\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]
@@ -1213,7 +1210,7 @@  (define_insn "*aarch64_mls_elt<mode>"
 	   (match_operand:VDQHS 3 "register_operand" "w"))))]
  "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "mls\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]
@@ -1231,8 +1228,7 @@  (define_insn "*aarch64_mls_elt_<vswap_wi
 	   (match_operand:VDQHS 3 "register_operand" "w"))))]
  "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,
-					  INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));
     return "mls\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_mla_<Vetype>_scalar<q>")]
@@ -1802,7 +1798,7 @@  (define_insn "*aarch64_fma4_elt<mode>"
       (match_operand:VDQF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "fmla\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]
@@ -1819,8 +1815,7 @@  (define_insn "*aarch64_fma4_elt_<vswap_w
       (match_operand:VDQSF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,
-					  INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));
     return "fmla\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]
@@ -1848,7 +1843,7 @@  (define_insn "*aarch64_fma4_elt_to_64v2d
       (match_operand:DF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));
     return "fmla\\t%0.2d, %3.2d, %1.2d[%2]";
   }
   [(set_attr "type" "neon_fp_mla_d_scalar_q")]
@@ -1878,7 +1873,7 @@  (define_insn "*aarch64_fnma4_elt<mode>"
       (match_operand:VDQF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "fmls\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]
@@ -1896,8 +1891,7 @@  (define_insn "*aarch64_fnma4_elt_<vswap_
       (match_operand:VDQSF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,
-					  INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[2]));
     return "fmls\\t%0.<Vtype>, %3.<Vtype>, %1.<Vtype>[%2]";
   }
   [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]
@@ -1927,7 +1921,7 @@  (define_insn "*aarch64_fnma4_elt_to_64v2
       (match_operand:DF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));
     return "fmls\\t%0.2d, %3.2d, %1.2d[%2]";
   }
   [(set_attr "type" "neon_fp_mla_d_scalar_q")]
@@ -2260,7 +2254,7 @@  (define_expand "reduc_plus_scal_<mode>"
 	       UNSPEC_ADDV)]
   "TARGET_SIMD"
   {
-    rtx elt = GEN_INT (ENDIAN_LANE_N (<MODE>mode, 0));
+    rtx elt = aarch64_endian_lane_rtx (<MODE>mode, 0);
     rtx scratch = gen_reg_rtx (<MODE>mode);
     emit_insn (gen_aarch64_reduc_plus_internal<mode> (scratch, operands[1]));
     emit_insn (gen_aarch64_get_lane<mode> (operands[0], scratch, elt));
@@ -2311,7 +2305,7 @@  (define_expand "reduc_plus_scal_v4sf"
 		    UNSPEC_FADDV))]
  "TARGET_SIMD"
 {
-  rtx elt = GEN_INT (ENDIAN_LANE_N (V4SFmode, 0));
+  rtx elt = aarch64_endian_lane_rtx (V4SFmode, 0);
   rtx scratch = gen_reg_rtx (V4SFmode);
   emit_insn (gen_aarch64_faddpv4sf (scratch, operands[1], operands[1]));
   emit_insn (gen_aarch64_faddpv4sf (scratch, scratch, scratch));
@@ -2353,7 +2347,7 @@  (define_expand "reduc_<maxmin_uns>_scal_
 		  FMAXMINV)]
   "TARGET_SIMD"
   {
-    rtx elt = GEN_INT (ENDIAN_LANE_N (<MODE>mode, 0));
+    rtx elt = aarch64_endian_lane_rtx (<MODE>mode, 0);
     rtx scratch = gen_reg_rtx (<MODE>mode);
     emit_insn (gen_aarch64_reduc_<maxmin_uns>_internal<mode> (scratch,
 							      operands[1]));
@@ -2369,7 +2363,7 @@  (define_expand "reduc_<maxmin_uns>_scal_
 		    MAXMINV)]
   "TARGET_SIMD"
   {
-    rtx elt = GEN_INT (ENDIAN_LANE_N (<MODE>mode, 0));
+    rtx elt = aarch64_endian_lane_rtx (<MODE>mode, 0);
     rtx scratch = gen_reg_rtx (<MODE>mode);
     emit_insn (gen_aarch64_reduc_<maxmin_uns>_internal<mode> (scratch,
 							      operands[1]));
@@ -2894,7 +2888,7 @@  (define_insn "*aarch64_get_lane_extend<G
 	    (parallel [(match_operand:SI 2 "immediate_operand" "i")]))))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "smov\\t%<GPI:w>0, %1.<VDQQH:Vetype>[%2]";
   }
   [(set_attr "type" "neon_to_gp<q>")]
@@ -2908,7 +2902,7 @@  (define_insn "*aarch64_get_lane_zero_ext
 	    (parallel [(match_operand:SI 2 "immediate_operand" "i")]))))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "umov\\t%w0, %1.<Vetype>[%2]";
   }
   [(set_attr "type" "neon_to_gp<q>")]
@@ -2924,7 +2918,7 @@  (define_insn "aarch64_get_lane<mode>"
 	  (parallel [(match_operand:SI 2 "immediate_operand" "i, i, i")])))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     switch (which_alternative)
       {
 	case 0:
@@ -3300,8 +3294,7 @@  (define_insn "*aarch64_mulx_elt_<vswap_w
 	 UNSPEC_FMULX))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<VSWAP_WIDTH>mode,
-					  INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<VSWAP_WIDTH>mode, INTVAL (operands[3]));
     return "fmulx\t%<v>0<Vmtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_fp_mul_<Vetype>_scalar<q>")]
@@ -3320,7 +3313,7 @@  (define_insn "*aarch64_mulx_elt<mode>"
 	 UNSPEC_FMULX))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));
     return "fmulx\t%<v>0<Vmtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_fp_mul_<Vetype><q>")]
@@ -3354,7 +3347,7 @@  (define_insn "*aarch64_vgetfmulx<mode>"
 	 UNSPEC_FMULX))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));
     return "fmulx\t%<Vetype>0, %<Vetype>1, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "fmul<Vetype>")]
@@ -3440,7 +3433,7 @@  (define_insn "aarch64_sq<r>dmulh_lane<mo
 	 VQDMULH))]
   "TARGET_SIMD"
   "*
-   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));
+   operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));
    return \"sq<r>dmulh\\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[%3]\";"
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]
 )
@@ -3455,7 +3448,7 @@  (define_insn "aarch64_sq<r>dmulh_laneq<m
 	 VQDMULH))]
   "TARGET_SIMD"
   "*
-   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));
+   operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));
    return \"sq<r>dmulh\\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[%3]\";"
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]
 )
@@ -3470,7 +3463,7 @@  (define_insn "aarch64_sq<r>dmulh_lane<mo
 	 VQDMULH))]
   "TARGET_SIMD"
   "*
-   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));
+   operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));
    return \"sq<r>dmulh\\t%<v>0, %<v>1, %2.<v>[%3]\";"
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]
 )
@@ -3485,7 +3478,7 @@  (define_insn "aarch64_sq<r>dmulh_laneq<m
 	 VQDMULH))]
   "TARGET_SIMD"
   "*
-   operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));
+   operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));
    return \"sq<r>dmulh\\t%<v>0, %<v>1, %2.<v>[%3]\";"
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]
 )
@@ -3517,7 +3510,7 @@  (define_insn "aarch64_sqrdml<SQRDMLH_AS:
 	  SQRDMLH_AS))]
    "TARGET_SIMD_RDMA"
    {
-     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));
+     operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));
      return
       "sqrdml<SQRDMLH_AS:rdma_as>h\\t%0.<Vtype>, %2.<Vtype>, %3.<Vetype>[%4]";
    }
@@ -3535,7 +3528,7 @@  (define_insn "aarch64_sqrdml<SQRDMLH_AS:
 	  SQRDMLH_AS))]
    "TARGET_SIMD_RDMA"
    {
-     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));
+     operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));
      return
       "sqrdml<SQRDMLH_AS:rdma_as>h\\t%<v>0, %<v>2, %3.<Vetype>[%4]";
    }
@@ -3555,7 +3548,7 @@  (define_insn "aarch64_sqrdml<SQRDMLH_AS:
 	  SQRDMLH_AS))]
    "TARGET_SIMD_RDMA"
    {
-     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));
+     operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));
      return
       "sqrdml<SQRDMLH_AS:rdma_as>h\\t%0.<Vtype>, %2.<Vtype>, %3.<Vetype>[%4]";
    }
@@ -3573,7 +3566,7 @@  (define_insn "aarch64_sqrdml<SQRDMLH_AS:
 	  SQRDMLH_AS))]
    "TARGET_SIMD_RDMA"
    {
-     operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));
+     operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));
      return
       "sqrdml<SQRDMLH_AS:rdma_as>h\\t%<v>0, %<v>2, %3.<v>[%4]";
    }
@@ -3617,7 +3610,7 @@  (define_insn "aarch64_sqdml<SBINQOPS:as>
 	    (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));
     return
       "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";
   }
@@ -3641,7 +3634,7 @@  (define_insn "aarch64_sqdml<SBINQOPS:as>
 	    (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));
     return
       "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";
   }
@@ -3664,7 +3657,7 @@  (define_insn "aarch64_sqdml<SBINQOPS:as>
 	    (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));
     return
       "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";
   }
@@ -3687,7 +3680,7 @@  (define_insn "aarch64_sqdml<SBINQOPS:as>
 	    (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));
     return
       "sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";
   }
@@ -3782,7 +3775,7 @@  (define_insn "aarch64_sqdml<SBINQOPS:as>
 	      (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[4]));
     return
      "sqdml<SBINQOPS:as>l2\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";
   }
@@ -3808,7 +3801,7 @@  (define_insn "aarch64_sqdml<SBINQOPS:as>
 	      (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[4]));
     return
      "sqdml<SBINQOPS:as>l2\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]";
   }
@@ -3955,7 +3948,7 @@  (define_insn "aarch64_sqdmull_lane<mode>
 	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));
     return "sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
@@ -3976,7 +3969,7 @@  (define_insn "aarch64_sqdmull_laneq<mode
 	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));
     return "sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
@@ -3996,7 +3989,7 @@  (define_insn "aarch64_sqdmull_lane<mode>
 	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));
     return "sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
@@ -4016,7 +4009,7 @@  (define_insn "aarch64_sqdmull_laneq<mode
 	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));
     return "sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
@@ -4094,7 +4087,7 @@  (define_insn "aarch64_sqdmull2_lane<mode
 	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCOND>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<VCOND>mode, INTVAL (operands[3]));
     return "sqdmull2\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
@@ -4117,7 +4110,7 @@  (define_insn "aarch64_sqdmull2_laneq<mod
 	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<VCONQ>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<VCONQ>mode, INTVAL (operands[3]));
     return "sqdmull2\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]";
   }
   [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
@@ -4623,7 +4616,7 @@  (define_insn "aarch64_vec_load_lanesoi_l
 		   UNSPEC_LD2_LANE))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));
     return "ld2\\t{%S0.<Vetype> - %T0.<Vetype>}[%3], %1";
   }
   [(set_attr "type" "neon_load2_one_lane")]
@@ -4667,7 +4660,7 @@  (define_insn "aarch64_vec_store_lanesoi_
 		   UNSPEC_ST2_LANE))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "st2\\t{%S1.<Vetype> - %T1.<Vetype>}[%2], %0";
   }
   [(set_attr "type" "neon_store2_one_lane<q>")]
@@ -4721,7 +4714,7 @@  (define_insn "aarch64_vec_load_lanesci_l
 		   UNSPEC_LD3_LANE))]
   "TARGET_SIMD"
 {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));
     return "ld3\\t{%S0.<Vetype> - %U0.<Vetype>}[%3], %1";
 }
   [(set_attr "type" "neon_load3_one_lane")]
@@ -4765,7 +4758,7 @@  (define_insn "aarch64_vec_store_lanesci_
 		    UNSPEC_ST3_LANE))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "st3\\t{%S1.<Vetype> - %U1.<Vetype>}[%2], %0";
   }
   [(set_attr "type" "neon_store3_one_lane<q>")]
@@ -4819,7 +4812,7 @@  (define_insn "aarch64_vec_load_lanesxi_l
 		   UNSPEC_LD4_LANE))]
   "TARGET_SIMD"
 {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));
     return "ld4\\t{%S0.<Vetype> - %V0.<Vetype>}[%3], %1";
 }
   [(set_attr "type" "neon_load4_one_lane")]
@@ -4863,7 +4856,7 @@  (define_insn "aarch64_vec_store_lanesxi_
 		    UNSPEC_ST4_LANE))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (<MODE>mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[2]));
     return "st4\\t{%S1.<Vetype> - %V1.<Vetype>}[%2], %0";
   }
   [(set_attr "type" "neon_store4_one_lane<q>")]