From patchwork Fri Oct 27 13:27:50 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 117347
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Cc: richard.earnshaw@arm.com, james.greenhalgh@arm.com, marcus.shawcroft@arm.com
Subject: [06/nn] [AArch64] Add an endian_lane_rtx helper routine
Date: Fri, 27 Oct 2017 14:27:50 +0100
In-Reply-To: <873764d8y3.fsf@linaro.org> (Richard Sandiford's message of "Fri, 27 Oct 2017 14:19:48 +0100")
References: <873764d8y3.fsf@linaro.org>
Message-ID: <87a80cbu09.fsf@linaro.org>

Later patches turn the number of vector units into a poly_int.  We
deliberately don't support applying GEN_INT to those (except in target
code that doesn't distinguish between poly_ints and normal constants);
gen_int_mode needs to be used instead.

This patch therefore replaces instances of:

  GEN_INT (ENDIAN_LANE_N (builtin_mode, INTVAL (op[opc])))

with uses of a new endian_lane_rtx function.

2017-10-26  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* config/aarch64/aarch64-protos.h (aarch64_endian_lane_rtx): Declare.
	* config/aarch64/aarch64.c (aarch64_endian_lane_rtx): New function.
	* config/aarch64/aarch64.h (ENDIAN_LANE_N): Take the number
	of units rather than the mode.
	* config/aarch64/iterators.md (nunits): New mode attribute.
	* config/aarch64/aarch64-builtins.c (aarch64_simd_expand_args):
	Use aarch64_endian_lane_rtx instead of GEN_INT (ENDIAN_LANE_N ...).
	* config/aarch64/aarch64-simd.md (aarch64_dup_lane)
	(aarch64_dup_lane_, *aarch64_mul3_elt)
	(*aarch64_mul3_elt_): Likewise.
	(*aarch64_mul3_elt_to_64v2df, *aarch64_mla_elt): Likewise.
	(*aarch64_mla_elt_, *aarch64_mls_elt)
	(*aarch64_mls_elt_, *aarch64_fma4_elt)
	(*aarch64_fma4_elt_): Likewise.
	(*aarch64_fma4_elt_to_64v2df, *aarch64_fnma4_elt): Likewise.
	(*aarch64_fnma4_elt_): Likewise.
	(*aarch64_fnma4_elt_to_64v2df, reduc_plus_scal_): Likewise.
	(reduc_plus_scal_v4sf, reduc__scal_): Likewise.
	(reduc__scal_): Likewise.
	(*aarch64_get_lane_extend): Likewise.
	(*aarch64_get_lane_zero_extendsi): Likewise.
	(aarch64_get_lane, *aarch64_mulx_elt_)
	(*aarch64_mulx_elt, *aarch64_vgetfmulx): Likewise.
	(aarch64_sqdmulh_lane, aarch64_sqdmulh_laneq)
	(aarch64_sqrdmlh_lane): Likewise.
	(aarch64_sqrdmlh_laneq): Likewise.
	(aarch64_sqdmll_lane): Likewise.
	(aarch64_sqdmll_laneq): Likewise.
	(aarch64_sqdmll2_lane_internal): Likewise.
	(aarch64_sqdmll2_laneq_internal): Likewise.
	(aarch64_sqdmull_lane, aarch64_sqdmull_laneq): Likewise.
	(aarch64_sqdmull2_lane_internal): Likewise.
	(aarch64_sqdmull2_laneq_internal): Likewise.
	(aarch64_vec_load_lanesoi_lane): Likewise.
	(aarch64_vec_store_lanesoi_lane): Likewise.
	(aarch64_vec_load_lanesci_lane): Likewise.
	(aarch64_vec_store_lanesci_lane): Likewise.
	(aarch64_vec_load_lanesxi_lane): Likewise.
	(aarch64_vec_store_lanesxi_lane): Likewise.
	(aarch64_simd_vec_set): Update use of ENDIAN_LANE_N.
	(aarch64_simd_vec_setv2di): Likewise.
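(Editorial aside, not part of the patch: for readers unfamiliar with the
big-endian lane flipping, the stand-alone C sketch below mimics the
arithmetic that the new ENDIAN_LANE_N macro performs and that
aarch64_endian_lane_rtx wraps in gen_int_mode.  The harness is invented
purely for illustration; in particular, bytes_big_endian is a plain
variable standing in for the BYTES_BIG_ENDIAN target macro.)

  #include <stdio.h>

  static int bytes_big_endian = 1;	/* stand-in for BYTES_BIG_ENDIAN */

  /* Same arithmetic as the new ENDIAN_LANE_N (NUNITS, N): map GCC's
     "architectural" lane N of an NUNITS-lane vector to the lane used in
     the register layout.  */
  static unsigned int
  endian_lane_n (unsigned int nunits, unsigned int n)
  {
    return bytes_big_endian ? nunits - 1 - n : n;
  }

  int
  main (void)
  {
    /* A V4SI vector has 4 units, so lane 0 maps to lane 3 on big-endian
       and to lane 0 on little-endian.  */
    for (unsigned int n = 0; n < 4; n++)
      printf ("lane %u -> %u\n", n, endian_lane_n (4, n));
    return 0;
  }

With bytes_big_endian set, this prints 0 -> 3, 1 -> 2, 2 -> 1, 3 -> 0;
with it clear, the mapping is the identity.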
Reviewed-by: James Greenhalgh

Index: gcc/config/aarch64/aarch64-protos.h
===================================================================
--- gcc/config/aarch64/aarch64-protos.h	2017-10-27 14:11:56.993658452 +0100
+++ gcc/config/aarch64/aarch64-protos.h	2017-10-27 14:12:00.601693018 +0100
@@ -437,6 +437,7 @@ void aarch64_simd_emit_reg_reg_move (rtx
 rtx aarch64_simd_expand_builtin (int, tree, rtx);
 void aarch64_simd_lane_bounds (rtx, HOST_WIDE_INT, HOST_WIDE_INT, const_tree);
+rtx aarch64_endian_lane_rtx (machine_mode, unsigned int);
 void aarch64_split_128bit_move (rtx, rtx);
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c	2017-10-27 14:11:56.995515870 +0100
+++ gcc/config/aarch64/aarch64.c	2017-10-27 14:12:00.603550436 +0100
@@ -12083,6 +12083,15 @@ aarch64_simd_lane_bounds (rtx operand, H
     }
 }
 
+/* Perform endian correction on lane number N, which indexes a vector
+   of mode MODE, and return the result as an SImode rtx.  */
+
+rtx
+aarch64_endian_lane_rtx (machine_mode mode, unsigned int n)
+{
+  return gen_int_mode (ENDIAN_LANE_N (GET_MODE_NUNITS (mode), n), SImode);
+}
+
 /* Return TRUE if OP is a valid vector addressing mode.  */
 bool
 aarch64_simd_mem_operand_p (rtx op)
Index: gcc/config/aarch64/aarch64.h
===================================================================
--- gcc/config/aarch64/aarch64.h	2017-10-27 14:05:38.132936808 +0100
+++ gcc/config/aarch64/aarch64.h	2017-10-27 14:12:00.603550436 +0100
@@ -910,8 +910,8 @@ #define AARCH64_VALID_SIMD_QREG_MODE(MOD
      || (MODE) == V4SFmode || (MODE) == V8HFmode || (MODE) == V2DImode \
      || (MODE) == V2DFmode)
 
-#define ENDIAN_LANE_N(mode, n)  \
-  (BYTES_BIG_ENDIAN ? GET_MODE_NUNITS (mode) - 1 - n : n)
+#define ENDIAN_LANE_N(NUNITS, N) \
+  (BYTES_BIG_ENDIAN ? NUNITS - 1 - N : N)
 
 /* Support for a configure-time default CPU, etc.  We currently support
    --with-arch and --with-cpu.  Both are ignored if either is specified
Index: gcc/config/aarch64/iterators.md
===================================================================
--- gcc/config/aarch64/iterators.md	2017-10-27 14:11:56.995515870 +0100
+++ gcc/config/aarch64/iterators.md	2017-10-27 14:12:00.604479145 +0100
@@ -438,6 +438,17 @@ (define_mode_attr vw2 [(DI "") (QI "h")
 (define_mode_attr rtn [(DI "d") (SI "")])
 (define_mode_attr vas [(DI "") (SI ".2s")])
 
+;; Map a vector to the number of units in it, if the size of the mode
+;; is constant.
+(define_mode_attr nunits [(V8QI "8") (V16QI "16")
+			  (V4HI "4") (V8HI "8")
+			  (V2SI "2") (V4SI "4")
+			  (V2DI "2")
+			  (V4HF "4") (V8HF "8")
+			  (V2SF "2") (V4SF "4")
+			  (V1DF "1") (V2DF "2")
+			  (DI "1") (DF "1")])
+
 ;; Map a mode to the number of bits in it, if the size of the mode
 ;; is constant.
 (define_mode_attr bitsize [(V8QI "64") (V16QI "128")
Index: gcc/config/aarch64/aarch64-builtins.c
===================================================================
--- gcc/config/aarch64/aarch64-builtins.c	2017-10-27 14:05:38.132936808 +0100
+++ gcc/config/aarch64/aarch64-builtins.c	2017-10-27 14:12:00.601693018 +0100
@@ -1069,8 +1069,8 @@ aarch64_simd_expand_args (rtx target, in
					GET_MODE_NUNITS (builtin_mode),
					exp);
	      /* Keep to GCC-vector-extension lane indices in the RTL.  */
-	      op[opc] =
-		GEN_INT (ENDIAN_LANE_N (builtin_mode, INTVAL (op[opc])));
+	      op[opc] = aarch64_endian_lane_rtx (builtin_mode,
+						 INTVAL (op[opc]));
	    }
	  goto constant_arg;
 
@@ -1083,7 +1083,7 @@ aarch64_simd_expand_args (rtx target, in
	      aarch64_simd_lane_bounds (op[opc], 0,
					GET_MODE_NUNITS (vmode), exp);
	      /* Keep to GCC-vector-extension lane indices in the RTL.  */
-	      op[opc] = GEN_INT (ENDIAN_LANE_N (vmode, INTVAL (op[opc])));
+	      op[opc] = aarch64_endian_lane_rtx (vmode, INTVAL (op[opc]));
	    }
	  /* Fall through - if the lane index isn't a constant then
	     the next case will error.  */
Index: gcc/config/aarch64/aarch64-simd.md
===================================================================
--- gcc/config/aarch64/aarch64-simd.md	2017-10-27 14:11:56.994587161 +0100
+++ gcc/config/aarch64/aarch64-simd.md	2017-10-27 14:12:00.602621727 +0100
@@ -80,7 +80,7 @@ (define_insn "aarch64_dup_lane"
	)))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "dup\\t%0., %1.[%2]";
   }
   [(set_attr "type" "neon_dup")]
@@ -95,8 +95,7 @@ (define_insn "aarch64_dup_lane_
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode,
-					   INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "dup\\t%0., %1.[%2]";
   }
   [(set_attr "type" "neon_dup")]
@@ -501,7 +500,7 @@ (define_insn "*aarch64_mul3_elt"
	  (match_operand:VMUL 3 "register_operand" "w")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "mul\\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_mul__scalar")]
@@ -517,8 +516,7 @@ (define_insn "*aarch64_mul3_elt_
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode,
-					   INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "mul\\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_mul__scalar")]
@@ -571,7 +569,7 @@ (define_insn "*aarch64_mul3_elt_to_64v2d
	  (match_operand:DF 3 "register_operand" "w")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));
     return "fmul\\t%0.2d, %3.2d, %1.d[%2]";
   }
   [(set_attr "type" "neon_fp_mul_d_scalar_q")]
@@ -706,7 +704,7 @@ (define_insn "aarch64_simd_vec_set
	  (match_operand:SI 2 "immediate_operand" "i,i,i")))]
   "TARGET_SIMD"
   {
-   int elt = ENDIAN_LANE_N (mode, exact_log2 (INTVAL (operands[2])));
+   int elt = ENDIAN_LANE_N (, exact_log2 (INTVAL (operands[2])));
    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
    switch (which_alternative)
      {
@@ -1072,7 +1070,7 @@ (define_insn "aarch64_simd_vec_setv2di"
	  (match_operand:SI 2 "immediate_operand" "i,i")))]
   "TARGET_SIMD"
   {
-   int elt = ENDIAN_LANE_N (V2DImode, exact_log2 (INTVAL (operands[2])));
+   int elt = ENDIAN_LANE_N (2, exact_log2 (INTVAL (operands[2])));
    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
    switch (which_alternative)
      {
@@ -1109,7 +1107,7 @@ (define_insn "aarch64_simd_vec_set
	  (match_operand:SI 2 "immediate_operand" "i")))]
   "TARGET_SIMD"
   {
-   int elt = ENDIAN_LANE_N (mode, exact_log2 (INTVAL (operands[2])));
+   int elt = ENDIAN_LANE_N (, exact_log2 (INTVAL (operands[2])));
    operands[2] = GEN_INT ((HOST_WIDE_INT)1 << elt);
 
    return "ins\t%0.[%p2], %1.[0]";
@@ -1154,7 +1152,7 @@ (define_insn "*aarch64_mla_elt"
	  (match_operand:VDQHS 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "mla\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_mla__scalar")]
@@ -1172,8 +1170,7 @@ (define_insn "*aarch64_mla_elt_
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode,
-					   INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "mla\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_mla__scalar")]
@@ -1213,7 +1210,7 @@ (define_insn "*aarch64_mls_elt"
	  (match_operand:VDQHS 3 "register_operand" "w"))))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "mls\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_mla__scalar")]
@@ -1231,8 +1228,7 @@ (define_insn "*aarch64_mls_elt_
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode,
-					   INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "mls\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_mla__scalar")]
@@ -1802,7 +1798,7 @@ (define_insn "*aarch64_fma4_elt"
	  (match_operand:VDQF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "fmla\\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_fp_mla__scalar")]
@@ -1819,8 +1815,7 @@ (define_insn "*aarch64_fma4_elt_
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode,
-					   INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "fmla\\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_fp_mla__scalar")]
@@ -1848,7 +1843,7 @@ (define_insn "*aarch64_fma4_elt_to_64v2d
	  (match_operand:DF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));
     return "fmla\\t%0.2d, %3.2d, %1.2d[%2]";
   }
   [(set_attr "type" "neon_fp_mla_d_scalar_q")]
@@ -1878,7 +1873,7 @@ (define_insn "*aarch64_fnma4_elt"
	  (match_operand:VDQF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "fmls\\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_fp_mla__scalar")]
@@ -1896,8 +1891,7 @@ (define_insn "*aarch64_fnma4_elt_
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode,
-					   INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "fmls\\t%0., %3., %1.[%2]";
   }
   [(set_attr "type" "neon_fp_mla__scalar")]
@@ -1927,7 +1921,7 @@ (define_insn "*aarch64_fnma4_elt_to_64v2
	  (match_operand:DF 4 "register_operand" "0")))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (V2DFmode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (V2DFmode, INTVAL (operands[2]));
     return "fmls\\t%0.2d, %3.2d, %1.2d[%2]";
   }
   [(set_attr "type" "neon_fp_mla_d_scalar_q")]
@@ -2260,7 +2254,7 @@ (define_expand "reduc_plus_scal_"
		     UNSPEC_ADDV)]
   "TARGET_SIMD"
   {
-    rtx elt = GEN_INT (ENDIAN_LANE_N (mode, 0));
+    rtx elt = aarch64_endian_lane_rtx (mode, 0);
     rtx scratch = gen_reg_rtx (mode);
     emit_insn (gen_aarch64_reduc_plus_internal (scratch, operands[1]));
     emit_insn (gen_aarch64_get_lane (operands[0], scratch, elt));
@@ -2311,7 +2305,7 @@ (define_expand "reduc_plus_scal_v4sf"
		    UNSPEC_FADDV))]
   "TARGET_SIMD"
   {
-    rtx elt = GEN_INT (ENDIAN_LANE_N (V4SFmode, 0));
+    rtx elt = aarch64_endian_lane_rtx (V4SFmode, 0);
     rtx scratch = gen_reg_rtx (V4SFmode);
     emit_insn (gen_aarch64_faddpv4sf (scratch, operands[1], operands[1]));
     emit_insn (gen_aarch64_faddpv4sf (scratch, scratch, scratch));
@@ -2353,7 +2347,7 @@ (define_expand "reduc__scal_
		    FMAXMINV)]
   "TARGET_SIMD"
   {
-    rtx elt = GEN_INT (ENDIAN_LANE_N (mode, 0));
+    rtx elt = aarch64_endian_lane_rtx (mode, 0);
     rtx scratch = gen_reg_rtx (mode);
     emit_insn (gen_aarch64_reduc__internal (scratch, operands[1]));
@@ -2369,7 +2363,7 @@ (define_expand "reduc__scal_
		    MAXMINV)]
   "TARGET_SIMD"
   {
-    rtx elt = GEN_INT (ENDIAN_LANE_N (mode, 0));
+    rtx elt = aarch64_endian_lane_rtx (mode, 0);
     rtx scratch = gen_reg_rtx (mode);
     emit_insn (gen_aarch64_reduc__internal (scratch, operands[1]));
@@ -2894,7 +2888,7 @@ (define_insn "*aarch64_get_lane_extend
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "smov\\t%0, %1.[%2]";
   }
   [(set_attr "type" "neon_to_gp")]
@@ -2908,7 +2902,7 @@ (define_insn "*aarch64_get_lane_zero_ext
	    (parallel [(match_operand:SI 2 "immediate_operand" "i")]))))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "umov\\t%w0, %1.[%2]";
   }
   [(set_attr "type" "neon_to_gp")]
@@ -2924,7 +2918,7 @@ (define_insn "aarch64_get_lane"
	  (parallel [(match_operand:SI 2 "immediate_operand" "i, i, i")])))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     switch (which_alternative)
       {
	case 0:
@@ -3300,8 +3294,7 @@ (define_insn "*aarch64_mulx_elt_
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode,
-					   INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "fmulx\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_fp_mul__scalar")]
@@ -3320,7 +3313,7 @@ (define_insn "*aarch64_mulx_elt"
	 UNSPEC_FMULX))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "fmulx\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_fp_mul_")]
@@ -3354,7 +3347,7 @@ (define_insn "*aarch64_vgetfmulx"
	 UNSPEC_FMULX))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "fmulx\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "fmul")]
@@ -3440,7 +3433,7 @@ (define_insn "aarch64_sqdmulh_lane
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
    return \"sqdmulh\\t%0., %1., %2.[%3]\";"
   [(set_attr "type" "neon_sat_mul__scalar")]
)
@@ -3455,7 +3448,7 @@ (define_insn "aarch64_sqdmulh_laneq
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
    return \"sqdmulh\\t%0., %1., %2.[%3]\";"
   [(set_attr "type" "neon_sat_mul__scalar")]
)
@@ -3470,7 +3463,7 @@ (define_insn "aarch64_sqdmulh_lane
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
    return \"sqdmulh\\t%0, %1, %2.[%3]\";"
   [(set_attr "type" "neon_sat_mul__scalar")]
)
@@ -3485,7 +3478,7 @@ (define_insn "aarch64_sqdmulh_laneq
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
    return \"sqdmulh\\t%0, %1, %2.[%3]\";"
   [(set_attr "type" "neon_sat_mul__scalar")]
)
@@ -3517,7 +3510,7 @@ (define_insn "aarch64_sqrdml
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqrdmlh\\t%0., %2., %3.[%4]";
   }
@@ -3535,7 +3528,7 @@ (define_insn "aarch64_sqrdml
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqrdmlh\\t%0, %2, %3.[%4]";
   }
@@ -3555,7 +3548,7 @@ (define_insn "aarch64_sqrdml
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqrdmlh\\t%0., %2., %3.[%4]";
   }
@@ -3573,7 +3566,7 @@ (define_insn "aarch64_sqrdml
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqrdmlh\\t%0, %2, %3.[%4]";
   }
@@ -3617,7 +3610,7 @@ (define_insn "aarch64_sqdml
		 (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqdmll\\t%0, %2, %3.[%4]";
   }
@@ -3641,7 +3634,7 @@ (define_insn "aarch64_sqdml
		 (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqdmll\\t%0, %2, %3.[%4]";
   }
@@ -3664,7 +3657,7 @@ (define_insn "aarch64_sqdml
		 (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqdmll\\t%0, %2, %3.[%4]";
   }
@@ -3687,7 +3680,7 @@ (define_insn "aarch64_sqdml
		 (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqdmll\\t%0, %2, %3.[%4]";
   }
@@ -3782,7 +3775,7 @@ (define_insn "aarch64_sqdml
		 (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqdmll2\\t%0, %2, %3.[%4]";
   }
@@ -3808,7 +3801,7 @@ (define_insn "aarch64_sqdml
		 (const_int 1))))]
   "TARGET_SIMD"
   {
-    operands[4] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[4])));
+    operands[4] = aarch64_endian_lane_rtx (mode, INTVAL (operands[4]));
     return "sqdmll2\\t%0, %2, %3.[%4]";
   }
@@ -3955,7 +3948,7 @@ (define_insn "aarch64_sqdmull_lane
	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "sqdmull\\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_sat_mul__scalar_long")]
@@ -3976,7 +3969,7 @@ (define_insn "aarch64_sqdmull_laneq
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "sqdmull\\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_sat_mul__scalar_long")]
@@ -3996,7 +3989,7 @@ (define_insn "aarch64_sqdmull_lane
	     (const_int 1)))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "sqdmull\\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_sat_mul__scalar_long")]
@@ -4016,7 +4009,7 @@ (define_insn "aarch64_sqdmull_laneq
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "sqdmull\\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_sat_mul__scalar_long")]
@@ -4094,7 +4087,7 @@ (define_insn "aarch64_sqdmull2_lane
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "sqdmull2\\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_sat_mul__scalar_long")]
@@ -4117,7 +4110,7 @@ (define_insn "aarch64_sqdmull2_laneq
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "sqdmull2\\t%0, %1, %2.[%3]";
   }
   [(set_attr "type" "neon_sat_mul__scalar_long")]
@@ -4623,7 +4616,7 @@ (define_insn "aarch64_vec_load_lanesoi_l
		UNSPEC_LD2_LANE))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "ld2\\t{%S0. - %T0.}[%3], %1";
   }
   [(set_attr "type" "neon_load2_one_lane")]
@@ -4667,7 +4660,7 @@ (define_insn "aarch64_vec_store_lanesoi_
		UNSPEC_ST2_LANE))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "st2\\t{%S1. - %T1.}[%2], %0";
   }
   [(set_attr "type" "neon_store2_one_lane")]
@@ -4721,7 +4714,7 @@ (define_insn "aarch64_vec_load_lanesci_l
		UNSPEC_LD3_LANE))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "ld3\\t{%S0. - %U0.}[%3], %1";
   }
   [(set_attr "type" "neon_load3_one_lane")]
@@ -4765,7 +4758,7 @@ (define_insn "aarch64_vec_store_lanesci_
		UNSPEC_ST3_LANE))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "st3\\t{%S1. - %U1.}[%2], %0";
   }
   [(set_attr "type" "neon_store3_one_lane")]
@@ -4819,7 +4812,7 @@ (define_insn "aarch64_vec_load_lanesxi_l
		UNSPEC_LD4_LANE))]
   "TARGET_SIMD"
   {
-    operands[3] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[3])));
+    operands[3] = aarch64_endian_lane_rtx (mode, INTVAL (operands[3]));
     return "ld4\\t{%S0. - %V0.}[%3], %1";
   }
   [(set_attr "type" "neon_load4_one_lane")]
@@ -4863,7 +4856,7 @@ (define_insn "aarch64_vec_store_lanesxi_
		UNSPEC_ST4_LANE))]
   "TARGET_SIMD"
   {
-    operands[2] = GEN_INT (ENDIAN_LANE_N (mode, INTVAL (operands[2])));
+    operands[2] = aarch64_endian_lane_rtx (mode, INTVAL (operands[2]));
     return "st4\\t{%S1. - %V1.}[%2], %0";
   }
   [(set_attr "type" "neon_store4_one_lane")]