From patchwork Thu Jul 13 09:00:03 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 107648
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@linaro.org
Subject: [61/77] Use scalar_int_mode in the AArch64 port
References: <8760ewohsv.fsf@linaro.org>
Date: Thu, 13 Jul 2017 10:00:03 +0100
In-Reply-To: <8760ewohsv.fsf@linaro.org> (Richard Sandiford's message of
	"Thu, 13 Jul 2017 09:35:44 +0100")
Message-ID: <87tw2gd84s.fsf@linaro.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0

This patch makes the AArch64 port use scalar_int_mode in various
places.  Other ports won't need this kind of change; we only need it
for AArch64 because of the variable-sized SVE modes.

The only change in functionality is in the rtx_costs handling of
CONST_INT.  If the caller doesn't supply a mode, we now pass word_mode
rather than VOIDmode to aarch64_internal_mov_immediate.
aarch64_movw_imm will therefore no longer truncate large constants
in this situation.

2017-07-13  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* config/aarch64/aarch64-protos.h (aarch64_is_extend_from_extract):
	Take a scalar_int_mode instead of a machine_mode.
	(aarch64_mask_and_shift_for_ubfiz_p): Likewise.
	(aarch64_move_imm): Likewise.
	(aarch64_output_scalar_simd_mov_immediate): Likewise.
	(aarch64_simd_scalar_immediate_valid_for_move): Likewise.
	(aarch64_simd_attr_length_rglist): Delete.
	* config/aarch64/aarch64.c (aarch64_is_extend_from_extract): Take
	a scalar_int_mode instead of a machine_mode.
	(aarch64_add_offset): Likewise.
	(aarch64_internal_mov_immediate): Likewise.
	(aarch64_add_constant_internal): Likewise.
	(aarch64_add_constant): Likewise.
	(aarch64_movw_imm): Likewise.
	(aarch64_move_imm): Likewise.
	(aarch64_rtx_arith_op_extract_p): Likewise.
	(aarch64_mask_and_shift_for_ubfiz_p): Likewise.
	(aarch64_simd_scalar_immediate_valid_for_move): Likewise.  Remove
	assert that the mode isn't a vector.
	(aarch64_output_scalar_simd_mov_immediate): Likewise.
	(aarch64_expand_mov_immediate): Update calls after above changes.
	(aarch64_output_casesi): Use as_a <scalar_int_mode>.
	(aarch64_and_bitmask_imm): Check for scalar integer modes.
	(aarch64_strip_extend): Likewise.
	(aarch64_extr_rtx_p): Likewise.
	(aarch64_rtx_costs): Likewise, using word_mode as the mode of a
	CONST_INT when the mode parameter is VOIDmode.

Index: gcc/config/aarch64/aarch64-protos.h
===================================================================
--- gcc/config/aarch64/aarch64-protos.h	2017-07-05 16:29:19.581861907 +0100
+++ gcc/config/aarch64/aarch64-protos.h	2017-07-13 09:18:50.737840686 +0100
@@ -330,22 +330,21 @@ bool aarch64_function_arg_regno_p (unsig
 bool aarch64_fusion_enabled_p (enum aarch64_fusion_pairs);
 bool aarch64_gen_movmemqi (rtx *);
 bool aarch64_gimple_fold_builtin (gimple_stmt_iterator *);
-bool aarch64_is_extend_from_extract (machine_mode, rtx, rtx);
+bool aarch64_is_extend_from_extract (scalar_int_mode, rtx, rtx);
 bool aarch64_is_long_call_p (rtx);
 bool aarch64_is_noplt_call_p (rtx);
 bool aarch64_label_mentioned_p (rtx);
 void aarch64_declare_function_name (FILE *, const char*, tree);
 bool aarch64_legitimate_pic_operand_p (rtx);
-bool aarch64_mask_and_shift_for_ubfiz_p (machine_mode, rtx, rtx);
+bool aarch64_mask_and_shift_for_ubfiz_p (scalar_int_mode, rtx, rtx);
 bool aarch64_modes_tieable_p (machine_mode mode1, machine_mode mode2);
 bool aarch64_zero_extend_const_eq (machine_mode, rtx, machine_mode, rtx);
-bool aarch64_move_imm (HOST_WIDE_INT, machine_mode);
+bool aarch64_move_imm (HOST_WIDE_INT, scalar_int_mode);
 bool aarch64_mov_operand_p (rtx, machine_mode);
-int aarch64_simd_attr_length_rglist (machine_mode);
 rtx aarch64_reverse_mask (machine_mode);
 bool aarch64_offset_7bit_signed_scaled_p (machine_mode, HOST_WIDE_INT);
-char *aarch64_output_scalar_simd_mov_immediate (rtx, machine_mode);
+char *aarch64_output_scalar_simd_mov_immediate (rtx, scalar_int_mode);
 char *aarch64_output_simd_mov_immediate (rtx, machine_mode, unsigned);
 bool aarch64_pad_arg_upward (machine_mode, const_tree);
 bool aarch64_pad_reg_upward (machine_mode, const_tree, bool);
@@ -355,7 +354,7 @@ bool aarch64_simd_check_vect_par_cnst_ha
 				bool high);
 bool aarch64_simd_imm_scalar_p (rtx x, machine_mode mode);
 bool aarch64_simd_imm_zero_p (rtx, machine_mode);
-bool aarch64_simd_scalar_immediate_valid_for_move (rtx, machine_mode);
+bool aarch64_simd_scalar_immediate_valid_for_move (rtx, scalar_int_mode);
 bool aarch64_simd_shift_imm_p (rtx, machine_mode, bool);
 bool aarch64_simd_valid_immediate (rtx, machine_mode, bool,
 				   struct simd_immediate_info *);
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c	2017-07-13 09:18:31.691429056 +0100
+++ gcc/config/aarch64/aarch64.c	2017-07-13 09:18:50.738840609 +0100
@@ -1216,7 +1216,7 @@ aarch64_is_noplt_call_p (rtx sym)
    (extract:MODE (mult (reg) (MULT_IMM)) (EXTRACT_IMM) (const_int 0)).  */
 bool
-aarch64_is_extend_from_extract (machine_mode mode, rtx mult_imm,
+aarch64_is_extend_from_extract (scalar_int_mode mode, rtx mult_imm,
 				rtx extract_imm)
 {
   HOST_WIDE_INT mult_val, extract_val;
@@ -1842,7 +1842,8 @@ aarch64_force_temporary (machine_mode mo
 
 static rtx
-aarch64_add_offset (machine_mode mode, rtx temp, rtx reg, HOST_WIDE_INT offset)
+aarch64_add_offset (scalar_int_mode mode, rtx temp, rtx reg,
+		    HOST_WIDE_INT offset)
 {
   if (!aarch64_plus_immediate (GEN_INT (offset), mode))
     {
@@ -1860,7 +1861,7 @@ aarch64_add_offset (machine_mode mode, r
 
 static int
 aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
-				machine_mode mode)
+				scalar_int_mode mode)
 {
   int i;
   unsigned HOST_WIDE_INT val, val2, mask;
@@ -1966,9 +1967,11 @@ aarch64_expand_mov_immediate (rtx dest,
   gcc_assert (mode == SImode || mode == DImode);
 
   /* Check on what type of symbol it is.  */
-  if (GET_CODE (imm) == SYMBOL_REF
-      || GET_CODE (imm) == LABEL_REF
-      || GET_CODE (imm) == CONST)
+  scalar_int_mode int_mode;
+  if ((GET_CODE (imm) == SYMBOL_REF
+       || GET_CODE (imm) == LABEL_REF
+       || GET_CODE (imm) == CONST)
+      && is_a <scalar_int_mode> (mode, &int_mode))
     {
       rtx mem, base, offset;
       enum aarch64_symbol_type sty;
@@ -1982,11 +1985,12 @@ aarch64_expand_mov_immediate (rtx dest,
 	{
 	case SYMBOL_FORCE_TO_MEM:
 	  if (offset != const0_rtx
-	      && targetm.cannot_force_const_mem (mode, imm))
+	      && targetm.cannot_force_const_mem (int_mode, imm))
 	    {
 	      gcc_assert (can_create_pseudo_p ());
-	      base = aarch64_force_temporary (mode, dest, base);
-	      base = aarch64_add_offset (mode, NULL, base, INTVAL (offset));
+	      base = aarch64_force_temporary (int_mode, dest, base);
+	      base = aarch64_add_offset (int_mode, NULL, base,
+					 INTVAL (offset));
 	      aarch64_emit_move (dest, base);
 	      return;
 	    }
@@ -2008,8 +2012,8 @@ aarch64_expand_mov_immediate (rtx dest,
 	      mem = gen_rtx_MEM (ptr_mode, base);
 	    }
 
-	  if (mode != ptr_mode)
-	    mem = gen_rtx_ZERO_EXTEND (mode, mem);
+	  if (int_mode != ptr_mode)
+	    mem = gen_rtx_ZERO_EXTEND (int_mode, mem);
 
 	  emit_insn (gen_rtx_SET (dest, mem));
 
@@ -2025,8 +2029,9 @@ aarch64_expand_mov_immediate (rtx dest,
       if (offset != const0_rtx)
 	{
 	  gcc_assert(can_create_pseudo_p ());
-	  base = aarch64_force_temporary (mode, dest, base);
-	  base = aarch64_add_offset (mode, NULL, base, INTVAL (offset));
+	  base = aarch64_force_temporary (int_mode, dest, base);
+	  base = aarch64_add_offset (int_mode, NULL, base,
+				     INTVAL (offset));
 	  aarch64_emit_move (dest, base);
 	  return;
 	}
@@ -2060,7 +2065,8 @@ aarch64_expand_mov_immediate (rtx dest,
       return;
     }
 
-  aarch64_internal_mov_immediate (dest, imm, true, GET_MODE (dest));
+  aarch64_internal_mov_immediate (dest, imm, true,
+				  as_a <scalar_int_mode> (mode));
 }
 
 /* Add DELTA to REGNUM in mode MODE.  SCRATCHREG can be used to hold a
@@ -2076,9 +2082,9 @@ aarch64_expand_mov_immediate (rtx dest,
    large immediate).  */
 static void
-aarch64_add_constant_internal (machine_mode mode, int regnum, int scratchreg,
-			       HOST_WIDE_INT delta, bool frame_related_p,
-			       bool emit_move_imm)
+aarch64_add_constant_internal (scalar_int_mode mode, int regnum,
+			       int scratchreg, HOST_WIDE_INT delta,
+			       bool frame_related_p, bool emit_move_imm)
 {
   HOST_WIDE_INT mdelta = abs_hwi (delta);
   rtx this_rtx = gen_rtx_REG (mode, regnum);
@@ -2125,7 +2131,7 @@ aarch64_add_constant_internal (machine_m
 }
 
 static inline void
-aarch64_add_constant (machine_mode mode, int regnum, int scratchreg,
+aarch64_add_constant (scalar_int_mode mode, int regnum, int scratchreg,
 		      HOST_WIDE_INT delta)
 {
   aarch64_add_constant_internal (mode, regnum, scratchreg, delta, false, true);
@@ -4000,7 +4006,7 @@ aarch64_uimm12_shift (HOST_WIDE_INT val)
 /* Return true if val is an immediate that can be loaded into a
    register by a MOVZ instruction.  */
 static bool
-aarch64_movw_imm (HOST_WIDE_INT val, machine_mode mode)
+aarch64_movw_imm (HOST_WIDE_INT val, scalar_int_mode mode)
 {
   if (GET_MODE_SIZE (mode) > 4)
     {
@@ -4104,21 +4110,25 @@ aarch64_and_split_imm2 (HOST_WIDE_INT va
 bool
 aarch64_and_bitmask_imm (unsigned HOST_WIDE_INT val_in, machine_mode mode)
 {
-  if (aarch64_bitmask_imm (val_in, mode))
+  scalar_int_mode int_mode;
+  if (!is_a <scalar_int_mode> (mode, &int_mode))
+    return false;
+
+  if (aarch64_bitmask_imm (val_in, int_mode))
     return false;
 
-  if (aarch64_move_imm (val_in, mode))
+  if (aarch64_move_imm (val_in, int_mode))
     return false;
 
   unsigned HOST_WIDE_INT imm2 = aarch64_and_split_imm2 (val_in);
 
-  return aarch64_bitmask_imm (imm2, mode);
+  return aarch64_bitmask_imm (imm2, int_mode);
 }
 
 /* Return true if val is an immediate that can be loaded into a
    register in a single instruction.  */
 bool
-aarch64_move_imm (HOST_WIDE_INT val, machine_mode mode)
+aarch64_move_imm (HOST_WIDE_INT val, scalar_int_mode mode)
 {
   if (aarch64_movw_imm (val, mode) || aarch64_movw_imm (~val, mode))
     return 1;
@@ -6019,7 +6029,8 @@ aarch64_output_casesi (rtx *operands)
 
   gcc_assert (GET_CODE (diff_vec) == ADDR_DIFF_VEC);
 
-  index = exact_log2 (GET_MODE_SIZE (GET_MODE (diff_vec)));
+  scalar_int_mode mode = as_a <scalar_int_mode> (GET_MODE (diff_vec));
+  index = exact_log2 (GET_MODE_SIZE (mode));
 
   gcc_assert (index >= 0 && index <= 3);
 
@@ -6139,13 +6150,17 @@ aarch64_strip_shift (rtx x)
 static rtx
 aarch64_strip_extend (rtx x, bool strip_shift)
 {
+  scalar_int_mode mode;
   rtx op = x;
 
+  if (!is_a <scalar_int_mode> (GET_MODE (op), &mode))
+    return op;
+
   /* Zero and sign extraction of a widened value.  */
   if ((GET_CODE (op) == ZERO_EXTRACT || GET_CODE (op) == SIGN_EXTRACT)
       && XEXP (op, 2) == const0_rtx
       && GET_CODE (XEXP (op, 0)) == MULT
-      && aarch64_is_extend_from_extract (GET_MODE (op), XEXP (XEXP (op, 0), 1),
+      && aarch64_is_extend_from_extract (mode, XEXP (XEXP (op, 0), 1),
 					 XEXP (op, 1)))
     return XEXP (XEXP (op, 0), 0);
 
@@ -6482,7 +6497,7 @@ aarch64_branch_cost (bool speed_p, bool
 /* Return true if the RTX X in mode MODE is a zero or sign extract
    usable in an ADD or SUB (extended register) instruction.  */
 static bool
-aarch64_rtx_arith_op_extract_p (rtx x, machine_mode mode)
+aarch64_rtx_arith_op_extract_p (rtx x, scalar_int_mode mode)
 {
   /* Catch add with a sign extract.
      This is add_<optab><mode>_multp2.  */
@@ -6541,7 +6556,9 @@ aarch64_frint_unspec_p (unsigned int u)
 aarch64_extr_rtx_p (rtx x, rtx *res_op0, rtx *res_op1)
 {
   rtx op0, op1;
-  machine_mode mode = GET_MODE (x);
+  scalar_int_mode mode;
+  if (!is_a <scalar_int_mode> (GET_MODE (x), &mode))
+    return false;
 
   *res_op0 = NULL_RTX;
   *res_op1 = NULL_RTX;
@@ -6726,7 +6743,8 @@ aarch64_extend_bitfield_pattern_p (rtx x
    mode MODE.  See the *andim_ashift<mode>_bfiz pattern.  */
 bool
-aarch64_mask_and_shift_for_ubfiz_p (machine_mode mode, rtx mask, rtx shft_amnt)
+aarch64_mask_and_shift_for_ubfiz_p (scalar_int_mode mode, rtx mask,
+				    rtx shft_amnt)
 {
   return CONST_INT_P (mask) && CONST_INT_P (shft_amnt)
 	 && INTVAL (shft_amnt) < GET_MODE_BITSIZE (mode)
@@ -6818,8 +6836,8 @@ aarch64_rtx_costs (rtx x, machine_mode m
 	  if ((GET_CODE (op1) == ZERO_EXTEND
 	       || GET_CODE (op1) == SIGN_EXTEND)
 	      && CONST_INT_P (XEXP (op0, 1))
-	      && (GET_MODE_BITSIZE (GET_MODE (XEXP (op1, 0)))
-		  >= INTVAL (XEXP (op0, 1))))
+	      && is_a <scalar_int_mode> (GET_MODE (XEXP (op1, 0)), &int_mode)
+	      && GET_MODE_BITSIZE (int_mode) >= INTVAL (XEXP (op0, 1)))
 	    op1 = XEXP (op1, 0);
 
 	  if (CONST_INT_P (op1))
@@ -6864,8 +6882,10 @@ aarch64_rtx_costs (rtx x, machine_mode m
 	     proportionally expensive to the number of instructions
 	     required to build that constant.  This is true whether we
 	     are compiling for SPEED or otherwise.  */
+	  if (!is_a <scalar_int_mode> (mode, &int_mode))
+	    int_mode = word_mode;
 	  *cost = COSTS_N_INSNS (aarch64_internal_mov_immediate
-				 (NULL_RTX, x, false, mode));
+				 (NULL_RTX, x, false, int_mode));
 	}
       return true;
 
@@ -7118,7 +7138,8 @@ aarch64_rtx_costs (rtx x, machine_mode m
 	}
 
       /* Look for SUB (extended register).  */
-      if (aarch64_rtx_arith_op_extract_p (op1, mode))
+      if (is_a <scalar_int_mode> (mode, &int_mode)
+	  && aarch64_rtx_arith_op_extract_p (op1, int_mode))
 	{
 	  if (speed)
 	    *cost += extra_cost->alu.extend_arith;
@@ -7197,7 +7218,8 @@ aarch64_rtx_costs (rtx x, machine_mode m
       *cost += rtx_cost (op1, mode, PLUS, 1, speed);
 
       /* Look for ADD (extended register).  */
-      if (aarch64_rtx_arith_op_extract_p (op0, mode))
+      if (is_a <scalar_int_mode> (mode, &int_mode)
+	  && aarch64_rtx_arith_op_extract_p (op0, int_mode))
 	{
 	  if (speed)
 	    *cost += extra_cost->alu.extend_arith;
@@ -11583,11 +11605,10 @@ aarch64_simd_gen_const_vector_dup (machi
 
 /* Check OP is a legal scalar immediate for the MOVI instruction.  */
 bool
-aarch64_simd_scalar_immediate_valid_for_move (rtx op, machine_mode mode)
+aarch64_simd_scalar_immediate_valid_for_move (rtx op, scalar_int_mode mode)
 {
   machine_mode vmode;
 
-  gcc_assert (!VECTOR_MODE_P (mode));
   vmode = aarch64_preferred_simd_mode (mode);
   rtx op_v = aarch64_simd_gen_const_vector_dup (vmode, INTVAL (op));
   return aarch64_simd_valid_immediate (op_v, vmode, false, NULL);
@@ -12940,12 +12961,10 @@ aarch64_output_simd_mov_immediate (rtx c
 }
 
 char*
-aarch64_output_scalar_simd_mov_immediate (rtx immediate,
-					  machine_mode mode)
+aarch64_output_scalar_simd_mov_immediate (rtx immediate, scalar_int_mode mode)
 {
   machine_mode vmode;
 
-  gcc_assert (!VECTOR_MODE_P (mode));
   vmode = aarch64_simd_container_mode (mode, 64);
   rtx v_op = aarch64_simd_gen_const_vector_dup (vmode, INTVAL (immediate));
   return aarch64_output_simd_mov_immediate (v_op, vmode, 64);