From: Richard Sandiford <richard.sandiford@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [35/77] Add uses of as_a <scalar_int_mode>
References: <8760ewohsv.fsf@linaro.org>
Date: Thu, 13 Jul 2017 09:51:06 +0100
In-Reply-To: <8760ewohsv.fsf@linaro.org> (Richard Sandiford's message of "Thu, 13 Jul 2017 09:35:44 +0100")
Message-ID: <87y3rshg91.fsf@linaro.org>

This patch adds asserting as_a <scalar_int_mode> conversions to
contexts in which the input is known to be a scalar integer mode.

In expand_divmod, op1 is always a scalar_int_mode if op1_is_constant
(but might not be otherwise).

In expand_binop, the patch reverses a < comparison in order to avoid
splitting a long line.

(A short standalone sketch of the is_a/as_a idiom follows the patch body.)

gcc/
2017-07-13  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward
	    David Sherwood

	* cfgexpand.c (convert_debug_memory_address): Use
	as_a <scalar_int_mode>.
	* combine.c (expand_compound_operation): Likewise.
	(make_extraction): Likewise.
	(change_zero_ext): Likewise.
	(simplify_comparison): Likewise.
	* cse.c (cse_insn): Likewise.
	* dwarf2out.c (minmax_loc_descriptor): Likewise.
	(mem_loc_descriptor): Likewise.
	(loc_descriptor): Likewise.
	* expmed.c (synth_mult): Likewise.
	(emit_store_flag_1): Likewise.
	(expand_divmod): Likewise.  Use HWI_COMPUTABLE_MODE_P instead
	of a comparison with size.
	* expr.c (expand_assignment): Use as_a <scalar_int_mode>.
	(reduce_to_bit_field_precision): Likewise.
	* function.c (expand_function_end): Likewise.
	* internal-fn.c (expand_arith_overflow_result_store): Likewise.
	* loop-doloop.c (doloop_modify): Likewise.
	* optabs.c (expand_binop): Likewise.
	(expand_unop): Likewise.
	(expand_copysign_absneg): Likewise.
	(prepare_cmp_insn): Likewise.
	(maybe_legitimize_operand): Likewise.
	* recog.c (const_scalar_int_operand): Likewise.
	* rtlanal.c (get_address_mode): Likewise.
	* simplify-rtx.c (simplify_unary_operation_1): Likewise.
	(simplify_cond_clz_ctz): Likewise.
	* tree-nested.c (get_nl_goto_field): Likewise.
	* tree.c (build_vector_type_for_mode): Likewise.
	* var-tracking.c (use_narrower_mode): Likewise.

gcc/c-family/
2017-07-13  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward
	    David Sherwood

	* c-common.c (c_common_type_for_mode): Use as_a <scalar_int_mode>.

gcc/lto/
2017-07-13  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward
	    David Sherwood

	* lto-lang.c (lto_type_for_mode): Use as_a <scalar_int_mode>.

Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c	2017-07-13 09:18:38.656813056 +0100
+++ gcc/cfgexpand.c	2017-07-13 09:18:39.582734406 +0100
@@ -3963,12 +3963,10 @@ round_udiv_adjust (machine_mode mode, rt
 convert_debug_memory_address (machine_mode mode, rtx x, addr_space_t as)
 {
-  machine_mode xmode = GET_MODE (x);
-
 #ifndef POINTERS_EXTEND_UNSIGNED
   gcc_assert (mode == Pmode
	      || mode == targetm.addr_space.address_mode (as));
-  gcc_assert (xmode == mode || xmode == VOIDmode);
+  gcc_assert (GET_MODE (x) == mode || GET_MODE (x) == VOIDmode);
 #else
   rtx temp;

@@ -3977,6 +3975,8 @@ convert_debug_memory_address (machine_mo
   if (GET_MODE (x) == mode || GET_MODE (x) == VOIDmode)
     return x;

+  /* X must have some form of address mode already.
*/ + scalar_int_mode xmode = as_a (GET_MODE (x)); if (GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (xmode)) x = lowpart_subreg (mode, x, xmode); else if (POINTERS_EXTEND_UNSIGNED > 0) Index: gcc/combine.c =================================================================== --- gcc/combine.c 2017-07-13 09:18:35.965045845 +0100 +++ gcc/combine.c 2017-07-13 09:18:39.584734236 +0100 @@ -7145,16 +7145,19 @@ expand_compound_operation (rtx x) default: return x; } + + /* We've rejected non-scalar operations by now. */ + scalar_int_mode mode = as_a (GET_MODE (x)); + /* Convert sign extension to zero extension, if we know that the high bit is not set, as this is easier to optimize. It will be converted back to cheaper alternative in make_extraction. */ if (GET_CODE (x) == SIGN_EXTEND - && HWI_COMPUTABLE_MODE_P (GET_MODE (x)) + && HWI_COMPUTABLE_MODE_P (mode) && ((nonzero_bits (XEXP (x, 0), inner_mode) & ~(((unsigned HOST_WIDE_INT) GET_MODE_MASK (inner_mode)) >> 1)) == 0)) { - machine_mode mode = GET_MODE (x); rtx temp = gen_rtx_ZERO_EXTEND (mode, XEXP (x, 0)); rtx temp2 = expand_compound_operation (temp); @@ -7176,27 +7179,27 @@ expand_compound_operation (rtx x) know that the last value didn't have any inappropriate bits set. */ if (GET_CODE (XEXP (x, 0)) == TRUNCATE - && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x) - && HWI_COMPUTABLE_MODE_P (GET_MODE (x)) - && (nonzero_bits (XEXP (XEXP (x, 0), 0), GET_MODE (x)) + && GET_MODE (XEXP (XEXP (x, 0), 0)) == mode + && HWI_COMPUTABLE_MODE_P (mode) + && (nonzero_bits (XEXP (XEXP (x, 0), 0), mode) & ~GET_MODE_MASK (inner_mode)) == 0) return XEXP (XEXP (x, 0), 0); /* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)). */ if (GET_CODE (XEXP (x, 0)) == SUBREG - && GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x) + && GET_MODE (SUBREG_REG (XEXP (x, 0))) == mode && subreg_lowpart_p (XEXP (x, 0)) - && HWI_COMPUTABLE_MODE_P (GET_MODE (x)) - && (nonzero_bits (SUBREG_REG (XEXP (x, 0)), GET_MODE (x)) + && HWI_COMPUTABLE_MODE_P (mode) + && (nonzero_bits (SUBREG_REG (XEXP (x, 0)), mode) & ~GET_MODE_MASK (inner_mode)) == 0) return SUBREG_REG (XEXP (x, 0)); /* (zero_extend:DI (truncate:SI foo:DI)) is just foo:DI when foo is a comparison and STORE_FLAG_VALUE permits. This is like - the first case, but it works even when GET_MODE (x) is larger + the first case, but it works even when MODE is larger than HOST_WIDE_INT. */ if (GET_CODE (XEXP (x, 0)) == TRUNCATE - && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x) + && GET_MODE (XEXP (XEXP (x, 0), 0)) == mode && COMPARISON_P (XEXP (XEXP (x, 0), 0)) && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT && (STORE_FLAG_VALUE & ~GET_MODE_MASK (inner_mode)) == 0) @@ -7204,7 +7207,7 @@ expand_compound_operation (rtx x) /* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)). */ if (GET_CODE (XEXP (x, 0)) == SUBREG - && GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x) + && GET_MODE (SUBREG_REG (XEXP (x, 0))) == mode && subreg_lowpart_p (XEXP (x, 0)) && COMPARISON_P (SUBREG_REG (XEXP (x, 0))) && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT @@ -7228,10 +7231,9 @@ expand_compound_operation (rtx x) extraction. Then the constant of 31 would be substituted in to produce such a position. 
*/ - modewidth = GET_MODE_PRECISION (GET_MODE (x)); + modewidth = GET_MODE_PRECISION (mode); if (modewidth >= pos + len) { - machine_mode mode = GET_MODE (x); tem = gen_lowpart (mode, XEXP (x, 0)); if (!tem || GET_CODE (tem) == CLOBBER) return x; @@ -7241,10 +7243,10 @@ expand_compound_operation (rtx x) mode, tem, modewidth - len); } else if (unsignedp && len < HOST_BITS_PER_WIDE_INT) - tem = simplify_and_const_int (NULL_RTX, GET_MODE (x), + tem = simplify_and_const_int (NULL_RTX, mode, simplify_shift_const (NULL_RTX, LSHIFTRT, - GET_MODE (x), - XEXP (x, 0), pos), + mode, XEXP (x, 0), + pos), (HOST_WIDE_INT_1U << len) - 1); else /* Any other cases we can't handle. */ @@ -7745,9 +7747,13 @@ make_extraction (machine_mode mode, rtx } /* Adjust mode of POS_RTX, if needed. If we want a wider mode, we - have to zero extend. Otherwise, we can just use a SUBREG. */ + have to zero extend. Otherwise, we can just use a SUBREG. + + We dealt with constant rtxes earlier, so pos_rtx cannot + have VOIDmode at this point. */ if (pos_rtx != 0 - && GET_MODE_SIZE (pos_mode) > GET_MODE_SIZE (GET_MODE (pos_rtx))) + && (GET_MODE_SIZE (pos_mode) + > GET_MODE_SIZE (as_a (GET_MODE (pos_rtx))))) { rtx temp = simplify_gen_unary (ZERO_EXTEND, pos_mode, pos_rtx, GET_MODE (pos_rtx)); @@ -11358,7 +11364,8 @@ change_zero_ext (rtx pat) && !paradoxical_subreg_p (XEXP (x, 0)) && subreg_lowpart_p (XEXP (x, 0))) { - size = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))); + inner_mode = as_a (GET_MODE (XEXP (x, 0))); + size = GET_MODE_PRECISION (inner_mode); x = SUBREG_REG (XEXP (x, 0)); if (GET_MODE (x) != mode) x = gen_lowpart_SUBREG (mode, x); @@ -11368,7 +11375,8 @@ change_zero_ext (rtx pat) && HARD_REGISTER_P (XEXP (x, 0)) && can_change_dest_mode (XEXP (x, 0), 0, mode)) { - size = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))); + inner_mode = as_a (GET_MODE (XEXP (x, 0))); + size = GET_MODE_PRECISION (inner_mode); x = gen_rtx_REG (mode, REGNO (XEXP (x, 0))); } else @@ -11786,8 +11794,8 @@ simplify_comparison (enum rtx_code code, rtx op1 = *pop1; rtx tem, tem1; int i; - scalar_int_mode mode, inner_mode; - machine_mode tmode; + scalar_int_mode mode, inner_mode, tmode; + opt_scalar_int_mode tmode_iter; /* Try a few ways of applying the same transformation to both operands. */ while (1) @@ -11895,7 +11903,8 @@ simplify_comparison (enum rtx_code code, } else if (c0 == c1) - FOR_EACH_MODE_UNTIL (tmode, GET_MODE (op0)) + FOR_EACH_MODE_UNTIL (tmode, + as_a (GET_MODE (op0))) if ((unsigned HOST_WIDE_INT) c0 == GET_MODE_MASK (tmode)) { op0 = gen_lowpart_or_truncate (tmode, inner_op0); @@ -12762,8 +12771,9 @@ simplify_comparison (enum rtx_code code, if (is_int_mode (GET_MODE (op0), &mode) && GET_MODE_SIZE (mode) < UNITS_PER_WORD && ! have_insn_for (COMPARE, mode)) - FOR_EACH_WIDER_MODE (tmode, mode) + FOR_EACH_WIDER_MODE (tmode_iter, mode) { + tmode = *tmode_iter; if (!HWI_COMPUTABLE_MODE_P (tmode)) break; if (have_insn_for (COMPARE, tmode)) Index: gcc/cse.c =================================================================== --- gcc/cse.c 2017-07-13 09:18:35.966045757 +0100 +++ gcc/cse.c 2017-07-13 09:18:39.584734236 +0100 @@ -4552,14 +4552,17 @@ cse_insn (rtx_insn *insn) && CONST_INT_P (XEXP (SET_DEST (sets[0].rtl), 2))) { rtx dest_reg = XEXP (SET_DEST (sets[0].rtl), 0); + /* This is the mode of XEXP (tem, 0) as well. 
*/ + scalar_int_mode dest_mode + = as_a (GET_MODE (dest_reg)); rtx width = XEXP (SET_DEST (sets[0].rtl), 1); rtx pos = XEXP (SET_DEST (sets[0].rtl), 2); HOST_WIDE_INT val = INTVAL (XEXP (tem, 0)); HOST_WIDE_INT mask; unsigned int shift; if (BITS_BIG_ENDIAN) - shift = GET_MODE_PRECISION (GET_MODE (dest_reg)) - - INTVAL (pos) - INTVAL (width); + shift = (GET_MODE_PRECISION (dest_mode) + - INTVAL (pos) - INTVAL (width)); else shift = INTVAL (pos); if (INTVAL (width) == HOST_BITS_PER_WIDE_INT) @@ -5231,8 +5234,11 @@ cse_insn (rtx_insn *insn) HOST_WIDE_INT val = INTVAL (dest_cst); HOST_WIDE_INT mask; unsigned int shift; + /* This is the mode of DEST_CST as well. */ + scalar_int_mode dest_mode + = as_a (GET_MODE (dest_reg)); if (BITS_BIG_ENDIAN) - shift = GET_MODE_PRECISION (GET_MODE (dest_reg)) + shift = GET_MODE_PRECISION (dest_mode) - INTVAL (pos) - INTVAL (width); else shift = INTVAL (pos); @@ -5242,7 +5248,7 @@ cse_insn (rtx_insn *insn) mask = (HOST_WIDE_INT_1 << INTVAL (width)) - 1; val &= ~(mask << shift); val |= (INTVAL (trial) & mask) << shift; - val = trunc_int_for_mode (val, GET_MODE (dest_reg)); + val = trunc_int_for_mode (val, dest_mode); validate_unshare_change (insn, &SET_DEST (sets[i].rtl), dest_reg, 1); validate_unshare_change (insn, &SET_SRC (sets[i].rtl), Index: gcc/dwarf2out.c =================================================================== --- gcc/dwarf2out.c 2017-07-13 09:18:36.422005956 +0100 +++ gcc/dwarf2out.c 2017-07-13 09:18:39.587733983 +0100 @@ -14085,15 +14085,17 @@ minmax_loc_descriptor (rtx rtl, machine_ add_loc_descr (&op1, new_loc_descr (DW_OP_over, 0, 0)); if (GET_CODE (rtl) == UMIN || GET_CODE (rtl) == UMAX) { - if (GET_MODE_SIZE (mode) < DWARF2_ADDR_SIZE) + /* Checked by the caller. */ + int_mode = as_a (mode); + if (GET_MODE_SIZE (int_mode) < DWARF2_ADDR_SIZE) { - HOST_WIDE_INT mask = GET_MODE_MASK (mode); + HOST_WIDE_INT mask = GET_MODE_MASK (int_mode); add_loc_descr (&op0, int_loc_descriptor (mask)); add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0)); add_loc_descr (&op1, int_loc_descriptor (mask)); add_loc_descr (&op1, new_loc_descr (DW_OP_and, 0, 0)); } - else if (GET_MODE_SIZE (mode) == DWARF2_ADDR_SIZE) + else if (GET_MODE_SIZE (int_mode) == DWARF2_ADDR_SIZE) { HOST_WIDE_INT bias = 1; bias <<= (DWARF2_ADDR_SIZE * BITS_PER_UNIT - 1); @@ -14976,7 +14978,8 @@ mem_loc_descriptor (rtx rtl, machine_mod break; if (CONST_INT_P (XEXP (rtl, 1)) - && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE) + && (GET_MODE_SIZE (as_a (mode)) + <= DWARF2_ADDR_SIZE)) loc_descr_plus_const (&mem_loc_result, INTVAL (XEXP (rtl, 1))); else { @@ -15759,8 +15762,11 @@ loc_descriptor (rtx rtl, machine_mode mo case CONST_INT: if (mode != VOIDmode && mode != BLKmode) - loc_result = address_of_int_loc_descriptor (GET_MODE_SIZE (mode), - INTVAL (rtl)); + { + int_mode = as_a (mode); + loc_result = address_of_int_loc_descriptor (GET_MODE_SIZE (int_mode), + INTVAL (rtl)); + } break; case CONST_DOUBLE: @@ -15805,11 +15811,12 @@ loc_descriptor (rtx rtl, machine_mode mo if (mode != VOIDmode && (dwarf_version >= 4 || !dwarf_strict)) { + int_mode = as_a (mode); loc_result = new_loc_descr (DW_OP_implicit_value, - GET_MODE_SIZE (mode), 0); + GET_MODE_SIZE (int_mode), 0); loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int; loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc (); - *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, mode); + *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, int_mode); } break; Index: gcc/expmed.c 
=================================================================== --- gcc/expmed.c 2017-07-13 09:18:38.658812885 +0100 +++ gcc/expmed.c 2017-07-13 09:18:39.588733898 +0100 @@ -2523,7 +2523,7 @@ synth_mult (struct algorithm *alg_out, u bool cache_hit = false; enum alg_code cache_alg = alg_zero; bool speed = optimize_insn_for_speed_p (); - machine_mode imode; + scalar_int_mode imode; struct alg_hash_entry *entry_ptr; /* Indicate that no algorithm is yet found. If no algorithm @@ -2536,7 +2536,7 @@ synth_mult (struct algorithm *alg_out, u return; /* Be prepared for vector modes. */ - imode = GET_MODE_INNER (mode); + imode = as_a (GET_MODE_INNER (mode)); maxm = MIN (BITS_PER_WORD, GET_MODE_BITSIZE (imode)); @@ -3980,7 +3980,6 @@ expand_divmod (int rem_flag, enum tree_c rtx tquotient; rtx quotient = 0, remainder = 0; rtx_insn *last; - int size; rtx_insn *insn; optab optab1, optab2; int op1_is_constant, op1_is_pow2 = 0; @@ -4102,7 +4101,6 @@ expand_divmod (int rem_flag, enum tree_c else tquotient = gen_reg_rtx (compute_mode); - size = GET_MODE_BITSIZE (compute_mode); #if 0 /* It should be possible to restrict the precision to GET_MODE_BITSIZE (mode), and thereby get better code when OP1 is a constant. Do that @@ -4175,12 +4173,14 @@ expand_divmod (int rem_flag, enum tree_c case TRUNC_DIV_EXPR: if (op1_is_constant) { + scalar_int_mode int_mode = as_a (compute_mode); + int size = GET_MODE_BITSIZE (int_mode); if (unsignedp) { unsigned HOST_WIDE_INT mh, ml; int pre_shift, post_shift; int dummy; - wide_int wd = rtx_mode_t (op1, compute_mode); + wide_int wd = rtx_mode_t (op1, int_mode); unsigned HOST_WIDE_INT d = wd.to_uhwi (); if (wi::popcount (wd) == 1) @@ -4191,14 +4191,14 @@ expand_divmod (int rem_flag, enum tree_c unsigned HOST_WIDE_INT mask = (HOST_WIDE_INT_1U << pre_shift) - 1; remainder - = expand_binop (compute_mode, and_optab, op0, - gen_int_mode (mask, compute_mode), + = expand_binop (int_mode, and_optab, op0, + gen_int_mode (mask, int_mode), remainder, 1, OPTAB_LIB_WIDEN); if (remainder) return gen_lowpart (mode, remainder); } - quotient = expand_shift (RSHIFT_EXPR, compute_mode, op0, + quotient = expand_shift (RSHIFT_EXPR, int_mode, op0, pre_shift, tquotient, 1); } else if (size <= HOST_BITS_PER_WIDE_INT) @@ -4208,7 +4208,7 @@ expand_divmod (int rem_flag, enum tree_c /* Most significant bit of divisor is set; emit an scc insn. 
*/ quotient = emit_store_flag_force (tquotient, GEU, op0, op1, - compute_mode, 1, 1); + int_mode, 1, 1); } else { @@ -4240,25 +4240,24 @@ expand_divmod (int rem_flag, enum tree_c goto fail1; extra_cost - = (shift_cost (speed, compute_mode, post_shift - 1) - + shift_cost (speed, compute_mode, 1) - + 2 * add_cost (speed, compute_mode)); + = (shift_cost (speed, int_mode, post_shift - 1) + + shift_cost (speed, int_mode, 1) + + 2 * add_cost (speed, int_mode)); t1 = expmed_mult_highpart - (compute_mode, op0, - gen_int_mode (ml, compute_mode), + (int_mode, op0, gen_int_mode (ml, int_mode), NULL_RTX, 1, max_cost - extra_cost); if (t1 == 0) goto fail1; - t2 = force_operand (gen_rtx_MINUS (compute_mode, + t2 = force_operand (gen_rtx_MINUS (int_mode, op0, t1), NULL_RTX); - t3 = expand_shift (RSHIFT_EXPR, compute_mode, + t3 = expand_shift (RSHIFT_EXPR, int_mode, t2, 1, NULL_RTX, 1); - t4 = force_operand (gen_rtx_PLUS (compute_mode, + t4 = force_operand (gen_rtx_PLUS (int_mode, t1, t3), NULL_RTX); quotient = expand_shift - (RSHIFT_EXPR, compute_mode, t4, + (RSHIFT_EXPR, int_mode, t4, post_shift - 1, tquotient, 1); } else @@ -4270,19 +4269,19 @@ expand_divmod (int rem_flag, enum tree_c goto fail1; t1 = expand_shift - (RSHIFT_EXPR, compute_mode, op0, + (RSHIFT_EXPR, int_mode, op0, pre_shift, NULL_RTX, 1); extra_cost - = (shift_cost (speed, compute_mode, pre_shift) - + shift_cost (speed, compute_mode, post_shift)); + = (shift_cost (speed, int_mode, pre_shift) + + shift_cost (speed, int_mode, post_shift)); t2 = expmed_mult_highpart - (compute_mode, t1, - gen_int_mode (ml, compute_mode), + (int_mode, t1, + gen_int_mode (ml, int_mode), NULL_RTX, 1, max_cost - extra_cost); if (t2 == 0) goto fail1; quotient = expand_shift - (RSHIFT_EXPR, compute_mode, t2, + (RSHIFT_EXPR, int_mode, t2, post_shift, tquotient, 1); } } @@ -4293,7 +4292,7 @@ expand_divmod (int rem_flag, enum tree_c insn = get_last_insn (); if (insn != last) set_dst_reg_note (insn, REG_EQUAL, - gen_rtx_UDIV (compute_mode, op0, op1), + gen_rtx_UDIV (int_mode, op0, op1), quotient); } else /* TRUNC_DIV, signed */ @@ -4315,36 +4314,35 @@ expand_divmod (int rem_flag, enum tree_c if (rem_flag && d < 0) { d = abs_d; - op1 = gen_int_mode (abs_d, compute_mode); + op1 = gen_int_mode (abs_d, int_mode); } if (d == 1) quotient = op0; else if (d == -1) - quotient = expand_unop (compute_mode, neg_optab, op0, + quotient = expand_unop (int_mode, neg_optab, op0, tquotient, 0); else if (size <= HOST_BITS_PER_WIDE_INT && abs_d == HOST_WIDE_INT_1U << (size - 1)) { /* This case is not handled correctly below. */ quotient = emit_store_flag (tquotient, EQ, op0, op1, - compute_mode, 1, 1); + int_mode, 1, 1); if (quotient == 0) goto fail1; } else if (EXACT_POWER_OF_2_OR_ZERO_P (d) && (size <= HOST_BITS_PER_WIDE_INT || d >= 0) && (rem_flag - ? smod_pow2_cheap (speed, compute_mode) - : sdiv_pow2_cheap (speed, compute_mode)) + ? smod_pow2_cheap (speed, int_mode) + : sdiv_pow2_cheap (speed, int_mode)) /* We assume that cheap metric is true if the optab has an expander for this mode. */ && ((optab_handler ((rem_flag ? 
smod_optab : sdiv_optab), - compute_mode) + int_mode) != CODE_FOR_nothing) - || (optab_handler (sdivmod_optab, - compute_mode) + || (optab_handler (sdivmod_optab, int_mode) != CODE_FOR_nothing))) ; else if (EXACT_POWER_OF_2_OR_ZERO_P (abs_d) @@ -4353,23 +4351,23 @@ expand_divmod (int rem_flag, enum tree_c { if (rem_flag) { - remainder = expand_smod_pow2 (compute_mode, op0, d); + remainder = expand_smod_pow2 (int_mode, op0, d); if (remainder) return gen_lowpart (mode, remainder); } - if (sdiv_pow2_cheap (speed, compute_mode) - && ((optab_handler (sdiv_optab, compute_mode) + if (sdiv_pow2_cheap (speed, int_mode) + && ((optab_handler (sdiv_optab, int_mode) != CODE_FOR_nothing) - || (optab_handler (sdivmod_optab, compute_mode) + || (optab_handler (sdivmod_optab, int_mode) != CODE_FOR_nothing))) quotient = expand_divmod (0, TRUNC_DIV_EXPR, - compute_mode, op0, + int_mode, op0, gen_int_mode (abs_d, - compute_mode), + int_mode), NULL_RTX, 0); else - quotient = expand_sdiv_pow2 (compute_mode, op0, abs_d); + quotient = expand_sdiv_pow2 (int_mode, op0, abs_d); /* We have computed OP0 / abs(OP1). If OP1 is negative, negate the quotient. */ @@ -4380,13 +4378,13 @@ expand_divmod (int rem_flag, enum tree_c && abs_d < (HOST_WIDE_INT_1U << (HOST_BITS_PER_WIDE_INT - 1))) set_dst_reg_note (insn, REG_EQUAL, - gen_rtx_DIV (compute_mode, op0, + gen_rtx_DIV (int_mode, op0, gen_int_mode (abs_d, - compute_mode)), + int_mode)), quotient); - quotient = expand_unop (compute_mode, neg_optab, + quotient = expand_unop (int_mode, neg_optab, quotient, quotient, 0); } } @@ -4402,29 +4400,27 @@ expand_divmod (int rem_flag, enum tree_c || size - 1 >= BITS_PER_WORD) goto fail1; - extra_cost = (shift_cost (speed, compute_mode, post_shift) - + shift_cost (speed, compute_mode, size - 1) - + add_cost (speed, compute_mode)); + extra_cost = (shift_cost (speed, int_mode, post_shift) + + shift_cost (speed, int_mode, size - 1) + + add_cost (speed, int_mode)); t1 = expmed_mult_highpart - (compute_mode, op0, gen_int_mode (ml, compute_mode), + (int_mode, op0, gen_int_mode (ml, int_mode), NULL_RTX, 0, max_cost - extra_cost); if (t1 == 0) goto fail1; t2 = expand_shift - (RSHIFT_EXPR, compute_mode, t1, + (RSHIFT_EXPR, int_mode, t1, post_shift, NULL_RTX, 0); t3 = expand_shift - (RSHIFT_EXPR, compute_mode, op0, + (RSHIFT_EXPR, int_mode, op0, size - 1, NULL_RTX, 0); if (d < 0) quotient - = force_operand (gen_rtx_MINUS (compute_mode, - t3, t2), + = force_operand (gen_rtx_MINUS (int_mode, t3, t2), tquotient); else quotient - = force_operand (gen_rtx_MINUS (compute_mode, - t2, t3), + = force_operand (gen_rtx_MINUS (int_mode, t2, t3), tquotient); } else @@ -4436,33 +4432,30 @@ expand_divmod (int rem_flag, enum tree_c goto fail1; ml |= HOST_WIDE_INT_M1U << (size - 1); - mlr = gen_int_mode (ml, compute_mode); - extra_cost = (shift_cost (speed, compute_mode, post_shift) - + shift_cost (speed, compute_mode, size - 1) - + 2 * add_cost (speed, compute_mode)); - t1 = expmed_mult_highpart (compute_mode, op0, mlr, + mlr = gen_int_mode (ml, int_mode); + extra_cost = (shift_cost (speed, int_mode, post_shift) + + shift_cost (speed, int_mode, size - 1) + + 2 * add_cost (speed, int_mode)); + t1 = expmed_mult_highpart (int_mode, op0, mlr, NULL_RTX, 0, max_cost - extra_cost); if (t1 == 0) goto fail1; - t2 = force_operand (gen_rtx_PLUS (compute_mode, - t1, op0), + t2 = force_operand (gen_rtx_PLUS (int_mode, t1, op0), NULL_RTX); t3 = expand_shift - (RSHIFT_EXPR, compute_mode, t2, + (RSHIFT_EXPR, int_mode, t2, post_shift, NULL_RTX, 0); t4 = expand_shift - 
(RSHIFT_EXPR, compute_mode, op0, + (RSHIFT_EXPR, int_mode, op0, size - 1, NULL_RTX, 0); if (d < 0) quotient - = force_operand (gen_rtx_MINUS (compute_mode, - t4, t3), + = force_operand (gen_rtx_MINUS (int_mode, t4, t3), tquotient); else quotient - = force_operand (gen_rtx_MINUS (compute_mode, - t3, t4), + = force_operand (gen_rtx_MINUS (int_mode, t3, t4), tquotient); } } @@ -4472,7 +4465,7 @@ expand_divmod (int rem_flag, enum tree_c insn = get_last_insn (); if (insn != last) set_dst_reg_note (insn, REG_EQUAL, - gen_rtx_DIV (compute_mode, op0, op1), + gen_rtx_DIV (int_mode, op0, op1), quotient); } break; @@ -4484,8 +4477,10 @@ expand_divmod (int rem_flag, enum tree_c case FLOOR_DIV_EXPR: case FLOOR_MOD_EXPR: /* We will come here only for signed operations. */ - if (op1_is_constant && size <= HOST_BITS_PER_WIDE_INT) + if (op1_is_constant && HWI_COMPUTABLE_MODE_P (compute_mode)) { + scalar_int_mode int_mode = as_a (compute_mode); + int size = GET_MODE_BITSIZE (int_mode); unsigned HOST_WIDE_INT mh, ml; int pre_shift, lgup, post_shift; HOST_WIDE_INT d = INTVAL (op1); @@ -4502,14 +4497,14 @@ expand_divmod (int rem_flag, enum tree_c unsigned HOST_WIDE_INT mask = (HOST_WIDE_INT_1U << pre_shift) - 1; remainder = expand_binop - (compute_mode, and_optab, op0, - gen_int_mode (mask, compute_mode), + (int_mode, and_optab, op0, + gen_int_mode (mask, int_mode), remainder, 0, OPTAB_LIB_WIDEN); if (remainder) return gen_lowpart (mode, remainder); } quotient = expand_shift - (RSHIFT_EXPR, compute_mode, op0, + (RSHIFT_EXPR, int_mode, op0, pre_shift, tquotient, 0); } else @@ -4524,22 +4519,22 @@ expand_divmod (int rem_flag, enum tree_c && size - 1 < BITS_PER_WORD) { t1 = expand_shift - (RSHIFT_EXPR, compute_mode, op0, + (RSHIFT_EXPR, int_mode, op0, size - 1, NULL_RTX, 0); - t2 = expand_binop (compute_mode, xor_optab, op0, t1, + t2 = expand_binop (int_mode, xor_optab, op0, t1, NULL_RTX, 0, OPTAB_WIDEN); - extra_cost = (shift_cost (speed, compute_mode, post_shift) - + shift_cost (speed, compute_mode, size - 1) - + 2 * add_cost (speed, compute_mode)); + extra_cost = (shift_cost (speed, int_mode, post_shift) + + shift_cost (speed, int_mode, size - 1) + + 2 * add_cost (speed, int_mode)); t3 = expmed_mult_highpart - (compute_mode, t2, gen_int_mode (ml, compute_mode), + (int_mode, t2, gen_int_mode (ml, int_mode), NULL_RTX, 1, max_cost - extra_cost); if (t3 != 0) { t4 = expand_shift - (RSHIFT_EXPR, compute_mode, t3, + (RSHIFT_EXPR, int_mode, t3, post_shift, NULL_RTX, 1); - quotient = expand_binop (compute_mode, xor_optab, + quotient = expand_binop (int_mode, xor_optab, t4, t1, tquotient, 0, OPTAB_WIDEN); } @@ -4549,23 +4544,22 @@ expand_divmod (int rem_flag, enum tree_c else { rtx nsign, t1, t2, t3, t4; - t1 = force_operand (gen_rtx_PLUS (compute_mode, + t1 = force_operand (gen_rtx_PLUS (int_mode, op0, constm1_rtx), NULL_RTX); - t2 = expand_binop (compute_mode, ior_optab, op0, t1, NULL_RTX, + t2 = expand_binop (int_mode, ior_optab, op0, t1, NULL_RTX, 0, OPTAB_WIDEN); - nsign = expand_shift (RSHIFT_EXPR, compute_mode, t2, + nsign = expand_shift (RSHIFT_EXPR, int_mode, t2, size - 1, NULL_RTX, 0); - t3 = force_operand (gen_rtx_MINUS (compute_mode, t1, nsign), + t3 = force_operand (gen_rtx_MINUS (int_mode, t1, nsign), NULL_RTX); - t4 = expand_divmod (0, TRUNC_DIV_EXPR, compute_mode, t3, op1, + t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, t3, op1, NULL_RTX, 0); if (t4) { rtx t5; - t5 = expand_unop (compute_mode, one_cmpl_optab, nsign, + t5 = expand_unop (int_mode, one_cmpl_optab, nsign, NULL_RTX, 0); - quotient = 
force_operand (gen_rtx_PLUS (compute_mode, - t4, t5), + quotient = force_operand (gen_rtx_PLUS (int_mode, t4, t5), tquotient); } } @@ -4665,31 +4659,31 @@ expand_divmod (int rem_flag, enum tree_c { if (op1_is_constant && EXACT_POWER_OF_2_OR_ZERO_P (INTVAL (op1)) - && (size <= HOST_BITS_PER_WIDE_INT + && (HWI_COMPUTABLE_MODE_P (compute_mode) || INTVAL (op1) >= 0)) { + scalar_int_mode int_mode + = as_a (compute_mode); rtx t1, t2, t3; unsigned HOST_WIDE_INT d = INTVAL (op1); - t1 = expand_shift (RSHIFT_EXPR, compute_mode, op0, + t1 = expand_shift (RSHIFT_EXPR, int_mode, op0, floor_log2 (d), tquotient, 1); - t2 = expand_binop (compute_mode, and_optab, op0, - gen_int_mode (d - 1, compute_mode), + t2 = expand_binop (int_mode, and_optab, op0, + gen_int_mode (d - 1, int_mode), NULL_RTX, 1, OPTAB_LIB_WIDEN); - t3 = gen_reg_rtx (compute_mode); - t3 = emit_store_flag (t3, NE, t2, const0_rtx, - compute_mode, 1, 1); + t3 = gen_reg_rtx (int_mode); + t3 = emit_store_flag (t3, NE, t2, const0_rtx, int_mode, 1, 1); if (t3 == 0) { rtx_code_label *lab; lab = gen_label_rtx (); - do_cmp_and_jump (t2, const0_rtx, EQ, compute_mode, lab); + do_cmp_and_jump (t2, const0_rtx, EQ, int_mode, lab); expand_inc (t1, const1_rtx); emit_label (lab); quotient = t1; } else - quotient = force_operand (gen_rtx_PLUS (compute_mode, - t1, t3), + quotient = force_operand (gen_rtx_PLUS (int_mode, t1, t3), tquotient); break; } @@ -4879,8 +4873,10 @@ expand_divmod (int rem_flag, enum tree_c break; case EXACT_DIV_EXPR: - if (op1_is_constant && size <= HOST_BITS_PER_WIDE_INT) + if (op1_is_constant && HWI_COMPUTABLE_MODE_P (compute_mode)) { + scalar_int_mode int_mode = as_a (compute_mode); + int size = GET_MODE_BITSIZE (int_mode); HOST_WIDE_INT d = INTVAL (op1); unsigned HOST_WIDE_INT ml; int pre_shift; @@ -4888,16 +4884,15 @@ expand_divmod (int rem_flag, enum tree_c pre_shift = ctz_or_zero (d); ml = invert_mod2n (d >> pre_shift, size); - t1 = expand_shift (RSHIFT_EXPR, compute_mode, op0, + t1 = expand_shift (RSHIFT_EXPR, int_mode, op0, pre_shift, NULL_RTX, unsignedp); - quotient = expand_mult (compute_mode, t1, - gen_int_mode (ml, compute_mode), + quotient = expand_mult (int_mode, t1, gen_int_mode (ml, int_mode), NULL_RTX, 1); insn = get_last_insn (); set_dst_reg_note (insn, REG_EQUAL, gen_rtx_fmt_ee (unsignedp ? 
UDIV : DIV, - compute_mode, op0, op1), + int_mode, op0, op1), quotient); } break; @@ -4906,60 +4901,63 @@ expand_divmod (int rem_flag, enum tree_c case ROUND_MOD_EXPR: if (unsignedp) { + scalar_int_mode int_mode = as_a (compute_mode); rtx tem; rtx_code_label *label; label = gen_label_rtx (); - quotient = gen_reg_rtx (compute_mode); - remainder = gen_reg_rtx (compute_mode); + quotient = gen_reg_rtx (int_mode); + remainder = gen_reg_rtx (int_mode); if (expand_twoval_binop (udivmod_optab, op0, op1, quotient, remainder, 1) == 0) { rtx tem; - quotient = expand_binop (compute_mode, udiv_optab, op0, op1, + quotient = expand_binop (int_mode, udiv_optab, op0, op1, quotient, 1, OPTAB_LIB_WIDEN); - tem = expand_mult (compute_mode, quotient, op1, NULL_RTX, 1); - remainder = expand_binop (compute_mode, sub_optab, op0, tem, + tem = expand_mult (int_mode, quotient, op1, NULL_RTX, 1); + remainder = expand_binop (int_mode, sub_optab, op0, tem, remainder, 1, OPTAB_LIB_WIDEN); } - tem = plus_constant (compute_mode, op1, -1); - tem = expand_shift (RSHIFT_EXPR, compute_mode, tem, 1, NULL_RTX, 1); - do_cmp_and_jump (remainder, tem, LEU, compute_mode, label); + tem = plus_constant (int_mode, op1, -1); + tem = expand_shift (RSHIFT_EXPR, int_mode, tem, 1, NULL_RTX, 1); + do_cmp_and_jump (remainder, tem, LEU, int_mode, label); expand_inc (quotient, const1_rtx); expand_dec (remainder, op1); emit_label (label); } else { + scalar_int_mode int_mode = as_a (compute_mode); + int size = GET_MODE_BITSIZE (int_mode); rtx abs_rem, abs_op1, tem, mask; rtx_code_label *label; label = gen_label_rtx (); - quotient = gen_reg_rtx (compute_mode); - remainder = gen_reg_rtx (compute_mode); + quotient = gen_reg_rtx (int_mode); + remainder = gen_reg_rtx (int_mode); if (expand_twoval_binop (sdivmod_optab, op0, op1, quotient, remainder, 0) == 0) { rtx tem; - quotient = expand_binop (compute_mode, sdiv_optab, op0, op1, + quotient = expand_binop (int_mode, sdiv_optab, op0, op1, quotient, 0, OPTAB_LIB_WIDEN); - tem = expand_mult (compute_mode, quotient, op1, NULL_RTX, 0); - remainder = expand_binop (compute_mode, sub_optab, op0, tem, + tem = expand_mult (int_mode, quotient, op1, NULL_RTX, 0); + remainder = expand_binop (int_mode, sub_optab, op0, tem, remainder, 0, OPTAB_LIB_WIDEN); } - abs_rem = expand_abs (compute_mode, remainder, NULL_RTX, 1, 0); - abs_op1 = expand_abs (compute_mode, op1, NULL_RTX, 1, 0); - tem = expand_shift (LSHIFT_EXPR, compute_mode, abs_rem, + abs_rem = expand_abs (int_mode, remainder, NULL_RTX, 1, 0); + abs_op1 = expand_abs (int_mode, op1, NULL_RTX, 1, 0); + tem = expand_shift (LSHIFT_EXPR, int_mode, abs_rem, 1, NULL_RTX, 1); - do_cmp_and_jump (tem, abs_op1, LTU, compute_mode, label); - tem = expand_binop (compute_mode, xor_optab, op0, op1, + do_cmp_and_jump (tem, abs_op1, LTU, int_mode, label); + tem = expand_binop (int_mode, xor_optab, op0, op1, NULL_RTX, 0, OPTAB_WIDEN); - mask = expand_shift (RSHIFT_EXPR, compute_mode, tem, + mask = expand_shift (RSHIFT_EXPR, int_mode, tem, size - 1, NULL_RTX, 0); - tem = expand_binop (compute_mode, xor_optab, mask, const1_rtx, + tem = expand_binop (int_mode, xor_optab, mask, const1_rtx, NULL_RTX, 0, OPTAB_WIDEN); - tem = expand_binop (compute_mode, sub_optab, tem, mask, + tem = expand_binop (int_mode, sub_optab, tem, mask, NULL_RTX, 0, OPTAB_WIDEN); expand_inc (quotient, tem); - tem = expand_binop (compute_mode, xor_optab, mask, op1, + tem = expand_binop (int_mode, xor_optab, mask, op1, NULL_RTX, 0, OPTAB_WIDEN); - tem = expand_binop (compute_mode, sub_optab, tem, mask, + tem = 
expand_binop (int_mode, sub_optab, tem, mask, NULL_RTX, 0, OPTAB_WIDEN); expand_dec (remainder, tem); emit_label (label); @@ -5453,25 +5451,29 @@ emit_store_flag_1 (rtx target, enum rtx_ && (normalizep || STORE_FLAG_VALUE == 1 || val_signbit_p (int_mode, STORE_FLAG_VALUE))) { + scalar_int_mode int_target_mode; subtarget = target; if (!target) - target_mode = int_mode; - - /* If the result is to be wider than OP0, it is best to convert it - first. If it is to be narrower, it is *incorrect* to convert it - first. */ - else if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (int_mode)) + int_target_mode = int_mode; + else { - op0 = convert_modes (target_mode, int_mode, op0, 0); - mode = target_mode; + /* If the result is to be wider than OP0, it is best to convert it + first. If it is to be narrower, it is *incorrect* to convert it + first. */ + int_target_mode = as_a (target_mode); + if (GET_MODE_SIZE (int_target_mode) > GET_MODE_SIZE (int_mode)) + { + op0 = convert_modes (int_target_mode, int_mode, op0, 0); + int_mode = int_target_mode; + } } - if (target_mode != mode) + if (int_target_mode != int_mode) subtarget = 0; if (code == GE) - op0 = expand_unop (mode, one_cmpl_optab, op0, + op0 = expand_unop (int_mode, one_cmpl_optab, op0, ((STORE_FLAG_VALUE == 1 || normalizep) ? 0 : subtarget), 0); @@ -5479,12 +5481,12 @@ emit_store_flag_1 (rtx target, enum rtx_ /* If we are supposed to produce a 0/1 value, we want to do a logical shift from the sign bit to the low-order bit; for a -1/0 value, we do an arithmetic shift. */ - op0 = expand_shift (RSHIFT_EXPR, mode, op0, - GET_MODE_BITSIZE (mode) - 1, + op0 = expand_shift (RSHIFT_EXPR, int_mode, op0, + GET_MODE_BITSIZE (int_mode) - 1, subtarget, normalizep != -1); - if (mode != target_mode) - op0 = convert_modes (target_mode, mode, op0, 0); + if (int_mode != int_target_mode) + op0 = convert_modes (int_target_mode, int_mode, op0, 0); return op0; } Index: gcc/expr.c =================================================================== --- gcc/expr.c 2017-07-13 09:18:38.659812799 +0100 +++ gcc/expr.c 2017-07-13 09:18:39.589733813 +0100 @@ -5243,8 +5243,8 @@ expand_assignment (tree to, tree from, b { if (POINTER_TYPE_P (TREE_TYPE (to))) value = convert_memory_address_addr_space - (GET_MODE (to_rtx), value, - TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (to)))); + (as_a (GET_MODE (to_rtx)), value, + TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (to)))); emit_move_insn (to_rtx, value); } @@ -11138,7 +11138,8 @@ expand_expr_real_1 (tree exp, rtx target } /* Subroutine of above: reduce EXP to the precision of TYPE (in the - signedness of TYPE), possibly returning the result in TARGET. */ + signedness of TYPE), possibly returning the result in TARGET. + TYPE is known to be a partial integer type. 
*/ static rtx reduce_to_bit_field_precision (rtx exp, rtx target, tree type) { @@ -11154,18 +11155,17 @@ reduce_to_bit_field_precision (rtx exp, } else if (TYPE_UNSIGNED (type)) { - machine_mode mode = GET_MODE (exp); + scalar_int_mode mode = as_a (GET_MODE (exp)); rtx mask = immed_wide_int_const (wi::mask (prec, false, GET_MODE_PRECISION (mode)), mode); return expand_and (mode, exp, mask, target); } else { - int count = GET_MODE_PRECISION (GET_MODE (exp)) - prec; - exp = expand_shift (LSHIFT_EXPR, GET_MODE (exp), - exp, count, target, 0); - return expand_shift (RSHIFT_EXPR, GET_MODE (exp), - exp, count, target, 0); + scalar_int_mode mode = as_a (GET_MODE (exp)); + int count = GET_MODE_PRECISION (mode) - prec; + exp = expand_shift (LSHIFT_EXPR, mode, exp, count, target, 0); + return expand_shift (RSHIFT_EXPR, mode, exp, count, target, 0); } } Index: gcc/function.c =================================================================== --- gcc/function.c 2017-02-23 19:54:15.000000000 +0000 +++ gcc/function.c 2017-07-13 09:18:39.589733813 +0100 @@ -5601,8 +5601,8 @@ expand_function_end (void) REG_FUNCTION_VALUE_P (outgoing) = 1; /* The address may be ptr_mode and OUTGOING may be Pmode. */ - value_address = convert_memory_address (GET_MODE (outgoing), - value_address); + scalar_int_mode mode = as_a (GET_MODE (outgoing)); + value_address = convert_memory_address (mode, value_address); emit_move_insn (outgoing, value_address); Index: gcc/internal-fn.c =================================================================== --- gcc/internal-fn.c 2017-07-13 09:18:38.662812543 +0100 +++ gcc/internal-fn.c 2017-07-13 09:18:39.590733729 +0100 @@ -560,7 +560,8 @@ expand_arith_set_overflow (tree lhs, rtx expand_arith_overflow_result_store (tree lhs, rtx target, machine_mode mode, rtx res) { - machine_mode tgtmode = GET_MODE_INNER (GET_MODE (target)); + scalar_int_mode tgtmode + = as_a (GET_MODE_INNER (GET_MODE (target))); rtx lres = res; if (tgtmode != mode) { Index: gcc/loop-doloop.c =================================================================== --- gcc/loop-doloop.c 2017-07-03 14:21:23.796501143 +0100 +++ gcc/loop-doloop.c 2017-07-13 09:18:39.590733729 +0100 @@ -445,7 +445,8 @@ doloop_modify (struct loop *loop, struct counter_reg = XEXP (condition, 0); if (GET_CODE (counter_reg) == PLUS) counter_reg = XEXP (counter_reg, 0); - mode = GET_MODE (counter_reg); + /* These patterns must operate on integer counters. */ + mode = as_a (GET_MODE (counter_reg)); increment_count = false; switch (GET_CODE (condition)) Index: gcc/optabs.c =================================================================== --- gcc/optabs.c 2017-07-13 09:18:36.842969240 +0100 +++ gcc/optabs.c 2017-07-13 09:18:39.591733644 +0100 @@ -1231,8 +1231,8 @@ expand_binop (machine_mode mode, optab b it back to the proper size to fit in the broadcast vector. 
*/ machine_mode inner_mode = GET_MODE_INNER (mode); if (!CONST_INT_P (op1) - && (GET_MODE_BITSIZE (inner_mode) - < GET_MODE_BITSIZE (GET_MODE (op1)))) + && (GET_MODE_BITSIZE (as_a (GET_MODE (op1))) + > GET_MODE_BITSIZE (inner_mode))) op1 = force_reg (inner_mode, simplify_gen_unary (TRUNCATE, inner_mode, op1, GET_MODE (op1))); @@ -1377,11 +1377,13 @@ expand_binop (machine_mode mode, optab b && optab_handler (lshr_optab, word_mode) != CODE_FOR_nothing) { unsigned HOST_WIDE_INT shift_mask, double_shift_mask; - machine_mode op1_mode; + scalar_int_mode op1_mode; double_shift_mask = targetm.shift_truncation_mask (int_mode); shift_mask = targetm.shift_truncation_mask (word_mode); - op1_mode = GET_MODE (op1) != VOIDmode ? GET_MODE (op1) : word_mode; + op1_mode = (GET_MODE (op1) != VOIDmode + ? as_a (GET_MODE (op1)) + : word_mode); /* Apply the truncation to constant shifts. */ if (double_shift_mask > 0 && CONST_INT_P (op1)) @@ -3010,24 +3012,32 @@ expand_unop (machine_mode mode, optab un result. Similarly for clrsb. */ if ((unoptab == clz_optab || unoptab == clrsb_optab) && temp != 0) - temp = expand_binop - (wider_mode, sub_optab, temp, - gen_int_mode (GET_MODE_PRECISION (wider_mode) - - GET_MODE_PRECISION (mode), - wider_mode), - target, true, OPTAB_DIRECT); + { + scalar_int_mode wider_int_mode + = as_a (wider_mode); + int_mode = as_a (mode); + temp = expand_binop + (wider_mode, sub_optab, temp, + gen_int_mode (GET_MODE_PRECISION (wider_int_mode) + - GET_MODE_PRECISION (int_mode), + wider_int_mode), + target, true, OPTAB_DIRECT); + } /* Likewise for bswap. */ if (unoptab == bswap_optab && temp != 0) { - gcc_assert (GET_MODE_PRECISION (wider_mode) - == GET_MODE_BITSIZE (wider_mode) - && GET_MODE_PRECISION (mode) - == GET_MODE_BITSIZE (mode)); - - temp = expand_shift (RSHIFT_EXPR, wider_mode, temp, - GET_MODE_BITSIZE (wider_mode) - - GET_MODE_BITSIZE (mode), + scalar_int_mode wider_int_mode + = as_a (wider_mode); + int_mode = as_a (mode); + gcc_assert (GET_MODE_PRECISION (wider_int_mode) + == GET_MODE_BITSIZE (wider_int_mode) + && GET_MODE_PRECISION (int_mode) + == GET_MODE_BITSIZE (int_mode)); + + temp = expand_shift (RSHIFT_EXPR, wider_int_mode, temp, + GET_MODE_BITSIZE (wider_int_mode) + - GET_MODE_BITSIZE (int_mode), NULL_RTX, true); } @@ -3255,7 +3265,7 @@ expand_one_cmpl_abs_nojump (machine_mode expand_copysign_absneg (scalar_float_mode mode, rtx op0, rtx op1, rtx target, int bitpos, bool op0_is_abs) { - machine_mode imode; + scalar_int_mode imode; enum insn_code icode; rtx sign; rtx_code_label *label; @@ -3268,7 +3278,7 @@ expand_copysign_absneg (scalar_float_mod icode = optab_handler (signbit_optab, mode); if (icode != CODE_FOR_nothing) { - imode = insn_data[(int) icode].operand[0].mode; + imode = as_a (insn_data[(int) icode].operand[0].mode); sign = gen_reg_rtx (imode); emit_unop_insn (icode, sign, op1, UNKNOWN); } @@ -3800,10 +3810,10 @@ prepare_cmp_insn (rtx x, rtx y, enum rtx continue; /* Must make sure the size fits the insn's mode. */ - if ((CONST_INT_P (size) - && INTVAL (size) >= (1 << GET_MODE_BITSIZE (cmp_mode))) - || (GET_MODE_BITSIZE (GET_MODE (size)) - > GET_MODE_BITSIZE (cmp_mode))) + if (CONST_INT_P (size) + ? 
INTVAL (size) >= (1 << GET_MODE_BITSIZE (cmp_mode)) + : (GET_MODE_BITSIZE (as_a (GET_MODE (size))) + > GET_MODE_BITSIZE (cmp_mode))) continue; result_mode = insn_data[cmp_code].operand[0].mode; @@ -6976,8 +6986,8 @@ maybe_legitimize_operand (enum insn_code goto input; case EXPAND_ADDRESS: - gcc_assert (mode != VOIDmode); - op->value = convert_memory_address (mode, op->value); + op->value = convert_memory_address (as_a (mode), + op->value); goto input; case EXPAND_INTEGER: Index: gcc/recog.c =================================================================== --- gcc/recog.c 2017-07-13 09:18:35.050126461 +0100 +++ gcc/recog.c 2017-07-13 09:18:39.591733644 +0100 @@ -1189,8 +1189,9 @@ const_scalar_int_operand (rtx op, machin if (mode != VOIDmode) { - int prec = GET_MODE_PRECISION (mode); - int bitsize = GET_MODE_BITSIZE (mode); + scalar_int_mode int_mode = as_a (mode); + int prec = GET_MODE_PRECISION (int_mode); + int bitsize = GET_MODE_BITSIZE (int_mode); if (CONST_WIDE_INT_NUNITS (op) * HOST_BITS_PER_WIDE_INT > bitsize) return 0; Index: gcc/rtlanal.c =================================================================== --- gcc/rtlanal.c 2017-07-13 09:18:33.655250907 +0100 +++ gcc/rtlanal.c 2017-07-13 09:18:39.592733560 +0100 @@ -5799,7 +5799,7 @@ get_address_mode (rtx mem) gcc_assert (MEM_P (mem)); mode = GET_MODE (XEXP (mem, 0)); if (mode != VOIDmode) - return mode; + return as_a (mode); return targetm.addr_space.address_mode (MEM_ADDR_SPACE (mem)); } Index: gcc/simplify-rtx.c =================================================================== --- gcc/simplify-rtx.c 2017-07-13 09:18:35.969045492 +0100 +++ gcc/simplify-rtx.c 2017-07-13 09:18:39.593733475 +0100 @@ -917,7 +917,7 @@ simplify_unary_operation_1 (enum rtx_cod { enum rtx_code reversed; rtx temp; - scalar_int_mode inner, int_mode, op0_mode; + scalar_int_mode inner, int_mode, op_mode, op0_mode; switch (code) { @@ -1153,26 +1153,27 @@ simplify_unary_operation_1 (enum rtx_cod && XEXP (op, 1) == const0_rtx && is_a (GET_MODE (XEXP (op, 0)), &inner)) { + int_mode = as_a (mode); int isize = GET_MODE_PRECISION (inner); if (STORE_FLAG_VALUE == 1) { temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), GEN_INT (isize - 1)); - if (mode == inner) + if (int_mode == inner) return temp; - if (GET_MODE_PRECISION (mode) > isize) - return simplify_gen_unary (SIGN_EXTEND, mode, temp, inner); - return simplify_gen_unary (TRUNCATE, mode, temp, inner); + if (GET_MODE_PRECISION (int_mode) > isize) + return simplify_gen_unary (SIGN_EXTEND, int_mode, temp, inner); + return simplify_gen_unary (TRUNCATE, int_mode, temp, inner); } else if (STORE_FLAG_VALUE == -1) { temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), GEN_INT (isize - 1)); - if (mode == inner) + if (int_mode == inner) return temp; - if (GET_MODE_PRECISION (mode) > isize) - return simplify_gen_unary (ZERO_EXTEND, mode, temp, inner); - return simplify_gen_unary (TRUNCATE, mode, temp, inner); + if (GET_MODE_PRECISION (int_mode) > isize) + return simplify_gen_unary (ZERO_EXTEND, int_mode, temp, inner); + return simplify_gen_unary (TRUNCATE, int_mode, temp, inner); } } break; @@ -1492,12 +1493,13 @@ simplify_unary_operation_1 (enum rtx_cod && is_a (mode, &int_mode) && CONST_INT_P (XEXP (op, 1)) && XEXP (XEXP (op, 0), 1) == XEXP (op, 1) - && GET_MODE_BITSIZE (GET_MODE (op)) > INTVAL (XEXP (op, 1))) + && (op_mode = as_a (GET_MODE (op)), + GET_MODE_BITSIZE (op_mode) > INTVAL (XEXP (op, 1)))) { scalar_int_mode tmode; gcc_assert (GET_MODE_BITSIZE (int_mode) - > GET_MODE_BITSIZE (GET_MODE (op))); 
- if (int_mode_for_size (GET_MODE_BITSIZE (GET_MODE (op)) + > GET_MODE_BITSIZE (op_mode)); + if (int_mode_for_size (GET_MODE_BITSIZE (op_mode) - INTVAL (XEXP (op, 1)), 1).exists (&tmode)) { rtx inner = @@ -1609,10 +1611,11 @@ simplify_unary_operation_1 (enum rtx_cod && is_a (mode, &int_mode) && CONST_INT_P (XEXP (op, 1)) && XEXP (XEXP (op, 0), 1) == XEXP (op, 1) - && GET_MODE_PRECISION (GET_MODE (op)) > INTVAL (XEXP (op, 1))) + && (op_mode = as_a (GET_MODE (op)), + GET_MODE_PRECISION (op_mode) > INTVAL (XEXP (op, 1)))) { scalar_int_mode tmode; - if (int_mode_for_size (GET_MODE_PRECISION (GET_MODE (op)) + if (int_mode_for_size (GET_MODE_PRECISION (op_mode) - INTVAL (XEXP (op, 1)), 1).exists (&tmode)) { rtx inner = @@ -5386,10 +5389,10 @@ simplify_cond_clz_ctz (rtx x, rtx_code c return NULL_RTX; HOST_WIDE_INT op_val; - if (((op_code == CLZ - && CLZ_DEFINED_VALUE_AT_ZERO (GET_MODE (on_nonzero), op_val)) - || (op_code == CTZ - && CTZ_DEFINED_VALUE_AT_ZERO (GET_MODE (on_nonzero), op_val))) + scalar_int_mode mode ATTRIBUTE_UNUSED + = as_a (GET_MODE (XEXP (on_nonzero, 0))); + if (((op_code == CLZ && CLZ_DEFINED_VALUE_AT_ZERO (mode, op_val)) + || (op_code == CTZ && CTZ_DEFINED_VALUE_AT_ZERO (mode, op_val))) && op_val == INTVAL (on_zero)) return on_nonzero; Index: gcc/tree-nested.c =================================================================== --- gcc/tree-nested.c 2017-05-18 07:51:12.360753371 +0100 +++ gcc/tree-nested.c 2017-07-13 09:18:39.594733390 +0100 @@ -625,7 +625,9 @@ get_nl_goto_field (struct nesting_info * else type = lang_hooks.types.type_for_mode (Pmode, 1); - size = GET_MODE_SIZE (STACK_SAVEAREA_MODE (SAVE_NONLOCAL)); + scalar_int_mode mode + = as_a (STACK_SAVEAREA_MODE (SAVE_NONLOCAL)); + size = GET_MODE_SIZE (mode); size = size / GET_MODE_SIZE (Pmode); size = size + 1; Index: gcc/tree.c =================================================================== --- gcc/tree.c 2017-07-13 09:18:38.667812116 +0100 +++ gcc/tree.c 2017-07-13 09:18:39.596733221 +0100 @@ -10986,6 +10986,7 @@ reconstruct_complex_type (tree type, tre build_vector_type_for_mode (tree innertype, machine_mode mode) { int nunits; + unsigned int bitsize; switch (GET_MODE_CLASS (mode)) { @@ -11000,11 +11001,9 @@ build_vector_type_for_mode (tree innerty case MODE_INT: /* Check that there are no leftover bits. */ - gcc_assert (GET_MODE_BITSIZE (mode) - % TREE_INT_CST_LOW (TYPE_SIZE (innertype)) == 0); - - nunits = GET_MODE_BITSIZE (mode) - / TREE_INT_CST_LOW (TYPE_SIZE (innertype)); + bitsize = GET_MODE_BITSIZE (as_a (mode)); + gcc_assert (bitsize % TREE_INT_CST_LOW (TYPE_SIZE (innertype)) == 0); + nunits = bitsize / TREE_INT_CST_LOW (TYPE_SIZE (innertype)); break; default: Index: gcc/var-tracking.c =================================================================== --- gcc/var-tracking.c 2017-07-13 09:18:32.530352523 +0100 +++ gcc/var-tracking.c 2017-07-13 09:18:39.597733136 +0100 @@ -991,7 +991,8 @@ use_narrower_mode (rtx x, machine_mode m /* Ensure shift amount is not wider than mode. 
*/ if (GET_MODE (op1) == VOIDmode) op1 = lowpart_subreg (mode, op1, wmode); - else if (GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (GET_MODE (op1))) + else if (GET_MODE_PRECISION (mode) + < GET_MODE_PRECISION (as_a (GET_MODE (op1)))) op1 = lowpart_subreg (mode, op1, GET_MODE (op1)); return simplify_gen_binary (ASHIFT, mode, op0, op1); default: Index: gcc/c-family/c-common.c =================================================================== --- gcc/c-family/c-common.c 2017-07-13 09:18:21.523430126 +0100 +++ gcc/c-family/c-common.c 2017-07-13 09:18:39.582734406 +0100 @@ -2242,15 +2242,15 @@ c_common_type_for_mode (machine_mode mod if (mode == TYPE_MODE (void_type_node)) return void_type_node; - if (mode == TYPE_MODE (build_pointer_type (char_type_node))) - return (unsignedp - ? make_unsigned_type (GET_MODE_PRECISION (mode)) - : make_signed_type (GET_MODE_PRECISION (mode))); - - if (mode == TYPE_MODE (build_pointer_type (integer_type_node))) - return (unsignedp - ? make_unsigned_type (GET_MODE_PRECISION (mode)) - : make_signed_type (GET_MODE_PRECISION (mode))); + if (mode == TYPE_MODE (build_pointer_type (char_type_node)) + || mode == TYPE_MODE (build_pointer_type (integer_type_node))) + { + unsigned int precision + = GET_MODE_PRECISION (as_a (mode)); + return (unsignedp + ? make_unsigned_type (precision) + : make_signed_type (precision)); + } if (COMPLEX_MODE_P (mode)) { Index: gcc/lto/lto-lang.c =================================================================== --- gcc/lto/lto-lang.c 2017-06-30 12:50:38.437653735 +0100 +++ gcc/lto/lto-lang.c 2017-07-13 09:18:39.590733729 +0100 @@ -927,15 +927,15 @@ lto_type_for_mode (machine_mode mode, in if (mode == TYPE_MODE (void_type_node)) return void_type_node; - if (mode == TYPE_MODE (build_pointer_type (char_type_node))) - return (unsigned_p - ? make_unsigned_type (GET_MODE_PRECISION (mode)) - : make_signed_type (GET_MODE_PRECISION (mode))); - - if (mode == TYPE_MODE (build_pointer_type (integer_type_node))) - return (unsigned_p - ? make_unsigned_type (GET_MODE_PRECISION (mode)) - : make_signed_type (GET_MODE_PRECISION (mode))); + if (mode == TYPE_MODE (build_pointer_type (char_type_node)) + || mode == TYPE_MODE (build_pointer_type (integer_type_node))) + { + unsigned int precision + = GET_MODE_PRECISION (as_a (mode)); + return (unsigned_p + ? make_unsigned_type (precision) + : make_signed_type (precision)); + } if (COMPLEX_MODE_P (mode)) {
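Not part of the original submission: the short sketch below illustrates the
checked-conversion idiom that the patch applies throughout.  The
machine_mode_like / scalar_int_mode_like types and the is_a/as_a templates
here are simplified stand-ins invented for illustration, not GCC's real
classes; the point is only that is_a tests whether a mode satisfies a
property, while as_a asserts that property and returns the narrower wrapper
type.

/* Simplified stand-ins, for illustration only.  */
#include <cassert>

enum mode_class { MODE_INT, MODE_FLOAT, MODE_VECTOR_INT };

/* Hypothetical "any mode" wrapper, in the role of machine_mode.  */
struct machine_mode_like
{
  mode_class cls;
  unsigned int bits;
};

/* Hypothetical narrower wrapper that may only hold scalar integer
   modes, in the role of scalar_int_mode.  */
struct scalar_int_mode_like
{
  machine_mode_like m;
  unsigned int precision () const { return m.bits; }
};

/* is_a: test whether M satisfies the narrower type's invariant.  */
template <typename T>
bool is_a (const machine_mode_like &m);

template <>
bool is_a<scalar_int_mode_like> (const machine_mode_like &m)
{
  return m.cls == MODE_INT;
}

/* as_a: the caller promises the invariant already holds; assert it
   and narrow.  This is the shape of every conversion the patch adds.  */
template <typename T>
T as_a (const machine_mode_like &m)
{
  assert (is_a<T> (m));
  return T { m };
}

int
main ()
{
  machine_mode_like si = { MODE_INT, 32 };
  scalar_int_mode_like imode = as_a<scalar_int_mode_like> (si);
  return imode.precision () == 32 ? 0 : 1;
}

In the patch itself the conversions take the form
as_a <scalar_int_mode> (GET_MODE (x)), placed at points where the
surrounding code has already ruled out non-integer modes, so each assertion
documents an existing invariant rather than adding a new requirement.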