From patchwork Thu Jul 13 08:53:28 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 107628
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@linaro.org
Subject: [42/77] Use scalar_int_mode in simplify_shift_const_1
References: <8760ewohsv.fsf@linaro.org>
Date: Thu, 13 Jul 2017 09:53:28 +0100
In-Reply-To: <8760ewohsv.fsf@linaro.org> (Richard Sandiford's message of "Thu, 13 Jul 2017 09:35:44 +0100")
Message-ID: <874lughg53.fsf@linaro.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0

This patch makes simplify_shift_const_1 use scalar_int_modes for all
code that is specific to scalars rather than vectors.  This includes
situations in which the new shift mode is different from the original
one, since the function never changes the mode of vector shifts.
That in turn makes it more natural to test for equal modes in
simplify_shift_const_1 rather than try_widen_shift_mode (which only
applies to scalars).

2017-07-13  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* combine.c (try_widen_shift_mode): Move check for equal modes to...
	(simplify_shift_const_1): ...here.  Use scalar_int_mode for
	shift_unit_mode and for modes involved in scalar shifts.

Index: gcc/combine.c
===================================================================
--- gcc/combine.c	2017-07-13 09:18:42.622481204 +0100
+++ gcc/combine.c	2017-07-13 09:18:43.037447143 +0100
@@ -10343,8 +10343,6 @@ try_widen_shift_mode (enum rtx_code code
 		      machine_mode orig_mode, machine_mode mode,
 		      enum rtx_code outer_code, HOST_WIDE_INT outer_const)
 {
-  if (orig_mode == mode)
-    return mode;
   gcc_assert (GET_MODE_PRECISION (mode) > GET_MODE_PRECISION (orig_mode));
 
   /* In general we can't perform in wider mode for right shift and rotate.  */
@@ -10405,7 +10403,7 @@ simplify_shift_const_1 (enum rtx_code co
   int count;
   machine_mode mode = result_mode;
   machine_mode shift_mode;
-  scalar_int_mode tmode, inner_mode;
+  scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode;
   unsigned int mode_words
     = (GET_MODE_SIZE (mode) + (UNITS_PER_WORD - 1)) / UNITS_PER_WORD;
   /* We form (outer_op (code varop count) (outer_const)).  */
@@ -10445,9 +10443,19 @@ simplify_shift_const_1 (enum rtx_code co
 	  count = bitsize - count;
 	}
 
-      shift_mode = try_widen_shift_mode (code, varop, count, result_mode,
-					 mode, outer_op, outer_const);
-      machine_mode shift_unit_mode = GET_MODE_INNER (shift_mode);
+      shift_mode = result_mode;
+      if (shift_mode != mode)
+	{
+	  /* We only change the modes of scalar shifts.  */
+	  int_mode = as_a <scalar_int_mode> (mode);
+	  int_result_mode = as_a <scalar_int_mode> (result_mode);
+	  shift_mode = try_widen_shift_mode (code, varop, count,
+					     int_result_mode, int_mode,
+					     outer_op, outer_const);
+	}
+
+      scalar_int_mode shift_unit_mode
+	= as_a <scalar_int_mode> (GET_MODE_INNER (shift_mode));
 
       /* Handle cases where the count is greater than the size of the mode
 	 minus 1.  For ASHIFT, use the size minus one as the count (this can
@@ -10542,6 +10550,7 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_mode = as_a <scalar_int_mode> (mode);
 
 	  /* If we have (xshiftrt (mem ...) C) and C is MODE_WIDTH
 	     minus the width of a smaller mode, we can do this with a
@@ -10550,15 +10559,15 @@ simplify_shift_const_1 (enum rtx_code co
 	      && ! mode_dependent_address_p (XEXP (varop, 0),
 					     MEM_ADDR_SPACE (varop))
 	      && ! MEM_VOLATILE_P (varop)
-	      && (int_mode_for_size (GET_MODE_BITSIZE (mode) - count, 1)
+	      && (int_mode_for_size (GET_MODE_BITSIZE (int_mode) - count, 1)
 		  .exists (&tmode)))
 	    {
 	      new_rtx = adjust_address_nv (varop, tmode,
-					   BYTES_BIG_ENDIAN ? 0
-					   : count / BITS_PER_UNIT);
+					   BYTES_BIG_ENDIAN ? 0
+					   : count / BITS_PER_UNIT);
 
 	      varop = gen_rtx_fmt_e (code == ASHIFTRT ? SIGN_EXTEND
-				     : ZERO_EXTEND, mode, new_rtx);
+				     : ZERO_EXTEND, int_mode, new_rtx);
 	      count = 0;
 	      continue;
 	    }
@@ -10568,20 +10577,22 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_mode = as_a <scalar_int_mode> (mode);
+	  int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
 
 	  /* If VAROP is a SUBREG, strip it as long as the inner operand has
 	     the same number of words as what we've seen so far.  Then store
 	     the widest mode in MODE.  */
 	  if (subreg_lowpart_p (varop)
 	      && is_int_mode (GET_MODE (SUBREG_REG (varop)), &inner_mode)
-	      && GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (GET_MODE (varop))
+	      && GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (int_varop_mode)
 	      && (unsigned int) ((GET_MODE_SIZE (inner_mode)
 				  + (UNITS_PER_WORD - 1)) / UNITS_PER_WORD)
 		 == mode_words
-	      && GET_MODE_CLASS (GET_MODE (varop)) == MODE_INT)
+	      && GET_MODE_CLASS (int_varop_mode) == MODE_INT)
 	    {
 	      varop = SUBREG_REG (varop);
-	      if (GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (mode))
+	      if (GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (int_mode))
 		mode = inner_mode;
 	      continue;
 	    }
@@ -10640,14 +10651,17 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_mode = as_a <scalar_int_mode> (mode);
+	  int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
+	  int_result_mode = as_a <scalar_int_mode> (result_mode);
 
 	  /* Here we have two nested shifts.  The result is usually the
 	     AND of a new shift with a mask.  We compute the result below.  */
 	  if (CONST_INT_P (XEXP (varop, 1))
 	      && INTVAL (XEXP (varop, 1)) >= 0
-	      && INTVAL (XEXP (varop, 1)) < GET_MODE_PRECISION (GET_MODE (varop))
-	      && HWI_COMPUTABLE_MODE_P (result_mode)
-	      && HWI_COMPUTABLE_MODE_P (mode))
+	      && INTVAL (XEXP (varop, 1)) < GET_MODE_PRECISION (int_varop_mode)
+	      && HWI_COMPUTABLE_MODE_P (int_result_mode)
+	      && HWI_COMPUTABLE_MODE_P (int_mode))
 	    {
 	      enum rtx_code first_code = GET_CODE (varop);
 	      unsigned int first_count = INTVAL (XEXP (varop, 1));
@@ -10662,18 +10676,18 @@ simplify_shift_const_1 (enum rtx_code co
 		 (ashiftrt:M1 (ashift:M1 (and:M1 (subreg:M1 FOO 0) C3) C2) C1).
 		 This simplifies certain SIGN_EXTEND operations.  */
 	      if (code == ASHIFT && first_code == ASHIFTRT
-		  && count == (GET_MODE_PRECISION (result_mode)
-			       - GET_MODE_PRECISION (GET_MODE (varop))))
+		  && count == (GET_MODE_PRECISION (int_result_mode)
+			       - GET_MODE_PRECISION (int_varop_mode)))
 		{
 		  /* C3 has the low-order C1 bits zero.  */
-		  mask = GET_MODE_MASK (mode)
+		  mask = GET_MODE_MASK (int_mode)
 			 & ~((HOST_WIDE_INT_1U << first_count) - 1);
-		  varop = simplify_and_const_int (NULL_RTX, result_mode,
+		  varop = simplify_and_const_int (NULL_RTX, int_result_mode,
 						  XEXP (varop, 0), mask);
-		  varop = simplify_shift_const (NULL_RTX, ASHIFT, result_mode,
-						varop, count);
+		  varop = simplify_shift_const (NULL_RTX, ASHIFT,
						int_result_mode, varop, count);
 		  count = first_count;
 		  code = ASHIFTRT;
 		  continue;
@@ -10684,11 +10698,11 @@ simplify_shift_const_1 (enum rtx_code co
 		 this to either an ASHIFT or an ASHIFTRT depending on the
 		 two counts.
 
-		 We cannot do this if VAROP's mode is not SHIFT_MODE.  */
+		 We cannot do this if VAROP's mode is not SHIFT_UNIT_MODE.  */
 
 	      if (code == ASHIFTRT && first_code == ASHIFT
-		  && GET_MODE (varop) == shift_mode
-		  && (num_sign_bit_copies (XEXP (varop, 0), shift_mode)
+		  && int_varop_mode == shift_unit_mode
+		  && (num_sign_bit_copies (XEXP (varop, 0), shift_unit_mode)
 		      > first_count))
 		{
 		  varop = XEXP (varop, 0);
@@ -10719,7 +10733,7 @@ simplify_shift_const_1 (enum rtx_code co
 
 	      if (code == first_code)
 		{
-		  if (GET_MODE (varop) != result_mode
+		  if (int_varop_mode != int_result_mode
 		      && (code == ASHIFTRT || code == LSHIFTRT
 			  || code == ROTATE))
 		    break;
@@ -10731,8 +10745,8 @@ simplify_shift_const_1 (enum rtx_code co
 
 	      if (code == ASHIFTRT
 		  || (code == ROTATE && first_code == ASHIFTRT)
-		  || GET_MODE_PRECISION (mode) > HOST_BITS_PER_WIDE_INT
-		  || (GET_MODE (varop) != result_mode
+		  || GET_MODE_PRECISION (int_mode) > HOST_BITS_PER_WIDE_INT
+		  || (int_varop_mode != int_result_mode
 		      && (first_code == ASHIFTRT || first_code == LSHIFTRT
 			  || first_code == ROTATE
 			  || code == ROTATE)))
@@ -10742,19 +10756,19 @@ simplify_shift_const_1 (enum rtx_code co
 		 nonzero bits of the inner shift the same way the outer shift
 		 will.  */
 
-	      mask_rtx = gen_int_mode (nonzero_bits (varop, GET_MODE (varop)),
-				       result_mode);
+	      mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode),
+				       int_result_mode);
 
 	      mask_rtx
-		= simplify_const_binary_operation (code, result_mode, mask_rtx,
-						   GEN_INT (count));
+		= simplify_const_binary_operation (code, int_result_mode,
+						   mask_rtx, GEN_INT (count));
 
 	      /* Give up if we can't compute an outer operation to use.  */
 	      if (mask_rtx == 0
 		  || !CONST_INT_P (mask_rtx)
 		  || ! merge_outer_ops (&outer_op, &outer_const, AND,
 					INTVAL (mask_rtx),
-					result_mode, &complement_p))
+					int_result_mode, &complement_p))
 		break;
 
 	      /* If the shifts are in the same direction, we add the
@@ -10791,22 +10805,22 @@ simplify_shift_const_1 (enum rtx_code co
 	      /* For ((unsigned) (cstULL >> count)) >> cst2 we have to make
 		 sure the result will be masked.  See PR70222.  */
 	      if (code == LSHIFTRT
-		  && mode != result_mode
+		  && int_mode != int_result_mode
 		  && !merge_outer_ops (&outer_op, &outer_const, AND,
-				       GET_MODE_MASK (result_mode)
-				       >> orig_count, result_mode,
+				       GET_MODE_MASK (int_result_mode)
+				       >> orig_count, int_result_mode,
 				       &complement_p))
 		break;
 	      /* For ((int) (cstLL >> count)) >> cst2 just give up.  Queuing up
 		 outer sign extension (often left and right shift) is
 		 hardly more efficient than the original.  See PR70429.  */
-	      if (code == ASHIFTRT && mode != result_mode)
+	      if (code == ASHIFTRT && int_mode != int_result_mode)
 		break;
 
-	      rtx new_rtx = simplify_const_binary_operation (code, mode,
+	      rtx new_rtx = simplify_const_binary_operation (code, int_mode,
 							     XEXP (varop, 0),
 							     GEN_INT (count));
-	      varop = gen_rtx_fmt_ee (code, mode, new_rtx, XEXP (varop, 1));
+	      varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1));
 	      count = 0;
 	      continue;
 	    }
@@ -10827,6 +10841,8 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
+	  int_result_mode = as_a <scalar_int_mode> (result_mode);
 
 	  /* If we have (xshiftrt (ior (plus X (const_int -1)) X) C)
 	     with C the size of VAROP - 1 and the shift is logical if
@@ -10839,15 +10855,15 @@ simplify_shift_const_1 (enum rtx_code co
 	      && XEXP (XEXP (varop, 0), 1) == constm1_rtx
 	      && (STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
 	      && (code == LSHIFTRT || code == ASHIFTRT)
-	      && count == (GET_MODE_PRECISION (GET_MODE (varop)) - 1)
+	      && count == (GET_MODE_PRECISION (int_varop_mode) - 1)
 	      && rtx_equal_p (XEXP (XEXP (varop, 0), 0), XEXP (varop, 1)))
 	    {
 	      count = 0;
-	      varop = gen_rtx_LE (GET_MODE (varop), XEXP (varop, 1),
+	      varop = gen_rtx_LE (int_varop_mode, XEXP (varop, 1),
 				  const0_rtx);
 
 	      if (STORE_FLAG_VALUE == 1 ? code == ASHIFTRT : code == LSHIFTRT)
-		varop = gen_rtx_NEG (GET_MODE (varop), varop);
+		varop = gen_rtx_NEG (int_varop_mode, varop);
 
 	      continue;
 	    }
@@ -10860,19 +10876,20 @@ simplify_shift_const_1 (enum rtx_code co
 
 	  if (CONST_INT_P (XEXP (varop, 1))
 	      /* We can't do this if we have (ashiftrt (xor)) and the
-		 constant has its sign bit set in shift_mode with shift_mode
-		 wider than result_mode.  */
+		 constant has its sign bit set in shift_unit_mode with
+		 shift_unit_mode wider than result_mode.  */
 	      && !(code == ASHIFTRT && GET_CODE (varop) == XOR
-		   && result_mode != shift_mode
+		   && int_result_mode != shift_unit_mode
 		   && 0 > trunc_int_for_mode (INTVAL (XEXP (varop, 1)),
-					      shift_mode))
+					      shift_unit_mode))
 	      && (new_rtx = simplify_const_binary_operation
-		  (code, result_mode,
-		   gen_int_mode (INTVAL (XEXP (varop, 1)), result_mode),
+		  (code, int_result_mode,
+		   gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
 		   GEN_INT (count))) != 0
 	      && CONST_INT_P (new_rtx)
 	      && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop),
-				  INTVAL (new_rtx), result_mode, &complement_p))
+				  INTVAL (new_rtx), int_result_mode,
+				  &complement_p))
 	    {
 	      varop = XEXP (varop, 0);
 	      continue;
@@ -10885,16 +10902,16 @@ simplify_shift_const_1 (enum rtx_code co
 		 changes the sign bit.  */
 	  if (CONST_INT_P (XEXP (varop, 1))
 	      && !(code == ASHIFTRT && GET_CODE (varop) == XOR
-		   && result_mode != shift_mode
+		   && int_result_mode != shift_unit_mode
 		   && 0 > trunc_int_for_mode (INTVAL (XEXP (varop, 1)),
-					      shift_mode)))
+					      shift_unit_mode)))
 	    {
-	      rtx lhs = simplify_shift_const (NULL_RTX, code, shift_mode,
+	      rtx lhs = simplify_shift_const (NULL_RTX, code, shift_unit_mode,
 					      XEXP (varop, 0), count);
-	      rtx rhs = simplify_shift_const (NULL_RTX, code, shift_mode,
+	      rtx rhs = simplify_shift_const (NULL_RTX, code, shift_unit_mode,
 					      XEXP (varop, 1), count);
 
-	      varop = simplify_gen_binary (GET_CODE (varop), shift_mode,
+	      varop = simplify_gen_binary (GET_CODE (varop), shift_unit_mode,
 					   lhs, rhs);
 	      varop = apply_distributive_law (varop);
 
@@ -10907,6 +10924,7 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_result_mode = as_a <scalar_int_mode> (result_mode);
 	  /* Convert (lshiftrt (eq FOO 0) C) to (xor FOO 1) if
 	     STORE_FLAG_VALUE says that the sign bit can be tested, FOO has
 	     mode MODE, C is GET_MODE_PRECISION (MODE) - 1, and FOO has only
@@ -10913,13 +10931,13 @@ simplify_shift_const_1 (enum rtx_code co
 	     the low-order bit that may be nonzero.  */
 	  if (code == LSHIFTRT
 	      && XEXP (varop, 1) == const0_rtx
-	      && GET_MODE (XEXP (varop, 0)) == result_mode
-	      && count == (GET_MODE_PRECISION (result_mode) - 1)
-	      && HWI_COMPUTABLE_MODE_P (result_mode)
+	      && GET_MODE (XEXP (varop, 0)) == int_result_mode
+	      && count == (GET_MODE_PRECISION (int_result_mode) - 1)
+	      && HWI_COMPUTABLE_MODE_P (int_result_mode)
 	      && STORE_FLAG_VALUE == -1
-	      && nonzero_bits (XEXP (varop, 0), result_mode) == 1
-	      && merge_outer_ops (&outer_op, &outer_const, XOR, 1, result_mode,
-				  &complement_p))
+	      && nonzero_bits (XEXP (varop, 0), int_result_mode) == 1
+	      && merge_outer_ops (&outer_op, &outer_const, XOR, 1,
+				  int_result_mode, &complement_p))
 	    {
 	      varop = XEXP (varop, 0);
 	      count = 0;
@@ -10932,12 +10950,13 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_result_mode = as_a <scalar_int_mode> (result_mode);
 
 	  /* (lshiftrt (neg A) C) where A is either 0 or 1 and C is one less
 	     than the number of bits in the mode is equivalent to A.  */
 	  if (code == LSHIFTRT
-	      && count == (GET_MODE_PRECISION (result_mode) - 1)
-	      && nonzero_bits (XEXP (varop, 0), result_mode) == 1)
+	      && count == (GET_MODE_PRECISION (int_result_mode) - 1)
+	      && nonzero_bits (XEXP (varop, 0), int_result_mode) == 1)
 	    {
 	      varop = XEXP (varop, 0);
 	      count = 0;
@@ -10947,8 +10966,8 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* NEG commutes with ASHIFT since it is multiplication.  Move the
 	     NEG outside to allow shifts to combine.  */
 	  if (code == ASHIFT
-	      && merge_outer_ops (&outer_op, &outer_const, NEG, 0, result_mode,
-				  &complement_p))
+	      && merge_outer_ops (&outer_op, &outer_const, NEG, 0,
+				  int_result_mode, &complement_p))
 	    {
 	      varop = XEXP (varop, 0);
 	      continue;
@@ -10959,16 +10978,17 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_result_mode = as_a <scalar_int_mode> (result_mode);
 
 	  /* (lshiftrt (plus A -1) C) where A is either 0 or 1 and C
 	     is one less than the number of bits in the mode is
 	     equivalent to (xor A 1).  */
 	  if (code == LSHIFTRT
-	      && count == (GET_MODE_PRECISION (result_mode) - 1)
+	      && count == (GET_MODE_PRECISION (int_result_mode) - 1)
 	      && XEXP (varop, 1) == constm1_rtx
-	      && nonzero_bits (XEXP (varop, 0), result_mode) == 1
-	      && merge_outer_ops (&outer_op, &outer_const, XOR, 1, result_mode,
-				  &complement_p))
+	      && nonzero_bits (XEXP (varop, 0), int_result_mode) == 1
+	      && merge_outer_ops (&outer_op, &outer_const, XOR, 1,
+				  int_result_mode, &complement_p))
 	    {
 	      count = 0;
 	      varop = XEXP (varop, 0);
@@ -10983,21 +11003,20 @@ simplify_shift_const_1 (enum rtx_code co
 
 	  if ((code == ASHIFTRT || code == LSHIFTRT)
 	      && count < HOST_BITS_PER_WIDE_INT
-	      && nonzero_bits (XEXP (varop, 1), result_mode) >> count == 0
-	      && (nonzero_bits (XEXP (varop, 1), result_mode)
-		  & nonzero_bits (XEXP (varop, 0), result_mode)) == 0)
+	      && nonzero_bits (XEXP (varop, 1), int_result_mode) >> count == 0
+	      && (nonzero_bits (XEXP (varop, 1), int_result_mode)
+		  & nonzero_bits (XEXP (varop, 0), int_result_mode)) == 0)
 	    {
 	      varop = XEXP (varop, 0);
 	      continue;
 	    }
 	  else if ((code == ASHIFTRT || code == LSHIFTRT)
 		   && count < HOST_BITS_PER_WIDE_INT
-		   && HWI_COMPUTABLE_MODE_P (result_mode)
-		   && 0 == (nonzero_bits (XEXP (varop, 0), result_mode)
+		   && HWI_COMPUTABLE_MODE_P (int_result_mode)
+		   && 0 == (nonzero_bits (XEXP (varop, 0), int_result_mode)
 			    >> count)
-		   && 0 == (nonzero_bits (XEXP (varop, 0), result_mode)
-			    & nonzero_bits (XEXP (varop, 1),
-					    result_mode)))
+		   && 0 == (nonzero_bits (XEXP (varop, 0), int_result_mode)
+			    & nonzero_bits (XEXP (varop, 1), int_result_mode)))
 	    {
 	      varop = XEXP (varop, 1);
 	      continue;
@@ -11007,12 +11026,13 @@ simplify_shift_const_1 (enum rtx_code co
 	  if (code == ASHIFT
 	      && CONST_INT_P (XEXP (varop, 1))
 	      && (new_rtx = simplify_const_binary_operation
-		  (ASHIFT, result_mode,
-		   gen_int_mode (INTVAL (XEXP (varop, 1)), result_mode),
+		  (ASHIFT, int_result_mode,
+		   gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
 		   GEN_INT (count))) != 0
 	      && CONST_INT_P (new_rtx)
 	      && merge_outer_ops (&outer_op, &outer_const, PLUS,
-				  INTVAL (new_rtx), result_mode, &complement_p))
+				  INTVAL (new_rtx), int_result_mode,
+				  &complement_p))
 	    {
 	      varop = XEXP (varop, 0);
 	      continue;
@@ -11025,14 +11045,15 @@ simplify_shift_const_1 (enum rtx_code co
 	     for reasoning in doing so.  */
 	  if (code == LSHIFTRT
 	      && CONST_INT_P (XEXP (varop, 1))
-	      && mode_signbit_p (result_mode, XEXP (varop, 1))
+	      && mode_signbit_p (int_result_mode, XEXP (varop, 1))
 	      && (new_rtx = simplify_const_binary_operation
-		  (code, result_mode,
-		   gen_int_mode (INTVAL (XEXP (varop, 1)), result_mode),
+		  (code, int_result_mode,
+		   gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
 		   GEN_INT (count))) != 0
 	      && CONST_INT_P (new_rtx)
 	      && merge_outer_ops (&outer_op, &outer_const, XOR,
-				  INTVAL (new_rtx), result_mode, &complement_p))
+				  INTVAL (new_rtx), int_result_mode,
+				  &complement_p))
 	    {
 	      varop = XEXP (varop, 0);
 	      continue;
@@ -11044,6 +11065,7 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* The following rules apply only to scalars.  */
 	  if (shift_mode != shift_unit_mode)
 	    break;
+	  int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
 
 	  /* If we have (xshiftrt (minus (ashiftrt X C)) X) C) with C the
 	     size of VAROP - 1 and the shift is logical if
@@ -11054,18 +11076,18 @@ simplify_shift_const_1 (enum rtx_code co
 
 	  if ((STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
 	      && GET_CODE (XEXP (varop, 0)) == ASHIFTRT
-	      && count == (GET_MODE_PRECISION (GET_MODE (varop)) - 1)
+	      && count == (GET_MODE_PRECISION (int_varop_mode) - 1)
 	      && (code == LSHIFTRT || code == ASHIFTRT)
 	      && CONST_INT_P (XEXP (XEXP (varop, 0), 1))
 	      && INTVAL (XEXP (XEXP (varop, 0), 1)) == count
 	      && rtx_equal_p (XEXP (XEXP (varop, 0), 0), XEXP (varop, 1)))
 	    {
 	      count = 0;
-	      varop = gen_rtx_GT (GET_MODE (varop), XEXP (varop, 1),
+	      varop = gen_rtx_GT (int_varop_mode, XEXP (varop, 1),
 				  const0_rtx);
 
 	      if (STORE_FLAG_VALUE == 1 ? code == ASHIFTRT : code == LSHIFTRT)
-		varop = gen_rtx_NEG (GET_MODE (varop), varop);
+		varop = gen_rtx_NEG (int_varop_mode, varop);
 
 	      continue;
 	    }
@@ -11101,8 +11123,15 @@ simplify_shift_const_1 (enum rtx_code co
 	  break;
 	}
 
-  shift_mode = try_widen_shift_mode (code, varop, count, result_mode, mode,
-				     outer_op, outer_const);
+  shift_mode = result_mode;
+  if (shift_mode != mode)
+    {
+      /* We only change the modes of scalar shifts.  */
+      int_mode = as_a <scalar_int_mode> (mode);
+      int_result_mode = as_a <scalar_int_mode> (result_mode);
+      shift_mode = try_widen_shift_mode (code, varop, count, int_result_mode,
+					 int_mode, outer_op, outer_const);
+    }
 
   /* We have now finished analyzing the shift.  The result should be
      a shift of type CODE with SHIFT_MODE shifting VAROP COUNT places.  If
@@ -11137,8 +11166,9 @@ simplify_shift_const_1 (enum rtx_code co
   /* If we were doing an LSHIFTRT in a wider mode than it was originally,
      turn off all the bits that the shift would have turned off.  */
   if (orig_code == LSHIFTRT && result_mode != shift_mode)
-    x = simplify_and_const_int (NULL_RTX, shift_mode, x,
-				GET_MODE_MASK (result_mode) >> orig_count);
+    /* We only change the modes of scalar shifts.  */
+    x = simplify_and_const_int (NULL_RTX, as_a <scalar_int_mode> (shift_mode),
+				x, GET_MODE_MASK (result_mode) >> orig_count);
 
   /* Do the remainder of the processing in RESULT_MODE.  */
   x = gen_lowpart_or_truncate (result_mode, x);
@@ -11150,12 +11180,14 @@ simplify_shift_const_1 (enum rtx_code co
 
   if (outer_op != UNKNOWN)
     {
+      int_result_mode = as_a <scalar_int_mode> (result_mode);
+
       if (GET_RTX_CLASS (outer_op) != RTX_UNARY
-	  && GET_MODE_PRECISION (result_mode) < HOST_BITS_PER_WIDE_INT)
-	outer_const = trunc_int_for_mode (outer_const, result_mode);
+	  && GET_MODE_PRECISION (int_result_mode) < HOST_BITS_PER_WIDE_INT)
+	outer_const = trunc_int_for_mode (outer_const, int_result_mode);
 
       if (outer_op == AND)
-	x = simplify_and_const_int (NULL_RTX, result_mode, x, outer_const);
+	x = simplify_and_const_int (NULL_RTX, int_result_mode, x, outer_const);
       else if (outer_op == SET)
 	{
 	  /* This means that we have determined that the result is
@@ -11164,9 +11196,9 @@ simplify_shift_const_1 (enum rtx_code co
 	    x = GEN_INT (outer_const);
 	}
       else if (GET_RTX_CLASS (outer_op) == RTX_UNARY)
-	x = simplify_gen_unary (outer_op, result_mode, x, result_mode);
+	x = simplify_gen_unary (outer_op, int_result_mode, x, int_result_mode);
       else
-	x = simplify_gen_binary (outer_op, result_mode, x,
+	x = simplify_gen_binary (outer_op, int_result_mode, x,
 				 GEN_INT (outer_const));
     }
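
For readers who don't have the machine_mode wrapper classes in their
head, the sketch below is a minimal, self-contained model of the
is_a / as_a <scalar_int_mode> idiom that the patch leans on: vector
shifts keep their original mode, while scalar shifts are converted to
scalar_int_mode before any mode-changing simplification.  This is not
GCC code (the real classes live in machmode.h); the enum values,
helper names and semantics here are simplifying assumptions made for
illustration only.

// Standalone C++ sketch of an is_a/as_a-style mode wrapper (not GCC's).
#include <cassert>
#include <cstdio>

enum machine_mode_enum { E_QImode, E_SImode, E_DImode, E_V4SImode };

struct machine_mode
{
  machine_mode_enum m;
};

/* Wrapper that may only hold scalar integer modes.  */
struct scalar_int_mode
{
  machine_mode_enum m;
  operator machine_mode () const { return { m }; }
};

static bool
scalar_int_mode_p (machine_mode mode)
{
  return mode.m == E_QImode || mode.m == E_SImode || mode.m == E_DImode;
}

/* Test whether MODE can be viewed as a scalar integer mode and, if so,
   store the converted value in *RESULT (models is_a <scalar_int_mode>).  */
static bool
is_a_scalar_int_mode (machine_mode mode, scalar_int_mode *result)
{
  if (!scalar_int_mode_p (mode))
    return false;
  result->m = mode.m;
  return true;
}

/* Assert-and-convert, modelling as_a <scalar_int_mode> (mode).  */
static scalar_int_mode
as_a_scalar_int_mode (machine_mode mode)
{
  assert (scalar_int_mode_p (mode));
  return { mode.m };
}

int
main ()
{
  machine_mode scalar = { E_SImode };
  machine_mode vector = { E_V4SImode };

  /* Pattern used in the patch: the conversion only happens on paths
     that are known to be scalar, so the assertion never fires for
     vector shifts, whose mode is left untouched.  */
  scalar_int_mode int_mode;
  if (is_a_scalar_int_mode (scalar, &int_mode))
    printf ("scalar shift: safe to change mode of E_SImode\n");
  if (!is_a_scalar_int_mode (vector, &int_mode))
    printf ("vector shift: mode left untouched\n");

  scalar_int_mode forced = as_a_scalar_int_mode (scalar); /* would assert on a vector mode */
  (void) forced;
  return 0;
}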