From patchwork Thu Jul 13 08:55:07 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 107633
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@linaro.org
Subject: [47/77] Make subroutines of nonzero_bits operate on scalar_int_mode
References: <8760ewohsv.fsf@linaro.org>
Date: Thu, 13 Jul 2017 09:55:07 +0100
In-Reply-To: <8760ewohsv.fsf@linaro.org> (Richard Sandiford's message of "Thu, 13 Jul 2017 09:35:44 +0100")
Message-ID: <87iniwg1hw.fsf@linaro.org>
MIME-Version: 1.0

nonzero_bits1 assumed that all bits of a floating-point or vector mode
were needed.  It seems likely that fixed-point modes should have been
handled in the same way.  After excluding those, the only remaining
modes that are likely to be useful are scalar integer ones.

This patch moves the mode class check to nonzero_bits itself, along
with the handling of mode == VOIDmode.  The subroutines of
nonzero_bits can then take scalar_int_modes.

gcc/
2017-07-13  Richard Sandiford
	    Alan Hayward
	    David Sherwood

	* rtlanal.c (nonzero_bits): Handle VOIDmode here rather than
	in subroutines.  Return the mode mask for non-integer modes.
	(cached_nonzero_bits): Change the type of the mode parameter
	to scalar_int_mode.
	(nonzero_bits1): Likewise.  Remove early exit for other mode
	classes.  Handle CONST_INT_P first and then check whether X
	also has a scalar integer mode.

Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c	2017-07-13 09:18:39.592733560 +0100
+++ gcc/rtlanal.c	2017-07-13 09:18:44.976290184 +0100
@@ -43,10 +43,10 @@ static bool covers_regno_no_parallel_p (
 static int computed_jump_p_1 (const_rtx);
 static void parms_set (rtx, const_rtx, void *);
 
-static unsigned HOST_WIDE_INT cached_nonzero_bits (const_rtx, machine_mode,
+static unsigned HOST_WIDE_INT cached_nonzero_bits (const_rtx, scalar_int_mode,
                                                    const_rtx, machine_mode,
                                                    unsigned HOST_WIDE_INT);
-static unsigned HOST_WIDE_INT nonzero_bits1 (const_rtx, machine_mode,
+static unsigned HOST_WIDE_INT nonzero_bits1 (const_rtx, scalar_int_mode,
                                              const_rtx, machine_mode,
                                              unsigned HOST_WIDE_INT);
 static unsigned int cached_num_sign_bit_copies (const_rtx, machine_mode, const_rtx,
@@ -4237,7 +4237,12 @@ default_address_cost (rtx x, machine_mod
 unsigned HOST_WIDE_INT
 nonzero_bits (const_rtx x, machine_mode mode)
 {
-  return cached_nonzero_bits (x, mode, NULL_RTX, VOIDmode, 0);
+  if (mode == VOIDmode)
+    mode = GET_MODE (x);
+  scalar_int_mode int_mode;
+  if (!is_a <scalar_int_mode> (mode, &int_mode))
+    return GET_MODE_MASK (mode);
+  return cached_nonzero_bits (x, int_mode, NULL_RTX, VOIDmode, 0);
 }
 
 unsigned int
@@ -4281,7 +4286,7 @@ nonzero_bits_binary_arith_p (const_rtx x
    identical subexpressions on the first or the second level.  */
 
 static unsigned HOST_WIDE_INT
-cached_nonzero_bits (const_rtx x, machine_mode mode, const_rtx known_x,
+cached_nonzero_bits (const_rtx x, scalar_int_mode mode, const_rtx known_x,
                      machine_mode known_mode,
                      unsigned HOST_WIDE_INT known_ret)
 {
@@ -4334,7 +4339,7 @@ #define cached_num_sign_bit_copies sorry
    an arithmetic operation, we can do better.  */
 
 static unsigned HOST_WIDE_INT
-nonzero_bits1 (const_rtx x, machine_mode mode, const_rtx known_x,
+nonzero_bits1 (const_rtx x, scalar_int_mode mode, const_rtx known_x,
                machine_mode known_mode,
                unsigned HOST_WIDE_INT known_ret)
 {
@@ -4342,19 +4347,31 @@ nonzero_bits1 (const_rtx x, machine_mode
   unsigned HOST_WIDE_INT inner_nz;
   enum rtx_code code;
   machine_mode inner_mode;
+  scalar_int_mode xmode;
+
   unsigned int mode_width = GET_MODE_PRECISION (mode);
 
-  /* For floating-point and vector values, assume all bits are needed.  */
-  if (FLOAT_MODE_P (GET_MODE (x)) || FLOAT_MODE_P (mode)
-      || VECTOR_MODE_P (GET_MODE (x)) || VECTOR_MODE_P (mode))
+  if (CONST_INT_P (x))
+    {
+      if (SHORT_IMMEDIATES_SIGN_EXTEND
+          && INTVAL (x) > 0
+          && mode_width < BITS_PER_WORD
+          && (UINTVAL (x) & (HOST_WIDE_INT_1U << (mode_width - 1))) != 0)
+        return UINTVAL (x) | (HOST_WIDE_INT_M1U << mode_width);
+
+      return UINTVAL (x);
+    }
+
+  if (!is_a <scalar_int_mode> (GET_MODE (x), &xmode))
     return nonzero;
+  unsigned int xmode_width = GET_MODE_PRECISION (xmode);
 
   /* If X is wider than MODE, use its mode instead.  */
-  if (GET_MODE_PRECISION (GET_MODE (x)) > mode_width)
+  if (xmode_width > mode_width)
     {
-      mode = GET_MODE (x);
+      mode = xmode;
       nonzero = GET_MODE_MASK (mode);
-      mode_width = GET_MODE_PRECISION (mode);
+      mode_width = xmode_width;
     }
 
   if (mode_width > HOST_BITS_PER_WIDE_INT)
@@ -4370,15 +4387,13 @@ nonzero_bits1 (const_rtx x, machine_mode
      not known to be zero.  */
 
   if (!WORD_REGISTER_OPERATIONS
-      && GET_MODE (x) != VOIDmode
-      && GET_MODE (x) != mode
-      && GET_MODE_PRECISION (GET_MODE (x)) <= BITS_PER_WORD
-      && GET_MODE_PRECISION (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
-      && GET_MODE_PRECISION (mode) > GET_MODE_PRECISION (GET_MODE (x)))
+      && mode_width > xmode_width
+      && xmode_width <= BITS_PER_WORD
+      && xmode_width <= HOST_BITS_PER_WIDE_INT)
     {
-      nonzero &= cached_nonzero_bits (x, GET_MODE (x),
+      nonzero &= cached_nonzero_bits (x, xmode,
                                       known_x, known_mode, known_ret);
-      nonzero |= GET_MODE_MASK (mode) & ~GET_MODE_MASK (GET_MODE (x));
+      nonzero |= GET_MODE_MASK (mode) & ~GET_MODE_MASK (xmode);
       return nonzero;
     }
 
@@ -4395,7 +4410,8 @@ nonzero_bits1 (const_rtx x, machine_mode
          we can do this only if the target does not support different pointer
          or address modes depending on the address space.  */
       if (target_default_pointer_address_modes_p ()
-          && POINTERS_EXTEND_UNSIGNED && GET_MODE (x) == Pmode
+          && POINTERS_EXTEND_UNSIGNED
+          && xmode == Pmode
           && REG_POINTER (x)
           && !targetm.have_ptr_extend ())
         nonzero &= GET_MODE_MASK (ptr_mode);
@@ -4438,22 +4454,12 @@ nonzero_bits1 (const_rtx x, machine_mode
 
       return nonzero_for_hook;
     }
 
-    case CONST_INT:
-      /* If X is negative in MODE, sign-extend the value.  */
-      if (SHORT_IMMEDIATES_SIGN_EXTEND && INTVAL (x) > 0
-          && mode_width < BITS_PER_WORD
-          && (UINTVAL (x) & (HOST_WIDE_INT_1U << (mode_width - 1)))
-             != 0)
-        return UINTVAL (x) | (HOST_WIDE_INT_M1U << mode_width);
-
-      return UINTVAL (x);
-
     case MEM:
       /* In many, if not most, RISC machines, reading a byte from memory
          zeros the rest of the register.  Noticing that fact saves a lot
          of extra zero-extends.  */
-      if (load_extend_op (GET_MODE (x)) == ZERO_EXTEND)
-        nonzero &= GET_MODE_MASK (GET_MODE (x));
+      if (load_extend_op (xmode) == ZERO_EXTEND)
+        nonzero &= GET_MODE_MASK (xmode);
       break;
 
     case EQ:  case NE:
@@ -4470,7 +4476,7 @@ nonzero_bits1 (const_rtx x, machine_mode
          operation in, and not the actual operation mode.  We can wind
          up with (subreg:DI (gt:V4HI x y)), and we don't have anything
          that describes the results of a vector compare.  */
-      if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT
+      if (GET_MODE_CLASS (xmode) == MODE_INT
           && mode_width <= HOST_BITS_PER_WIDE_INT)
         nonzero = STORE_FLAG_VALUE;
       break;
 
@@ -4479,21 +4485,19 @@
 #if 0
       /* Disabled to avoid exponential mutual recursion between nonzero_bits
          and num_sign_bit_copies.  */
-      if (num_sign_bit_copies (XEXP (x, 0), GET_MODE (x))
-          == GET_MODE_PRECISION (GET_MODE (x)))
+      if (num_sign_bit_copies (XEXP (x, 0), xmode) == xmode_width)
         nonzero = 1;
 #endif
 
-      if (GET_MODE_PRECISION (GET_MODE (x)) < mode_width)
-        nonzero |= (GET_MODE_MASK (mode) & ~GET_MODE_MASK (GET_MODE (x)));
+      if (xmode_width < mode_width)
+        nonzero |= (GET_MODE_MASK (mode) & ~GET_MODE_MASK (xmode));
       break;
 
     case ABS:
 #if 0
       /* Disabled to avoid exponential mutual recursion between nonzero_bits
          and num_sign_bit_copies.  */
-      if (num_sign_bit_copies (XEXP (x, 0), GET_MODE (x))
-          == GET_MODE_PRECISION (GET_MODE (x)))
+      if (num_sign_bit_copies (XEXP (x, 0), xmode) == xmode_width)
         nonzero = 1;
 #endif
       break;
@@ -4566,7 +4570,7 @@ nonzero_bits1 (const_rtx x, machine_mode
         unsigned HOST_WIDE_INT nz1
           = cached_nonzero_bits (XEXP (x, 1), mode,
                                  known_x, known_mode, known_ret);
-        int sign_index = GET_MODE_PRECISION (GET_MODE (x)) - 1;
+        int sign_index = xmode_width - 1;
         int width0 = floor_log2 (nz0) + 1;
         int width1 = floor_log2 (nz1) + 1;
         int low0 = ctz_or_zero (nz0);
@@ -4638,8 +4642,8 @@ nonzero_bits1 (const_rtx x, machine_mode
          been zero-extended, we know that at least the high-order bits
          are zero, though others might be too.  */
       if (SUBREG_PROMOTED_VAR_P (x) && SUBREG_PROMOTED_UNSIGNED_P (x))
-        nonzero = GET_MODE_MASK (GET_MODE (x))
-                  & cached_nonzero_bits (SUBREG_REG (x), GET_MODE (x),
+        nonzero = GET_MODE_MASK (xmode)
+                  & cached_nonzero_bits (SUBREG_REG (x), xmode,
                                          known_x, known_mode, known_ret);
 
       /* If the inner mode is a single word for both the host and target
@@ -4663,10 +4667,8 @@ nonzero_bits1 (const_rtx x, machine_mode
                    ? val_signbit_known_set_p (inner_mode, nonzero)
                    : extend_op != ZERO_EXTEND)
                || (!MEM_P (SUBREG_REG (x)) && !REG_P (SUBREG_REG (x))))
-              && GET_MODE_PRECISION (GET_MODE (x))
-                 > GET_MODE_PRECISION (inner_mode))
-            nonzero
-              |= (GET_MODE_MASK (GET_MODE (x)) & ~GET_MODE_MASK (inner_mode));
+              && xmode_width > GET_MODE_PRECISION (inner_mode))
+            nonzero |= (GET_MODE_MASK (xmode) & ~GET_MODE_MASK (inner_mode));
         }
       break;
 
@@ -4675,7 +4677,7 @@ nonzero_bits1 (const_rtx x, machine_mode
     case ASHIFT:
     case ROTATE:
       /* The nonzero bits are in two classes: any bits within MODE
-         that aren't in GET_MODE (x) are always significant.  The rest of the
+         that aren't in xmode are always significant.  The rest of the
         nonzero bits are those that are significant in the operand of
         the shift when shifted the appropriate number of bits.
         This shows that high-order bits are cleared by the right shift and
@@ -4683,19 +4685,17 @@ nonzero_bits1 (const_rtx x, machine_mode
         low-order bits by left shifts.  */
       if (CONST_INT_P (XEXP (x, 1))
          && INTVAL (XEXP (x, 1)) >= 0
          && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
-         && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (GET_MODE (x)))
+         && INTVAL (XEXP (x, 1)) < xmode_width)
        {
-         machine_mode inner_mode = GET_MODE (x);
-         unsigned int width = GET_MODE_PRECISION (inner_mode);
          int count = INTVAL (XEXP (x, 1));
-         unsigned HOST_WIDE_INT mode_mask = GET_MODE_MASK (inner_mode);
+         unsigned HOST_WIDE_INT mode_mask = GET_MODE_MASK (xmode);
          unsigned HOST_WIDE_INT op_nonzero
            = cached_nonzero_bits (XEXP (x, 0), mode,
                                   known_x, known_mode, known_ret);
          unsigned HOST_WIDE_INT inner = op_nonzero & mode_mask;
          unsigned HOST_WIDE_INT outer = 0;
 
-         if (mode_width > width)
+         if (mode_width > xmode_width)
            outer = (op_nonzero & nonzero & ~mode_mask);
 
          if (code == LSHIFTRT)
@@ -4707,15 +4707,16 @@ nonzero_bits1 (const_rtx x, machine_mode
              /* If the sign bit may have been nonzero before the shift, we
                 need to mark all the places it could have been copied to
                 by the shift as possibly nonzero.  */
-             if (inner & (HOST_WIDE_INT_1U << (width - 1 - count)))
-               inner |= ((HOST_WIDE_INT_1U << count) - 1)
-                        << (width - count);
+             if (inner & (HOST_WIDE_INT_1U << (xmode_width - 1 - count)))
+               inner |= (((HOST_WIDE_INT_1U << count) - 1)
+                         << (xmode_width - count));
            }
          else if (code == ASHIFT)
            inner <<= count;
          else
-           inner = ((inner << (count % width)
-                     | (inner >> (width - (count % width)))) & mode_mask);
+           inner = ((inner << (count % xmode_width)
+                     | (inner >> (xmode_width - (count % xmode_width))))
+                    & mode_mask);
 
          nonzero &= (outer | inner);
        }
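
For readers less familiar with GCC's mode machinery, the following
self-contained C++ sketch mirrors the shape of the new nonzero_bits
entry point.  It is an illustrative analogue, not GCC code: mode_kind,
simple_mode, mode_mask and is_scalar_int are invented stand-ins for
machine_mode, scalar_int_mode, GET_MODE_MASK and
is_a <scalar_int_mode>, and the "subroutine" is reduced to a trivial
mask so the dispatch logic stands out.

    #include <cassert>
    #include <cstdint>

    /* Simplified stand-ins for machine_mode / scalar_int_mode.  */
    enum class mode_kind { void_mode, scalar_int, scalar_float, vector };

    struct simple_mode
    {
      mode_kind kind;
      unsigned int precision;	/* number of value bits */
    };

    /* Analogue of GET_MODE_MASK: all-ones in the mode's precision.  */
    static uint64_t
    mode_mask (simple_mode mode)
    {
      return (mode.precision >= 64
	      ? ~uint64_t (0)
	      : (uint64_t (1) << mode.precision) - 1);
    }

    /* Analogue of is_a <scalar_int_mode> (m, &out): succeed, and
       "narrow" the mode, only for scalar integers.  */
    static bool
    is_scalar_int (simple_mode mode, simple_mode *out)
    {
      if (mode.kind != mode_kind::scalar_int)
	return false;
      *out = mode;
      return true;
    }

    /* The subroutine now only ever sees scalar integer modes, so it
       needs no float/vector early exit of its own.  */
    static uint64_t
    nonzero_bits_1 (uint64_t value, simple_mode int_mode)
    {
      return value & mode_mask (int_mode);	/* grossly simplified */
    }

    /* Analogue of the patched nonzero_bits: resolve VOIDmode and
       filter out non-integer modes before calling the subroutine.  */
    static uint64_t
    nonzero_bits (uint64_t value, simple_mode value_mode, simple_mode mode)
    {
      if (mode.kind == mode_kind::void_mode)
	mode = value_mode;		/* the mode == VOIDmode case */
      simple_mode int_mode;
      if (!is_scalar_int (mode, &int_mode))
	return mode_mask (mode);	/* assume all bits are needed */
      return nonzero_bits_1 (value, int_mode);
    }

    int
    main ()
    {
      simple_mode si = { mode_kind::scalar_int, 32 };
      simple_mode sf = { mode_kind::scalar_float, 32 };
      assert (nonzero_bits (0xff, si, si) == 0xff);
      /* Non-integer modes short-circuit to the full mode mask.  */
      assert (nonzero_bits (0xff, sf, sf) == 0xffffffffu);
      return 0;
    }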
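
The practical effect of the restructuring, as the commit message
describes, is a stronger static guarantee: once the wrapper has
resolved VOIDmode and filtered out non-integer modes, cached_nonzero_bits
and nonzero_bits1 can rely on the scalar_int_mode parameter type rather
than repeating FLOAT_MODE_P/VECTOR_MODE_P checks, while callers that
pass a float or vector mode still get the conservative all-bits-needed
answer.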