From patchwork Fri May 1 04:41:20 2015
X-Patchwork-Submitter: Kugan Vivekanandarajah
X-Patchwork-Id: 47884
Message-ID: <554303F0.1000103@linaro.org>
Date: Fri, 01 May 2015 14:41:20 +1000
From: Kugan
To: Richard Biener
Cc: Uros Bizjak, "gcc-patches@gcc.gnu.org", Jakub Jelinek
Subject: Re: [RFC] Elimination of zext/sext - type promotion pass
References: <53FEDF34.4090605@linaro.org> <5407DF57.2040902@linaro.org>
 <540912E1.30505@linaro.org> <545FF8EE.1090900@linaro.org>

>> Thanks for the comments. Here is a prototype patch that implements a
>> type promotion pass. This pass records SSA variables that will have
>> values in higher bits (than the original type precision) if promoted,
>> and uses this information to insert appropriate truncations and
>> extensions. This pass also classifies some of the stmts that set SSA
>> names as unsafe to promote. Here is the gimple difference for the type
>> promotion, as compared to the previous dump, for a testcase.
>
> Note that while GIMPLE has a way to zero-extend (using BIT_AND_EXPR)
> it has no convenient way to sign-extend other than truncating to a signed
> (non-promoted) type and then extending to the promoted type.  Thus
> I think such a pass should be accompanied by a new tree code,
> SEXT_EXPR.  Otherwise we end up with "spurious" un-promoted
> signed types which later optimizations may be confused about.
>
> Not sure if that is the actual issue though.
>
> Instead of "prmt" and "prmtn" I'd spell out promote, and tree-type-prmtn
> should be gimple-ssa-type-promote.c.  In the end all targets with
> non-trivial PROMOTE_MODE should run the pass as a lowering step,
> so it should be enabled even at -O0 (and not disablable).
>
> I'd definitely run the pass _after_ pass_lower_vector_ssa (and in the
> end I'd like to run it before IVOPTs ... which means moving IVOPTs
> later, after VRP, which should be the pass optimizing away some of
> the extensions).
>
> In get_promoted_type I don't understand why you preserve qualifiers.
> Also even for targets without PROMOTE_MODE it may be
> beneficial to expose truncations required by expanding bit-precision
> arithmetic earlier (that is, if !PROMOTE_MODE at least promote
> to GET_MODE_PRECISION (TYPE_MODE (type))).  A testcase
> for that is for example
>
>   struct { long i : 33; long j : 33; } a;
>   return a.i + a.j;
>
> where bitfields of type > int do not promote, so you get a
> 33-bit add which we expand to a 64-bit add plus a sign-extension
> (and nothing optimizes that later, usually).
>
> insert_next_bb sounds like you want to use insert_on_edge
> somewhere.
>
> In assign_rhs_promotable_p you handle comparisons specially,
> but the ternary COND_EXPR and VEC_COND_EXPR can have
> comparisons embedded in their first operand.  The comment
> confuses me though - with proper sign- or zero-extensions inserted
> you should be able to promote them anyway?
>
> You seem to miss that a GIMPLE_ASSIGN can have 3 operands
> in promote_cst_in_stmt as well.
>
> In promote_assign_stmt_use I consider a default: case that ends
> up doing nothing dangerous ;)  Please either use gcc_unreachable ()
> or do the safe thing (fix = true;?).  You seem to be working with
> a lattice of some kind - fixing up stmt uses the way you do - walking
> over immediate uses - is not very cache friendly.  Why not use
> a lattice for this - record promoted vars to be used for old SSA names
> and walk over all stmts instead, replacing SSA uses on them?
> Btw, you don't need to call update_stmt if you SET_USE and don't
> replace an SSA name with a constant.
>
> You seem to "fix" with a single stmt, but I don't see where you insert
> zero- or sign-extensions for the ssa_overflows_p cases?
>
> Note that at least for SSA names with !SSA_NAME_VAR (thus
> anonymous vars) you want to do a cheaper promotion by not
> allocating a new SSA name but simply "fixing" its type by
> assigning to its TREE_TYPE.  For SSA names with SSA_NAME_VAR
> there is of course debug-info to consider, and thus doing what you
> do is better (but probably still will wreck debuginfo?).
>
> GIMPLE_NOPs are not only used for parameters but also for uninitialized
> uses - for non-parameters you should simply adjust their type.  No
> need to fix up their value.
>
> The pass needs more comments.
>
> It looks like you are not promoting all variables, but only those
> where compensation code (zero-/sign-extensions) is not necessary?

Thanks for the comments.  Please find an updated version which addresses
your review comments above.  I have yet to do full benchmarking on this,
but have tried a few small benchmarks.  I will do proper benchmarking
after getting feedback on the implementation.  I have, however,
bootstrapped on x86-64-none-linux and regression tested on x86-64, ARM
and AArch64.

I am also not clear on how I should handle the gimple debug statements
when the intermediate temporary variable that maps to the original
variable is promoted.
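For reference, here is a small, self-contained C sketch of the two
fix-up operations the pass emits via zero_sign_extend_stmt (illustrative
only - the helper names are mine and not part of the patch; both assume
0 < width < 64): zero-extension as GIMPLE's existing BIT_AND_EXPR mask
idiom, and the sign-extension that the new SEXT_EXPR tree code expresses
in a single statement:

  #include <stdint.h>

  /* Zero-extend the low WIDTH bits of VAL: what GIMPLE expresses today
     as a BIT_AND_EXPR with the mask (1 << WIDTH) - 1.  */
  static uint64_t
  zext (uint64_t val, int width)
  {
    return val & ((UINT64_C (1) << width) - 1);
  }

  /* Sign-extend the low WIDTH bits of VAL, keeping the result in the
     wider (promoted) type: the semantics of SEXT_EXPR <val, width>.  */
  static int64_t
  sext (int64_t val, int width)
  {
    int64_t sign_bit = INT64_C (1) << (width - 1);
    int64_t low_bits = val & ((INT64_C (1) << width) - 1);
    return (low_bits ^ sign_bit) - sign_bit;
  }

For example, zext (0xFF, 8) yields 255 while sext (0xFF, 8) yields -1,
which is exactly the pair of compensations needed when a promoted value
may carry stray high bits.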
Thanks,
Kugan

gcc/ChangeLog:

2015-05-01  Kugan Vivekanandarajah

	* Makefile.in: Add gimple-ssa-type-promote.o.
	* cfgexpand.c (expand_debug_expr): Handle SEXT_EXPR.
	* common.opt: New option -ftree-type-promote.
	* expr.c (expand_expr_real_2): Handle SEXT_EXPR.
	* fold-const.c (int_const_binop_1): Likewise.
	* gimple-ssa-type-promote.c: New file.
	* passes.def: Define new pass_type_promote.
	* timevar.def: Define new TV_TREE_TYPE_PROMOTE.
	* tree-cfg.c (verify_gimple_assign_binary): Handle SEXT_EXPR.
	* tree-inline.c (estimate_operator_cost): Likewise.
	* tree-pass.h (make_pass_type_promote): New.
	* tree-pretty-print.c (dump_generic_node): Handle SEXT_EXPR.
	(op_symbol_code): Likewise.
	* tree-vrp.c (extract_range_from_binary_expr_1): Likewise.
	* tree.def: Define new SEXT_EXPR.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 80c91f0..0318631 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1478,6 +1478,7 @@ OBJS = \
 	tree-vect-slp.o \
 	tree-vectorizer.o \
 	tree-vrp.o \
+	gimple-ssa-type-promote.o \
 	tree.o \
 	valtrack.o \
 	value-prof.o \
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index ca491a0..99a1d4c 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -4881,6 +4881,10 @@ expand_debug_expr (tree exp)
     case FMA_EXPR:
       return simplify_gen_ternary (FMA, mode, inner_mode, op0, op1, op2);
 
+    case SEXT_EXPR:
+      return op0;
+
+
     default:
     flag_unsupported:
 #ifdef ENABLE_CHECKING
diff --git a/gcc/common.opt b/gcc/common.opt
index 380848c..1dc9b1b 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -2356,6 +2356,10 @@ ftree-vrp
 Common Report Var(flag_tree_vrp) Init(0) Optimization
 Perform Value Range Propagation on trees
 
+ftree-type-promote
+Common Report Var(flag_tree_type_promote) Init(1) Optimization
+Perform Type Promotion on trees
+
 funit-at-a-time
 Common Report Var(flag_unit_at_a_time) Init(1)
 Compile whole compilation unit at a time
diff --git a/gcc/expr.c b/gcc/expr.c
index 530a944..f672a99 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -9336,6 +9336,21 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
       target = expand_vec_cond_expr (type, treeop0, treeop1, treeop2, target);
       return target;
 
+    case SEXT_EXPR:
+      {
+	rtx op0 = expand_normal (treeop0);
+	rtx temp;
+	if (!target)
+	  target = gen_reg_rtx (TYPE_MODE (TREE_TYPE (treeop0)));
+
+	machine_mode inner_mode = smallest_mode_for_size (tree_to_shwi (treeop1),
+							  MODE_INT);
+	temp = convert_modes (inner_mode,
+			      TYPE_MODE (TREE_TYPE (treeop0)), op0, 0);
+	convert_move (target, temp, 0);
+	return target;
+      }
+
     default:
       gcc_unreachable ();
     }
diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index 3654fd6..f5f00af 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -1007,6 +1007,10 @@ int_const_binop_1 (enum tree_code code, const_tree arg1, const_tree parg2,
       res = wi::bit_and (arg1, arg2);
       break;
 
+    case SEXT_EXPR:
+      res = wi::sext (arg1, arg2.to_uhwi ());
+      break;
+
     case RSHIFT_EXPR:
     case LSHIFT_EXPR:
       if (wi::neg_p (arg2))
diff --git a/gcc/gimple-ssa-type-promote.c b/gcc/gimple-ssa-type-promote.c
index e69de29..a226e50c 100644
--- a/gcc/gimple-ssa-type-promote.c
+++ b/gcc/gimple-ssa-type-promote.c
@@ -0,0 +1,1311 @@
+/* Type promotion of SSA names to minimise redundant zero/sign extension.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "flags.h"
+#include "hash-set.h"
+#include "machmode.h"
+#include "vec.h"
+#include "double-int.h"
+#include "input.h"
+#include "alias.h"
+#include "symtab.h"
+#include "wide-int.h"
+#include "inchash.h"
+#include "tree.h"
+#include "fold-const.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "predict.h"
+#include "hard-reg-set.h"
+#include "function.h"
+#include "dominance.h"
+#include "cfg.h"
+#include "basic-block.h"
+#include "tree-ssa-alias.h"
+#include "gimple-fold.h"
+#include "tree-eh.h"
+#include "gimple-expr.h"
+#include "is-a.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimple-ssa.h"
+#include "tree-phinodes.h"
+#include "ssa-iterators.h"
+#include "stringpool.h"
+#include "tree-ssanames.h"
+#include "tree-pass.h"
+#include "gimple-pretty-print.h"
+#include "langhooks.h"
+#include "sbitmap.h"
+#include "domwalk.h"
+
+/* This pass applies type promotion to SSA names in the function and
+   inserts appropriate truncations.  The idea of this pass is to promote
+   operations in such a way that we can minimise the generation of subregs
+   in RTL, which in turn results in the removal of redundant zero/sign
+   extensions.  This pass will run prior to VRP and DOM so that they
+   will be able to optimise redundant truncations and extensions.  This
+   is based on the discussion from
+   https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00472.html.
+
+   This pass executes as follows:
+
+   1. This pass records gimple statements that may produce results that
+   can overflow (beyond the original type) and operations that always
+   have to be performed in the original type.  This is done in
+   process_all_stmts_for_unsafe_promotion.  Here, gimple stmts which set
+   SSA_NAMEs are processed in a work_list to set
+   ssa_sets_higher_bits_bitmap (via set_ssa_overflows) and
+   ssa_not_safe_bitmap.
+
+   2. promote_all_stmts traverses the basic blocks in dominator order and
+   promotes all the SSA_NAMEs that were selected as safe in step 1 above.
+   It uses promote_def_and_uses to do the register promotion stmt by
+   stmt.  The definition of an SSA_NAME is promoted first and then all
+   its uses are promoted according to the gimple stmt type.  If the
+   SSA_NAME can overflow when promoted, the necessary fix-ups are also
+   performed to preserve the semantics of the program.  */
+
+static unsigned n_ssa_val;
+static sbitmap ssa_not_safe_bitmap;
+static sbitmap ssa_to_be_promoted_bitmap;
+static sbitmap ssa_sets_higher_bits_bitmap;
+
+/* Return the promoted type for TYPE.  */
+static tree
+get_promoted_type (tree type)
+{
+  tree promoted_type;
+  enum machine_mode mode;
+  int uns;
+
+  if (POINTER_TYPE_P (type)
+      || TYPE_PRECISION (type) == 1
+      || !INTEGRAL_TYPE_P (type))
+    return type;
+#ifdef PROMOTE_MODE
+  mode = TYPE_MODE (type);
+  uns = TYPE_SIGN (type);
+  PROMOTE_MODE (mode, uns, type);
+#else
+  mode = smallest_mode_for_size (GET_MODE_PRECISION (TYPE_MODE (type)),
+				 MODE_INT);
+#endif
+  uns = TYPE_SIGN (type);
+  promoted_type = lang_hooks.types.type_for_mode (mode, uns);
+  if (promoted_type
+      && (TYPE_PRECISION (promoted_type) > TYPE_PRECISION (type)))
+    type = promoted_type;
+  return type;
+}
+
+/* Predicate that tells if promoting computation with ssa NAME is safe.  */
+static bool
+promotion_safe_p (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      gimple stmt = SSA_NAME_DEF_STMT (name);
+      unsigned int index = SSA_NAME_VERSION (name);
+
+      if (gimple_vdef (stmt) != NULL_TREE
+	  || gimple_vuse (stmt) != NULL_TREE)
+	return false;
+      if (index < n_ssa_val)
+	return !bitmap_bit_p (ssa_not_safe_bitmap, index);
+    }
+  return false;
+}
+
+/* Return true if ssa NAME has already been considered for promotion.  */
+static bool
+ssa_promoted_p (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      unsigned int index = SSA_NAME_VERSION (name);
+      if (index < n_ssa_val)
+	return !bitmap_bit_p (ssa_to_be_promoted_bitmap, index);
+    }
+  return true;
+}
+
+/* Return true if ssa NAME will be considered for promotion.  */
+static bool
+ssa_tobe_promoted_p (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      unsigned int index = SSA_NAME_VERSION (name);
+      if (index < n_ssa_val)
+	return bitmap_bit_p (ssa_to_be_promoted_bitmap, index);
+    }
+  return false;
+}
+
+/* Mark ssa NAME as already considered for promotion.  */
+static void
+set_ssa_promoted (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      unsigned int index = SSA_NAME_VERSION (name);
+      if (index < n_ssa_val)
+	bitmap_clear_bit (ssa_to_be_promoted_bitmap, index);
+    }
+}
+
+/* Record that ssa NAME will have higher bits if promoted.  */
+static void
+set_ssa_overflows (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      unsigned int index = SSA_NAME_VERSION (name);
+      if (index < n_ssa_val)
+	bitmap_set_bit (ssa_sets_higher_bits_bitmap, index);
+    }
+}
+
+/* Return true if ssa NAME will have higher bits if promoted.  */
+static bool
+ssa_overflows_p (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      unsigned int index = SSA_NAME_VERSION (name);
+      if (index < n_ssa_val)
+	return bitmap_bit_p (ssa_sets_higher_bits_bitmap, index);
+    }
+  return false;
+}
+
+/* Create a new ssa name with TYPE as a copy of ssa VAR.  */
+static tree
+make_promoted_copy (tree var, gimple def_stmt, tree type)
+{
+  tree new_lhs = make_ssa_name (type, def_stmt);
+  if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (var))
+    SSA_NAME_OCCURS_IN_ABNORMAL_PHI (new_lhs) = 1;
+  return new_lhs;
+}
+
+/* Return the single successor (excluding EH edges) of basic block BB.
+   If there is more than one successor, return NULL.  */
+static basic_block
+get_single_successor_bb (basic_block bb)
+{
+  edge e, res = NULL;
+  edge_iterator ei;
+
+  FOR_EACH_EDGE (e, ei, bb->succs)
+    if (!(e->flags & EDGE_EH))
+      {
+	if (res)
+	  return NULL;
+	res = e;
+      }
+  return res ? res->dest : NULL;
+}
+
+/* Insert COPY_STMT along the edge from STMT to its successor.  */
+static void
+insert_stmt_on_edge (gimple stmt, gimple copy_stmt)
+{
+  edge_iterator ei;
+  edge e, found = NULL;
+  basic_block bb = gimple_bb (stmt);
+
+  FOR_EACH_EDGE (e, ei, bb->succs)
+    if (!(e->flags & EDGE_EH))
+      {
+	gcc_assert (found == NULL);
+	found = e;
+      }
+
+  gcc_assert (found);
+  gsi_insert_on_edge_immediate (found, copy_stmt);
+}
+
+/* Convert constant CST to TYPE.  */
+static tree
+convert_int_cst (tree type, tree cst, signop sign = SIGNED)
+{
+  wide_int wi_cons = fold_convert (type, cst);
+  wi_cons = wi::ext (wi_cons, TYPE_PRECISION (TREE_TYPE (cst)), sign);
+  return wide_int_to_tree (type, wi_cons);
+}
+
+/* Promote constants in STMT to TYPE.  If PROMOTE_COND_EXPR is true,
+   promote only the constants in the condition part of the COND_EXPR.  */
+static void
+promote_cst_in_stmt (gimple stmt, tree type,
+		     bool promote_cond_expr = false, signop sign = SIGNED)
+{
+  tree op;
+  ssa_op_iter iter;
+  use_operand_p oprnd;
+  int index;
+  tree op0, op1;
+
+  if (promote_cond_expr)
+    {
+      /* Promote constants in the condition of the COND_EXPR.  */
+      gcc_assert (gimple_assign_rhs_code (stmt) == COND_EXPR);
+      op = gimple_assign_rhs1 (stmt);
+      op0 = TREE_OPERAND (op, 0);
+      op1 = TREE_OPERAND (op, 1);
+
+      if (TREE_CODE (op0) == INTEGER_CST)
+	op0 = convert_int_cst (type, op0, sign);
+      if (TREE_CODE (op1) == INTEGER_CST)
+	op1 = convert_int_cst (type, op1, sign);
+
+      tree new_op = build2 (TREE_CODE (op), type, op0, op1);
+      gimple_assign_set_rhs1 (stmt, new_op);
+      return;
+    }
+
+  switch (gimple_code (stmt))
+    {
+    case GIMPLE_ASSIGN:
+      op = gimple_assign_rhs1 (stmt);
+      if (op && TREE_CODE (op) == INTEGER_CST)
+	gimple_assign_set_rhs1 (stmt, convert_int_cst (type, op, sign));
+
+      op = gimple_assign_rhs2 (stmt);
+      if (op && TREE_CODE (op) == INTEGER_CST)
+	gimple_assign_set_rhs2 (stmt, convert_int_cst (type, op, sign));
+
+      op = gimple_assign_rhs3 (stmt);
+      if (op && TREE_CODE (op) == INTEGER_CST)
+	gimple_assign_set_rhs3 (stmt, convert_int_cst (type, op, sign));
+      break;
+
+    case GIMPLE_PHI:
+      {
+	gphi *phi = as_a <gphi *> (stmt);
+	FOR_EACH_PHI_ARG (oprnd, phi, iter, SSA_OP_USE)
+	  {
+	    op = USE_FROM_PTR (oprnd);
+	    index = PHI_ARG_INDEX_FROM_USE (oprnd);
+	    if (TREE_CODE (op) == INTEGER_CST)
+	      SET_PHI_ARG_DEF (phi, index, convert_int_cst (type, op, sign));
+	  }
+      }
+      break;
+
+    case GIMPLE_COND:
+      {
+	gcond *cond = as_a <gcond *> (stmt);
+	op = gimple_cond_lhs (cond);
+	if (op && TREE_CODE (op) == INTEGER_CST)
+	  gimple_cond_set_lhs (cond, convert_int_cst (type, op, sign));
+
+	op = gimple_cond_rhs (cond);
+	if (op && TREE_CODE (op) == INTEGER_CST)
+	  gimple_cond_set_rhs (cond, convert_int_cst (type, op, sign));
+      }
+      break;
+
+    default:
+      break;
+    }
+}
+
+/* Zero/sign extend (depending on type) VAR and truncate to WIDTH bits.
+   Assign the zero/sign extended value to NEW_VAR.  The gimple statement
+   that performs the zero/sign extension is returned.  */
+static gimple
+zero_sign_extend_stmt (tree new_var, tree var, int width)
+{
+  gcc_assert (TYPE_PRECISION (TREE_TYPE (var))
+	      == TYPE_PRECISION (TREE_TYPE (new_var)));
+  gcc_assert (TYPE_PRECISION (TREE_TYPE (var)) > width);
+  gimple stmt;
+
+  if (TYPE_UNSIGNED (TREE_TYPE (new_var)))
+    /* Zero extend.  */
+    stmt = gimple_build_assign (new_var,
+				BIT_AND_EXPR,
+				var, build_int_cst (TREE_TYPE (var),
+						    ((1ULL << width) - 1)));
+  else
+    /* Sign extend.  */
+    stmt = gimple_build_assign (new_var,
+				SEXT_EXPR,
+				var, build_int_cst (TREE_TYPE (var), width));
+  return stmt;
+}
+
+/* Promote a use in an assignment.  Depending on the gimple_assign_rhs_code,
+   values in NEW_USE might have to be truncated to the type of USE.  */
+static void
+promote_assign_stmt_use (gimple stmt,
+			 tree use,
+			 imm_use_iterator *ui,
+			 tree new_use,
+			 tree copy_of_use,
+			 tree promoted_type)
+{
+  tree lhs = gimple_assign_lhs (stmt);
+  tree rhs1 = gimple_assign_rhs1 (stmt);
+  tree rhs2 = gimple_assign_rhs2 (stmt);
+  tree rhs3 = gimple_assign_rhs3 (stmt);
+  gimple_stmt_iterator gsi;
+  use_operand_p op;
+  enum tree_code code = gimple_assign_rhs_code (stmt);
+  /* If promoted and a fix-up is to be performed, FIX is true.  */
+  bool fix = false;
+
+  switch (code)
+    {
+    CASE_CONVERT:
+      if (ssa_tobe_promoted_p (lhs)
+	  && promotion_safe_p (lhs)
+	  && TREE_TYPE (new_use) == promoted_type)
+	{
+	  if (TYPE_PRECISION (TREE_TYPE (lhs))
+	      > TYPE_PRECISION (TREE_TYPE (rhs1)))
+	    {
+	      tree temp = make_promoted_copy (lhs, NULL, promoted_type);
+	      gimple copy_stmt
+		= zero_sign_extend_stmt (temp, new_use,
+					 TYPE_PRECISION (TREE_TYPE (use)));
+	      gsi = gsi_for_stmt (stmt);
+	      gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+		SET_USE (op, temp);
+	      update_stmt (stmt);
+	    }
+	  else
+	    {
+	      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+		SET_USE (op, new_use);
+	      update_stmt (stmt);
+	    }
+	}
+      else
+	{
+	  if (TYPE_PRECISION (TREE_TYPE (lhs))
+	      < TYPE_PRECISION (TREE_TYPE (rhs1)))
+	    {
+	      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+		SET_USE (op, new_use);
+	      update_stmt (stmt);
+	    }
+	  else if (!copy_of_use)
+	    {
+	      tree temp = make_promoted_copy (use, NULL, TREE_TYPE (use));
+	      gimple copy_stmt = gimple_build_assign (temp, CONVERT_EXPR,
+						      new_use, NULL_TREE);
+	      gsi = gsi_for_stmt (stmt);
+	      gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+		SET_USE (op, temp);
+	      update_stmt (stmt);
+	    }
+	  else
+	    {
+	      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+		SET_USE (op, copy_of_use);
+	      update_stmt (stmt);
+	    }
+	}
+      return;
+
+    case COND_EXPR:
+      /* Promote COND_EXPR comparison operands.  */
+      if (use != rhs2
+	  && use != rhs3)
+	{
+	  tree temp;
+	  tree op0 = TREE_OPERAND (rhs1, 0);
+	  tree op1 = TREE_OPERAND (rhs1, 1);
+	  bool is_cst = false;
+
+	  if (TREE_CODE (op0) == INTEGER_CST
+	      || TREE_CODE (op1) == INTEGER_CST)
+	    is_cst = true;
+
+	  /* If this SSA is not promoted.  */
+	  if (use == new_use)
+	    {
+	      if (is_cst)
+		temp = new_use;
+	      else
+		{
+		  temp = make_promoted_copy (use, NULL, promoted_type);
+		  gimple copy_stmt = gimple_build_assign (temp, CONVERT_EXPR,
+							  new_use, NULL_TREE);
+		  gsi = gsi_for_stmt (stmt);
+		  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+		  promote_cst_in_stmt (stmt, promoted_type, true,
+				       TYPE_SIGN (TREE_TYPE (use)));
+		}
+	    }
+	  /* If this SSA is promoted.  */
+	  else
+	    {
+	      temp = make_promoted_copy (use, NULL, promoted_type);
+	      gimple copy_stmt
+		= zero_sign_extend_stmt (temp, new_use,
+					 TYPE_PRECISION (TREE_TYPE (use)));
+	      gsi = gsi_for_stmt (stmt);
+	      gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	      promote_cst_in_stmt (stmt, promoted_type, true,
+				   TYPE_SIGN (TREE_TYPE (use)));
+	    }
+
+	  if (op0 == use)
+	    op0 = temp;
+	  else
+	    op1 = temp;
+
+	  tree new_op = build2 (TREE_CODE (rhs1), promoted_type, op0, op1);
+	  gimple_assign_set_rhs1 (stmt, new_op);
+	  update_stmt (stmt);
+	  return;
+	}
+      else
+	{
+	  promote_cst_in_stmt (stmt, promoted_type);
+	}
+      break;
+
+    case RSHIFT_EXPR:
+    case LSHIFT_EXPR:
+    case WIDEN_LSHIFT_EXPR:
+    case TRUNC_MOD_EXPR:
+    case CEIL_MOD_EXPR:
+    case FLOOR_MOD_EXPR:
+    case ROUND_MOD_EXPR:
+    case TRUNC_DIV_EXPR:
+    case CEIL_DIV_EXPR:
+    case FLOOR_DIV_EXPR:
+    case RDIV_EXPR:
+    case ROUND_DIV_EXPR:
+    case EXACT_DIV_EXPR:
+    case MIN_EXPR:
+    case MAX_EXPR:
+    case RANGE_EXPR:
+      if (ssa_overflows_p (use))
+	fix = true;
+      break;
+
+    default:
+      break;
+    }
+
+  if (fix && promotion_safe_p (lhs)
+      && TREE_TYPE (new_use) == promoted_type)
+    {
+      /* Promoted, with values truncated.  */
+      tree temp = make_promoted_copy (use, NULL, promoted_type);
+      gimple copy_stmt
+	= zero_sign_extend_stmt (temp, new_use,
+				 TYPE_PRECISION (TREE_TYPE (use)));
+      gsi = gsi_for_stmt (stmt);
+      gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+	SET_USE (op, temp);
+      update_stmt (stmt);
+      return;
+    }
+  else if (!(TREE_CODE_CLASS (code) == tcc_comparison
+	     || TREE_CODE_CLASS (code) == tcc_reference
+	     || code == VIEW_CONVERT_EXPR
+	     || code == COMPLEX_EXPR
+	     || code == ASM_EXPR
+	     || code == OBJ_TYPE_REF
+	     || gimple_vdef (stmt)
+	     || VECTOR_TYPE_P (TREE_TYPE (lhs)))
+	   && (promotion_safe_p (lhs)
+	       || (TREE_CODE_CLASS (code) == tcc_comparison)))
+    {
+      /* Statement promoted.  */
+      if ((TYPE_PRECISION (TREE_TYPE (use))
+	   < TYPE_PRECISION (promoted_type))
+	  && (code != COND_EXPR))
+	promote_cst_in_stmt (stmt, promoted_type);
+
+      if (promoted_type == TREE_TYPE (new_use))
+	{
+	  /* Operand also promoted.  */
+	  FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+	    SET_USE (op, new_use);
+	  update_stmt (stmt);
+	}
+      else
+	{
+	  /* Operand not promoted.  */
+	  tree temp = make_promoted_copy (use, NULL, promoted_type);
+	  gimple copy_stmt = gimple_build_assign (temp, CONVERT_EXPR,
+						  new_use, NULL_TREE);
+	  gsi = gsi_for_stmt (stmt);
+	  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	  FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+	    SET_USE (op, temp);
+	  update_stmt (stmt);
+	}
+    }
+  else
+    {
+      /* Statement not promoted.  */
+      if (copy_of_use)
+	{
+	  /* Operand also not promoted.  */
+	  FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+	    SET_USE (op, copy_of_use);
+	  update_stmt (stmt);
+	}
+      else
+	{
+	  /* Operand promoted.  */
+	  tree temp = make_promoted_copy (use, NULL, TREE_TYPE (use));
+	  gimple copy_stmt = gimple_build_assign (temp, CONVERT_EXPR,
+						  new_use, NULL_TREE);
+
+	  gsi = gsi_for_stmt (stmt);
+	  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+
+	  FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+	    SET_USE (op, temp);
+	  update_stmt (stmt);
+	}
+    }
+}
+
+/* Promote ssa USE in phi STMT to PROMOTED_TYPE.  */
+static void
+promote_phi_stmt_use (gimple stmt,
+		      tree use,
+		      imm_use_iterator *ui,
+		      tree new_use,
+		      tree copy_of_use,
+		      tree promoted_type)
+{
+  tree lhs = PHI_RESULT (stmt);
+  tree type;
+  tree temp;
+  gimple_stmt_iterator gsi;
+  use_operand_p op;
+
+  if (ssa_tobe_promoted_p (lhs)
+      && promotion_safe_p (lhs))
+    type = promoted_type;
+  else
+    type = TREE_TYPE (lhs);
+
+  /* Check if we need a convert stmt to get the required type.  */
+  if (type == TREE_TYPE (new_use))
+    temp = new_use;
+  else if (copy_of_use && (type == TREE_TYPE (copy_of_use)))
+    temp = copy_of_use;
+  else
+    {
+      temp = make_promoted_copy (use, NULL, type);
+      gimple copy_stmt
+	= gimple_build_assign (temp, CONVERT_EXPR,
+			       new_use, NULL_TREE);
+
+      if (gimple_code (SSA_NAME_DEF_STMT (new_use)) == GIMPLE_NOP)
+	{
+	  basic_block bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+	  bb = get_single_successor_bb (bb);
+	  gcc_assert (bb);
+	  gsi = gsi_after_labels (bb);
+	  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	}
+      else if (gimple_code (SSA_NAME_DEF_STMT (new_use))
+	       != GIMPLE_PHI)
+	{
+	  gsi = gsi_for_stmt (SSA_NAME_DEF_STMT (new_use));
+	  if (lookup_stmt_eh_lp (SSA_NAME_DEF_STMT (new_use)) > 0)
+	    insert_stmt_on_edge (SSA_NAME_DEF_STMT (new_use), copy_stmt);
+	  else
+	    gsi_insert_after (&gsi, copy_stmt, GSI_NEW_STMT);
+	}
+      else
+	{
+	  gsi = gsi_after_labels
+	    (gimple_bb (SSA_NAME_DEF_STMT (new_use)));
+	  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	}
+    }
+
+  FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+    SET_USE (op, temp);
+  update_stmt (stmt);
+}
+
+/* Promote ssa USE in cond STMT to PROMOTED_TYPE.  */
+static void
+promote_cond_stmt_use (gimple stmt,
+		       tree use,
+		       imm_use_iterator *ui,
+		       tree new_use,
+		       tree promoted_type)
+{
+  gimple_stmt_iterator gsi;
+  use_operand_p op;
+  bool is_cst = false;
+  tree lhs = gimple_cond_lhs (stmt);
+  tree rhs = gimple_cond_rhs (stmt);
+
+  if (TREE_CODE (lhs) == INTEGER_CST
+      || TREE_CODE (rhs) == INTEGER_CST)
+    is_cst = true;
+
+  if (TREE_TYPE (new_use) == promoted_type)
+    {
+      tree temp = make_promoted_copy (use, NULL, promoted_type);
+      gimple copy_stmt
+	= zero_sign_extend_stmt (temp, new_use,
+				 TYPE_PRECISION (TREE_TYPE (use)));
+
+      gsi = gsi_for_stmt (stmt);
+      gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+	SET_USE (op, temp);
+      update_stmt (stmt);
+      promote_cst_in_stmt (stmt, promoted_type, false,
+			   TYPE_SIGN (TREE_TYPE (use)));
+    }
+  else
+    {
+      /* Comparison will happen in the promoted type.  */
+      tree temp;
+      if (TREE_TYPE (new_use) == promoted_type)
+	{
+	  temp = new_use;
+	  promote_cst_in_stmt (stmt, promoted_type, false,
+			       TYPE_SIGN (TREE_TYPE (use)));
+	}
+      else if (is_cst)
+	{
+	  temp = new_use;
+	}
+      else
+	{
+	  temp = make_promoted_copy (use, NULL, promoted_type);
+	  gimple copy_stmt = gimple_build_assign (temp, CONVERT_EXPR,
+						  new_use, NULL_TREE);
+	  gsi = gsi_for_stmt (stmt);
+	  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	}
+
+      FOR_EACH_IMM_USE_ON_STMT (op, *ui)
+	SET_USE (op, temp);
+      update_stmt (stmt);
+    }
+}
+
+/* Promote definition DEF to PROMOTED_TYPE.  If the DEF is replaced and has
+   to be released, set RELEASE_DEF.  Also return COPY_OF_DEF with the
+   original type for any use statement that needs truncation.  */
+static tree
+promote_definition (tree def,
+		    tree promoted_type,
+		    tree *copy_of_def,
+		    bool *release_def)
+{
+  gimple def_stmt = SSA_NAME_DEF_STMT (def);
+  gimple copy_stmt = NULL;
+  tree new_def;
+  basic_block bb;
+  gimple_stmt_iterator gsi;
+  gphi *phi;
+
+  gcc_assert (release_def);
+  *release_def = false;
+  if (SSA_NAME_VAR (def) == NULL
+      && gimple_code (def_stmt) == GIMPLE_NOP)
+    {
+      TREE_TYPE (def) = promoted_type;
+      promote_cst_in_stmt (def_stmt, promoted_type);
+      new_def = def;
+      *copy_of_def = NULL;
+      return new_def;
+    }
+
+  switch (gimple_code (def_stmt))
+    {
+    case GIMPLE_PHI:
+      phi = as_a <gphi *> (def_stmt);
+      new_def = make_promoted_copy (def, phi, promoted_type);
+      *copy_of_def = NULL;
+      gimple_phi_set_result (phi, new_def);
+      *release_def = true;
+      update_stmt (def_stmt);
+      promote_cst_in_stmt (def_stmt, promoted_type);
+      break;
+
+    case GIMPLE_NOP:
+      /* Create a promoted-type copy of parameters.  */
+      bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+      bb = get_single_successor_bb (bb);
+      gcc_assert (bb);
+      gsi = gsi_after_labels (bb);
+      new_def = make_promoted_copy (def, NULL, promoted_type);
+      copy_stmt = gimple_build_assign (new_def, CONVERT_EXPR,
+				       def, NULL_TREE);
+      gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+      *copy_of_def = def;
+      break;
+
+    case GIMPLE_ASSIGN:
+      {
+	enum tree_code code = gimple_assign_rhs_code (def_stmt);
+	tree rhs1 = gimple_assign_rhs1 (def_stmt);
+	if (CONVERT_EXPR_CODE_P (code)
+	    && TREE_TYPE (rhs1) == promoted_type)
+	  {
+	    new_def = make_promoted_copy (def, NULL, promoted_type);
+	    gimple copy_stmt
+	      = zero_sign_extend_stmt (new_def, rhs1,
+				       TYPE_PRECISION (TREE_TYPE (def)));
+	    gsi = gsi_for_stmt (def_stmt);
+	    gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	    gsi = gsi_for_stmt (def_stmt);
+	    gsi_remove (&gsi, true);
+	  }
+	else
+	  {
+	    new_def = make_promoted_copy (def, def_stmt, promoted_type);
+	    gimple_assign_set_lhs (def_stmt, new_def);
+	    update_stmt (def_stmt);
+	    if (TREE_CODE_CLASS (gimple_assign_rhs_code (def_stmt))
+		!= tcc_comparison)
+	      promote_cst_in_stmt (def_stmt, promoted_type);
+	  }
+	*release_def = true;
+	*copy_of_def = NULL;
+	break;
+      }
+
+    default:
+      new_def = make_promoted_copy (def, NULL, promoted_type);
+      copy_stmt = gimple_build_assign (new_def, CONVERT_EXPR,
+				       def, NULL_TREE);
+      gsi = gsi_for_stmt (def_stmt);
+      if (lookup_stmt_eh_lp (def_stmt) > 0)
+	insert_stmt_on_edge (def_stmt, copy_stmt);
+      else
+	gsi_insert_after (&gsi, copy_stmt, GSI_NEW_STMT);
+      update_stmt (copy_stmt);
+      *copy_of_def = def;
+      break;
+    }
+
+  return new_def;
+}
+
+/* Promote all uses of USE to NEW_USE.  */
+static unsigned int
+promote_all_uses (tree use, tree new_use, tree copy_of_use,
+		  tree promoted_type)
+{
+  gimple stmt;
+  imm_use_iterator ui;
+  gimple_stmt_iterator gsi;
+  use_operand_p op;
+
+  /* Replace all the uses with the promoted variable.  */
+  FOR_EACH_IMM_USE_STMT (stmt, ui, use)
+    {
+      if (stmt == SSA_NAME_DEF_STMT (new_use))
+	continue;
+
+      switch (gimple_code (stmt))
+	{
+	case GIMPLE_ASSIGN:
+	  promote_assign_stmt_use (stmt, use, &ui, new_use,
+				   copy_of_use, promoted_type);
+	  break;
+
+	case GIMPLE_PHI:
+	  promote_phi_stmt_use (stmt, use, &ui, new_use,
+				copy_of_use, promoted_type);
+	  break;
+
+	case GIMPLE_COND:
+	  promote_cond_stmt_use (stmt, use, &ui, new_use,
+				 promoted_type);
+	  break;
+
+	case GIMPLE_DEBUG:
+	  if (TREE_TYPE (use) != TREE_TYPE (new_use)
+	      && gimple_debug_bind_p (stmt))
+	    {
+	      gsi = gsi_for_stmt (stmt);
+	      gsi_remove (&gsi, true);
+	    }
+	  break;
+
+	default:
+	  if (TREE_TYPE (use) != TREE_TYPE (new_use))
+	    {
+	      tree temp;
+	      if (copy_of_use)
+		temp = copy_of_use;
+	      else
+		{
+		  temp = make_promoted_copy (use, NULL, TREE_TYPE (use));
+		  gimple copy_stmt = gimple_build_assign (temp, CONVERT_EXPR,
+							  new_use, NULL_TREE);
+		  gsi = gsi_for_stmt (stmt);
+		  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+		}
+
+	      FOR_EACH_IMM_USE_ON_STMT (op, ui)
+		SET_USE (op, temp);
+	      update_stmt (stmt);
+	    }
+	  break;
+	}
+    }
+
+  return 0;
+}
+
+/* Promote the definition of NAME and all its uses.  */
+static unsigned int
+promote_def_and_uses (tree name)
+{
+  tree type, new_name, copy_of_name;
+  bool release_def = false;
+
+  if (TREE_CODE (name) != SSA_NAME
+      || POINTER_TYPE_P (TREE_TYPE (name))
+      || !INTEGRAL_TYPE_P (TREE_TYPE (name))
+      || VECTOR_TYPE_P (TREE_TYPE (name))
+      || ssa_promoted_p (name)
+      || (type = get_promoted_type (TREE_TYPE (name))) == TREE_TYPE (name))
+    return 0;
+
+  if (promotion_safe_p (name))
+    {
+      new_name = promote_definition (name, type, &copy_of_name,
+				     &release_def);
+      promote_all_uses (name, new_name, copy_of_name, type);
+    }
+  else
+    promote_all_uses (name, name, name, type);
+  set_ssa_promoted (name);
+
+  if (release_def)
+    release_ssa_name (name);
+  return 0;
+}
+
+/* Mark the candidates for promotion.  */
+static void
+set_ssa_to_be_promoted_flag (gimple stmt)
+{
+  ssa_op_iter i;
+  tree def;
+  use_operand_p op;
+
+  switch (gimple_code (stmt))
+    {
+    case GIMPLE_PHI:
+      {
+	gphi *phi = as_a <gphi *> (stmt);
+	def = PHI_RESULT (phi);
+	bitmap_set_bit (ssa_to_be_promoted_bitmap, SSA_NAME_VERSION (def));
+	FOR_EACH_PHI_ARG (op, phi, i, SSA_OP_USE)
+	  {
+	    def = USE_FROM_PTR (op);
+	    if (TREE_CODE (def) == SSA_NAME)
+	      bitmap_set_bit (ssa_to_be_promoted_bitmap,
+			      SSA_NAME_VERSION (def));
+	  }
+	break;
+      }
+
+    default:
+      FOR_EACH_SSA_TREE_OPERAND (def, stmt, i, SSA_OP_USE | SSA_OP_DEF)
+	{
+	  if (TREE_CODE (def) == SSA_NAME)
+	    bitmap_set_bit (ssa_to_be_promoted_bitmap,
+			    SSA_NAME_VERSION (def));
+	}
+      break;
+    }
+}
+
+/* Visit a PHI stmt and record whether variables might have higher bits set
+   if promoted.  */
+static bool
+record_visit_phi_node (gimple stmt)
+{
+  tree def;
+  ssa_op_iter i;
+  use_operand_p op;
+  bool high_bits_set = false;
+  gphi *phi = as_a <gphi *> (stmt);
+  tree lhs = PHI_RESULT (phi);
+
+  if (TREE_CODE (lhs) != SSA_NAME
+      || POINTER_TYPE_P (TREE_TYPE (lhs))
+      || !INTEGRAL_TYPE_P (TREE_TYPE (lhs))
+      || ssa_overflows_p (lhs))
+    return false;
+
+  FOR_EACH_PHI_ARG (op, phi, i, SSA_OP_USE)
+    {
+      def = USE_FROM_PTR (op);
+      if (ssa_overflows_p (def))
+	high_bits_set = true;
+    }
+
+  if (high_bits_set)
+    {
+      set_ssa_overflows (lhs);
+      return true;
+    }
+  else
+    return false;
+}
+
+/* Visit STMT and record whether variables might have higher bits set if
+   promoted.  */
+static bool
+record_visit_stmt (gimple stmt)
+{
+  tree def;
+  ssa_op_iter i;
+  bool changed = false;
+  gcc_assert (gimple_code (stmt) == GIMPLE_ASSIGN);
+  enum tree_code code = gimple_assign_rhs_code (stmt);
+  tree lhs = gimple_assign_lhs (stmt);
+  tree rhs1 = gimple_assign_rhs1 (stmt);
+
+  if (TREE_CODE (lhs) != SSA_NAME
+      || POINTER_TYPE_P (TREE_TYPE (lhs))
+      || !INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
+    return false;
+
+  switch (code)
+    {
+    /* Conversion expressions that may need to be preserved.  */
+    CASE_CONVERT:
+      /* If the precision of the LHS is greater than that of the RHS, it
+	 is not safe to convert this with a ZEXT/SEXT stmt when the
+	 signedness also changes.  */
+      if ((TYPE_PRECISION (TREE_TYPE (lhs))
+	   > TYPE_PRECISION (TREE_TYPE (rhs1)))
+	  && (TYPE_UNSIGNED (TREE_TYPE (lhs))
+	      != TYPE_UNSIGNED (TREE_TYPE (rhs1))))
+	bitmap_set_bit (ssa_not_safe_bitmap, SSA_NAME_VERSION (lhs));
+      else if ((TYPE_PRECISION (TREE_TYPE (lhs))
+		<= TYPE_PRECISION (TREE_TYPE (rhs1)))
+	       && !ssa_overflows_p (lhs))
+	{
+	  set_ssa_overflows (lhs);
+	  changed = true;
+	}
+      break;
+
+    case SSA_NAME:
+      if (!ssa_overflows_p (lhs)
+	  && ssa_overflows_p (rhs1))
+	{
+	  set_ssa_overflows (lhs);
+	  changed = true;
+	}
+      break;
+
+    case NE_EXPR:
+    case LT_EXPR:
+    case LE_EXPR:
+    case GT_EXPR:
+    case GE_EXPR:
+    case EQ_EXPR:
+    case UNLT_EXPR:
+    case UNLE_EXPR:
+    case UNGT_EXPR:
+    case UNGE_EXPR:
+    case UNEQ_EXPR:
+    case LTGT_EXPR:
+    case RSHIFT_EXPR:
+    case LSHIFT_EXPR:
+    case WIDEN_LSHIFT_EXPR:
+    case MIN_EXPR:
+    case MAX_EXPR:
+    case RANGE_EXPR:
+      break;
+
+    case TRUNC_DIV_EXPR:
+    case CEIL_DIV_EXPR:
+    case FLOOR_DIV_EXPR:
+    case RDIV_EXPR:
+    case ROUND_DIV_EXPR:
+    case EXACT_DIV_EXPR:
+      if (!ssa_overflows_p (lhs))
+	{
+	  set_ssa_overflows (lhs);
+	  changed = true;
+	}
+      break;
+
+    /* Expressions which may produce results that will have higher bits
+       if computed in the promoted type (i.e. results may overflow).  */
+    case MULT_HIGHPART_EXPR:
+    case PLUS_EXPR:
+    case MINUS_EXPR:
+    case MULT_EXPR:
+    case BIT_XOR_EXPR:
+    case BIT_NOT_EXPR:
+    case WIDEN_MULT_EXPR:
+    case WIDEN_MULT_PLUS_EXPR:
+    case WIDEN_MULT_MINUS_EXPR:
+    case WIDEN_SUM_EXPR:
+    case BIT_IOR_EXPR:
+    case BIT_AND_EXPR:
+      if (!ssa_overflows_p (lhs))
+	{
+	  set_ssa_overflows (lhs);
+	  changed = true;
+	}
+      break;
+
+    /* Expressions for which the operation has to be performed in the
+       original types if promoted operands may have higher bits.  */
+    case ABS_EXPR:
+    case NEGATE_EXPR:
+    case TRUNC_MOD_EXPR:
+    case CEIL_MOD_EXPR:
+    case FLOOR_MOD_EXPR:
+    case ROUND_MOD_EXPR:
+      FOR_EACH_SSA_TREE_OPERAND (def, stmt, i, SSA_OP_USE)
+	{
+	  if (ssa_overflows_p (def))
+	    bitmap_set_bit (ssa_not_safe_bitmap, SSA_NAME_VERSION (lhs));
+	}
+      break;
+
+    case COND_EXPR:
+      {
+	tree rhs2 = gimple_assign_rhs2 (stmt);
+	tree rhs3 = gimple_assign_rhs3 (stmt);
+
+	if (ssa_overflows_p (rhs2))
+	  {
+	    set_ssa_overflows (lhs);
+	    changed = true;
+	  }
+	else if (ssa_overflows_p (rhs3))
+	  {
+	    set_ssa_overflows (lhs);
+	    changed = true;
+	  }
+      }
+      break;
+
+    /* Expressions that have to be done in the original types.  */
+    case LROTATE_EXPR:
+    case RROTATE_EXPR:
+      bitmap_set_bit (ssa_not_safe_bitmap, SSA_NAME_VERSION (lhs));
+      break;
+
+    /* To be safe, all others have to be done in the original types.  */
+    default:
+      bitmap_set_bit (ssa_not_safe_bitmap, SSA_NAME_VERSION (lhs));
+      break;
+    }
+  return changed;
+}
+
+
+/* Promote all the stmts in the basic block.  */
+static void
+promote_all_stmts (basic_block bb)
+{
+  gimple_stmt_iterator gsi;
+  ssa_op_iter iter;
+  tree def;
+
+  for (gphi_iterator gpi = gsi_start_phis (bb);
+       !gsi_end_p (gpi); gsi_next (&gpi))
+    {
+      gphi *phi = gpi.phi ();
+      use_operand_p op;
+
+      def = PHI_RESULT (phi);
+      promote_def_and_uses (def);
+      FOR_EACH_PHI_ARG (op, phi, iter, SSA_OP_USE)
+	{
+	  def = USE_FROM_PTR (op);
+	  promote_def_and_uses (def);
+	}
+    }
+
+  for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+    {
+      gimple stmt = gsi_stmt (gsi);
+
+      FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_USE | SSA_OP_DEF)
+	promote_def_and_uses (def);
+    }
+}
+
+/* Record stmts that are not safe to promote, and propagate the
+   higher-bits information over the work list.  */
+static void
+process_all_stmts_for_unsafe_promotion ()
+{
+  basic_block bb;
+  gimple_stmt_iterator gsi;
+  auto_vec<gimple> work_list;
+
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+	{
+	  gimple phi = gsi_stmt (gsi);
+
+	  set_ssa_to_be_promoted_flag (phi);
+	  work_list.safe_push (phi);
+	}
+
+      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+	{
+	  gimple stmt = gsi_stmt (gsi);
+
+	  set_ssa_to_be_promoted_flag (stmt);
+	  if (gimple_code (stmt) == GIMPLE_ASSIGN)
+	    work_list.safe_push (stmt);
+	}
+    }
+
+  while (work_list.length () > 0)
+    {
+      bool changed;
+      gimple stmt = work_list.pop ();
+      tree lhs;
+
+      switch (gimple_code (stmt))
+	{
+	case GIMPLE_ASSIGN:
+	  changed = record_visit_stmt (stmt);
+	  lhs = gimple_assign_lhs (stmt);
+	  break;
+
+	case GIMPLE_PHI:
+	  changed = record_visit_phi_node (stmt);
+	  lhs = PHI_RESULT (stmt);
+	  break;
+
+	default:
+	  gcc_unreachable ();
+	}
+
+      if (changed)
+	{
+	  gimple use_stmt;
+	  imm_use_iterator ui;
+
+	  FOR_EACH_IMM_USE_STMT (use_stmt, ui, lhs)
+	    {
+	      if (gimple_code (use_stmt) == GIMPLE_ASSIGN
+		  || gimple_code (use_stmt) == GIMPLE_PHI)
+		work_list.safe_push (use_stmt);
+	    }
+	}
+    }
+}
+
+class type_promotion_dom_walker : public dom_walker
+{
+public:
+  type_promotion_dom_walker (cdi_direction direction)
+    : dom_walker (direction) {}
+  virtual void before_dom_children (basic_block bb)
+    {
+      promote_all_stmts (bb);
+    }
+};
+
+/* Main entry point to the pass.  */
+static unsigned int
+execute_type_promotion (void)
+{
+  n_ssa_val = num_ssa_names;
+  ssa_not_safe_bitmap = sbitmap_alloc (n_ssa_val);
+  bitmap_clear (ssa_not_safe_bitmap);
+  ssa_to_be_promoted_bitmap = sbitmap_alloc (n_ssa_val);
+  bitmap_clear (ssa_to_be_promoted_bitmap);
+  ssa_sets_higher_bits_bitmap = sbitmap_alloc (n_ssa_val);
+  bitmap_clear (ssa_sets_higher_bits_bitmap);
+
+  calculate_dominance_info (CDI_DOMINATORS);
+  process_all_stmts_for_unsafe_promotion ();
+  /* Walk the CFG in dominator order.  */
+  type_promotion_dom_walker (CDI_DOMINATORS)
+    .walk (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+
+  sbitmap_free (ssa_not_safe_bitmap);
+  sbitmap_free (ssa_to_be_promoted_bitmap);
+  sbitmap_free (ssa_sets_higher_bits_bitmap);
+  free_dominance_info (CDI_DOMINATORS);
+  return 0;
+}
+
+namespace {
+const pass_data pass_data_type_promotion =
+{
+  GIMPLE_PASS, /* type */
+  "promotion", /* name */
+  OPTGROUP_NONE, /* optinfo_flags */
+  TV_TREE_TYPE_PROMOTE, /* tv_id */
+  PROP_ssa, /* properties_required */
+  0, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  (TODO_cleanup_cfg | TODO_update_ssa | TODO_verify_all), /* todo_flags_finish */
+};
+
+class pass_type_promotion : public gimple_opt_pass
+{
+public:
+  pass_type_promotion (gcc::context *ctxt)
+    : gimple_opt_pass (pass_data_type_promotion, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  opt_pass * clone () { return new pass_type_promotion (m_ctxt); }
+  virtual bool gate (function *) { return flag_tree_type_promote != 0; }
+  virtual unsigned int execute (function *)
+    {
+      return execute_type_promotion ();
+    }
+
+}; // class pass_type_promotion
+
+} // anon namespace
+
+gimple_opt_pass *
+make_pass_type_promote (gcc::context *ctxt)
+{
+  return new pass_type_promotion (ctxt);
+}
diff --git a/gcc/passes.def b/gcc/passes.def
index ffa63b5..846ec1b 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -271,6 +271,7 @@ along with GCC; see the file COPYING3.  If not see
       NEXT_PASS (pass_slp_vectorize);
       POP_INSERT_PASSES ()
       NEXT_PASS (pass_lower_vector_ssa);
+      NEXT_PASS (pass_type_promote);
       NEXT_PASS (pass_cse_reciprocals);
       NEXT_PASS (pass_reassoc);
       NEXT_PASS (pass_strength_reduction);
diff --git a/gcc/timevar.def b/gcc/timevar.def
index 711bbed..f15e931 100644
--- a/gcc/timevar.def
+++ b/gcc/timevar.def
@@ -268,6 +268,7 @@ DEFTIMEVAR (TV_PLUGIN_RUN            , "plugin execution")
 DEFTIMEVAR (TV_GIMPLE_SLSR           , "straight-line strength reduction")
 DEFTIMEVAR (TV_VTABLE_VERIFICATION   , "vtable verification")
 DEFTIMEVAR (TV_TREE_UBSAN            , "tree ubsan")
+DEFTIMEVAR (TV_TREE_TYPE_PROMOTE     , "tree type promote")
 
 /* Everything else in rest_of_compilation not included above.  */
 DEFTIMEVAR (TV_EARLY_LOCAL           , "early local passes")
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 4929629..a766ac6 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -3805,6 +3805,18 @@ verify_gimple_assign_binary (gassign *stmt)
         return false;
       }
 
+    case SEXT_EXPR:
+      {
+	if (!INTEGRAL_TYPE_P (lhs_type)
+	    || !INTEGRAL_TYPE_P (rhs1_type)
+	    || TREE_CODE (rhs2) != INTEGER_CST)
+	  {
+	    error ("invalid operands in sext expr");
+	    return true;
+	  }
+	return false;
+      }
+
     case VEC_WIDEN_LSHIFT_HI_EXPR:
     case VEC_WIDEN_LSHIFT_LO_EXPR:
       {
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index 42ddb9f..8c20089 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -3913,6 +3913,7 @@ estimate_operator_cost (enum tree_code code, eni_weights *weights,
     case BIT_XOR_EXPR:
     case BIT_AND_EXPR:
     case BIT_NOT_EXPR:
+    case SEXT_EXPR:
 
     case TRUTH_ANDIF_EXPR:
     case TRUTH_ORIF_EXPR:
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index 172bd82..533e4a6 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -428,6 +428,7 @@ extern gimple_opt_pass *make_pass_fre (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_check_data_deps (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_copy_prop (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_isolate_erroneous_paths (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_type_promote (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_vrp (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_uncprop (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_return_slot (gcc::context *ctxt);
diff --git a/gcc/tree-pretty-print.c b/gcc/tree-pretty-print.c
index d7c049f..0045962 100644
--- a/gcc/tree-pretty-print.c
+++ b/gcc/tree-pretty-print.c
@@ -1812,6 +1812,14 @@ dump_generic_node (pretty_printer *pp, tree node, int spc, int flags,
       }
       break;
 
+    case SEXT_EXPR:
+      pp_string (pp, "SEXT_EXPR <");
+      dump_generic_node (pp, TREE_OPERAND (node, 0), spc, flags, false);
+      pp_string (pp, ", ");
+      dump_generic_node (pp, TREE_OPERAND (node, 1), spc, flags, false);
+      pp_greater (pp);
+      break;
+
     case MODIFY_EXPR:
     case INIT_EXPR:
       dump_generic_node (pp, TREE_OPERAND (node, 0), spc, flags,
@@ -3432,6 +3440,9 @@ op_symbol_code (enum tree_code code)
     case MIN_EXPR:
       return "min";
 
+    case SEXT_EXPR:
+      return "sext from bit";
+
     default:
       return "<<< ??? >>>";
     }
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index e7ab23c..581b1fe 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -2408,6 +2408,7 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
       && code != LSHIFT_EXPR
       && code != MIN_EXPR
       && code != MAX_EXPR
+      && code != SEXT_EXPR
      && code != BIT_AND_EXPR
       && code != BIT_IOR_EXPR
       && code != BIT_XOR_EXPR)
@@ -2984,6 +2985,49 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
       extract_range_from_multiplicative_op_1 (vr, code, &vr0, &vr1);
       return;
     }
+  else if (code == SEXT_EXPR)
+    {
+      gcc_assert (range_int_cst_p (&vr1));
+      unsigned int prec = tree_to_uhwi (vr1.min);
+      type = vr0.type;
+      wide_int tmin, tmax;
+      wide_int type_min, type_max;
+      wide_int may_be_nonzero, must_be_nonzero;
+
+      gcc_assert (!TYPE_UNSIGNED (expr_type));
+      type_min = wi::shwi (1 << (prec - 1),
+			   TYPE_PRECISION (TREE_TYPE (vr0.min)));
+      type_max = wi::shwi (((1 << (prec - 1)) - 1),
+			   TYPE_PRECISION (TREE_TYPE (vr0.max)));
+      if (zero_nonzero_bits_from_vr (expr_type, &vr0,
+				     &may_be_nonzero,
+				     &must_be_nonzero))
+	{
+	  HOST_WIDE_INT _may_be_nonzero = may_be_nonzero.to_uhwi ();
+
+	  if (_may_be_nonzero & (1 << (prec - 1)))
+	    {
+	      /* If the to-be-extended sign bit can be one.  */
+	      tmin = type_min;
+	      tmax = may_be_nonzero & type_max;
+	    }
+	  else
+	    {
+	      /* If the to-be-extended sign bit is zero.  */
+	      tmin = must_be_nonzero;
+	      tmax = may_be_nonzero;
+	    }
+	}
+      else
+	{
+	  tmin = type_min;
+	  tmax = type_max;
+	}
+      tmin = wi::sext (tmin, prec);
+      tmax = wi::sext (tmax, prec);
+      min = wide_int_to_tree (expr_type, tmin);
+      max = wide_int_to_tree (expr_type, tmax);
+    }
   else if (code == RSHIFT_EXPR
 	   || code == LSHIFT_EXPR)
     {
diff --git a/gcc/tree.def b/gcc/tree.def
index b4b4164..f58b073 100644
--- a/gcc/tree.def
+++ b/gcc/tree.def
@@ -747,6 +747,9 @@ DEFTREECODE (BIT_XOR_EXPR, "bit_xor_expr", tcc_binary, 2)
 DEFTREECODE (BIT_AND_EXPR, "bit_and_expr", tcc_binary, 2)
 DEFTREECODE (BIT_NOT_EXPR, "bit_not_expr", tcc_unary, 1)
 
+/* Sign-extend operation.  */
+DEFTREECODE (SEXT_EXPR, "sext_expr", tcc_binary, 2)
+
 /* ANDIF and ORIF allow the second operand not to be computed if the
    value of the expression is determined from the first operand.  AND,
    OR, and XOR always compute the second operand whether its value is