From patchwork Thu Oct 15 05:51:49 2015
X-Patchwork-Submitter: Kugan Vivekanandarajah
X-Patchwork-Id: 54983
Subject: Re: [2/7] Add new type promotion pass
To: "gcc-patches@gcc.gnu.org"
References: <55ECFC2A.7050908@linaro.org> <55ECFCF5.4000803@linaro.org>
Cc: Richard Biener
From: Kugan
Message-ID: <561F3EF5.1090402@linaro.org>
Date: Thu, 15 Oct 2015 16:51:49 +1100
In-Reply-To: <55ECFCF5.4000803@linaro.org>

On 07/09/15 12:56, Kugan wrote:
>
> This pass applies type promotion to SSA names in the function and
> inserts appropriate truncations to preserve the semantics.  The idea of
> this pass is to promote operations in such a way that we can minimize
> the generation of subregs in RTL, which in turn results in the removal
> of redundant zero/sign extensions.
>
> gcc/ChangeLog:
>
> 2015-09-07  Kugan Vivekanandarajah
>
>	* Makefile.in: Add gimple-ssa-type-promote.o.
>	* common.opt: New option -ftree-type-promote.
>	* doc/invoke.texi: Document -ftree-type-promote.
>	* gimple-ssa-type-promote.c: New file.
>	* passes.def: Define new pass_type_promote.
>	* timevar.def: Define new TV_TREE_TYPE_PROMOTE.
>	* tree-pass.h (make_pass_type_promote): New.
>	* tree-ssanames.c (set_range_info): Adjust range_info.
>
Here is the latest version of the patch.
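To illustrate the intent of the pass, here is a small example of mine (it is
not part of the patch, and the GIMPLE in the comments is only a sketch): on a
target whose PROMOTE_MODE widens QImode to SImode, the pass is meant to carry
the intermediate values in int-wide SSA names and insert an explicit
truncation (a BIT_AND_EXPR mask for unsigned values, SEXT_EXPR for signed
ones) only where the 8-bit semantics are observable.

/* Hypothetical illustration only, not part of the patch.  */
unsigned char
sat_add (unsigned char a, unsigned char b)
{
  unsigned char sum = a + b;  /* promoted: sum and the add carried in int  */
  if (sum < a)                /* compare needs the truncated value:        */
    return 0xff;              /*   tmp = sum & 0xff; if (tmp < a) ...      */
  return sum;                 /* narrowed back only at the return boundary */
}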
Thanks,
Kugan

>From 69c05e27b39cd9977e1a412e1c1b3255409ba351 Mon Sep 17 00:00:00 2001
From: Kugan Vivekanandarajah
Date: Mon, 17 Aug 2015 13:44:50 +1000
Subject: [PATCH 2/7] Add type promotion pass

---
 gcc/Makefile.in               |   1 +
 gcc/common.opt                |   4 +
 gcc/doc/invoke.texi           |  10 +
 gcc/gimple-ssa-type-promote.c | 827 ++++++++++++++++++++++++++++++++++++++++++
 gcc/passes.def                |   1 +
 gcc/timevar.def               |   1 +
 gcc/tree-pass.h               |   1 +
 gcc/tree-ssanames.c           |   3 +-
 8 files changed, 847 insertions(+), 1 deletion(-)
 create mode 100644 gcc/gimple-ssa-type-promote.c

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 009c745..0946055 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1498,6 +1498,7 @@ OBJS = \
 	tree-vect-slp.o \
 	tree-vectorizer.o \
 	tree-vrp.o \
+	gimple-ssa-type-promote.o \
 	tree.o \
 	valtrack.o \
 	value-prof.o \
diff --git a/gcc/common.opt b/gcc/common.opt
index 94d1d88..b5a93b0 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -2378,6 +2378,10 @@ ftree-vrp
 Common Report Var(flag_tree_vrp) Init(0) Optimization
 Perform Value Range Propagation on trees
 
+ftree-type-promote
+Common Report Var(flag_tree_type_promote) Init(1) Optimization
+Perform Type Promotion on trees
+
 funit-at-a-time
 Common Report Var(flag_unit_at_a_time) Init(1)
 Compile whole compilation unit at a time
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 50cc520..e6f0ce1 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -9051,6 +9051,16 @@ enabled by default at @option{-O2} and higher.  Null pointer check elimination
 is only done if @option{-fdelete-null-pointer-checks} is enabled.
 
+@item -ftree-type-promote
+@opindex ftree-type-promote
+This pass applies type promotion to SSA names in the function and
+inserts appropriate truncations to preserve the semantics.  The idea of
+this pass is to promote operations in such a way that we can minimise
+the generation of subregs in RTL, which in turn results in the removal
+of redundant zero/sign extensions.
+
+This optimization is enabled by default.
+
 @item -fsplit-ivs-in-unroller
 @opindex fsplit-ivs-in-unroller
 Enables expression of values of induction variables in later iterations
diff --git a/gcc/gimple-ssa-type-promote.c b/gcc/gimple-ssa-type-promote.c
new file mode 100644
index 0000000..513d20d
--- /dev/null
+++ b/gcc/gimple-ssa-type-promote.c
@@ -0,0 +1,827 @@
+/* Type promotion of SSA names to minimise redundant zero/sign extension.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "hash-set.h"
+#include "machmode.h"
+#include "vec.h"
+#include "double-int.h"
+#include "input.h"
+#include "symtab.h"
+#include "wide-int.h"
+#include "inchash.h"
+#include "tree.h"
+#include "fold-const.h"
+#include "stor-layout.h"
+#include "predict.h"
+#include "function.h"
+#include "dominance.h"
+#include "cfg.h"
+#include "basic-block.h"
+#include "tree-ssa-alias.h"
+#include "gimple-fold.h"
+#include "tree-eh.h"
+#include "gimple-expr.h"
+#include "is-a.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimple-ssa.h"
+#include "tree-phinodes.h"
+#include "ssa-iterators.h"
+#include "stringpool.h"
+#include "tree-ssanames.h"
+#include "tree-pass.h"
+#include "gimple-pretty-print.h"
+#include "langhooks.h"
+#include "sbitmap.h"
+#include "domwalk.h"
+#include "tree-dfa.h"
+
+/* This pass applies type promotion to SSA names in the function and
+   inserts appropriate truncations.  The idea of this pass is to promote
+   operations in such a way that we can minimise the generation of subregs
+   in RTL, which in turn results in the removal of redundant zero/sign
+   extensions.  This pass runs prior to VRP and DOM so that they are able
+   to optimise away the redundant truncations and extensions.  This is
+   based on the discussion from
+   https://gcc.gnu.org/ml/gcc-patches/2014-09/msg00472.html.  */
+
+static unsigned n_ssa_val;
+static sbitmap ssa_to_be_promoted_bitmap;
+static sbitmap ssa_sets_higher_bits_bitmap;
+static hash_map <tree, tree> *original_type_map;
+
+/* Return true if the precision of TYPE is one the pass can handle.  */
+static bool
+type_precision_ok (tree type)
+{
+  return (TYPE_PRECISION (type) == 8
+	  || TYPE_PRECISION (type) == 16
+	  || TYPE_PRECISION (type) == 32);
+}
+
+/* Return the promoted type for TYPE.  */
+static tree
+get_promoted_type (tree type)
+{
+  tree promoted_type;
+  enum machine_mode mode;
+  int uns;
+  if (POINTER_TYPE_P (type)
+      || !INTEGRAL_TYPE_P (type)
+      || !type_precision_ok (type))
+    return type;
+
+  mode = TYPE_MODE (type);
+#ifdef PROMOTE_MODE
+  uns = TYPE_SIGN (type);
+  PROMOTE_MODE (mode, uns, type);
+#endif
+  uns = TYPE_SIGN (type);
+  promoted_type = lang_hooks.types.type_for_mode (mode, uns);
+  if (promoted_type
+      && (TYPE_PRECISION (promoted_type) > TYPE_PRECISION (type)))
+    type = promoted_type;
+  return type;
+}
+
+/* Return true if ssa NAME has already been considered for promotion.  */
+static bool
+ssa_promoted_p (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      unsigned int index = SSA_NAME_VERSION (name);
+      if (index < n_ssa_val)
+	return bitmap_bit_p (ssa_to_be_promoted_bitmap, index);
+    }
+  return true;
+}
+
+/* Mark ssa NAME as already considered for promotion.  */
+static void
+set_ssa_promoted (tree name)
+{
+  if (TREE_CODE (name) == SSA_NAME)
+    {
+      unsigned int index = SSA_NAME_VERSION (name);
+      if (index < n_ssa_val)
+	bitmap_set_bit (ssa_to_be_promoted_bitmap, index);
+    }
+}
+
+/* Insert COPY_STMT along the edge from STMT to its successor.  */
+static void
+insert_stmt_on_edge (gimple *stmt, gimple *copy_stmt)
+{
+  edge_iterator ei;
+  edge e, edge = NULL;
+  basic_block bb = gimple_bb (stmt);
+
+  FOR_EACH_EDGE (e, ei, bb->succs)
+    if (!(e->flags & EDGE_EH))
+      {
+	gcc_assert (edge == NULL);
+	edge = e;
+      }
+
+  gcc_assert (edge);
+  gsi_insert_on_edge_immediate (edge, copy_stmt);
+}
+
+/* Return true if it is safe to promote the defined SSA_NAME in the STMT
+   itself.  */
+static bool
+safe_to_promote_def_p (gimple *stmt)
+{
+  enum tree_code code = gimple_assign_rhs_code (stmt);
+  if (gimple_vuse (stmt) != NULL_TREE
+      || gimple_vdef (stmt) != NULL_TREE
+      || code == ARRAY_REF
+      || code == LROTATE_EXPR
+      || code == RROTATE_EXPR
+      || code == VIEW_CONVERT_EXPR
+      || code == BIT_FIELD_REF
+      || code == REALPART_EXPR
+      || code == IMAGPART_EXPR
+      || code == REDUC_MAX_EXPR
+      || code == REDUC_PLUS_EXPR
+      || code == REDUC_MIN_EXPR)
+    return false;
+  return true;
+}
+
+/* Return true if it is safe to promote the use in the STMT.  */
+static bool
+safe_to_promote_use_p (gimple *stmt)
+{
+  enum tree_code code = gimple_assign_rhs_code (stmt);
+  tree lhs = gimple_assign_lhs (stmt);
+
+  if (gimple_vuse (stmt) != NULL_TREE
+      || gimple_vdef (stmt) != NULL_TREE
+      || code == VIEW_CONVERT_EXPR
+      || code == LROTATE_EXPR
+      || code == RROTATE_EXPR
+      || code == CONSTRUCTOR
+      || code == BIT_FIELD_REF
+      || code == COMPLEX_EXPR
+      || code == ASM_EXPR
+      || VECTOR_TYPE_P (TREE_TYPE (lhs)))
+    return false;
+  return true;
+}
+
+/* Return true if the SSA_NAME has to be truncated to preserve the
+   semantics.  */
+static bool
+truncate_use_p (gimple *stmt)
+{
+  enum tree_code code = gimple_assign_rhs_code (stmt);
+  if (TREE_CODE_CLASS (code) == tcc_comparison
+      || code == TRUNC_DIV_EXPR
+      || code == CEIL_DIV_EXPR
+      || code == FLOOR_DIV_EXPR
+      || code == ROUND_DIV_EXPR
+      || code == TRUNC_MOD_EXPR
+      || code == CEIL_MOD_EXPR
+      || code == FLOOR_MOD_EXPR
+      || code == ROUND_MOD_EXPR
+      || code == LSHIFT_EXPR
+      || code == RSHIFT_EXPR)
+    return true;
+  return false;
+}
+
+/* Return true if LHS will be promoted later.  */
+static bool
+tobe_promoted_p (tree lhs)
+{
+  if (TREE_CODE (lhs) == SSA_NAME
+      && !POINTER_TYPE_P (TREE_TYPE (lhs))
+      && INTEGRAL_TYPE_P (TREE_TYPE (lhs))
+      && !VECTOR_TYPE_P (TREE_TYPE (lhs))
+      && !ssa_promoted_p (lhs)
+      && (get_promoted_type (TREE_TYPE (lhs))
+	  != TREE_TYPE (lhs)))
+    return true;
+  else
+    return false;
+}
+
+/* Convert constant CST to TYPE.  */
+static tree
+convert_int_cst (tree type, tree cst, signop sign = SIGNED)
+{
+  wide_int wi_cons = fold_convert (type, cst);
+  wi_cons = wi::ext (wi_cons, TYPE_PRECISION (TREE_TYPE (cst)), sign);
+  return wide_int_to_tree (type, wi_cons);
+}
+
+/* Promote constants in STMT to TYPE.  If PROMOTE_COND is true,
+   promote only the constants in the condition part of the COND_EXPR.  */
+static void
+promote_cst_in_stmt (gimple *stmt, tree type, bool promote_cond = false)
+{
+  tree op;
+  ssa_op_iter iter;
+  use_operand_p oprnd;
+  int index;
+  tree op0, op1;
+  signop sign = SIGNED;
+
+  switch (gimple_code (stmt))
+    {
+    case GIMPLE_ASSIGN:
+      if (promote_cond
+	  && gimple_assign_rhs_code (stmt) == COND_EXPR)
+	{
+	  /* Promote INTEGER_CSTs that are tcc_comparison arguments.  */
+	  sign = TYPE_SIGN (type);
+	  op = gimple_assign_rhs1 (stmt);
+	  op0 = TREE_OPERAND (op, 0);
+	  op1 = TREE_OPERAND (op, 1);
+	  if (TREE_CODE (op0) == INTEGER_CST)
+	    op0 = convert_int_cst (type, op0, sign);
+	  if (TREE_CODE (op1) == INTEGER_CST)
+	    op1 = convert_int_cst (type, op1, sign);
+	  tree new_op = build2 (TREE_CODE (op), type, op0, op1);
+	  gimple_assign_set_rhs1 (stmt, new_op);
+	}
+      else
+	{
+	  /* Promote INTEGER_CSTs in GIMPLE_ASSIGN.  */
+	  op = gimple_assign_rhs3 (stmt);
+	  if (op && TREE_CODE (op) == INTEGER_CST)
+	    gimple_assign_set_rhs3 (stmt, convert_int_cst (type, op, sign));
+	  if (TREE_CODE_CLASS (gimple_assign_rhs_code (stmt))
+	      == tcc_comparison)
+	    sign = TYPE_SIGN (type);
+	  op = gimple_assign_rhs1 (stmt);
+	  if (op && TREE_CODE (op) == INTEGER_CST)
+	    gimple_assign_set_rhs1 (stmt, convert_int_cst (type, op, sign));
+	  op = gimple_assign_rhs2 (stmt);
+	  if (op && TREE_CODE (op) == INTEGER_CST)
+	    gimple_assign_set_rhs2 (stmt, convert_int_cst (type, op, sign));
+	}
+      break;
+
+    case GIMPLE_PHI:
+      {
+	/* Promote INTEGER_CST arguments to GIMPLE_PHI.  */
+	gphi *phi = as_a <gphi *> (stmt);
+	FOR_EACH_PHI_ARG (oprnd, phi, iter, SSA_OP_USE)
+	  {
+	    op = USE_FROM_PTR (oprnd);
+	    index = PHI_ARG_INDEX_FROM_USE (oprnd);
+	    if (TREE_CODE (op) == INTEGER_CST)
+	      SET_PHI_ARG_DEF (phi, index, convert_int_cst (type, op, sign));
+	  }
+      }
+      break;
+
+    case GIMPLE_COND:
+      {
+	/* Promote INTEGER_CSTs that are GIMPLE_COND arguments.  */
+	gcond *cond = as_a <gcond *> (stmt);
+	op = gimple_cond_lhs (cond);
+	sign = TYPE_SIGN (type);
+
+	if (op && TREE_CODE (op) == INTEGER_CST)
+	  gimple_cond_set_lhs (cond, convert_int_cst (type, op, sign));
+	op = gimple_cond_rhs (cond);
+
+	if (op && TREE_CODE (op) == INTEGER_CST)
+	  gimple_cond_set_rhs (cond, convert_int_cst (type, op, sign));
+      }
+      break;
+
+    default:
+      gcc_unreachable ();
+    }
+}
+
+/* Create a new ssa name with TYPE that will be a promoted copy of ssa
+   VAR, defined by DEF_STMT.  */
+static tree
+make_promoted_copy (tree var, gimple *def_stmt, tree type)
+{
+  tree new_lhs = make_ssa_name (type, def_stmt);
+  if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (var))
+    SSA_NAME_OCCURS_IN_ABNORMAL_PHI (new_lhs) = 1;
+  return new_lhs;
+}
+
+/* Zero/sign extend (depending on the signedness of NEW_VAR's type) the low
+   WIDTH bits of VAR and assign the result to NEW_VAR.  Return the gimple
+   statement that performs the zero/sign extension.  */
+static gimple *
+zero_sign_extend_stmt (tree new_var, tree var, int width)
+{
+  gcc_assert (TYPE_PRECISION (TREE_TYPE (var))
+	      == TYPE_PRECISION (TREE_TYPE (new_var)));
+  gcc_assert (TYPE_PRECISION (TREE_TYPE (var)) > width);
+  gcc_assert (width != 1);
+  gimple *stmt;
+
+  if (TYPE_UNSIGNED (TREE_TYPE (new_var)))
+    /* Zero extend.  */
+    stmt = gimple_build_assign (new_var,
+				BIT_AND_EXPR,
+				var, build_int_cst (TREE_TYPE (var),
+						    ((1ULL << width) - 1)));
+  else
+    /* Sign extend.  */
+    stmt = gimple_build_assign (new_var,
+				SEXT_EXPR,
+				var, build_int_cst (TREE_TYPE (var), width));
+  return stmt;
+}
+
+/* Transfer the underlying variable and default-def status from FROM
+   to TO.  */
+void
+duplicate_default_ssa (tree to, tree from)
+{
+  SET_SSA_NAME_VAR_OR_IDENTIFIER (to, SSA_NAME_VAR (from));
+  SSA_NAME_IS_DEFAULT_DEF (to) = SSA_NAME_IS_DEFAULT_DEF (from);
+  SSA_NAME_DEF_STMT (to) = SSA_NAME_DEF_STMT (from);
+  SET_SSA_NAME_VAR_OR_IDENTIFIER (from, NULL_TREE);
+  SSA_NAME_IS_DEFAULT_DEF (to) = 1;
+  SSA_NAME_IS_DEFAULT_DEF (from) = 0;
+}
+
+/* Promote the definition DEF to PROMOTED_TYPE.  If DEF's defining statement
+   DEF_STMT can produce a value of PROMOTED_TYPE, change the type of DEF in
+   place.  Otherwise, create a NEW_DEF of the original type, make DEF_STMT
+   assign to NEW_DEF, and emit a CONVERT_EXPR that converts NEW_DEF to DEF
+   of the promoted type.
+
+   For example, for a stmt with original_type char and promoted_type int:
+     char _1 = mem;
+   becomes:
+     char _2 = mem;
+     int _1 = (int)_2;
+
+   If the def_stmt allows def to be promoted, promote def in-place
+   (and its arguments when needed).
+
+   For example:
+     char _3 = _1 + _2;
+   becomes:
+     int _3 = _1 + _2;
+   Here, _1 and _2 will also be promoted.  */
+
+static void
+promote_definition (tree def,
+		    tree promoted_type)
+{
+  gimple *def_stmt = SSA_NAME_DEF_STMT (def);
+  gimple *copy_stmt = NULL;
+  basic_block bb;
+  gimple_stmt_iterator gsi;
+  tree original_type = TREE_TYPE (def);
+  tree new_def;
+  bool do_not_promote = false;
+
+  switch (gimple_code (def_stmt))
+    {
+    case GIMPLE_PHI:
+      {
+	/* Promote def by fixing its type and make def anonymous.  */
+	TREE_TYPE (def) = promoted_type;
+	SET_SSA_NAME_VAR_OR_IDENTIFIER (def, NULL_TREE);
+	promote_cst_in_stmt (def_stmt, promoted_type);
+	break;
+      }
+
+    case GIMPLE_ASM:
+      {
+	gasm *asm_stmt = as_a <gasm *> (def_stmt);
+	for (unsigned int i = 0; i < gimple_asm_noutputs (asm_stmt); ++i)
+	  {
+	    /* Promote def and copy (i.e. convert) the value defined
+	       by the asm to def.  */
+	    tree link = gimple_asm_output_op (asm_stmt, i);
+	    tree op = TREE_VALUE (link);
+	    if (op == def)
+	      {
+		new_def = copy_ssa_name (def);
+		set_ssa_promoted (new_def);
+		duplicate_default_ssa (new_def, def);
+		TREE_VALUE (link) = new_def;
+		gimple_asm_set_output_op (asm_stmt, i, link);
+
+		TREE_TYPE (def) = promoted_type;
+		copy_stmt = gimple_build_assign (def, CONVERT_EXPR,
+						 new_def, NULL_TREE);
+		gsi = gsi_for_stmt (def_stmt);
+		SSA_NAME_IS_DEFAULT_DEF (new_def) = 0;
+		gsi_insert_after (&gsi, copy_stmt, GSI_NEW_STMT);
+		break;
+	      }
+	  }
+	break;
+      }
+
+    case GIMPLE_NOP:
+      {
+	if (SSA_NAME_VAR (def) == NULL)
+	  {
+	    /* Promote an anonymous def by fixing its type.  */
+	    TREE_TYPE (def) = promoted_type;
+	  }
+	else
+	  {
+	    /* Create a promoted copy of parameters.  */
+	    bb = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+	    gcc_assert (bb);
+	    gsi = gsi_after_labels (bb);
+	    new_def = copy_ssa_name (def);
+	    set_ssa_promoted (new_def);
+	    set_ssa_default_def (cfun, SSA_NAME_VAR (def), new_def);
+	    duplicate_default_ssa (new_def, def);
+	    TREE_TYPE (def) = promoted_type;
+	    copy_stmt = gimple_build_assign (def, CONVERT_EXPR,
+					     new_def, NULL_TREE);
+	    SSA_NAME_DEF_STMT (def) = copy_stmt;
+	    gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	  }
+	break;
+      }
+
+    case GIMPLE_ASSIGN:
+      {
+	enum tree_code code = gimple_assign_rhs_code (def_stmt);
+	if (!safe_to_promote_def_p (def_stmt))
+	  {
+	    do_not_promote = true;
+	  }
+	else if (CONVERT_EXPR_CODE_P (code))
+	  {
+	    tree rhs = gimple_assign_rhs1 (def_stmt);
+	    if (!type_precision_ok (TREE_TYPE (rhs)))
+	      {
+		do_not_promote = true;
+	      }
+	    else if (types_compatible_p (TREE_TYPE (rhs), promoted_type))
+	      {
+		/* As we traverse statements in dominator order, the arguments
+		   of def_stmt will be visited before def itself.  If RHS has
+		   already been promoted and its type is compatible, we can
+		   convert this into a ZERO/SIGN EXTEND stmt.  */
+		tree &type = original_type_map->get_or_insert (rhs);
+		if (type == NULL_TREE)
+		  type = TREE_TYPE (rhs);
+		if (TYPE_PRECISION (original_type) < TYPE_PRECISION (type))
+		  type = original_type;
+		gcc_assert (type != NULL_TREE);
+		TREE_TYPE (def) = promoted_type;
+		gimple *copy_stmt =
+		  zero_sign_extend_stmt (def, rhs,
+					 TYPE_PRECISION (type));
+		SET_SSA_NAME_VAR_OR_IDENTIFIER (def, NULL_TREE);
+		gsi = gsi_for_stmt (def_stmt);
+		gsi_replace (&gsi, copy_stmt, false);
+	      }
+	    else
+	      {
+		/* If RHS is not promoted or the types are not
+		   compatible, create a CONVERT_EXPR that converts
+		   RHS to the promoted DEF type and perform a
+		   ZERO/SIGN EXTEND to get the required value
+		   from RHS.  */
+		tree s = (TYPE_PRECISION (TREE_TYPE (def))
+			  < TYPE_PRECISION (TREE_TYPE (rhs)))
+		  ? TREE_TYPE (def) : TREE_TYPE (rhs);
+		new_def = copy_ssa_name (def);
+		set_ssa_promoted (new_def);
+		TREE_TYPE (def) = promoted_type;
+		TREE_TYPE (new_def) = promoted_type;
+		SET_SSA_NAME_VAR_OR_IDENTIFIER (def, NULL_TREE);
+		SET_SSA_NAME_VAR_OR_IDENTIFIER (new_def, NULL_TREE);
+		gimple_set_lhs (def_stmt, new_def);
+		gimple *copy_stmt =
+		  zero_sign_extend_stmt (def, new_def,
+					 TYPE_PRECISION (s));
+		gsi = gsi_for_stmt (def_stmt);
+		if (lookup_stmt_eh_lp (def_stmt) > 0)
+		  insert_stmt_on_edge (def_stmt, copy_stmt);
+		else
+		  gsi_insert_after (&gsi, copy_stmt, GSI_NEW_STMT);
+	      }
+	  }
+	else
+	  {
+	    /* Promote def by fixing its type and make def anonymous.  */
+	    SET_SSA_NAME_VAR_OR_IDENTIFIER (def, NULL_TREE);
+	    promote_cst_in_stmt (def_stmt, promoted_type);
+	    TREE_TYPE (def) = promoted_type;
+	  }
+	break;
+      }
+
+    default:
+      do_not_promote = true;
+      break;
+    }
+
+  if (do_not_promote)
+    {
+      /* Promote def and copy (i.e. convert) the value defined
+	 by the stmt that cannot be promoted.  */
+      new_def = copy_ssa_name (def);
+      set_ssa_promoted (new_def);
+      SET_SSA_NAME_VAR_OR_IDENTIFIER (def, NULL_TREE);
+      TREE_TYPE (def) = promoted_type;
+      gimple_set_lhs (def_stmt, new_def);
+      copy_stmt = gimple_build_assign (def, CONVERT_EXPR,
+				       new_def, NULL_TREE);
+      gsi = gsi_for_stmt (def_stmt);
+      if (lookup_stmt_eh_lp (def_stmt) > 0)
+	insert_stmt_on_edge (def_stmt, copy_stmt);
+      else
+	gsi_insert_after (&gsi, copy_stmt, GSI_NEW_STMT);
+    }
+  else
+    {
+      /* The type is now promoted.  Because of this, some of the value
+	 ranges computed by VRP1 will be invalid.  TODO: be smarter about
+	 deciding which ranges to invalidate instead of invalidating
+	 everything.  */
+      SSA_NAME_RANGE_INFO (def) = NULL;
+    }
+}
+
+/* Fix the (promoted) USE in stmts where USE cannot be promoted.  */
+static unsigned int
+fixup_uses (tree use, tree promoted_type, tree old_type)
+{
+  gimple *stmt;
+  imm_use_iterator ui;
+  gimple_stmt_iterator gsi;
+  use_operand_p op;
+
+  FOR_EACH_IMM_USE_STMT (stmt, ui, use)
+    {
+      bool do_not_promote = false;
+      switch (gimple_code (stmt))
+	{
+	case GIMPLE_DEBUG:
+	  {
+	    gsi = gsi_for_stmt (stmt);
+	    gsi_remove (&gsi, true);
+	    break;
+	  }
+
+	case GIMPLE_ASM:
+	case GIMPLE_CALL:
+	case GIMPLE_RETURN:
+	  {
+	    /* USE cannot be promoted here.  */
+	    do_not_promote = true;
+	    break;
+	  }
+
+	case GIMPLE_ASSIGN:
+	  {
+	    enum tree_code code = gimple_assign_rhs_code (stmt);
+	    tree lhs = gimple_assign_lhs (stmt);
+	    if (!safe_to_promote_use_p (stmt))
+	      {
+		do_not_promote = true;
+	      }
+	    else if (truncate_use_p (stmt))
+	      {
+		/* In some stmts, the value in USE has to be ZERO/SIGN
+		   extended based on the original type for a correct
+		   result.  */
+		tree temp = make_promoted_copy (use, NULL, TREE_TYPE (use));
+		gimple *copy_stmt =
+		  zero_sign_extend_stmt (temp, use,
+					 TYPE_PRECISION (old_type));
+		gsi = gsi_for_stmt (stmt);
+		gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+
+		FOR_EACH_IMM_USE_ON_STMT (op, ui)
+		  SET_USE (op, temp);
+		if (TREE_CODE_CLASS (code) == tcc_comparison)
+		  promote_cst_in_stmt (stmt, promoted_type, true);
+		update_stmt (stmt);
+	      }
+	    else if (CONVERT_EXPR_CODE_P (code))
+	      {
+		tree rhs = gimple_assign_rhs1 (stmt);
+		if (!type_precision_ok (TREE_TYPE (rhs)))
+		  {
+		    do_not_promote = true;
+		  }
+		else if (types_compatible_p (TREE_TYPE (lhs), promoted_type))
+		  {
+		    /* The type of the LHS and the promoted RHS are
+		       compatible; we can convert this into a ZERO/SIGN
+		       EXTEND stmt.  */
+		    gimple *copy_stmt =
+		      zero_sign_extend_stmt (lhs, use,
+					     TYPE_PRECISION (old_type));
+		    gsi = gsi_for_stmt (stmt);
+		    set_ssa_promoted (lhs);
+		    gsi_replace (&gsi, copy_stmt, false);
+		  }
+		else if (tobe_promoted_p (lhs))
+		  {
+		    /* If LHS will be promoted later, store the original
+		       type of RHS so that we can convert it to a ZERO/SIGN
+		       EXTEND when LHS is promoted.  */
+		    tree rhs = gimple_assign_rhs1 (stmt);
+		    tree &type = original_type_map->get_or_insert (rhs);
+		    type = old_type;
+		  }
+		else
+		  {
+		    do_not_promote = true;
+		  }
+	      }
+	    break;
+	  }
+
+	case GIMPLE_COND:
+	  {
+	    /* In GIMPLE_COND, the value in USE has to be ZERO/SIGN
+	       extended based on the original type for a correct
+	       result.  */
+	    tree temp = make_promoted_copy (use, NULL, TREE_TYPE (use));
+	    gimple *copy_stmt =
+	      zero_sign_extend_stmt (temp, use,
+				     TYPE_PRECISION (old_type));
+	    gsi = gsi_for_stmt (stmt);
+	    gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+
+	    FOR_EACH_IMM_USE_ON_STMT (op, ui)
+	      SET_USE (op, temp);
+	    promote_cst_in_stmt (stmt, promoted_type, true);
+	    update_stmt (stmt);
+	    break;
+	  }
+
+	default:
+	  break;
+	}
+
+      if (do_not_promote)
+	{
+	  /* For stmts where USE cannot be promoted, create a copy with
+	     the original type.  */
+	  tree temp;
+	  temp = copy_ssa_name (use);
+	  set_ssa_promoted (temp);
+	  TREE_TYPE (temp) = old_type;
+	  gimple *copy_stmt = gimple_build_assign (temp, CONVERT_EXPR,
+						   use, NULL_TREE);
+	  gsi = gsi_for_stmt (stmt);
+	  gsi_insert_before (&gsi, copy_stmt, GSI_NEW_STMT);
+	  FOR_EACH_IMM_USE_ON_STMT (op, ui)
+	    SET_USE (op, temp);
+	  update_stmt (stmt);
+	}
+    }
+  return 0;
+}
+
+/* Promote the definition of NAME and adjust its uses if necessary.  */
+static unsigned int
+promote_def_and_uses (tree name)
+{
+  tree type;
+  if (tobe_promoted_p (name))
+    {
+      type = get_promoted_type (TREE_TYPE (name));
+      tree old_type = TREE_TYPE (name);
+      promote_definition (name, type);
+      fixup_uses (name, type, old_type);
+      set_ssa_promoted (name);
+    }
+  return 0;
+}
+
+/* Promote all the stmts in the basic block BB.  */
+static void
+promote_all_stmts (basic_block bb)
+{
+  gimple_stmt_iterator gsi;
+  ssa_op_iter iter;
+  tree def;
+
+  for (gphi_iterator gpi = gsi_start_phis (bb);
+       !gsi_end_p (gpi); gsi_next (&gpi))
+    {
+      gphi *phi = gpi.phi ();
+      use_operand_p op;
+
+      FOR_EACH_PHI_ARG (op, phi, iter, SSA_OP_USE)
+	{
+	  def = USE_FROM_PTR (op);
+	  promote_def_and_uses (def);
+	}
+      def = PHI_RESULT (phi);
+      promote_def_and_uses (def);
+    }
+  for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+    {
+      gimple *stmt = gsi_stmt (gsi);
+
+      FOR_EACH_SSA_TREE_OPERAND (def, stmt, iter, SSA_OP_USE | SSA_OP_DEF)
+	promote_def_and_uses (def);
+    }
+}
+
+/* Dominator walker that promotes the stmts of each basic block.  */
+class type_promotion_dom_walker : public dom_walker
+{
+public:
+  type_promotion_dom_walker (cdi_direction direction)
+    : dom_walker (direction) {}
+  virtual void before_dom_children (basic_block bb)
+  {
+    promote_all_stmts (bb);
+  }
+};
+
+/* Main entry point to the pass.  */
+static unsigned int
+execute_type_promotion (void)
+{
+  n_ssa_val = num_ssa_names;
+  original_type_map = new hash_map <tree, tree>;
+  ssa_to_be_promoted_bitmap = sbitmap_alloc (n_ssa_val);
+  bitmap_clear (ssa_to_be_promoted_bitmap);
+  ssa_sets_higher_bits_bitmap = sbitmap_alloc (n_ssa_val);
+  bitmap_clear (ssa_sets_higher_bits_bitmap);
+
+  calculate_dominance_info (CDI_DOMINATORS);
+  /* Walk the CFG in dominator order.  */
+  type_promotion_dom_walker (CDI_DOMINATORS)
+    .walk (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+
+  sbitmap_free (ssa_to_be_promoted_bitmap);
+  sbitmap_free (ssa_sets_higher_bits_bitmap);
+  free_dominance_info (CDI_DOMINATORS);
+  delete original_type_map;
+  return 0;
+}
+
+namespace {
+const pass_data pass_data_type_promotion =
+{
+  GIMPLE_PASS, /* type */
+  "promotion", /* name */
+  OPTGROUP_NONE, /* optinfo_flags */
+  TV_TREE_TYPE_PROMOTE, /* tv_id */
+  PROP_ssa, /* properties_required */
+  0, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  (TODO_cleanup_cfg | TODO_update_ssa | TODO_verify_all),
+};
+
+class pass_type_promotion : public gimple_opt_pass
+{
+public:
+  pass_type_promotion (gcc::context *ctxt)
+    : gimple_opt_pass (pass_data_type_promotion, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  opt_pass * clone () { return new pass_type_promotion (m_ctxt); }
+  virtual bool gate (function *) { return flag_tree_type_promote != 0; }
+  virtual unsigned int execute (function *)
+    {
+      return execute_type_promotion ();
+    }
+
+}; // class pass_type_promotion
+
+} // anon namespace
+
+gimple_opt_pass *
+make_pass_type_promote (gcc::context *ctxt)
+{
+  return new pass_type_promotion (ctxt);
+}
+
diff --git a/gcc/passes.def b/gcc/passes.def
index 64fc4d9..254496b 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -270,6 +270,7 @@ along with GCC; see the file COPYING3.  If not see
       POP_INSERT_PASSES ()
       NEXT_PASS (pass_simduid_cleanup);
       NEXT_PASS (pass_lower_vector_ssa);
+      NEXT_PASS (pass_type_promote);
       NEXT_PASS (pass_cse_reciprocals);
       NEXT_PASS (pass_reassoc);
       NEXT_PASS (pass_strength_reduction);
diff --git a/gcc/timevar.def b/gcc/timevar.def
index ac41075..80171ec 100644
--- a/gcc/timevar.def
+++ b/gcc/timevar.def
@@ -276,6 +276,7 @@ DEFTIMEVAR (TV_VTABLE_VERIFICATION   , "vtable verification")
 DEFTIMEVAR (TV_TREE_UBSAN            , "tree ubsan")
 DEFTIMEVAR (TV_INITIALIZE_RTL        , "initialize rtl")
 DEFTIMEVAR (TV_GIMPLE_LADDRESS       , "address lowering")
+DEFTIMEVAR (TV_TREE_TYPE_PROMOTE     , "tree type promote")
 
 /* Everything else in rest_of_compilation not included above.  */
 DEFTIMEVAR (TV_EARLY_LOCAL           , "early local passes")
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index 3c913ea..d4cc485 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -431,6 +431,7 @@ extern gimple_opt_pass *make_pass_fre (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_check_data_deps (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_copy_prop (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_isolate_erroneous_paths (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_type_promote (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_vrp (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_uncprop (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_return_slot (gcc::context *ctxt);
diff --git a/gcc/tree-ssanames.c b/gcc/tree-ssanames.c
index 4199290..43b46d9 100644
--- a/gcc/tree-ssanames.c
+++ b/gcc/tree-ssanames.c
@@ -190,7 +190,8 @@ set_range_info (tree name, enum value_range_type range_type,
   unsigned int precision = TYPE_PRECISION (TREE_TYPE (name));
 
   /* Allocate if not available.  */
-  if (ri == NULL)
+  if (ri == NULL
+      || (precision != ri->get_min ().get_precision ()))
     {
       size_t size = (sizeof (range_info_def)
		     + trailing_wide_ints <3>::extra_size (precision));
-- 
1.9.1
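For anyone trying the patch, a quick smoke test of mine (hypothetical, not
part of the patch): compile a small function such as the one below with the
patch applied and inspect the pass dump for the promoted SSA names and the
truncations the pass inserts.

/* Hypothetical test case, not part of the patch.  Compile with something like
     gcc -O2 -fdump-tree-promotion test.c
   (the dump name follows the pass name "promotion" above; adjust if needed)
   and check that the short-typed accumulator is carried in the promoted
   int-wide type, with SEXT_EXPR truncations only where the 16-bit semantics
   matter and a conversion back to short at the return.  */
short
accumulate (const short *p, int n)
{
  short s = 0;
  for (int i = 0; i < n; i++)
    s += p[i];
  return s;
}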