From patchwork Thu Sep  4 03:41:11 2014
X-Patchwork-Submitter: Kugan Vivekanandarajah
X-Patchwork-Id: 36679
Delivered-To: gcc-patches@gcc.gnu.org
Message-ID: <5407DF57.2040902@linaro.org>
Date: Thu, 04 Sep 2014 13:41:11 +1000
From: Kugan <kugan.vivekanandarajah@linaro.org>
To: Richard Biener
CC: Uros Bizjak, "gcc-patches@gcc.gnu.org", Jakub Jelinek
Subject: Re: [PATCH 2/2] Enable elimination of zext/sext
References: <53FEDF34.4090605@linaro.org>

>> I added this part of the code (in cfgexpand.c) to handle binary/unary/...
>> gimple operations and used the LHS value range to infer the assigned
>> value range. I will revert this part of the code as this is wrong.
>>
>> I don't think checking promoted_mode for temp will be necessary here, as
>> convert_move will handle it correctly if promoted_mode is set for temp.
>>
>> Thus, I will reimplement setting promoted_mode to temp (in
>> expand_expr_real_2) based on the gimple statement content on the RHS,
>> i.e. by looking at the RHS operands and their value ranges and by
>> calculating the resulting value range.
>> Does this sound OK to you?
>
> No, this sounds backward again and won't work because those operands
> again could be just truncated - thus you can't rely on their value-range.
>
> What you would need is VRP computing value-ranges in the promoted
> mode from the start (and it doesn't do that).

Hi Richard,

Here is an attempt to do the value range computation in promoted_mode's
type when the range overflows the original type. Bootstrapped on x86_64.
Based on your feedback, I will do more testing on this.

Thanks for your time,
Kugan

gcc/ChangeLog:

2014-09-04  Kugan Vivekanandarajah  <kugan.vivekanandarajah@linaro.org>

	* tree-ssa-ccp.c (ccp_finalize): Adjust the nonzero_bits precision
	to the type.
	(evaluate_stmt): Likewise.
	* tree-ssanames.c (set_range_info): Adjust if the precision of the
	stored value range is different.
	* tree-vrp.c (normalize_int_cst_precision): New function.
	(set_value_range): Add assert to check precision.
	(set_and_canonicalize_value_range): Call normalize_int_cst_precision
	on min and max.
	(promoted_type): New function.
	(promote_unary_vr): Likewise.
	(promote_binary_vr): Likewise.
	(extract_range_from_binary_expr_1): Adjust type to match value
	range.  Store value ranges in promoted type if they overflow.
	(extract_range_from_unary_expr_1): Likewise.
	(adjust_range_with_scev): Call normalize_int_cst_precision on min
	and max.
	(vrp_visit_assignment_or_call): Likewise.
	(simplify_bit_ops_using_ranges): Adjust the value range precision.
	(test_for_singularity): Likewise.
	(simplify_stmt_for_jump_threading): Likewise.
	(extract_range_from_assert): Likewise.
diff --git a/gcc/tree-ssa-ccp.c b/gcc/tree-ssa-ccp.c
index a90f708..1733073 100644
--- a/gcc/tree-ssa-ccp.c
+++ b/gcc/tree-ssa-ccp.c
@@ -916,7 +916,11 @@ ccp_finalize (void)
 	  unsigned int precision = TYPE_PRECISION (TREE_TYPE (val->value));
 	  wide_int nonzero_bits = wide_int::from (val->mask, precision,
 						  UNSIGNED) | val->value;
-	  nonzero_bits &= get_nonzero_bits (name);
+	  wide_int nonzero_bits_name = get_nonzero_bits (name);
+	  if (precision != nonzero_bits_name.get_precision ())
+	    nonzero_bits = wi::shwi (*nonzero_bits.get_val (),
+				     nonzero_bits_name.get_precision ());
+	  nonzero_bits &= nonzero_bits_name;
 	  set_nonzero_bits (name, nonzero_bits);
 	}
     }
@@ -1852,6 +1856,8 @@ evaluate_stmt (gimple stmt)
     {
       tree lhs = gimple_get_lhs (stmt);
       wide_int nonzero_bits = get_nonzero_bits (lhs);
+      if (TYPE_PRECISION (TREE_TYPE (lhs)) != nonzero_bits.get_precision ())
+	nonzero_bits = wide_int_to_tree (TREE_TYPE (lhs), nonzero_bits);
       if (nonzero_bits != -1)
 	{
 	  if (!is_constant)
diff --git a/gcc/tree-ssanames.c b/gcc/tree-ssanames.c
index 3af80a0..459c669 100644
--- a/gcc/tree-ssanames.c
+++ b/gcc/tree-ssanames.c
@@ -192,7 +192,7 @@ set_range_info (tree name, enum value_range_type range_type,
   gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
   gcc_assert (range_type == VR_RANGE || range_type == VR_ANTI_RANGE);
   range_info_def *ri = SSA_NAME_RANGE_INFO (name);
-  unsigned int precision = TYPE_PRECISION (TREE_TYPE (name));
+  unsigned int precision = min.get_precision ();
 
   /* Allocate if not available.  */
   if (ri == NULL)
@@ -204,6 +204,15 @@ set_range_info (tree name, enum value_range_type range_type,
       SSA_NAME_RANGE_INFO (name) = ri;
       ri->set_nonzero_bits (wi::shwi (-1, precision));
     }
+  else if (ri->get_min ().get_precision () != precision)
+    {
+      size_t size = (sizeof (range_info_def)
+		     + trailing_wide_ints <3>::extra_size (precision));
+      ri = static_cast <range_info_def *> (ggc_realloc (ri, size));
+      ri->ints.set_precision (precision);
+      SSA_NAME_RANGE_INFO (name) = ri;
+      ri->set_nonzero_bits (wi::shwi (-1, precision));
+    }
 
   /* Record the range type.  */
   if (SSA_NAME_RANGE_TYPE (name) != range_type)
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index d16fd8a..772676a 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -61,6 +61,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "optabs.h"
 #include "tree-ssa-threadedge.h"
 #include "wide-int.h"
+#include "langhooks.h"
@@ -424,6 +425,23 @@ set_value_range_to_varying (value_range_t *vr)
   bitmap_clear (vr->equiv);
 }
 
+/* Normalize min and max to promoted_type if their precision differs.  */
+
+static void
+normalize_int_cst_precision (tree *min, tree *max)
+{
+  if (TREE_CODE (*min) != INTEGER_CST
+      || TREE_CODE (*max) != INTEGER_CST)
+    return;
+  if (TYPE_PRECISION (TREE_TYPE (*min)) != TYPE_PRECISION (TREE_TYPE (*max)))
+    {
+      tree type = TREE_TYPE (*min);
+      if (TYPE_PRECISION (TREE_TYPE (*min)) < TYPE_PRECISION (TREE_TYPE (*max)))
+	type = TREE_TYPE (*max);
+      *min = wide_int_to_tree (type, *min);
+      *max = wide_int_to_tree (type, *max);
+    }
+}
 
 /* Set value range VR to {T, MIN, MAX, EQUIV}.  */
@@ -438,6 +456,8 @@ set_value_range (value_range_t *vr, enum value_range_type t, tree min,
       int cmp;
 
       gcc_assert (min && max);
+      gcc_assert (TYPE_PRECISION (TREE_TYPE (min))
+		  == TYPE_PRECISION (TREE_TYPE (max)));
 
       gcc_assert ((!TREE_OVERFLOW_P (min) || is_overflow_infinity (min))
		   && (!TREE_OVERFLOW_P (max) || is_overflow_infinity (max)));
@@ -597,6 +617,8 @@ set_and_canonicalize_value_range (value_range_t *vr, enum value_range_type t,
       return;
     }
 
+  if (min != NULL_TREE && max != NULL_TREE)
+    normalize_int_cst_precision (&min, &max);
   set_value_range (vr, t, min, max, equiv);
 }
@@ -951,6 +973,66 @@ usable_range_p (value_range_t *vr, bool *strict_overflow_p)
   return true;
 }
 
+/* Return the promoted type as defined by PROMOTE_MODE of the target.  */
+
+static tree
+promoted_type (tree type)
+{
+#ifdef PROMOTE_MODE
+  tree new_type;
+  if (!POINTER_TYPE_P (type)
+      && (TREE_CODE (type) != ENUMERAL_TYPE)
+      && INTEGRAL_TYPE_P (type))
+    {
+      enum machine_mode mode = TYPE_MODE (type);
+      int uns = TYPE_SIGN (type);
+      PROMOTE_MODE (mode, uns, type);
+      uns = TYPE_SIGN (type);
+      new_type = lang_hooks.types.type_for_mode (mode, uns);
+      if (TYPE_PRECISION (new_type) > TYPE_PRECISION (type))
+	type = new_type;
+    }
+#endif
+  return type;
+}
+
+/* Promote VR0 to promoted_type if its precision differs from TYPE's and
+   return the new type.  */
+
+static tree
+promote_unary_vr (tree type, value_range_t *vr0)
+{
+  tree expr_type = type;
+
+  if (!range_int_cst_p (vr0))
+    return expr_type;
+  if ((TYPE_PRECISION (type) != TYPE_PRECISION (TREE_TYPE (vr0->min)))
+      || (TYPE_PRECISION (type) != TYPE_PRECISION (TREE_TYPE (vr0->max))))
+    {
+      expr_type = promoted_type (type);
+      vr0->min = wide_int_to_tree (expr_type, vr0->min);
+      vr0->max = wide_int_to_tree (expr_type, vr0->max);
+    }
+  return expr_type;
+}
+
+/* Promote VR0 and VR1 to promoted_type if their precisions differ and
+   return the new type.  */
+
+static tree
+promote_binary_vr (tree type, value_range_t *vr0, value_range_t *vr1)
+{
+  tree expr_type0 = promote_unary_vr (type, vr0);
+  tree expr_type1 = promote_unary_vr (type, vr1);
+
+  if (TYPE_PRECISION (expr_type0) == TYPE_PRECISION (expr_type1))
+    return expr_type0;
+  if (TYPE_PRECISION (expr_type0) < TYPE_PRECISION (expr_type1))
+    return promote_unary_vr (expr_type1, vr0);
+  else
+    return promote_unary_vr (expr_type0, vr1);
+}
+
 /* Return true if the result of assignment STMT is know to be non-negative.
    If the return value is based on the assumption that signed overflow is
@@ -1741,6 +1823,7 @@ extract_range_from_assert (value_range_t *vr_p, tree expr)
 	      TREE_NO_WARNING (max) = 1;
 	    }
 
+	  normalize_int_cst_precision (&min, &max);
 	  set_value_range (vr_p, VR_RANGE, min, max, vr_p->equiv);
 	}
     }
@@ -1781,6 +1864,7 @@ extract_range_from_assert (value_range_t *vr_p, tree expr)
 	      TREE_NO_WARNING (min) = 1;
 	    }
 
+	  normalize_int_cst_precision (&min, &max);
 	  set_value_range (vr_p, VR_RANGE, min, max, vr_p->equiv);
 	}
     }
@@ -2376,6 +2460,9 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
      range and see what we end up with.  */
   if (code == PLUS_EXPR || code == MINUS_EXPR)
     {
+      /* If any of the value ranges is in promoted type, promote them all
+	 including the type.  */
+      expr_type = promote_binary_vr (expr_type, &vr0, &vr1);
       /* If we have a PLUS_EXPR with two VR_RANGE integer constant
	  ranges compute the precise range for such case if possible.  */
       if (range_int_cst_p (&vr0)
@@ -2562,6 +2649,9 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
   else if (code == MIN_EXPR
	    || code == MAX_EXPR)
     {
+      /* If any of the value ranges is in promoted type, promote them all
+	 including the type.  */
+      expr_type = promote_binary_vr (expr_type, &vr0, &vr1);
       if (vr0.type == VR_RANGE
	   && !symbolic_range_p (&vr0))
	 {
@@ -2625,6 +2715,8 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
	   typedef generic_wide_int
	      <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> > vrp_int_cst;
	   vrp_int sizem1 = wi::mask <vrp_int> (prec, false);
	   vrp_int size = sizem1 + 1;
+	   vrp_int type_min = vrp_int_cst (TYPE_MIN_VALUE (expr_type));
+	   vrp_int type_max = vrp_int_cst (TYPE_MAX_VALUE (expr_type));
 
	   /* Extend the values using the sign of the result to PREC2.
	      From here on out, everthing is just signed math no matter
@@ -2697,8 +2789,17 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
 
	       /* The following should handle the wrapping and selecting
		  VR_ANTI_RANGE for us.  */
-	       min = wide_int_to_tree (expr_type, prod0);
-	       max = wide_int_to_tree (expr_type, prod3);
+	       if (wi::lts_p (prod0, type_min)
+		   || wi::gts_p (prod3, type_max))
+		 {
+		   min = wide_int_to_tree (promoted_type (expr_type), prod0);
+		   max = wide_int_to_tree (promoted_type (expr_type), prod3);
+		 }
+	       else
+		 {
+		   min = wide_int_to_tree (expr_type, prod0);
+		   max = wide_int_to_tree (expr_type, prod3);
+		 }
	       set_and_canonicalize_value_range (vr, VR_RANGE, min, max, NULL);
	       return;
	     }
@@ -2724,6 +2825,8 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
   else if (code == RSHIFT_EXPR
	    || code == LSHIFT_EXPR)
     {
+      /* If the value range is in promoted type, promote the type as well.  */
+      expr_type = promote_unary_vr (expr_type, &vr0);
       /* If we have a RSHIFT_EXPR with any shift values outside [0..prec-1],
	  then drop to VR_VARYING.  Outside of this range we get undefined
	  behavior from the shift operation.  We cannot even trust
@@ -2946,6 +3049,9 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
       wide_int may_be_nonzero0, may_be_nonzero1;
       wide_int must_be_nonzero0, must_be_nonzero1;
 
+      /* If any of the value ranges is in promoted type, promote them all
+	 including the type.  */
+      expr_type = promote_binary_vr (expr_type, &vr0, &vr1);
       int_cst_range0 = zero_nonzero_bits_from_vr (expr_type, &vr0,
						   &may_be_nonzero0,
						   &must_be_nonzero0);
@@ -3224,14 +3330,22 @@ extract_range_from_unary_expr_1 (value_range_t *vr,
	   tree new_min, new_max;
	   if (is_overflow_infinity (vr0.min))
	     new_min = negative_overflow_infinity (outer_type);
-	   else
+	   else if (int_fits_type_p (vr0.min, outer_type))
	     new_min = force_fit_type (outer_type,
				       wi::to_widest (vr0.min), 0, false);
+	   else
+	     new_min = force_fit_type (promoted_type (outer_type),
+				       wi::to_widest (vr0.min),
+				       0, false);
	   if (is_overflow_infinity (vr0.max))
	     new_max = positive_overflow_infinity (outer_type);
-	   else
+	   else if (int_fits_type_p (vr0.min, outer_type))
	     new_max = force_fit_type (outer_type,
				       wi::to_widest (vr0.max), 0, false);
+	   else
+	     new_max = force_fit_type (promoted_type (outer_type),
+				       wi::to_widest (vr0.max),
+				       0, false);
	   set_and_canonicalize_value_range (vr, vr0.type,
					     new_min, new_max, NULL);
	   return;
@@ -3940,6 +4054,8 @@ adjust_range_with_scev (value_range_t *vr, struct loop *loop,
	   && is_positive_overflow_infinity (max)))
	 return;
 
+      if (min != NULL_TREE && max != NULL_TREE)
+	normalize_int_cst_precision (&min, &max);
       set_value_range (vr, VR_RANGE, min, max, vr->equiv);
     }
 }
@@ -6668,6 +6784,8 @@ vrp_visit_assignment_or_call (gimple stmt, tree *output_p)
       else
	 extract_range_from_assignment (&new_vr, stmt);
 
+      if (range_int_cst_p (&new_vr))
+	normalize_int_cst_precision (&new_vr.min, &new_vr.max);
       if (update_value_range (lhs, &new_vr))
	 {
	   *output_p = lhs;
@@ -8399,6 +8517,8 @@ vrp_visit_phi_node (gimple phi)
   /* If the new range is different than the previous value, keep
      iterating.  */
 update_range:
+  if (range_int_cst_p (&vr_result))
+    normalize_int_cst_precision (&vr_result.min, &vr_result.max);
   if (update_value_range (lhs, &vr_result))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
@@ -8655,9 +8775,19 @@ simplify_bit_ops_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
   if (!zero_nonzero_bits_from_vr (TREE_TYPE (op0), &vr0, &may_be_nonzero0,
				   &must_be_nonzero0))
     return false;
-  if (!zero_nonzero_bits_from_vr (TREE_TYPE (op1), &vr1, &may_be_nonzero1,
+  if (!zero_nonzero_bits_from_vr (TREE_TYPE (op0), &vr1, &may_be_nonzero1,
				   &must_be_nonzero1))
     return false;
+  if (TYPE_PRECISION (TREE_TYPE (op0)) != may_be_nonzero0.get_precision ())
+    {
+      may_be_nonzero0 = wide_int_to_tree (TREE_TYPE (op0), may_be_nonzero0);
+      must_be_nonzero0 = wide_int_to_tree (TREE_TYPE (op0), must_be_nonzero0);
+    }
+  if (TYPE_PRECISION (TREE_TYPE (op0)) != may_be_nonzero1.get_precision ())
+    {
+      may_be_nonzero1 = wide_int_to_tree (TREE_TYPE (op1), may_be_nonzero0);
+      must_be_nonzero1 = wide_int_to_tree (TREE_TYPE (op1), must_be_nonzero0);
+    }
 
   switch (gimple_assign_rhs_code (stmt))
     {
@@ -8752,9 +8882,9 @@ test_for_singularity (enum tree_code cond_code, tree op0,
   if (min && max)
     {
       if (compare_values (vr->min, min) == 1)
-	min = vr->min;
+	min = wide_int_to_tree (TREE_TYPE (op0), vr->min);
       if (compare_values (vr->max, max) == -1)
-	max = vr->max;
+	max = wide_int_to_tree (TREE_TYPE (op0), vr->max);
 
       /* If the new min/max values have converged to a single
	  value, then there is only one value which can satisfy the condition,
@@ -9474,7 +9604,7 @@ simplify_stmt_for_jump_threading (gimple stmt, gimple within_stmt)
	 {
	   extract_range_from_assignment (&new_vr, stmt);
	   if (range_int_cst_singleton_p (&new_vr))
-	     return new_vr.min;
+	     return wide_int_to_tree (TREE_TYPE (lhs), new_vr.min);
	 }
     }