From patchwork Sun Oct 11 02:56:38 2015
X-Patchwork-Submitter: Kugan Vivekanandarajah
X-Patchwork-Id: 54738
Subject: Re: [3/7] Optimize ZEXT_EXPR with tree-vrp
To: Richard Biener
References: <55ECFC2A.7050908@linaro.org> <55ECFD57.5060507@linaro.org> <5614556E.5020908@linaro.org> <5615AD60.6040904@linaro.org>
Cc: "gcc-patches@gcc.gnu.org"
From: Kugan <kugan.vivekanandarajah@linaro.org>
Message-ID: <5619CFE6.40403@linaro.org>
Date: Sun, 11 Oct 2015 13:56:38 +1100

On 09/10/15 21:29, Richard Biener wrote:
> +      unsigned int prec = tree_to_uhwi (vr1.min);
>
> this should use unsigned HOST_WIDE_INT
>
> +      wide_int sign_bit = wi::shwi (1ULL << (prec - 1),
> +                                    TYPE_PRECISION (TREE_TYPE (vr0.min)));
>
> use wi::one (TYPE_PRECISION (TREE_TYPE (vr0.min))) << (prec - 1);
>
> That is, you really need to handle precisions bigger than HOST_WIDE_INT.
>
> But I suppose wide_int really misses a test_bit function (it has a set_bit
> one already).
>
> +      if (wi::bit_and (must_be_nonzero, sign_bit) == sign_bit)
> +        {
> +          /* If to-be-extended sign bit is one.  */
> +          tmin = type_min;
> +          tmax = may_be_nonzero;
>
> I think tmax should be zero-extended may_be_nonzero from prec.
>
> +      else if (wi::bit_and (may_be_nonzero, sign_bit)
> +               != sign_bit)
> +        {
> +          /* If to-be-extended sign bit is zero.  */
> +          tmin = must_be_nonzero;
> +          tmax = may_be_nonzero;
>
> likewise here tmin/tmax should be zero-extended may/must_be_nonzero from prec.
>
> +    case SEXT_EXPR:
> +      {
> +        unsigned int prec = tree_to_uhwi (op1);
> +        wide_int sign_bit = wi::shwi (1ULL << (prec - 1),
> +                                      TYPE_PRECISION (TREE_TYPE (vr0.min)));
> +        wide_int mask = wi::shwi (((1ULL << (prec - 1)) - 1),
> +                                  TYPE_PRECISION (TREE_TYPE (vr0.max)));
>
> this has the same host precision issues of 1ULL (HOST_WIDE_INT).
> There is wi::mask, eventually you can use wi::set_bit_in_zero to
> produce the sign-bit wide_int (also above).

Thanks Richard. Does the attached patch look better?

Thanks,
Kugan

>From cf5f75f5c96d30cdd968e71035a398cb0d5fcff7 Mon Sep 17 00:00:00 2001
From: Kugan Vivekanandarajah
Date: Mon, 17 Aug 2015 13:45:52 +1000
Subject: [PATCH 3/7] Optimize ZEXT_EXPR with tree-vrp

---
 gcc/tree-vrp.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 2cd71a2..c04d290 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -2317,6 +2317,7 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
       && code != LSHIFT_EXPR
       && code != MIN_EXPR
       && code != MAX_EXPR
+      && code != SEXT_EXPR
       && code != BIT_AND_EXPR
       && code != BIT_IOR_EXPR
       && code != BIT_XOR_EXPR)
@@ -2877,6 +2878,52 @@ extract_range_from_binary_expr_1 (value_range_t *vr,
       extract_range_from_multiplicative_op_1 (vr, code, &vr0, &vr1);
       return;
     }
+  else if (code == SEXT_EXPR)
+    {
+      gcc_assert (range_int_cst_p (&vr1));
+      HOST_WIDE_INT prec = tree_to_uhwi (vr1.min);
+      type = vr0.type;
+      wide_int tmin, tmax;
+      wide_int may_be_nonzero, must_be_nonzero;
+
+      wide_int type_min = wi::min_value (prec, SIGNED);
+      wide_int type_max = wi::max_value (prec, SIGNED);
+      type_min = wide_int_to_tree (expr_type, type_min);
+      type_max = wide_int_to_tree (expr_type, type_max);
+      wide_int sign_bit
+	= wi::set_bit_in_zero (prec - 1,
+			       TYPE_PRECISION (TREE_TYPE (vr0.min)));
+      if (zero_nonzero_bits_from_vr (expr_type, &vr0,
+				     &may_be_nonzero,
+				     &must_be_nonzero))
+	{
+	  if (wi::bit_and (must_be_nonzero, sign_bit) == sign_bit)
+	    {
+	      /* If to-be-extended sign bit is one.  */
+	      tmin = type_min;
+	      tmax = wi::zext (may_be_nonzero, prec);
+	    }
+	  else if (wi::bit_and (may_be_nonzero, sign_bit)
+		   != sign_bit)
+	    {
+	      /* If to-be-extended sign bit is zero.  */
+	      tmin = wi::zext (must_be_nonzero, prec);
+	      tmax = wi::zext (may_be_nonzero, prec);
+	    }
+	  else
+	    {
+	      tmin = type_min;
+	      tmax = type_max;
+	    }
+	}
+      else
+	{
+	  tmin = type_min;
+	  tmax = type_max;
+	}
+      min = wide_int_to_tree (expr_type, tmin);
+      max = wide_int_to_tree (expr_type, tmax);
+    }
   else if (code == RSHIFT_EXPR
	    || code == LSHIFT_EXPR)
@@ -9244,6 +9291,28 @@ simplify_bit_ops_using_ranges (gimple_stmt_iterator *gsi, gimple *stmt)
	  break;
	}
       break;
+    case SEXT_EXPR:
+      {
+	unsigned int prec = tree_to_uhwi (op1);
+	wide_int sign_bit
+	  = wi::set_bit_in_zero (prec - 1,
+				 TYPE_PRECISION (TREE_TYPE (vr0.min)));
+	wide_int mask = wi::mask (prec, true,
+				  TYPE_PRECISION (TREE_TYPE (vr0.min)));
+	if (wi::bit_and (must_be_nonzero0, sign_bit) == sign_bit)
+	  {
+	    /* If to-be-extended sign bit is one.  */
+	    if (wi::bit_and (must_be_nonzero0, mask) == mask)
+	      op = op0;
+	  }
+	else if (wi::bit_and (may_be_nonzero0, sign_bit) != sign_bit)
+	  {
+	    /* If to-be-extended sign bit is zero.  */
+	    if (wi::bit_and (may_be_nonzero0, mask) == 0)
+	      op = op0;
+	  }
+      }
+      break;
     default:
       gcc_unreachable ();
     }
@@ -9946,6 +10015,7 @@ simplify_stmt_using_ranges (gimple_stmt_iterator *gsi)

     case BIT_AND_EXPR:
     case BIT_IOR_EXPR:
+    case SEXT_EXPR:
       /* Optimize away BIT_AND_EXPR and BIT_IOR_EXPR
	 if all the bits being cleared are already cleared or
	 all the bits being set are already set.  */
-- 
1.9.1