From patchwork Mon Feb 29 10:53:53 2016
X-Patchwork-Submitter: Kugan Vivekanandarajah
X-Patchwork-Id: 63195
Subject: Re: [RFC][PATCH][PR40921] Convert x + (-y * z * z) into x - y * z * z
To: Richard Biener
References: <56CFC02F.2070801@linaro.org>
Cc: "gcc-patches@gcc.gnu.org"
From: kugan
Message-ID: <56D42341.5090607@linaro.org>
Date: Mon, 29 Feb 2016 21:53:53 +1100

>
> Err. I think the way you implement that in reassoc is ad-hoc and not
> related to reassoc at all.
>
> In fact what reassoc is missing is to handle
>
>   -y * z * (-w) * x -> y * z * w * x
>
> thus optimize negates as if they were additional * -1 entries in a
> multiplication chain.  And then optimize a single remaining * -1 in the
> result chain to a negate.
>
> Then match.pd handles x + (-y) -> x - y (independent of -frounding-math btw).
>
> So no, this isn't ok as-is.  IMHO you want to expand the multiplication ops
> chain, pulling in the * -1 ops (if single-use, of course).
>

I agree. Here is the updated patch along the lines of what you suggested.
Does this look better?

Thanks,
Kugan

diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 17eb64f..bbb5ffb 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -4674,6 +4674,41 @@ attempt_builtin_powi (gimple *stmt, vec<operand_entry *> *ops)
   return result;
 }
 
+/* Factor out NEGATE_EXPR from the multiplication operands.  */
+static void
+factor_out_negate_expr (gimple_stmt_iterator *gsi,
+			gimple *stmt, vec<operand_entry *> *ops)
+{
+  operand_entry *oe;
+  unsigned int i;
+  int neg_count = 0;
+
+  FOR_EACH_VEC_ELT (*ops, i, oe)
+    {
+      if (TREE_CODE (oe->op) != SSA_NAME
+	  || !has_single_use (oe->op))
+	continue;
+      gimple *def_stmt = SSA_NAME_DEF_STMT (oe->op);
+      if (!is_gimple_assign (def_stmt)
+	  || gimple_assign_rhs_code (def_stmt) != NEGATE_EXPR)
+	continue;
+      oe->op = gimple_assign_rhs1 (def_stmt);
+      neg_count ++;
+    }
+
+  if (neg_count % 2)
+    {
+      tree lhs = gimple_assign_lhs (stmt);
+      tree tmp = make_temp_ssa_name (TREE_TYPE (lhs), NULL, "reassocneg");
+      gimple_set_lhs (stmt, tmp);
+      gassign *neg_stmt = gimple_build_assign (lhs, NEGATE_EXPR,
+					       tmp);
+      gimple_set_location (neg_stmt, gimple_location (stmt));
+      gimple_set_uid (neg_stmt, gimple_uid (stmt));
+      gsi_insert_after (gsi, neg_stmt, GSI_SAME_STMT);
+    }
+}
+
 /* Attempt to optimize
    CST1 * copysign (CST2, y) -> copysign (CST1 * CST2, y) if CST1 > 0, or
    CST1 * copysign (CST2, y) -> -copysign (CST1 * CST2, y) if CST1 < 0.  */
@@ -4917,6 +4952,12 @@ reassociate_bb (basic_block bb)
 	  if (rhs_code == MULT_EXPR)
 	    attempt_builtin_copysign (&ops);
 
+	  if (rhs_code == MULT_EXPR)
+	    {
+	      factor_out_negate_expr (&gsi, stmt, &ops);
+	      ops.qsort (sort_by_operand_rank);
+	    }
+
 	  if (reassoc_insert_powi_p
 	      && rhs_code == MULT_EXPR
 	      && flag_unsafe_math_optimizations)
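
For reference, here is a minimal sketch (not taken from the patch or from the
PR40921 testsuite; the function and variable names are made up) of the kind of
source the change is aimed at.  The multiplication chain feeding the addition
contains one NEGATE_EXPR; factor_out_negate_expr strips the negates out of the
chain and, because their count is odd, re-applies a single NEGATE_EXPR to the
product, leaving an x + (-t) form that the match.pd transform Richard mentions
can then fold to x - t:

/* Illustrative only.  Before reassociation the ops chain for t is
   (-y), z, z.  With the negate pulled out of the chain, t becomes
   -(y * z * z), and x + t then simplifies to x - y * z * z.  */
double
example (double x, double y, double z)
{
  double t = -y * z * z;
  return x + t;
}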