From patchwork Thu May 8 08:07:58 2014
X-Patchwork-Submitter: Zhenqiang Chen
X-Patchwork-Id: 29822
Date: Thu, 8 May 2014 16:07:58 +0800
Subject: [PATCH, 2/2] shrink wrap a function with a single loop: split live_edge
From: Zhenqiang Chen <zhenqiang.chen@linaro.org>
To: "gcc-patches@gcc.gnu.org"
Cc: Jeff Law

Hi,

The patch splits the live_edge for move_insn_for_shrink_wrap so that the
register copy can be sunk out of the entry block.

Bootstrapped on x86-64 and ARM with no make check regressions.

OK for trunk?

Thanks!
-Zhenqiang

ChangeLog:
2014-05-08  Zhenqiang Chen  <zhenqiang.chen@linaro.org>

	* function.c (next_block_for_reg): Allow live_edge->dest to have
	two predecessors; return the live edge instead of its destination
	block.
	(move_insn_for_shrink_wrap): Split live_edge when its destination
	has two predecessors.
	(prepare_shrink_wrap): Pass the new split_p argument to
	move_insn_for_shrink_wrap.

diff --git a/gcc/function.c b/gcc/function.c
index 764ac82..0be58e2 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -5381,7 +5381,7 @@ requires_stack_frame_p (rtx insn, HARD_REG_SET prologue_used,
    and if BB is its only predecessor.  Return that block if so,
    otherwise return null.  */
 
-static basic_block
+static edge
 next_block_for_reg (basic_block bb, int regno, int end_regno)
 {
   edge e, live_edge;
@@ -5415,10 +5415,12 @@ next_block_for_reg (basic_block bb, int regno, int end_regno)
   if (live_edge->flags & EDGE_ABNORMAL)
     return NULL;
 
-  if (EDGE_COUNT (live_edge->dest->preds) > 1)
+  /* When live_edge->dest->preds == 2, we can create a new block on
+     the edge to make it meet the requirement.  */
+  if (EDGE_COUNT (live_edge->dest->preds) > 2)
     return NULL;
 
-  return live_edge->dest;
+  return live_edge;
 }
 
 /* Check whether INSN is the last insn in BB or
@@ -5545,20 +5547,25 @@ try_copy_prop (basic_block bb, rtx insn, rtx src, rtx dest,
   return ret;
 }
 
-/* Try to move INSN from BB to a successor.  Return true on success.
-   USES and DEFS are the set of registers that are used and defined
-   after INSN in BB.  */
+/* Try to move INSN from BB to a successor.  Return true on success.
+   LAST_USES is the set of registers that are used by the COMPARE or JUMP
+   instructions in the block.  USES is the set of registers that are used
+   by others after INSN except COMPARE and JUMP.  DEFS is the set of
+   registers that are used and defined by others after INSN.  SPLIT_P
+   indicates whether a live edge from BB is split or not.  */
 
 static bool
 move_insn_for_shrink_wrap (basic_block bb, rtx insn,
                            const HARD_REG_SET uses,
                            const HARD_REG_SET defs,
-                           HARD_REG_SET *last_uses)
+                           HARD_REG_SET *last_uses,
+                           bool *split_p)
 {
   rtx set, src, dest;
   bitmap live_out, live_in, bb_uses, bb_defs;
   unsigned int i, dregno, end_dregno, sregno, end_sregno;
   basic_block next_block;
+  edge live_edge;
 
   /* Look for a simple register copy.  */
   set = single_set (insn);
@@ -5582,17 +5589,31 @@ move_insn_for_shrink_wrap (basic_block bb, rtx insn,
       || overlaps_hard_reg_set_p (defs, GET_MODE (dest), dregno))
     return false;
 
-  /* See whether there is a successor block to which we could move INSN.  */
-  next_block = next_block_for_reg (bb, dregno, end_dregno);
-  if (!next_block)
+  live_edge = next_block_for_reg (bb, dregno, end_dregno);
+  if (!live_edge)
     return false;
 
+  next_block = live_edge->dest;
+
   /* If the destination register is referred in later insn,
      try to forward it.  */
   if (overlaps_hard_reg_set_p (*last_uses, GET_MODE (dest), dregno)
      && !try_copy_prop (bb, insn, src, dest, last_uses))
    return false;
 
+  /* Create a new basic block on the edge.  */
+  if (EDGE_COUNT (next_block->preds) == 2)
+    {
+      next_block = split_edge (live_edge);
+
+      bitmap_copy (df_get_live_in (next_block), df_get_live_out (bb));
+      df_set_bb_dirty (next_block);
+
+      /* We should not split more than once for a function.  */
+      gcc_assert (!(*split_p));
+      *split_p = true;
+    }
+
   /* At this point we are committed to moving INSN, but let's try to
      move it as far as we can.  */
   do
@@ -5610,7 +5631,10 @@ move_insn_for_shrink_wrap (basic_block bb, rtx insn,
     {
       for (i = dregno; i < end_dregno; i++)
         {
-          if (REGNO_REG_SET_P (bb_uses, i) || REGNO_REG_SET_P (bb_defs, i)
+
+          if (*split_p
+              || REGNO_REG_SET_P (bb_uses, i)
+              || REGNO_REG_SET_P (bb_defs, i)
               || REGNO_REG_SET_P (&DF_LIVE_BB_INFO (bb)->gen, i))
             next_block = NULL;
           CLEAR_REGNO_REG_SET (live_out, i);
@@ -5621,7 +5645,8 @@ move_insn_for_shrink_wrap (basic_block bb, rtx insn,
          Either way, SRC is now live on entry.  */
       for (i = sregno; i < end_sregno; i++)
         {
-          if (REGNO_REG_SET_P (bb_defs, i)
+          if (*split_p
+              || REGNO_REG_SET_P (bb_defs, i)
               || REGNO_REG_SET_P (&DF_LIVE_BB_INFO (bb)->gen, i))
             next_block = NULL;
           SET_REGNO_REG_SET (live_out, i);
@@ -5650,21 +5675,31 @@ move_insn_for_shrink_wrap (basic_block bb, rtx insn,
       /* If we don't need to add the move to BB, look for a single
          successor block.  */
       if (next_block)
-        next_block = next_block_for_reg (next_block, dregno, end_dregno);
+        {
+          live_edge = next_block_for_reg (next_block, dregno, end_dregno);
+          if (!live_edge || EDGE_COUNT (live_edge->dest->preds) > 1)
+            break;
+          next_block = live_edge->dest;
+        }
     }
   while (next_block);
 
-  /* BB now defines DEST.  It only uses the parts of DEST that overlap SRC
-     (next loop).  */
-  for (i = dregno; i < end_dregno; i++)
+  /* For the newly created basic block, there is no dataflow info at all.
+     So skip the following dataflow update and check.  */
+  if (!(*split_p))
     {
-      CLEAR_REGNO_REG_SET (bb_uses, i);
-      SET_REGNO_REG_SET (bb_defs, i);
-    }
+      /* BB now defines DEST.  It only uses the parts of DEST that
+         overlap SRC (next loop).  */
+      for (i = dregno; i < end_dregno; i++)
+        {
+          CLEAR_REGNO_REG_SET (bb_uses, i);
+          SET_REGNO_REG_SET (bb_defs, i);
+        }
 
-  /* BB now uses SRC.  */
-  for (i = sregno; i < end_sregno; i++)
-    SET_REGNO_REG_SET (bb_uses, i);
+      /* BB now uses SRC.  */
+      for (i = sregno; i < end_sregno; i++)
+        SET_REGNO_REG_SET (bb_uses, i);
+    }
 
   emit_insn_after (PATTERN (insn), bb_note (bb));
   delete_insn (insn);
@@ -5684,6 +5719,7 @@ prepare_shrink_wrap (basic_block entry_block)
   rtx insn, curr, x;
   HARD_REG_SET uses, defs, last_uses;
   df_ref *ref;
+  bool split_p = false;
 
   if (!JUMP_P (BB_END (entry_block)))
     return;
@@ -5693,7 +5729,7 @@ prepare_shrink_wrap (basic_block entry_block)
   FOR_BB_INSNS_REVERSE_SAFE (entry_block, insn, curr)
     if (NONDEBUG_INSN_P (insn)
         && !move_insn_for_shrink_wrap (entry_block, insn, uses, defs,
-                                       &last_uses))
+                                       &last_uses, &split_p))
      {
        /* Add all defined registers to DEFs.  */
        for (ref = DF_INSN_DEFS (insn); *ref; ref++)