From patchwork Wed Jun 11 09:35:28 2014
X-Patchwork-Submitter: Zhenqiang Chen
X-Patchwork-Id: 31750
Date: Wed, 11 Jun 2014 17:35:28 +0800
Subject: Re: [PATCH, loop2_invariant] Pre-check invariants
From: Zhenqiang Chen <zhenqiang.chen@linaro.org>
To: Steven Bosscher
Cc: "gcc-patches@gcc.gnu.org"

On 10 June 2014 19:01, Steven Bosscher wrote:
> On Tue, Jun 10, 2014 at 11:55 AM, Zhenqiang Chen wrote:
>>
>> * loop-invariant.c (find_invariant_insn): Skip invariants which
>> cannot make a valid insn during replacement in move_invariant_reg.
>>
>> --- a/gcc/loop-invariant.c
>> +++ b/gcc/loop-invariant.c
>> @@ -881,6 +881,35 @@ find_invariant_insn (rtx insn, bool
>> always_reached, bool always_executed)
>>        || HARD_REGISTER_P (dest))
>>      simple = false;
>>
>> +  /* Pre-check candidate to skip the one which cannot make a valid insn
>> +     during move_invariant_reg.  */
>> +  if (flag_ira_loop_pressure && df_live && simple
>> +      && REG_P (dest) && DF_REG_DEF_COUNT (REGNO (dest)) > 1)
>
> Why only do this with (flag_ira_loop_pressure && df_live)?  If the
> invariant can't be moved, we should ignore it regardless of whether
> register pressure is taken into account.

Thanks for the comments. df_live seems redundant here: with
flag_ira_loop_pressure, the pass calls df_analyze () at the beginning,
which makes sure all the DF info is correct. Can we guarantee that all
the DF_... info is correct without df_analyze ()?
>> +    {
>> +      df_ref use;
>> +      rtx ref;
>> +      unsigned int i = REGNO (dest);
>> +      struct df_insn_info *insn_info;
>> +      df_ref *def_rec;
>> +
>> +      for (use = DF_REG_USE_CHAIN (i); use; use = DF_REF_NEXT_REG (use))
>> +        {
>> +          ref = DF_REF_INSN (use);
>> +          insn_info = DF_INSN_INFO_GET (ref);
>> +
>> +          for (def_rec = DF_INSN_INFO_DEFS (insn_info); *def_rec; def_rec++)
>> +            if (DF_REF_REGNO (*def_rec) == i)
>> +              {
>> +                /* Multi definitions at this stage are most likely due to
>> +                   an instruction constraint which requires both a read and
>> +                   a write of the same register.  Since move_invariant_reg
>> +                   is not powerful enough to handle such cases, just ignore
>> +                   the INV and leave the chance to others.  */
>> +                return;
>> +              }
>> +        }
>> +    }
>> +
>>    if (!may_assign_reg_p (SET_DEST (set))
>>        || !check_maybe_invariant (SET_SRC (set)))
>>      return;
>
> Can you put your new check between "may_assign_reg_p (dest)" and
> "check_maybe_invariant"?  The may_assign_reg_p check is cheap and
> triggers quite often.

Updated, and also removed the "flag_ira_loop_pressure && df_live" check.
To keep it simple, I moved the code to a new function.

OK for trunk?

diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c
index c6bf19b..d19f3c8 100644
--- a/gcc/loop-invariant.c
+++ b/gcc/loop-invariant.c
@@ -839,6 +852,39 @@ check_dependencies (rtx insn, bitmap depends_on)
   return true;
 }
 
+/* Pre-check candidate DEST to skip those that cannot make a valid insn
+   during move_invariant_reg.  SIMPLE is false when DEST is a hard register.  */
+static bool
+pre_check_invariant_p (bool simple, rtx dest)
+{
+  if (simple && REG_P (dest) && DF_REG_DEF_COUNT (REGNO (dest)) > 1)
+    {
+      df_ref use;
+      rtx ref;
+      unsigned int i = REGNO (dest);
+      struct df_insn_info *insn_info;
+      df_ref *def_rec;
+
+      for (use = DF_REG_USE_CHAIN (i); use; use = DF_REF_NEXT_REG (use))
+        {
+          ref = DF_REF_INSN (use);
+          insn_info = DF_INSN_INFO_GET (ref);
+
+          for (def_rec = DF_INSN_INFO_DEFS (insn_info); *def_rec; def_rec++)
+            if (DF_REF_REGNO (*def_rec) == i)
+              {
+                /* Multi definitions at this stage are most likely due to
+                   an instruction constraint which requires both a read and
+                   a write of the same register.  Since move_invariant_reg
+                   is not powerful enough to handle such cases, just ignore
+                   the INV and leave the chance to others.  */
+                return false;
+              }
+        }
+    }
+  return true;
+}
+
 /* Finds invariant in INSN.  ALWAYS_REACHED is true if the insn is always
    executed.  ALWAYS_EXECUTED is true if the insn is always executed,
    unless the program ends due to a function call.  */
@@ -868,7 +914,8 @@ find_invariant_insn (rtx insn, bool always_reached, bool always_executed)
       || HARD_REGISTER_P (dest))
     simple = false;
 
-  if (!may_assign_reg_p (SET_DEST (set))
+  if (!may_assign_reg_p (dest)
+      || !pre_check_invariant_p (simple, dest)
       || !check_maybe_invariant (SET_SRC (set)))
     return;
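
For context, here is a minimal, hypothetical C sketch (not part of the patch
or this thread) of the kind of situation the new pre-check is meant to catch.
Whether this exact source produces such RTL depends on the target and on the
earlier passes, so treat it only as an illustration: a pseudo X is set by a
loop-invariant computation and is also read and written by a single insn
(forced below with an asm "+r" read-write operand), so X has more than one
definition, and move_invariant_reg could not simply substitute a new
invariant register into that use.

int
sketch (int *p, int n, int k)
{
  int s = 0;

  for (int i = 0; i < n; i++)
    {
      int x = k * 4;              /* loop-invariant definition of X */
      __asm__ ("" : "+r" (x));    /* the same insn reads and writes X */
      s += x + p[i];
    }
  return s;
}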