From patchwork Thu Jul 13 08:06:30 2017
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 107572
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@linaro.org
Subject: Split out parts of scompare_loc_descriptor and emit_store_flag
Date: Thu, 13 Jul 2017 09:06:30 +0100
Message-ID: <87bmoooj5l.fsf@linaro.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0

This patch splits some cases out of scompare_loc_descriptor and
emit_store_flag, which helps with the upcoming machmode series.

Tested on aarch64-linux-gnu, powerpc64le-linux-gnu and x86_64-linux-gnu.
OK to install?

Richard


2017-07-13  Richard Sandiford

gcc/
        * dwarf2out.c (scompare_loc_descriptor_wide)
        (scompare_loc_descriptor_narrow): New functions, split out from...
        (scompare_loc_descriptor): ...here.
        * expmed.c (emit_store_flag_int): New function, split out from...
        (emit_store_flag): ...here.

Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c	2017-07-12 14:49:07.903932820 +0100
+++ gcc/dwarf2out.c	2017-07-13 09:04:15.318004945 +0100
@@ -13850,60 +13850,43 @@ compare_loc_descriptor (enum dwarf_locat
   return ret;
 }
 
-/* Return location descriptor for signed comparison OP RTL.  */
+/* Subroutine of scompare_loc_descriptor for the case in which we're
+   comparing two scalar integer operands OP0 and OP1 that have mode OP_MODE,
+   and in which OP_MODE is bigger than DWARF2_ADDR_SIZE.  */
 
 static dw_loc_descr_ref
-scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
-                         machine_mode mem_mode)
+scompare_loc_descriptor_wide (enum dwarf_location_atom op,
+                              machine_mode op_mode,
+                              dw_loc_descr_ref op0, dw_loc_descr_ref op1)
 {
-  machine_mode op_mode = GET_MODE (XEXP (rtl, 0));
-  dw_loc_descr_ref op0, op1;
-  int shift;
-
-  if (op_mode == VOIDmode)
-    op_mode = GET_MODE (XEXP (rtl, 1));
-  if (op_mode == VOIDmode)
-    return NULL;
-
-  if (dwarf_strict
-      && dwarf_version < 5
-      && (!SCALAR_INT_MODE_P (op_mode)
-          || GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE))
-    return NULL;
+  dw_die_ref type_die = base_type_for_mode (op_mode, 0);
+  dw_loc_descr_ref cvt;
 
-  op0 = mem_loc_descriptor (XEXP (rtl, 0), op_mode, mem_mode,
-                            VAR_INIT_STATUS_INITIALIZED);
-  op1 = mem_loc_descriptor (XEXP (rtl, 1), op_mode, mem_mode,
-                            VAR_INIT_STATUS_INITIALIZED);
-
-  if (op0 == NULL || op1 == NULL)
+  if (type_die == NULL)
     return NULL;
+  cvt = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
+  cvt->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
+  cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
+  cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
+  add_loc_descr (&op0, cvt);
+  cvt = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
+  cvt->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
+  cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
+  cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
+  add_loc_descr (&op1, cvt);
+  return compare_loc_descriptor (op, op0, op1);
+}
 
-  if (!SCALAR_INT_MODE_P (op_mode)
-      || GET_MODE_SIZE (op_mode) == DWARF2_ADDR_SIZE)
-    return compare_loc_descriptor (op, op0, op1);
-
-  if (GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE)
-    {
-      dw_die_ref type_die = base_type_for_mode (op_mode, 0);
-      dw_loc_descr_ref cvt;
-
-      if (type_die == NULL)
-        return NULL;
-      cvt = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
-      cvt->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
-      cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
-      cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
-      add_loc_descr (&op0, cvt);
-      cvt = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
-      cvt->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
-      cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
-      cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
-      add_loc_descr (&op1, cvt);
-      return compare_loc_descriptor (op, op0, op1);
-    }
+/* Subroutine of scompare_loc_descriptor for the case in which we're
+   comparing two scalar integer operands OP0 and OP1 that have mode OP_MODE,
+   and in which OP_MODE is smaller than DWARF2_ADDR_SIZE.  */
 
-  shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (op_mode)) * BITS_PER_UNIT;
+static dw_loc_descr_ref
+scompare_loc_descriptor_narrow (enum dwarf_location_atom op, rtx rtl,
+                                machine_mode op_mode,
+                                dw_loc_descr_ref op0, dw_loc_descr_ref op1)
+{
+  int shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (op_mode)) * BITS_PER_UNIT;
   /* For eq/ne, if the operands are known to be zero-extended,
      there is no need to do the fancy shifting up.  */
   if (op == DW_OP_eq || op == DW_OP_ne)
@@ -13960,6 +13943,45 @@ scompare_loc_descriptor (enum dwarf_loca
     }
   return compare_loc_descriptor (op, op0, op1);
 }
+
+/* Return location descriptor for signed comparison OP RTL.  */
+
+static dw_loc_descr_ref
+scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
+                         machine_mode mem_mode)
+{
+  machine_mode op_mode = GET_MODE (XEXP (rtl, 0));
+  dw_loc_descr_ref op0, op1;
+
+  if (op_mode == VOIDmode)
+    op_mode = GET_MODE (XEXP (rtl, 1));
+  if (op_mode == VOIDmode)
+    return NULL;
+
+  if (dwarf_strict
+      && dwarf_version < 5
+      && (!SCALAR_INT_MODE_P (op_mode)
+          || GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE))
+    return NULL;
+
+  op0 = mem_loc_descriptor (XEXP (rtl, 0), op_mode, mem_mode,
+                            VAR_INIT_STATUS_INITIALIZED);
+  op1 = mem_loc_descriptor (XEXP (rtl, 1), op_mode, mem_mode,
+                            VAR_INIT_STATUS_INITIALIZED);
+
+  if (op0 == NULL || op1 == NULL)
+    return NULL;
+
+  if (SCALAR_INT_MODE_P (op_mode))
+    {
+      if (GET_MODE_SIZE (op_mode) < DWARF2_ADDR_SIZE)
+        return scompare_loc_descriptor_narrow (op, rtl, op_mode, op0, op1);
+
+      if (GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE)
+        return scompare_loc_descriptor_wide (op, op_mode, op0, op1);
+    }
+  return compare_loc_descriptor (op, op0, op1);
+}
 
 /* Return location descriptor for unsigned comparison OP RTL.  */
 
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	2017-07-05 16:29:19.597161905 +0100
+++ gcc/expmed.c	2017-07-13 09:04:15.320004932 +0100
@@ -5531,155 +5531,19 @@ emit_store_flag_1 (rtx target, enum rtx_
   return 0;
 }
 
-/* Emit a store-flags instruction for comparison CODE on OP0 and OP1
-   and storing in TARGET.  Normally return TARGET.
-   Return 0 if that cannot be done.
-
-   MODE is the mode to use for OP0 and OP1 should they be CONST_INTs.  If
-   it is VOIDmode, they cannot both be CONST_INT.
-
-   UNSIGNEDP is for the case where we have to widen the operands
-   to perform the operation.  It says to use zero-extension.
-
-   NORMALIZEP is 1 if we should convert the result to be either zero
-   or one.  Normalize is -1 if we should convert the result to be
-   either zero or -1.  If NORMALIZEP is zero, the result will be left
-   "raw" out of the scc insn.  */
+/* Subroutine of emit_store_flag that handles cases in which the operands
+   are scalar integers.  SUBTARGET is the target to use for temporary
+   operations and TRUEVAL is the value to store when the condition is
+   true.  All other arguments are as for emit_store_flag.  */
 
 rtx
-emit_store_flag (rtx target, enum rtx_code code, rtx op0, rtx op1,
-                 machine_mode mode, int unsignedp, int normalizep)
+emit_store_flag_int (rtx target, rtx subtarget, enum rtx_code code, rtx op0,
+                     rtx op1, machine_mode mode, int unsignedp,
+                     int normalizep, rtx trueval)
 {
   machine_mode target_mode = target ? GET_MODE (target) : VOIDmode;
-  enum rtx_code rcode;
-  rtx subtarget;
-  rtx tem, trueval;
-  rtx_insn *last;
-
-  /* If we compare constants, we shouldn't use a store-flag operation,
-     but a constant load.  We can get there via the vanilla route that
-     usually generates a compare-branch sequence, but will in this case
-     fold the comparison to a constant, and thus elide the branch.  */
-  if (CONSTANT_P (op0) && CONSTANT_P (op1))
-    return NULL_RTX;
-
-  tem = emit_store_flag_1 (target, code, op0, op1, mode, unsignedp, normalizep,
-                           target_mode);
-  if (tem)
-    return tem;
-
-  /* If we reached here, we can't do this with a scc insn, however there
-     are some comparisons that can be done in other ways.  Don't do any
-     of these cases if branches are very cheap.  */
-  if (BRANCH_COST (optimize_insn_for_speed_p (), false) == 0)
-    return 0;
-
-  /* See what we need to return.  We can only return a 1, -1, or the
-     sign bit.  */
-
-  if (normalizep == 0)
-    {
-      if (STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
-        normalizep = STORE_FLAG_VALUE;
-
-      else if (val_signbit_p (mode, STORE_FLAG_VALUE))
-        ;
-      else
-        return 0;
-    }
-
-  last = get_last_insn ();
-
-  /* If optimizing, use different pseudo registers for each insn, instead
-     of reusing the same pseudo.  This leads to better CSE, but slows
-     down the compiler, since there are more pseudos */
-  subtarget = (!optimize
-               && (target_mode == mode)) ? target : NULL_RTX;
-  trueval = GEN_INT (normalizep ? normalizep : STORE_FLAG_VALUE);
-
-  /* For floating-point comparisons, try the reverse comparison or try
-     changing the "orderedness" of the comparison.  */
-  if (GET_MODE_CLASS (mode) == MODE_FLOAT)
-    {
-      enum rtx_code first_code;
-      bool and_them;
-
-      rcode = reverse_condition_maybe_unordered (code);
-      if (can_compare_p (rcode, mode, ccp_store_flag)
-          && (code == ORDERED || code == UNORDERED
-              || (! HONOR_NANS (mode) && (code == LTGT || code == UNEQ))
-              || (! HONOR_SNANS (mode) && (code == EQ || code == NE))))
-        {
-          int want_add = ((STORE_FLAG_VALUE == 1 && normalizep == -1)
-                          || (STORE_FLAG_VALUE == -1 && normalizep == 1));
-
-          /* For the reverse comparison, use either an addition or a XOR.  */
-          if (want_add
-              && rtx_cost (GEN_INT (normalizep), mode, PLUS, 1,
-                           optimize_insn_for_speed_p ()) == 0)
-            {
-              tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
-                                       STORE_FLAG_VALUE, target_mode);
-              if (tem)
-                return expand_binop (target_mode, add_optab, tem,
-                                     gen_int_mode (normalizep, target_mode),
-                                     target, 0, OPTAB_WIDEN);
-            }
-          else if (!want_add
-                   && rtx_cost (trueval, mode, XOR, 1,
-                                optimize_insn_for_speed_p ()) == 0)
-            {
-              tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
                                       normalizep, target_mode);
-              if (tem)
-                return expand_binop (target_mode, xor_optab, tem, trueval,
-                                     target, INTVAL (trueval) >= 0, OPTAB_WIDEN);
-            }
-        }
-
-      delete_insns_since (last);
-
-      /* Cannot split ORDERED and UNORDERED, only try the above trick.  */
-      if (code == ORDERED || code == UNORDERED)
-        return 0;
-
-      and_them = split_comparison (code, mode, &first_code, &code);
-
-      /* If there are no NaNs, the first comparison should always fall through.
-         Effectively change the comparison to the other one.  */
-      if (!HONOR_NANS (mode))
-        {
-          gcc_assert (first_code == (and_them ? ORDERED : UNORDERED));
-          return emit_store_flag_1 (target, code, op0, op1, mode, 0, normalizep,
-                                    target_mode);
-        }
-
-      if (!HAVE_conditional_move)
-        return 0;
-
-      /* Try using a setcc instruction for ORDERED/UNORDERED, followed by a
-         conditional move.  */
-      tem = emit_store_flag_1 (subtarget, first_code, op0, op1, mode, 0,
-                               normalizep, target_mode);
-      if (tem == 0)
-        return 0;
-
-      if (and_them)
-        tem = emit_conditional_move (target, code, op0, op1, mode,
-                                     tem, const0_rtx, GET_MODE (tem), 0);
-      else
-        tem = emit_conditional_move (target, code, op0, op1, mode,
-                                     trueval, tem, GET_MODE (tem), 0);
-
-      if (tem == 0)
-        delete_insns_since (last);
-      return tem;
-    }
-
-  /* The remaining tricks only apply to integer comparisons.  */
-
-  if (GET_MODE_CLASS (mode) != MODE_INT)
-    return 0;
+  rtx_insn *last = get_last_insn ();
+  rtx tem;
 
   /* If this is an equality comparison of integers, we can try to exclusive-or
      (or subtract) the two operands and use a recursive call to try the
@@ -5706,7 +5570,7 @@ emit_store_flag (rtx target, enum rtx_co
   /* For integer comparisons, try the reverse comparison.  However, for
      small X and if we'd have anyway to extend, implementing "X != 0"
     as "-(int)X >> 31" is still cheaper than inverting "(int)X == 0".  */
-  rcode = reverse_condition (code);
+  rtx_code rcode = reverse_condition (code);
   if (can_compare_p (rcode, mode, ccp_store_flag)
       && ! (optab_handler (cstore_optab, mode) == CODE_FOR_nothing
            && code == NE
@@ -5724,7 +5588,7 @@ emit_store_flag (rtx target, enum rtx_co
          tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
                                   STORE_FLAG_VALUE, target_mode);
          if (tem != 0)
-           tem = expand_binop (target_mode, add_optab, tem,
+           tem = expand_binop (target_mode, add_optab, tem,
                                gen_int_mode (normalizep, target_mode),
                                target, 0, OPTAB_WIDEN);
        }
@@ -5735,7 +5599,7 @@ emit_store_flag (rtx target, enum rtx_co
          tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
                                   normalizep, target_mode);
          if (tem != 0)
-           tem = expand_binop (target_mode, xor_optab, tem, trueval, target,
+           tem = expand_binop (target_mode, xor_optab, tem, trueval, target,
                                INTVAL (trueval) >= 0, OPTAB_WIDEN);
        }
 
@@ -5836,7 +5700,7 @@ emit_store_flag (rtx target, enum rtx_co
   if (tem == 0
       && (code == NE
          || BRANCH_COST (optimize_insn_for_speed_p (),
-                         false) > 1))
+                         false) > 1))
     {
       if (rtx_equal_p (subtarget, op0))
        subtarget = 0;
@@ -5858,7 +5722,7 @@ emit_store_flag (rtx target, enum rtx_co
   if (tem)
     {
       if (!target)
-       ;
+       ;
       else if (GET_MODE (tem) != target_mode)
        {
          convert_move (target, tem, 0);
@@ -5876,6 +5740,161 @@ emit_store_flag (rtx target, enum rtx_co
   return tem;
 }
 
+/* Emit a store-flags instruction for comparison CODE on OP0 and OP1
+   and storing in TARGET.  Normally return TARGET.
+   Return 0 if that cannot be done.
+
+   MODE is the mode to use for OP0 and OP1 should they be CONST_INTs.  If
+   it is VOIDmode, they cannot both be CONST_INT.
+
+   UNSIGNEDP is for the case where we have to widen the operands
+   to perform the operation.  It says to use zero-extension.
+
+   NORMALIZEP is 1 if we should convert the result to be either zero
+   or one.  Normalize is -1 if we should convert the result to be
+   either zero or -1.  If NORMALIZEP is zero, the result will be left
+   "raw" out of the scc insn.  */
+
+rtx
+emit_store_flag (rtx target, enum rtx_code code, rtx op0, rtx op1,
+                 machine_mode mode, int unsignedp, int normalizep)
+{
+  machine_mode target_mode = target ? GET_MODE (target) : VOIDmode;
+  enum rtx_code rcode;
+  rtx subtarget;
+  rtx tem, trueval;
+  rtx_insn *last;
+
+  /* If we compare constants, we shouldn't use a store-flag operation,
+     but a constant load.  We can get there via the vanilla route that
+     usually generates a compare-branch sequence, but will in this case
+     fold the comparison to a constant, and thus elide the branch.  */
+  if (CONSTANT_P (op0) && CONSTANT_P (op1))
+    return NULL_RTX;
+
+  tem = emit_store_flag_1 (target, code, op0, op1, mode, unsignedp, normalizep,
+                           target_mode);
+  if (tem)
+    return tem;
+
+  /* If we reached here, we can't do this with a scc insn, however there
+     are some comparisons that can be done in other ways.  Don't do any
+     of these cases if branches are very cheap.  */
+  if (BRANCH_COST (optimize_insn_for_speed_p (), false) == 0)
+    return 0;
+
+  /* See what we need to return.  We can only return a 1, -1, or the
+     sign bit.  */
+
+  if (normalizep == 0)
+    {
+      if (STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
+        normalizep = STORE_FLAG_VALUE;
+
+      else if (val_signbit_p (mode, STORE_FLAG_VALUE))
+        ;
+      else
+        return 0;
+    }
+
+  last = get_last_insn ();
+
+  /* If optimizing, use different pseudo registers for each insn, instead
+     of reusing the same pseudo.  This leads to better CSE, but slows
+     down the compiler, since there are more pseudos.  */
+  subtarget = (!optimize
+               && (target_mode == mode)) ? target : NULL_RTX;
+  trueval = GEN_INT (normalizep ? normalizep : STORE_FLAG_VALUE);
+
+  /* For floating-point comparisons, try the reverse comparison or try
+     changing the "orderedness" of the comparison.  */
+  if (GET_MODE_CLASS (mode) == MODE_FLOAT)
+    {
+      enum rtx_code first_code;
+      bool and_them;
+
+      rcode = reverse_condition_maybe_unordered (code);
+      if (can_compare_p (rcode, mode, ccp_store_flag)
+          && (code == ORDERED || code == UNORDERED
+              || (! HONOR_NANS (mode) && (code == LTGT || code == UNEQ))
+              || (! HONOR_SNANS (mode) && (code == EQ || code == NE))))
+        {
+          int want_add = ((STORE_FLAG_VALUE == 1 && normalizep == -1)
+                          || (STORE_FLAG_VALUE == -1 && normalizep == 1));
+
+          /* For the reverse comparison, use either an addition or a XOR.  */
+          if (want_add
+              && rtx_cost (GEN_INT (normalizep), mode, PLUS, 1,
+                           optimize_insn_for_speed_p ()) == 0)
+            {
+              tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
+                                       STORE_FLAG_VALUE, target_mode);
+              if (tem)
+                return expand_binop (target_mode, add_optab, tem,
+                                     gen_int_mode (normalizep, target_mode),
+                                     target, 0, OPTAB_WIDEN);
+            }
+          else if (!want_add
+                   && rtx_cost (trueval, mode, XOR, 1,
+                                optimize_insn_for_speed_p ()) == 0)
+            {
+              tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
+                                       normalizep, target_mode);
+              if (tem)
+                return expand_binop (target_mode, xor_optab, tem, trueval,
+                                     target, INTVAL (trueval) >= 0,
+                                     OPTAB_WIDEN);
+            }
+        }
+
+      delete_insns_since (last);
+
+      /* Cannot split ORDERED and UNORDERED, only try the above trick.  */
+      if (code == ORDERED || code == UNORDERED)
+        return 0;
+
+      and_them = split_comparison (code, mode, &first_code, &code);
+
+      /* If there are no NaNs, the first comparison should always fall through.
+         Effectively change the comparison to the other one.  */
+      if (!HONOR_NANS (mode))
+        {
+          gcc_assert (first_code == (and_them ? ORDERED : UNORDERED));
+          return emit_store_flag_1 (target, code, op0, op1, mode, 0, normalizep,
+                                    target_mode);
+        }
+
+      if (!HAVE_conditional_move)
+        return 0;
+
+      /* Try using a setcc instruction for ORDERED/UNORDERED, followed by a
+         conditional move.  */
+      tem = emit_store_flag_1 (subtarget, first_code, op0, op1, mode, 0,
+                               normalizep, target_mode);
+      if (tem == 0)
+        return 0;
+
+      if (and_them)
+        tem = emit_conditional_move (target, code, op0, op1, mode,
+                                     tem, const0_rtx, GET_MODE (tem), 0);
+      else
+        tem = emit_conditional_move (target, code, op0, op1, mode,
                                     trueval, tem, GET_MODE (tem), 0);
+
+      if (tem == 0)
+        delete_insns_since (last);
+      return tem;
+    }
+
+  /* The remaining tricks only apply to integer comparisons.  */
+
+  if (GET_MODE_CLASS (mode) == MODE_INT)
+    return emit_store_flag_int (target, subtarget, code, op0, op1, mode,
+                                unsignedp, normalizep, trueval);
+
+  return 0;
+}
+
 /* Like emit_store_flag, but always succeeds.  */
 
 rtx
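
A side note for readers unfamiliar with the "fancy shifting up" that
scompare_loc_descriptor_narrow keeps doing: when the operands are narrower
than DWARF2_ADDR_SIZE, both are shifted left so that their sign bits land in
the sign bit of an address-sized word, after which an address-sized signed
comparison gives the same answer as the narrow one.  The self-contained
sketch below is not part of the patch; the 8-byte address size and the
16-bit operand width are assumptions chosen purely for illustration.

#include <stdint.h>
#include <stdio.h>

/* Model a signed "a < b" on 16-bit operands using only a 64-bit signed
   comparison, the way the shifted DWARF expression does.  */
static int
narrow_less_than (int16_t a, int16_t b)
{
  /* Mirrors (DWARF2_ADDR_SIZE - GET_MODE_SIZE (op_mode)) * BITS_PER_UNIT
     for a hypothetical 8-byte address size and a 2-byte operand mode.  */
  const int shift = (8 - 2) * 8;

  /* Shift in unsigned arithmetic to avoid signed-overflow issues; the
     conversion back to int64_t is implementation-defined but is the usual
     modulo wrap with GCC.  */
  int64_t wa = (int64_t) ((uint64_t) (uint16_t) a << shift);
  int64_t wb = (int64_t) ((uint64_t) (uint16_t) b << shift);

  return wa < wb;
}

int
main (void)
{
  /* Expected output: 1 0 0.  */
  printf ("%d %d %d\n",
          narrow_less_than (-5, 3),
          narrow_less_than (3, -5),
          narrow_less_than (7, 7));
  return 0;
}

Shifting both operands by the same amount scales them by the same power of
two, so signed ordering is preserved; it is only for DW_OP_eq/DW_OP_ne that
the shift can be skipped when the operands are known to be zero-extended.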