From patchwork Mon Aug 30 06:24:18 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 504292
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 15/48] tcg/optimize: Split out fold_const{1,2}
Date: Sun, 29 Aug 2021 23:24:18 -0700
Message-Id: <20210830062451.639572-16-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210830062451.639572-1-richard.henderson@linaro.org>
References: <20210830062451.639572-1-richard.henderson@linaro.org>

Split out a whole bunch of placeholder functions, which are
currently identical.  That won't last as more code gets moved.

Use CASE_OP_32_64_VEC for some logical operators that previously
missed the addition of vectors.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/optimize.c | 254 +++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 202 insertions(+), 52 deletions(-)

-- 
2.25.1
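For reference, the CASE_OP_32_64 and CASE_OP_32_64_VEC case-label
macros used throughout this patch are not introduced here; they are
defined near the top of tcg/optimize.c, roughly as follows (quoted
from memory for context, not part of this diff):

    #define CASE_OP_32_64(x)                        \
            glue(glue(case INDEX_op_, x), _i32):    \
            glue(glue(case INDEX_op_, x), _i64)

    #define CASE_OP_32_64_VEC(x)                    \
            glue(glue(case INDEX_op_, x), _i32):    \
            glue(glue(case INDEX_op_, x), _i64):    \
            glue(glue(case INDEX_op_, x), _vec)

So CASE_OP_32_64_VEC(add) covers INDEX_op_add_i32, INDEX_op_add_i64
and INDEX_op_add_vec with a single label, which is what lets the
logical operators below pick up the vector variants they previously
missed.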
diff --git a/tcg/optimize.c b/tcg/optimize.c
index a3780514e5..05de083d50 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -660,6 +660,60 @@ static void finish_folding(OptContext *ctx, TCGOp *op)
     }
 }
 
+/*
+ * The fold_* functions return true when processing is complete,
+ * usually by folding the operation to a constant or to a copy,
+ * and calling tcg_opt_gen_{mov,movi}.  They may do other things,
+ * like collect information about the value produced, for use in
+ * optimizing a subsequent operation.
+ *
+ * These first fold_* functions are all helpers, used by other
+ * folders for more specific operations.
+ */
+
+static bool fold_const1(OptContext *ctx, TCGOp *op)
+{
+    if (arg_is_const(op->args[1])) {
+        uint64_t t;
+
+        t = arg_info(op->args[1])->val;
+        t = do_constant_folding(op->opc, t, 0);
+        return tcg_opt_gen_movi(ctx, op, op->args[0], t);
+    }
+    return false;
+}
+
+static bool fold_const2(OptContext *ctx, TCGOp *op)
+{
+    if (arg_is_const(op->args[1]) && arg_is_const(op->args[2])) {
+        uint64_t t1 = arg_info(op->args[1])->val;
+        uint64_t t2 = arg_info(op->args[2])->val;
+
+        t1 = do_constant_folding(op->opc, t1, t2);
+        return tcg_opt_gen_movi(ctx, op, op->args[0], t1);
+    }
+    return false;
+}
+
+/*
+ * These outermost fold_<op> functions are sorted alphabetically.
+ */
+
+static bool fold_add(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_and(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_andc(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
 static bool fold_call(OptContext *ctx, TCGOp *op)
 {
     TCGContext *s = ctx->tcg;
@@ -692,6 +746,26 @@ static bool fold_call(OptContext *ctx, TCGOp *op)
     return true;
 }
 
+static bool fold_ctpop(OptContext *ctx, TCGOp *op)
+{
+    return fold_const1(ctx, op);
+}
+
+static bool fold_eqv(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_exts(OptContext *ctx, TCGOp *op)
+{
+    return fold_const1(ctx, op);
+}
+
+static bool fold_extu(OptContext *ctx, TCGOp *op)
+{
+    return fold_const1(ctx, op);
+}
+
 static bool fold_mb(OptContext *ctx, TCGOp *op)
 {
     /* Eliminate duplicate and redundant fence instructions.  */
@@ -716,6 +790,41 @@ static bool fold_mb(OptContext *ctx, TCGOp *op)
     return true;
 }
 
+static bool fold_multiply(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_nand(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_neg(OptContext *ctx, TCGOp *op)
+{
+    return fold_const1(ctx, op);
+}
+
+static bool fold_nor(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_not(OptContext *ctx, TCGOp *op)
+{
+    return fold_const1(ctx, op);
+}
+
+static bool fold_or(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_orc(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
 static bool fold_qemu_ld(OptContext *ctx, TCGOp *op)
 {
     /* Opcodes that touch guest memory stop the mb optimization.  */
@@ -730,6 +839,21 @@ static bool fold_qemu_st(OptContext *ctx, TCGOp *op)
     return false;
 }
 
+static bool fold_shift(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_sub(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
+static bool fold_xor(OptContext *ctx, TCGOp *op)
+{
+    return fold_const2(ctx, op);
+}
+
 /* Propagate constants and copies, fold constant expressions. */
 void tcg_optimize(TCGContext *s)
 {
@@ -1276,26 +1400,6 @@ void tcg_optimize(TCGContext *s)
             }
             break;
 
-        CASE_OP_32_64(not):
-        CASE_OP_32_64(neg):
-        CASE_OP_32_64(ext8s):
-        CASE_OP_32_64(ext8u):
-        CASE_OP_32_64(ext16s):
-        CASE_OP_32_64(ext16u):
-        CASE_OP_32_64(ctpop):
-        case INDEX_op_ext32s_i64:
-        case INDEX_op_ext32u_i64:
-        case INDEX_op_ext_i32_i64:
-        case INDEX_op_extu_i32_i64:
-        case INDEX_op_extrl_i64_i32:
-        case INDEX_op_extrh_i64_i32:
-            if (arg_is_const(op->args[1])) {
-                tmp = do_constant_folding(opc, arg_info(op->args[1])->val, 0);
-                tcg_opt_gen_movi(&ctx, op, op->args[0], tmp);
-                continue;
-            }
-            break;
-
         CASE_OP_32_64(bswap16):
         CASE_OP_32_64(bswap32):
         case INDEX_op_bswap64_i64:
@@ -1307,36 +1411,6 @@ void tcg_optimize(TCGContext *s)
             }
             break;
 
-        CASE_OP_32_64(add):
-        CASE_OP_32_64(sub):
-        CASE_OP_32_64(mul):
-        CASE_OP_32_64(or):
-        CASE_OP_32_64(and):
-        CASE_OP_32_64(xor):
-        CASE_OP_32_64(shl):
-        CASE_OP_32_64(shr):
-        CASE_OP_32_64(sar):
-        CASE_OP_32_64(rotl):
-        CASE_OP_32_64(rotr):
-        CASE_OP_32_64(andc):
-        CASE_OP_32_64(orc):
-        CASE_OP_32_64(eqv):
-        CASE_OP_32_64(nand):
-        CASE_OP_32_64(nor):
-        CASE_OP_32_64(muluh):
-        CASE_OP_32_64(mulsh):
-        CASE_OP_32_64(div):
-        CASE_OP_32_64(divu):
-        CASE_OP_32_64(rem):
-        CASE_OP_32_64(remu):
-            if (arg_is_const(op->args[1]) && arg_is_const(op->args[2])) {
-                tmp = do_constant_folding(opc, arg_info(op->args[1])->val,
-                                          arg_info(op->args[2])->val);
-                tcg_opt_gen_movi(&ctx, op, op->args[0], tmp);
-                continue;
-            }
-            break;
-
         CASE_OP_32_64(clz):
         CASE_OP_32_64(ctz):
             if (arg_is_const(op->args[1])) {
@@ -1637,9 +1711,71 @@ void tcg_optimize(TCGContext *s)
             }
             break;
 
+        default:
+            break;
+
+        /* ---------------------------------------------------------- */
+        /* Sorted alphabetically by opcode as much as possible. */
+
+        CASE_OP_32_64_VEC(add):
+            done = fold_add(&ctx, op);
+            break;
+        CASE_OP_32_64_VEC(and):
+            done = fold_and(&ctx, op);
+            break;
+        CASE_OP_32_64_VEC(andc):
+            done = fold_andc(&ctx, op);
+            break;
+        CASE_OP_32_64(ctpop):
+            done = fold_ctpop(&ctx, op);
+            break;
+        CASE_OP_32_64(div):
+        CASE_OP_32_64(divu):
+            done = fold_const2(&ctx, op);
+            break;
+        CASE_OP_32_64(eqv):
+            done = fold_eqv(&ctx, op);
+            break;
+        CASE_OP_32_64(ext8s):
+        CASE_OP_32_64(ext16s):
+        case INDEX_op_ext32s_i64:
+        case INDEX_op_ext_i32_i64:
+            done = fold_exts(&ctx, op);
+            break;
+        CASE_OP_32_64(ext8u):
+        CASE_OP_32_64(ext16u):
+        case INDEX_op_ext32u_i64:
+        case INDEX_op_extu_i32_i64:
+        case INDEX_op_extrl_i64_i32:
+        case INDEX_op_extrh_i64_i32:
+            done = fold_extu(&ctx, op);
+            break;
         case INDEX_op_mb:
             done = fold_mb(&ctx, op);
             break;
+        CASE_OP_32_64(mul):
+        CASE_OP_32_64(mulsh):
+        CASE_OP_32_64(muluh):
+            done = fold_multiply(&ctx, op);
+            break;
+        CASE_OP_32_64(nand):
+            done = fold_nand(&ctx, op);
+            break;
+        CASE_OP_32_64(neg):
+            done = fold_neg(&ctx, op);
+            break;
+        CASE_OP_32_64(nor):
+            done = fold_nor(&ctx, op);
+            break;
+        CASE_OP_32_64_VEC(not):
+            done = fold_not(&ctx, op);
+            break;
+        CASE_OP_32_64_VEC(or):
+            done = fold_or(&ctx, op);
+            break;
+        CASE_OP_32_64_VEC(orc):
+            done = fold_orc(&ctx, op);
+            break;
         case INDEX_op_qemu_ld_i32:
         case INDEX_op_qemu_ld_i64:
             done = fold_qemu_ld(&ctx, op);
@@ -1649,8 +1785,22 @@ void tcg_optimize(TCGContext *s)
         case INDEX_op_qemu_st_i64:
            done = fold_qemu_st(&ctx, op);
             break;
-
-        default:
+        CASE_OP_32_64(rem):
+        CASE_OP_32_64(remu):
+            done = fold_const2(&ctx, op);
+            break;
+        CASE_OP_32_64(rotl):
+        CASE_OP_32_64(rotr):
+        CASE_OP_32_64(sar):
+        CASE_OP_32_64(shl):
+        CASE_OP_32_64(shr):
+            done = fold_shift(&ctx, op);
+            break;
+        CASE_OP_32_64_VEC(sub):
+            done = fold_sub(&ctx, op);
+            break;
+        CASE_OP_32_64_VEC(xor):
+            done = fold_xor(&ctx, op);
             break;
         }
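To illustrate where this is headed (a sketch only, not part of this
patch; the real per-opcode transformations arrive in later patches of
the series): a specific fold keeps the shared constant-folding helper
as its first step and layers opcode-specific identities after it, e.g.

    static bool fold_add(OptContext *ctx, TCGOp *op)
    {
        /* First try full constant folding via the shared helper. */
        if (fold_const2(ctx, op)) {
            return true;
        }
        /* Then opcode-specific identities, e.g. x + 0 => x. */
        if (arg_is_const(op->args[2]) && arg_info(op->args[2])->val == 0) {
            return tcg_opt_gen_mov(ctx, op, op->args[0], op->args[1]);
        }
        return false;
    }

Each fold_* returns true once it has rewritten the op via
tcg_opt_gen_{mov,movi}, so the dispatch switch only needs to fall
through to the generic finish_folding path while done remains false.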