From patchwork Thu Oct 21 21:05:05 2021
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 516082
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 14/48] tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
Date: Thu, 21 Oct 2021 14:05:05 -0700
Message-Id: <20211021210539.825582-15-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211021210539.825582-1-richard.henderson@linaro.org>
References: <20211021210539.825582-1-richard.henderson@linaro.org>
Cc: luis.pires@eldorado.org.br, alex.bennee@linaro.org

This puts the separate mb optimization into the same framework
as the others.  While fold_qemu_{ld,st} are currently identical,
that won't last as more code gets moved.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/optimize.c | 89 +++++++++++++++++++++++++++++---------------------
 1 file changed, 51 insertions(+), 38 deletions(-)

-- 
2.25.1

Reviewed-by: Philippe Mathieu-Daudé

diff --git a/tcg/optimize.c b/tcg/optimize.c
index 699476e2f1..159a5a9ee5 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -692,6 +692,44 @@ static bool fold_call(OptContext *ctx, TCGOp *op)
     return true;
 }
 
+static bool fold_mb(OptContext *ctx, TCGOp *op)
+{
+    /* Eliminate duplicate and redundant fence instructions.  */
+    if (ctx->prev_mb) {
+        /*
+         * Merge two barriers of the same type into one,
+         * or a weaker barrier into a stronger one,
+         * or two weaker barriers into a stronger one.
+         *   mb X; mb Y => mb X|Y
+         *   mb; strl => mb; st
+         *   ldaq; mb => ld; mb
+         *   ldaq; strl => ld; mb; st
+         * Other combinations are also merged into a strong
+         * barrier.  This is stricter than specified but for
+         * the purposes of TCG is better than not optimizing.
+         */
+        ctx->prev_mb->args[0] |= op->args[0];
+        tcg_op_remove(ctx->tcg, op);
+    } else {
+        ctx->prev_mb = op;
+    }
+    return true;
+}
+
+static bool fold_qemu_ld(OptContext *ctx, TCGOp *op)
+{
+    /* Opcodes that touch guest memory stop the mb optimization.  */
+    ctx->prev_mb = NULL;
+    return false;
+}
+
+static bool fold_qemu_st(OptContext *ctx, TCGOp *op)
+{
+    /* Opcodes that touch guest memory stop the mb optimization.  */
+    ctx->prev_mb = NULL;
+    return false;
+}
+
 /* Propagate constants and copies, fold constant expressions. */
 void tcg_optimize(TCGContext *s)
 {
@@ -1599,6 +1637,19 @@ void tcg_optimize(TCGContext *s)
             }
             break;
 
+        case INDEX_op_mb:
+            done = fold_mb(&ctx, op);
+            break;
+        case INDEX_op_qemu_ld_i32:
+        case INDEX_op_qemu_ld_i64:
+            done = fold_qemu_ld(&ctx, op);
+            break;
+        case INDEX_op_qemu_st_i32:
+        case INDEX_op_qemu_st8_i32:
+        case INDEX_op_qemu_st_i64:
+            done = fold_qemu_st(&ctx, op);
+            break;
+
         default:
             break;
         }
@@ -1606,43 +1657,5 @@ void tcg_optimize(TCGContext *s)
         if (!done) {
             finish_folding(&ctx, op);
         }
-
-        /* Eliminate duplicate and redundant fence instructions.  */
-        if (ctx.prev_mb) {
-            switch (opc) {
-            case INDEX_op_mb:
-                /* Merge two barriers of the same type into one,
-                 * or a weaker barrier into a stronger one,
-                 * or two weaker barriers into a stronger one.
-                 *   mb X; mb Y => mb X|Y
-                 *   mb; strl => mb; st
-                 *   ldaq; mb => ld; mb
-                 *   ldaq; strl => ld; mb; st
-                 * Other combinations are also merged into a strong
-                 * barrier.  This is stricter than specified but for
-                 * the purposes of TCG is better than not optimizing.
-                 */
-                ctx.prev_mb->args[0] |= op->args[0];
-                tcg_op_remove(s, op);
-                break;
-
-            default:
-                /* Opcodes that end the block stop the optimization.  */
-                if ((def->flags & TCG_OPF_BB_END) == 0) {
-                    break;
-                }
-                /* fallthru */
-            case INDEX_op_qemu_ld_i32:
-            case INDEX_op_qemu_ld_i64:
-            case INDEX_op_qemu_st_i32:
-            case INDEX_op_qemu_st8_i32:
-            case INDEX_op_qemu_st_i64:
-                /* Opcodes that touch guest memory stop the optimization.  */
-                ctx.prev_mb = NULL;
-                break;
-            }
-        } else if (opc == INDEX_op_mb) {
-            ctx.prev_mb = op;
-        }
     }
 }
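
For readers following along with the series, the merge rule that fold_mb
applies ("mb X; mb Y => mb X|Y") can be illustrated with a small standalone
C sketch.  This is not part of the patch: the flag names and helper below
are simplified stand-ins, not the real TCGBar bits or TCG APIs.

  #include <stdio.h>

  /* Hypothetical barrier flag bits, loosely modeled on TCG's TCGBar. */
  enum {
      BAR_LD_LD = 1 << 0,   /* order earlier loads against later loads */
      BAR_LD_ST = 1 << 1,   /* order earlier loads against later stores */
      BAR_ST_LD = 1 << 2,   /* order earlier stores against later loads */
      BAR_ST_ST = 1 << 3,   /* order earlier stores against later stores */
  };

  /* Two adjacent barriers collapse into a single one that is at least
     as strong as either input: the union of their ordering constraints. */
  static unsigned merge_barriers(unsigned prev, unsigned next)
  {
      return prev | next;
  }

  int main(void)
  {
      unsigned acquire = BAR_LD_LD | BAR_LD_ST;   /* ldaq-style barrier */
      unsigned release = BAR_LD_ST | BAR_ST_ST;   /* strl-style barrier */

      /* ldaq; strl => one combined barrier: stricter than required,
         but still correct, which is the trade-off the comment describes. */
      printf("merged: %#x\n", merge_barriers(acquire, release));
      return 0;
  }

In the patch itself the same idea is a single OR into the previous barrier's
argument word (ctx->prev_mb->args[0] |= op->args[0]), followed by removing
the now-redundant op.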