From patchwork Tue Apr 25 19:30:50 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 676825
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
    qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com,
    philmd@linaro.org, Alex Bennée
Subject: [PATCH v3 01/57] include/exec/memop: Add bits describing atomicity
Date: Tue, 25 Apr 2023 20:30:50 +0100
Message-Id: <20230425193146.2106111-2-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

These bits may be used to describe the precise atomicity requirements of
the guest, which may then be used to constrain the methods by which it may
be emulated by the host.

For instance, the AArch64 LDP (32-bit) instruction changes semantics with
ARMv8.4 LSE2, from

   MO_64 | MO_ATMAX_4 | MO_ATOM_IFALIGN
   (64-bits, single-copy atomic only on 4 byte units,
    nonatomic if not aligned by 4),

to

   MO_64 | MO_ATMAX_SIZE | MO_ATOM_WITHIN16
   (64-bits, single-copy atomic within a 16 byte block)

The former may be implemented with two 4 byte loads, or a single 8 byte
load if that happens to be efficient on the host.  The latter may not, and
may also require a helper when misaligned.

Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 include/exec/memop.h | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 25d027434a..04e4048f0b 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -81,6 +81,42 @@ typedef enum MemOp {
     MO_ALIGN_32 = 5 << MO_ASHIFT,
     MO_ALIGN_64 = 6 << MO_ASHIFT,

+    /*
+     * MO_ATOM_* describes the atomicity requirements of the operation:
+     * MO_ATOM_IFALIGN: the operation must be single-copy atomic if and
+     *     only if it is aligned; if unaligned there is no atomicity.
+     * MO_ATOM_NONE: the operation has no atomicity requirements.
+     * MO_ATOM_SUBALIGN: the operation is single-copy atomic by parts
+     *     by the alignment.  E.g. if the address is 0 mod 4, then each
+     *     4-byte subobject is single-copy atomic.
+     *     This is the atomicity of IBM Power and S390X processors.
+     * MO_ATOM_WITHIN16: the operation is single-copy atomic, even if it
+     *     is unaligned, so long as it does not cross a 16-byte boundary;
+     *     if it crosses a 16-byte boundary there is no atomicity.
+     *     This is the atomicity of Arm FEAT_LSE2.
+     *
+     * MO_ATMAX_* describes the maximum atomicity unit required:
+     * MO_ATMAX_SIZE: the entire operation, i.e. MO_SIZE.
+     * MO_ATMAX_[248]: units of N bytes.
+     *
+     * Note the default (i.e. 0) values are single-copy atomic to the
+     * size of the operation, if aligned.  This retains the behaviour
+     * from before these were introduced.
+     */
+    MO_ATOM_SHIFT    = 8,
+    MO_ATOM_MASK     = 0x3 << MO_ATOM_SHIFT,
+    MO_ATOM_IFALIGN  = 0 << MO_ATOM_SHIFT,
+    MO_ATOM_NONE     = 1 << MO_ATOM_SHIFT,
+    MO_ATOM_SUBALIGN = 2 << MO_ATOM_SHIFT,
+    MO_ATOM_WITHIN16 = 3 << MO_ATOM_SHIFT,
+
+    MO_ATMAX_SHIFT = 10,
+    MO_ATMAX_MASK  = 0x3 << MO_ATMAX_SHIFT,
+    MO_ATMAX_SIZE  = 0 << MO_ATMAX_SHIFT,
+    MO_ATMAX_2     = 1 << MO_ATMAX_SHIFT,
+    MO_ATMAX_4     = 2 << MO_ATMAX_SHIFT,
+    MO_ATMAX_8     = 3 << MO_ATMAX_SHIFT,
+
     /* Combinations of the above, for ease of use. */
     MO_UB    = MO_8,
     MO_UW    = MO_16,
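A brief illustrative sketch (not part of the patch; it uses only the MO_*
names introduced above) of how the two LDP behaviours from the commit
message are spelled as MemOp values, and how the fields are recovered with
the new shift/mask pairs:

    /* Sketch only: compose and decompose the new atomicity fields. */
    static MemOp ldp32_memop(bool have_lse2)
    {
        /* Pre-LSE2: atomic only per 4-byte unit, and only if aligned. */
        MemOp pre_lse2  = MO_64 | MO_ATMAX_4 | MO_ATOM_IFALIGN;
        /* FEAT_LSE2: the whole 8 bytes are atomic while within 16 bytes. */
        MemOp with_lse2 = MO_64 | MO_ATMAX_SIZE | MO_ATOM_WITHIN16;

        return have_lse2 ? with_lse2 : pre_lse2;
    }

    /* The fields come back out with the masks defined in the enum. */
    static MemOp memop_atom(MemOp op)  { return op & MO_ATOM_MASK; }
    static MemOp memop_atmax(MemOp op) { return op & MO_ATMAX_MASK; }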
From patchwork Tue Apr 25 19:30:51 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 676852
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
    qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com,
    philmd@linaro.org, Alex Bennée
Subject: [PATCH v3 02/57] accel/tcg: Add cpu_in_serial_context
Date: Tue, 25 Apr 2023 20:30:51 +0100
Message-Id: <20230425193146.2106111-3-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Like cpu_in_exclusive_context, but also true if there is no other cpu
against which we could race.

Use it in tb_flush as a direct replacement.  Use it in
cpu_loop_exit_atomic to ensure that there is no loop against
cpu_exec_step_atomic.

Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
 accel/tcg/internal.h        | 5 +++++
 accel/tcg/cpu-exec-common.c | 3 +++
 accel/tcg/tb-maint.c        | 2 +-
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 96f198b28b..8250ecbf74 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -64,6 +64,11 @@ static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
     }
 }

+static inline bool cpu_in_serial_context(CPUState *cs)
+{
+    return !(cs->tcg_cflags & CF_PARALLEL) || cpu_in_exclusive_context(cs);
+}
+
 extern int64_t max_delay;
 extern int64_t max_advance;

diff --git a/accel/tcg/cpu-exec-common.c b/accel/tcg/cpu-exec-common.c
index e7962c9348..9a5fabf625 100644
--- a/accel/tcg/cpu-exec-common.c
+++ b/accel/tcg/cpu-exec-common.c
@@ -22,6 +22,7 @@
 #include "sysemu/tcg.h"
 #include "exec/exec-all.h"
 #include "qemu/plugin.h"
+#include "internal.h"

 bool tcg_allowed;

@@ -81,6 +82,8 @@ void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)

 void cpu_loop_exit_atomic(CPUState *cpu, uintptr_t pc)
 {
+    /* Prevent looping if already executing in a serial context. */
+    g_assert(!cpu_in_serial_context(cpu));
     cpu->exception_index = EXCP_ATOMIC;
     cpu_loop_exit_restore(cpu, pc);
 }
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index cb1f806f00..7d613d36d2 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -760,7 +760,7 @@ void tb_flush(CPUState *cpu)
     if (tcg_enabled()) {
         unsigned tb_flush_count = qatomic_mb_read(&tb_ctx.tb_flush_count);

-        if (cpu_in_exclusive_context(cpu)) {
+        if (cpu_in_serial_context(cpu)) {
             do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count));
         } else {
             async_safe_run_on_cpu(cpu, do_tb_flush,
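As a rough usage sketch (not part of the series; do_flush_work is a
hypothetical callback, while cpu_in_serial_context, async_safe_run_on_cpu
and RUN_ON_CPU_NULL are existing names), the intended pattern is the same
as the tb_flush hunk above: do the work inline when no other vCPU can race
with us, otherwise defer it to a safe work item:

    /* Hypothetical maintenance work that must not race with other vCPUs. */
    static void do_flush_work(CPUState *cpu, run_on_cpu_data data)
    {
        /* ... runs with no other vCPU executing ... */
    }

    static void flush_something(CPUState *cs)
    {
        if (cpu_in_serial_context(cs)) {
            /* Either !CF_PARALLEL or we hold exclusive access: run inline. */
            do_flush_work(cs, RUN_ON_CPU_NULL);
        } else {
            /* Other vCPUs may be running: defer to a safe point. */
            async_safe_run_on_cpu(cs, do_flush_work, RUN_ON_CPU_NULL);
        }
    }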
From patchwork Tue Apr 25 19:30:52 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 676845
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
    qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com,
    philmd@linaro.org, Alex Bennée
Subject: [PATCH v3 03/57] accel/tcg: Introduce tlb_read_idx
Date: Tue, 25 Apr 2023 20:30:52 +0100
Message-Id: <20230425193146.2106111-4-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Instead of playing with offsetof in various places, use MMUAccessType
to index an array.  This is easily defined instead of the previous
dummy padding array in the union.

Reviewed-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 include/exec/cpu-defs.h |   7 ++-
 include/exec/cpu_ldst.h |  26 ++++++++--
 accel/tcg/cputlb.c      | 104 +++++++++++++---------------------------
 3 files changed, 59 insertions(+), 78 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index e1c498ef4b..a6e0cf1812 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -111,8 +111,11 @@ typedef struct CPUTLBEntry {
                use the corresponding iotlb value.  */
             uintptr_t addend;
         };
-        /* padding to get a power of two size */
-        uint8_t dummy[1 << CPU_TLB_ENTRY_BITS];
+        /*
+         * Padding to get a power of two size, as well as index
+         * access to addr_{read,write,code}.
+         */
+        target_ulong addr_idx[(1 << CPU_TLB_ENTRY_BITS) / TARGET_LONG_SIZE];
     };
 } CPUTLBEntry;

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index 09b55cc0ee..fad6efc0ad 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -360,13 +360,29 @@ static inline void clear_helper_retaddr(void)
 /* Needed for TCG_OVERSIZED_GUEST */
 #include "tcg/tcg.h"

+static inline target_ulong tlb_read_idx(const CPUTLBEntry *entry,
+                                        MMUAccessType access_type)
+{
+    /* Do not rearrange the CPUTLBEntry structure members.
*/ + QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) != + MMU_DATA_LOAD * TARGET_LONG_SIZE); + QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) != + MMU_DATA_STORE * TARGET_LONG_SIZE); + QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) != + MMU_INST_FETCH * TARGET_LONG_SIZE); + + const target_ulong *ptr = &entry->addr_idx[access_type]; +#if TCG_OVERSIZED_GUEST + return *ptr; +#else + /* ofs might correspond to .addr_write, so use qatomic_read */ + return qatomic_read(ptr); +#endif +} + static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry) { -#if TCG_OVERSIZED_GUEST - return entry->addr_write; -#else - return qatomic_read(&entry->addr_write); -#endif + return tlb_read_idx(entry, MMU_DATA_STORE); } /* Find the TLB index corresponding to the mmu_idx + address pair. */ diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 665c41fc12..e68cf422c5 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1441,34 +1441,17 @@ static void io_writex(CPUArchState *env, CPUTLBEntryFull *full, } } -static inline target_ulong tlb_read_ofs(CPUTLBEntry *entry, size_t ofs) -{ -#if TCG_OVERSIZED_GUEST - return *(target_ulong *)((uintptr_t)entry + ofs); -#else - /* ofs might correspond to .addr_write, so use qatomic_read */ - return qatomic_read((target_ulong *)((uintptr_t)entry + ofs)); -#endif -} - /* Return true if ADDR is present in the victim tlb, and has been copied back to the main tlb. */ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index, - size_t elt_ofs, target_ulong page) + MMUAccessType access_type, target_ulong page) { size_t vidx; assert_cpu_is_self(env_cpu(env)); for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) { CPUTLBEntry *vtlb = &env_tlb(env)->d[mmu_idx].vtable[vidx]; - target_ulong cmp; - - /* elt_ofs might correspond to .addr_write, so use qatomic_read */ -#if TCG_OVERSIZED_GUEST - cmp = *(target_ulong *)((uintptr_t)vtlb + elt_ofs); -#else - cmp = qatomic_read((target_ulong *)((uintptr_t)vtlb + elt_ofs)); -#endif + target_ulong cmp = tlb_read_idx(vtlb, access_type); if (cmp == page) { /* Found entry in victim tlb, swap tlb and iotlb. */ @@ -1490,11 +1473,6 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index, return false; } -/* Macro to call the above, with local variables from the use context. 
*/ -#define VICTIM_TLB_HIT(TY, ADDR) \ - victim_tlb_hit(env, mmu_idx, index, offsetof(CPUTLBEntry, TY), \ - (ADDR) & TARGET_PAGE_MASK) - static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size, CPUTLBEntryFull *full, uintptr_t retaddr) { @@ -1527,29 +1505,12 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr, { uintptr_t index = tlb_index(env, mmu_idx, addr); CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); - target_ulong tlb_addr, page_addr; - size_t elt_ofs; - int flags; + target_ulong tlb_addr = tlb_read_idx(entry, access_type); + target_ulong page_addr = addr & TARGET_PAGE_MASK; + int flags = TLB_FLAGS_MASK; - switch (access_type) { - case MMU_DATA_LOAD: - elt_ofs = offsetof(CPUTLBEntry, addr_read); - break; - case MMU_DATA_STORE: - elt_ofs = offsetof(CPUTLBEntry, addr_write); - break; - case MMU_INST_FETCH: - elt_ofs = offsetof(CPUTLBEntry, addr_code); - break; - default: - g_assert_not_reached(); - } - tlb_addr = tlb_read_ofs(entry, elt_ofs); - - flags = TLB_FLAGS_MASK; - page_addr = addr & TARGET_PAGE_MASK; if (!tlb_hit_page(tlb_addr, page_addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page_addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, access_type, page_addr)) { CPUState *cs = env_cpu(env); if (!cs->cc->tcg_ops->tlb_fill(cs, addr, fault_size, access_type, @@ -1571,7 +1532,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr, */ flags &= ~TLB_INVALID_MASK; } - tlb_addr = tlb_read_ofs(entry, elt_ofs); + tlb_addr = tlb_read_idx(entry, access_type); } flags &= tlb_addr; @@ -1797,7 +1758,8 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, if (prot & PAGE_WRITE) { tlb_addr = tlb_addr_write(tlbe); if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(addr_write, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE, + addr & TARGET_PAGE_MASK)) { tlb_fill(env_cpu(env), addr, size, MMU_DATA_STORE, mmu_idx, retaddr); index = tlb_index(env, mmu_idx, addr); @@ -1830,7 +1792,8 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, } else /* if (prot & PAGE_READ) */ { tlb_addr = tlbe->addr_read; if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(addr_write, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_LOAD, + addr & TARGET_PAGE_MASK)) { tlb_fill(env_cpu(env), addr, size, MMU_DATA_LOAD, mmu_idx, retaddr); index = tlb_index(env, mmu_idx, addr); @@ -1924,13 +1887,9 @@ load_memop(const void *haddr, MemOp op) static inline uint64_t QEMU_ALWAYS_INLINE load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, - uintptr_t retaddr, MemOp op, bool code_read, + uintptr_t retaddr, MemOp op, MMUAccessType access_type, FullLoadHelper *full_load) { - const size_t tlb_off = code_read ? - offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read); - const MMUAccessType access_type = - code_read ? MMU_INST_FETCH : MMU_DATA_LOAD; const unsigned a_bits = get_alignment_bits(get_memop(oi)); const size_t size = memop_size(op); uintptr_t mmu_idx = get_mmuidx(oi); @@ -1950,18 +1909,18 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, index = tlb_index(env, mmu_idx, addr); entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = code_read ? entry->addr_code : entry->addr_read; + tlb_addr = tlb_read_idx(entry, access_type); /* If the TLB entry is for a different page, reload and try again. 
*/ if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, tlb_off, + if (!victim_tlb_hit(env, mmu_idx, index, access_type, addr & TARGET_PAGE_MASK)) { tlb_fill(env_cpu(env), addr, size, access_type, mmu_idx, retaddr); index = tlb_index(env, mmu_idx, addr); entry = tlb_entry(env, mmu_idx, addr); } - tlb_addr = code_read ? entry->addr_code : entry->addr_read; + tlb_addr = tlb_read_idx(entry, access_type); tlb_addr &= ~TLB_INVALID_MASK; } @@ -2047,7 +2006,8 @@ static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_UB); - return load_helper(env, addr, oi, retaddr, MO_UB, false, full_ldub_mmu); + return load_helper(env, addr, oi, retaddr, MO_UB, MMU_DATA_LOAD, + full_ldub_mmu); } tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, @@ -2060,7 +2020,7 @@ static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUW); - return load_helper(env, addr, oi, retaddr, MO_LEUW, false, + return load_helper(env, addr, oi, retaddr, MO_LEUW, MMU_DATA_LOAD, full_le_lduw_mmu); } @@ -2074,7 +2034,7 @@ static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUW); - return load_helper(env, addr, oi, retaddr, MO_BEUW, false, + return load_helper(env, addr, oi, retaddr, MO_BEUW, MMU_DATA_LOAD, full_be_lduw_mmu); } @@ -2088,7 +2048,7 @@ static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUL); - return load_helper(env, addr, oi, retaddr, MO_LEUL, false, + return load_helper(env, addr, oi, retaddr, MO_LEUL, MMU_DATA_LOAD, full_le_ldul_mmu); } @@ -2102,7 +2062,7 @@ static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUL); - return load_helper(env, addr, oi, retaddr, MO_BEUL, false, + return load_helper(env, addr, oi, retaddr, MO_BEUL, MMU_DATA_LOAD, full_be_ldul_mmu); } @@ -2116,7 +2076,7 @@ uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUQ); - return load_helper(env, addr, oi, retaddr, MO_LEUQ, false, + return load_helper(env, addr, oi, retaddr, MO_LEUQ, MMU_DATA_LOAD, helper_le_ldq_mmu); } @@ -2124,7 +2084,7 @@ uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUQ); - return load_helper(env, addr, oi, retaddr, MO_BEUQ, false, + return load_helper(env, addr, oi, retaddr, MO_BEUQ, MMU_DATA_LOAD, helper_be_ldq_mmu); } @@ -2320,7 +2280,6 @@ store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val, uintptr_t retaddr, size_t size, uintptr_t mmu_idx, bool big_endian) { - const size_t tlb_off = offsetof(CPUTLBEntry, addr_write); uintptr_t index, index2; CPUTLBEntry *entry, *entry2; target_ulong page1, page2, tlb_addr, tlb_addr2; @@ -2342,7 +2301,7 @@ store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val, tlb_addr2 = tlb_addr_write(entry2); if (page1 != page2 && !tlb_hit_page(tlb_addr2, page2)) { - if (!victim_tlb_hit(env, mmu_idx, index2, tlb_off, page2)) { + if (!victim_tlb_hit(env, mmu_idx, index2, MMU_DATA_STORE, page2)) { tlb_fill(env_cpu(env), page2, size2, MMU_DATA_STORE, mmu_idx, retaddr); index2 = tlb_index(env, mmu_idx, page2); @@ -2395,7 +2354,6 @@ static inline void QEMU_ALWAYS_INLINE store_helper(CPUArchState *env, target_ulong 
addr, uint64_t val,
              MemOpIdx oi, uintptr_t retaddr, MemOp op)
 {
-    const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
     const unsigned a_bits = get_alignment_bits(get_memop(oi));
     const size_t size = memop_size(op);
     uintptr_t mmu_idx = get_mmuidx(oi);
@@ -2418,7 +2376,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,

     /* If the TLB entry is for a different page, reload and try again. */
     if (!tlb_hit(tlb_addr, addr)) {
-        if (!victim_tlb_hit(env, mmu_idx, index, tlb_off,
+        if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE,
                             addr & TARGET_PAGE_MASK)) {
             tlb_fill(env_cpu(env), addr, size, MMU_DATA_STORE,
                      mmu_idx, retaddr);
@@ -2724,7 +2682,8 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
 static uint64_t full_ldub_code(CPUArchState *env, target_ulong addr,
                                MemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_8, true, full_ldub_code);
+    return load_helper(env, addr, oi, retaddr, MO_8,
+                       MMU_INST_FETCH, full_ldub_code);
 }

 uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr)
@@ -2736,7 +2695,8 @@ uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr)
 static uint64_t full_lduw_code(CPUArchState *env, target_ulong addr,
                                MemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_TEUW, true, full_lduw_code);
+    return load_helper(env, addr, oi, retaddr, MO_TEUW,
+                       MMU_INST_FETCH, full_lduw_code);
 }

 uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr)
@@ -2748,7 +2708,8 @@ uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr)
 static uint64_t full_ldl_code(CPUArchState *env, target_ulong addr,
                               MemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_TEUL, true, full_ldl_code);
+    return load_helper(env, addr, oi, retaddr, MO_TEUL,
+                       MMU_INST_FETCH, full_ldl_code);
 }

 uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr)
@@ -2760,7 +2721,8 @@ uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr)
 static uint64_t full_ldq_code(CPUArchState *env, target_ulong addr,
                               MemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_TEUQ, true, full_ldq_code);
+    return load_helper(env, addr, oi, retaddr, MO_TEUQ,
+                       MMU_INST_FETCH, full_ldq_code);
 }

 uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr)
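For orientation between patches, a minimal sketch (not taken from the
series; tlb_entry and tlb_hit are pre-existing softmmu helpers) of how the
new accessor replaces the offsetof() arithmetic:

    /* Sketch: probe one TLB comparator selected by MMUAccessType. */
    static bool tlb_probe_sketch(CPUArchState *env, int mmu_idx,
                                 target_ulong addr, MMUAccessType type)
    {
        CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
        /* One accessor covers addr_read, addr_write and addr_code. */
        target_ulong cmp = tlb_read_idx(entry, type);

        return tlb_hit(cmp, addr);
    }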
From patchwork Tue Apr 25 19:30:53 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 676817
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
    qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com,
    philmd@linaro.org, Alex Bennée
Subject: [PATCH v3 04/57] accel/tcg: Reorg system mode load helpers
Date: Tue, 25 Apr 2023 20:30:53 +0100
Message-Id: <20230425193146.2106111-5-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Instead of trying to unify all operations on uint64_t, pull out
mmu_lookup() to perform the basic tlb hit and resolution.
Create individual functions to handle access by size.

Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 612 +++++++++++++++++++++++++++++++--------------
 1 file changed, 419 insertions(+), 193 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e68cf422c5..1b699ad786 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1711,6 +1711,178 @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,

 #endif

+/*
+ * Probe for a load/store operation.
+ * Return the host address and into @flags.
+ */
+
+typedef struct MMULookupPageData {
+    CPUTLBEntryFull *full;
+    void *haddr;
+    target_ulong addr;
+    int flags;
+    int size;
+} MMULookupPageData;
+
+typedef struct MMULookupLocals {
+    MMULookupPageData page[2];
+    MemOp memop;
+    int mmu_idx;
+} MMULookupLocals;
+
+/**
+ * mmu_lookup1: translate one page
+ * @env: cpu context
+ * @data: lookup parameters
+ * @mmu_idx: virtual address context
+ * @access_type: load/store/code
+ * @ra: return address into tcg generated code, or 0
+ *
+ * Resolve the translation for the one page at @data.addr, filling in
+ * the rest of @data with the results.  If the translation fails,
+ * tlb_fill will longjmp out.  Return true if the softmmu tlb for
+ * @mmu_idx may have resized.
+ */ +static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data, + int mmu_idx, MMUAccessType access_type, uintptr_t ra) +{ + target_ulong addr = data->addr; + uintptr_t index = tlb_index(env, mmu_idx, addr); + CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); + target_ulong tlb_addr = tlb_read_idx(entry, access_type); + bool maybe_resized = false; + + /* If the TLB entry is for a different page, reload and try again. */ + if (!tlb_hit(tlb_addr, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, access_type, + addr & TARGET_PAGE_MASK)) { + tlb_fill(env_cpu(env), addr, data->size, access_type, mmu_idx, ra); + maybe_resized = true; + index = tlb_index(env, mmu_idx, addr); + entry = tlb_entry(env, mmu_idx, addr); + } + tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK; + } + + data->flags = tlb_addr & TLB_FLAGS_MASK; + data->full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; + /* Compute haddr speculatively; depending on flags it might be invalid. */ + data->haddr = (void *)((uintptr_t)addr + entry->addend); + + return maybe_resized; +} + +/** + * mmu_watch_or_dirty + * @env: cpu context + * @data: lookup parameters + * @access_type: load/store/code + * @ra: return address into tcg generated code, or 0 + * + * Trigger watchpoints for @data.addr:@data.size; + * record writes to protected clean pages. + */ +static void mmu_watch_or_dirty(CPUArchState *env, MMULookupPageData *data, + MMUAccessType access_type, uintptr_t ra) +{ + CPUTLBEntryFull *full = data->full; + target_ulong addr = data->addr; + int flags = data->flags; + int size = data->size; + + /* On watchpoint hit, this will longjmp out. */ + if (flags & TLB_WATCHPOINT) { + int wp = access_type == MMU_DATA_STORE ? BP_MEM_WRITE : BP_MEM_READ; + cpu_check_watchpoint(env_cpu(env), addr, size, full->attrs, wp, ra); + flags &= ~TLB_WATCHPOINT; + } + + if (flags & TLB_NOTDIRTY) { + notdirty_write(env_cpu(env), addr, size, full, ra); + flags &= ~TLB_NOTDIRTY; + } + data->flags = flags; +} + +/** + * mmu_lookup: translate page(s) + * @env: cpu context + * @addr: virtual address + * @oi: combined mmu_idx and MemOp + * @ra: return address into tcg generated code, or 0 + * @access_type: load/store/code + * @l: output result + * + * Resolve the translation for the page(s) beginning at @addr, for MemOp.size + * bytes. Return true if the lookup crosses a page boundary. + */ +static bool mmu_lookup(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType type, MMULookupLocals *l) +{ + unsigned a_bits; + bool crosspage; + int flags; + + l->memop = get_memop(oi); + l->mmu_idx = get_mmuidx(oi); + + tcg_debug_assert(l->mmu_idx < NB_MMU_MODES); + + /* Handle CPU specific unaligned behaviour */ + a_bits = get_alignment_bits(l->memop); + if (addr & ((1 << a_bits) - 1)) { + cpu_unaligned_access(env_cpu(env), addr, type, l->mmu_idx, ra); + } + + l->page[0].addr = addr; + l->page[0].size = memop_size(l->memop); + l->page[1].addr = (addr + l->page[0].size - 1) & TARGET_PAGE_MASK; + l->page[1].size = 0; + crosspage = (addr ^ l->page[1].addr) & TARGET_PAGE_MASK; + + if (likely(!crosspage)) { + mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra); + + flags = l->page[0].flags; + if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) { + mmu_watch_or_dirty(env, &l->page[0], type, ra); + } + if (unlikely(flags & TLB_BSWAP)) { + l->memop ^= MO_BSWAP; + } + } else { + /* Finish compute of page crossing. 
*/ + int size1 = l->page[1].addr - addr; + l->page[1].size = l->page[0].size - size1; + l->page[0].size = size1; + + /* + * Lookup both pages, recognizing exceptions from either. If the + * second lookup potentially resized, refresh first CPUTLBEntryFull. + */ + mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra); + if (mmu_lookup1(env, &l->page[1], l->mmu_idx, type, ra)) { + uintptr_t index = tlb_index(env, l->mmu_idx, addr); + l->page[0].full = &env_tlb(env)->d[l->mmu_idx].fulltlb[index]; + } + + flags = l->page[0].flags | l->page[1].flags; + if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) { + mmu_watch_or_dirty(env, &l->page[0], type, ra); + mmu_watch_or_dirty(env, &l->page[1], type, ra); + } + + /* + * Since target/sparc is the only user of TLB_BSWAP, and all + * Sparc accesses are aligned, any treatment across two pages + * would be arbitrary. Refuse it until there's a use. + */ + tcg_debug_assert((flags & TLB_BSWAP) == 0); + } + + return crosspage; +} + /* * Probe for an atomic operation. Do not allow unaligned operations, * or io operations to proceed. Return the host address. @@ -1885,113 +2057,6 @@ load_memop(const void *haddr, MemOp op) } } -static inline uint64_t QEMU_ALWAYS_INLINE -load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, - uintptr_t retaddr, MemOp op, MMUAccessType access_type, - FullLoadHelper *full_load) -{ - const unsigned a_bits = get_alignment_bits(get_memop(oi)); - const size_t size = memop_size(op); - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index; - CPUTLBEntry *entry; - target_ulong tlb_addr; - void *haddr; - uint64_t res; - - tcg_debug_assert(mmu_idx < NB_MMU_MODES); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, access_type, - mmu_idx, retaddr); - } - - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = tlb_read_idx(entry, access_type); - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, access_type, - addr & TARGET_PAGE_MASK)) { - tlb_fill(env_cpu(env), addr, size, - access_type, mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = tlb_read_idx(entry, access_type); - tlb_addr &= ~TLB_INVALID_MASK; - } - - /* Handle anything that isn't just a straight memory access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUTLBEntryFull *full; - bool need_swap; - - /* For anything that is unaligned, recurse through full_load. */ - if ((addr & (size - 1)) != 0) { - goto do_unaligned_access; - } - - full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; - - /* Handle watchpoints. */ - if (unlikely(tlb_addr & TLB_WATCHPOINT)) { - /* On watchpoint hit, this will longjmp out. */ - cpu_check_watchpoint(env_cpu(env), addr, size, - full->attrs, BP_MEM_READ, retaddr); - } - - need_swap = size > 1 && (tlb_addr & TLB_BSWAP); - - /* Handle I/O access. */ - if (likely(tlb_addr & TLB_MMIO)) { - return io_readx(env, full, mmu_idx, addr, retaddr, - access_type, op ^ (need_swap * MO_BSWAP)); - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - - /* - * Keep these two load_memop separate to ensure that the compiler - * is able to fold the entire function to a single instruction. - * There is a build-time assert inside to remind you of this. 
;-) - */ - if (unlikely(need_swap)) { - return load_memop(haddr, op ^ MO_BSWAP); - } - return load_memop(haddr, op); - } - - /* Handle slow unaligned access (it spans two pages or IO). */ - if (size > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1 - >= TARGET_PAGE_SIZE)) { - target_ulong addr1, addr2; - uint64_t r1, r2; - unsigned shift; - do_unaligned_access: - addr1 = addr & ~((target_ulong)size - 1); - addr2 = addr1 + size; - r1 = full_load(env, addr1, oi, retaddr); - r2 = full_load(env, addr2, oi, retaddr); - shift = (addr & (size - 1)) * 8; - - if (memop_big_endian(op)) { - /* Big-endian combine. */ - res = (r1 << shift) | (r2 >> ((size * 8) - shift)); - } else { - /* Little-endian combine. */ - res = (r1 >> shift) | (r2 << ((size * 8) - shift)); - } - return res & MAKE_64BIT_MASK(0, size * 8); - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - return load_memop(haddr, op); -} - /* * For the benefit of TCG generated code, we want to avoid the * complication of ABI-specific return type promotion and always @@ -2002,90 +2067,250 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi, * We don't bother with this widened value for SOFTMMU_CODE_ACCESS. */ -static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +/** + * do_ld_mmio_beN: + * @env: cpu context + * @p: translation parameters + * @ret_be: accumulated data + * @mmu_idx: virtual address context + * @ra: return address into tcg generated code, or 0 + * + * Load @p->size bytes from @p->addr, which is memory-mapped i/o. + * The bytes are concatenated with in big-endian order with @ret_be. + */ +static uint64_t do_ld_mmio_beN(CPUArchState *env, MMULookupPageData *p, + uint64_t ret_be, int mmu_idx, + MMUAccessType type, uintptr_t ra) { - validate_memop(oi, MO_UB); - return load_helper(env, addr, oi, retaddr, MO_UB, MMU_DATA_LOAD, - full_ldub_mmu); + CPUTLBEntryFull *full = p->full; + target_ulong addr = p->addr; + int i, size = p->size; + + QEMU_IOTHREAD_LOCK_GUARD(); + for (i = 0; i < size; i++) { + uint8_t x = io_readx(env, full, mmu_idx, addr + i, ra, type, MO_UB); + ret_be = (ret_be << 8) | x; + } + return ret_be; +} + +/** + * do_ld_bytes_beN + * @p: translation parameters + * @ret_be: accumulated data + * + * Load @p->size bytes from @p->haddr, which is RAM. + * The bytes to concatenated in big-endian order with @ret_be. + */ +static uint64_t do_ld_bytes_beN(MMULookupPageData *p, uint64_t ret_be) +{ + uint8_t *haddr = p->haddr; + int i, size = p->size; + + for (i = 0; i < size; i++) { + ret_be = (ret_be << 8) | haddr[i]; + } + return ret_be; +} + +/* + * Wrapper for the above. + */ +static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, + uint64_t ret_be, int mmu_idx, + MMUAccessType type, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + return do_ld_mmio_beN(env, p, ret_be, mmu_idx, type, ra); + } else { + return do_ld_bytes_beN(p, ret_be); + } +} + +static uint8_t do_ld_1(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, MO_UB); + } else { + return *(uint8_t *)p->haddr; + } +} + +static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, MemOp memop, uintptr_t ra) +{ + uint64_t ret; + + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop); + } + + /* Perform the load host endian, then swap if necessary. 
*/ + ret = load_memop(p->haddr, MO_UW); + if (memop & MO_BSWAP) { + ret = bswap16(ret); + } + return ret; +} + +static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, MemOp memop, uintptr_t ra) +{ + uint32_t ret; + + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop); + } + + /* Perform the load host endian. */ + ret = load_memop(p->haddr, MO_UL); + if (memop & MO_BSWAP) { + ret = bswap32(ret); + } + return ret; +} + +static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx, + MMUAccessType type, MemOp memop, uintptr_t ra) +{ + uint64_t ret; + + if (unlikely(p->flags & TLB_MMIO)) { + return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop); + } + + /* Perform the load host endian. */ + ret = load_memop(p->haddr, MO_UQ); + if (memop & MO_BSWAP) { + ret = bswap64(ret); + } + return ret; +} + +static uint8_t do_ld1_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) +{ + MMULookupLocals l; + bool crosspage; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + tcg_debug_assert(!crosspage); + + return do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); } tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_ldub_mmu(env, addr, oi, retaddr); + validate_memop(oi, MO_UB); + return do_ld1_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } -static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +static uint16_t do_ld2_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) { - validate_memop(oi, MO_LEUW); - return load_helper(env, addr, oi, retaddr, MO_LEUW, MMU_DATA_LOAD, - full_le_lduw_mmu); + MMULookupLocals l; + bool crosspage; + uint16_t ret; + uint8_t a, b; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + if (likely(!crosspage)) { + return do_ld_2(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + } + + a = do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); + b = do_ld_1(env, &l.page[1], l.mmu_idx, access_type, ra); + + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = a | (b << 8); + } else { + ret = b | (a << 8); + } + return ret; } tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_le_lduw_mmu(env, addr, oi, retaddr); -} - -static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); - return load_helper(env, addr, oi, retaddr, MO_BEUW, MMU_DATA_LOAD, - full_be_lduw_mmu); + validate_memop(oi, MO_LEUW); + return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_be_lduw_mmu(env, addr, oi, retaddr); + validate_memop(oi, MO_BEUW); + return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } -static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +static uint32_t do_ld4_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) { - validate_memop(oi, MO_LEUL); - return load_helper(env, addr, oi, retaddr, MO_LEUL, MMU_DATA_LOAD, - full_le_ldul_mmu); + MMULookupLocals l; + bool crosspage; + uint32_t ret; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + if 
(likely(!crosspage)) { + return do_ld_4(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + } + + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = bswap32(ret); + } + return ret; } tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_le_ldul_mmu(env, addr, oi, retaddr); -} - -static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); - return load_helper(env, addr, oi, retaddr, MO_BEUL, MMU_DATA_LOAD, - full_be_ldul_mmu); + validate_memop(oi, MO_LEUL); + return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { - return full_be_ldul_mmu(env, addr, oi, retaddr); + validate_memop(oi, MO_BEUL); + return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); +} + +static uint64_t do_ld8_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, + uintptr_t ra, MMUAccessType access_type) +{ + MMULookupLocals l; + bool crosspage; + uint64_t ret; + + crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + if (likely(!crosspage)) { + return do_ld_8(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + } + + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = bswap64(ret); + } + return ret; } uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUQ); - return load_helper(env, addr, oi, retaddr, MO_LEUQ, MMU_DATA_LOAD, - helper_le_ldq_mmu); + return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUQ); - return load_helper(env, addr, oi, retaddr, MO_BEUQ, MMU_DATA_LOAD, - helper_be_ldq_mmu); + return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } /* @@ -2128,56 +2353,85 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr, * Load helpers for cpu_ldst.h. 
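 * Each wrapper below validates the expected MemOp, performs the
 * access through the common do_ld*_mmu path, and then reports the
 * access to plugins via qemu_plugin_vcpu_mem_cb.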
*/ -static inline uint64_t cpu_load_helper(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t retaddr, - FullLoadHelper *full_load) +static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi) { - uint64_t ret; - - ret = full_load(env, addr, oi, retaddr); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; } uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_ldub_mmu); + uint8_t ret; + + validate_memop(oi, MO_UB); + ret = do_ld1_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_be_lduw_mmu); + uint16_t ret; + + validate_memop(oi, MO_BEUW); + ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_be_ldul_mmu); + uint32_t ret; + + validate_memop(oi, MO_BEUL); + ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, helper_be_ldq_mmu); + uint64_t ret; + + validate_memop(oi, MO_BEUQ); + ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_le_lduw_mmu); + uint16_t ret; + + validate_memop(oi, MO_LEUW); + ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, full_le_ldul_mmu); + uint32_t ret; + + validate_memop(oi, MO_LEUL); + ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - return cpu_load_helper(env, addr, oi, ra, helper_le_ldq_mmu); + uint64_t ret; + + validate_memop(oi, MO_LEUQ); + ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + plugin_load_cb(env, addr, oi); + return ret; } Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, @@ -2679,54 +2933,26 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, /* Code access functions. 
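 * These are invoked with a zero return address, since there is no
 * TCG-generated caller to unwind, and with MMU_INST_FETCH so the
 * softmmu treats the access as an instruction fetch rather than a
 * data load.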
*/ -static uint64_t full_ldub_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_8, - MMU_INST_FETCH, full_ldub_code); -} - uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(env, true)); - return full_ldub_code(env, addr, oi, 0); -} - -static uint64_t full_lduw_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_TEUW, - MMU_INST_FETCH, full_lduw_code); + return do_ld1_mmu(env, addr, oi, 0, MMU_INST_FETCH); } uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(env, true)); - return full_lduw_code(env, addr, oi, 0); -} - -static uint64_t full_ldl_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_TEUL, - MMU_INST_FETCH, full_ldl_code); + return do_ld2_mmu(env, addr, oi, 0, MMU_INST_FETCH); } uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(env, true)); - return full_ldl_code(env, addr, oi, 0); -} - -static uint64_t full_ldq_code(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return load_helper(env, addr, oi, retaddr, MO_TEUQ, - MMU_INST_FETCH, full_ldq_code); + return do_ld4_mmu(env, addr, oi, 0, MMU_INST_FETCH); } uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(env, true)); - return full_ldq_code(env, addr, oi, 0); + return do_ld8_mmu(env, addr, oi, 0, MMU_INST_FETCH); } From patchwork Tue Apr 25 19:30:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676816 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2876429wrs; Tue, 25 Apr 2023 12:32:59 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5wPlvFkF1OE/mC3IcOnJfHOSwyKQ1Yb5/xjNUyeCx1j7EZMXeSiGVKV2c2XLTJ4M0tgiKP X-Received: by 2002:a05:6214:c64:b0:616:4d69:cc2b with SMTP id t4-20020a0562140c6400b006164d69cc2bmr6274321qvj.7.1682451178826; Tue, 25 Apr 2023 12:32:58 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451178; cv=none; d=google.com; s=arc-20160816; b=lvpBthyUForaftvCRgq+yjJY9EiY3tNRqB7nokyCUVJI7fcj4/DNBrim4LJxbu2hNO 5b/cPYFOoVJz2zk5/nba64Oz2JgsYpSNmr1phXeufI3udySps/k06o+P0ebsu6M5t+w0 zhH2lFpwp5XJ4tiSOqqyyo4MQ0FVy+vhFFhls7kdEWR4zqC2r9TApbMMBShN1ap0MCC3 nzUb2Viwt+xtmDIrywFztKWDQZiQ/L+2w4UmZCO2tx1n6V9vxDbncB1sUH7dhOHx1P6J OMYyUINPVxEex8fPgTAlEhCaRgvFjN8iY0EFDRWRhl7fgCUxzm8yaatY6r9ZdKp1Jv+W pFBA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=BwVtyUq7a1s7I4TvJ+6Zq/Ivbms1tXxuN75W2bqdp+Y=; b=vHYYYCYbtvjNR/cIwt/WsHwxRuVYOQzbFGjm7UcYnyJ8wKzu8mtr13YRESJxOhsq3N IvWIHA+Hq8/WY9Gb2XZiZURlMhuB4EUemB2nWgHbyFcQd+vqvebBtdfX6W7AxPHNk5cA vDHMsl8y803DnlWJyO+huvwGCkqixt5Egb/25i46BA7eusZV2l8RWx4J7dmsJXmKS0wH n2mT2GLW3Rq6BdPCzAHaW1pU9na0uzT9hkoTmajG7KRn9UBcZkyXQE8mvra+AVPeGY0o IvhbCq7YxDyoy89D3BMlF8HsuyG5MM1sdUjWACatH2nYqfzHN+xnACNPmbonXxJSV757 I7Uw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Ow1tYpVz; spf=pass 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.32.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:32:48 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 05/57] accel/tcg: Reorg system mode store helpers Date: Tue, 25 Apr 2023 20:30:54 +0100 Message-Id: <20230425193146.2106111-6-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::22e; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x22e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of trying to unify all operations on uint64_t, use mmu_lookup() to perform the basic tlb hit and resolution. Create individual functions to handle access by size. Signed-off-by: Richard Henderson --- accel/tcg/cputlb.c | 408 +++++++++++++++++++++------------------------ 1 file changed, 193 insertions(+), 215 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 1b699ad786..99eb527278 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2526,322 +2526,300 @@ store_memop(void *haddr, uint64_t val, MemOp op) } } -static void full_stb_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr); - -static void __attribute__((noinline)) -store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val, - uintptr_t retaddr, size_t size, uintptr_t mmu_idx, - bool big_endian) +/** + * do_st_mmio_leN: + * @env: cpu context + * @p: translation parameters + * @val_le: data to store + * @mmu_idx: virtual address context + * @ra: return address into tcg generated code, or 0 + * + * Store @p->size bytes at @p->addr, which is memory-mapped i/o. + * The bytes to store are extracted in little-endian order from @val_le; + * return the bytes of @val_le beyond @p->size that have not been stored. + */ +static uint64_t do_st_mmio_leN(CPUArchState *env, MMULookupPageData *p, + uint64_t val_le, int mmu_idx, uintptr_t ra) { - uintptr_t index, index2; - CPUTLBEntry *entry, *entry2; - target_ulong page1, page2, tlb_addr, tlb_addr2; - MemOpIdx oi; - size_t size2; - int i; + CPUTLBEntryFull *full = p->full; + target_ulong addr = p->addr; + int i, size = p->size; - /* - * Ensure the second page is in the TLB. Note that the first page - * is already guaranteed to be filled, and that the second page - * cannot evict the first. An exception to this rule is PAGE_WRITE_INV - * handling: the first page could have evicted itself. 
- */ - page1 = addr & TARGET_PAGE_MASK; - page2 = (addr + size) & TARGET_PAGE_MASK; - size2 = (addr + size) & ~TARGET_PAGE_MASK; - index2 = tlb_index(env, mmu_idx, page2); - entry2 = tlb_entry(env, mmu_idx, page2); - - tlb_addr2 = tlb_addr_write(entry2); - if (page1 != page2 && !tlb_hit_page(tlb_addr2, page2)) { - if (!victim_tlb_hit(env, mmu_idx, index2, MMU_DATA_STORE, page2)) { - tlb_fill(env_cpu(env), page2, size2, MMU_DATA_STORE, - mmu_idx, retaddr); - index2 = tlb_index(env, mmu_idx, page2); - entry2 = tlb_entry(env, mmu_idx, page2); - } - tlb_addr2 = tlb_addr_write(entry2); + QEMU_IOTHREAD_LOCK_GUARD(); + for (i = 0; i < size; i++, val_le >>= 8) { + io_writex(env, full, mmu_idx, val_le, addr + i, ra, MO_UB); } + return val_le; +} - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = tlb_addr_write(entry); +/** + * do_st_bytes_leN: + * @p: translation parameters + * @val_le: data to store + * + * Store @p->size bytes at @p->haddr, which is RAM. + * The bytes to store are extracted in little-endian order from @val_le; + * return the bytes of @val_le beyond @p->size that have not been stored. + */ +static uint64_t do_st_bytes_leN(MMULookupPageData *p, uint64_t val_le) +{ + uint8_t *haddr = p->haddr; + int i, size = p->size; - /* - * Handle watchpoints. Since this may trap, all checks - * must happen before any store. - */ - if (unlikely(tlb_addr & TLB_WATCHPOINT)) { - cpu_check_watchpoint(env_cpu(env), addr, size - size2, - env_tlb(env)->d[mmu_idx].fulltlb[index].attrs, - BP_MEM_WRITE, retaddr); - } - if (unlikely(tlb_addr2 & TLB_WATCHPOINT)) { - cpu_check_watchpoint(env_cpu(env), page2, size2, - env_tlb(env)->d[mmu_idx].fulltlb[index2].attrs, - BP_MEM_WRITE, retaddr); + for (i = 0; i < size; i++, val_le >>= 8) { + haddr[i] = val_le; } + return val_le; +} - /* - * XXX: not efficient, but simple. - * This loop must go in the forward direction to avoid issues - * with self-modifying code in Windows 64-bit. - */ - oi = make_memop_idx(MO_UB, mmu_idx); - if (big_endian) { - for (i = 0; i < size; ++i) { - /* Big-endian extract. */ - uint8_t val8 = val >> (((size - 1) * 8) - (i * 8)); - full_stb_mmu(env, addr + i, val8, oi, retaddr); - } +/* + * Wrapper for the above. + */ +static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, + uint64_t val_le, int mmu_idx, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + return do_st_mmio_leN(env, p, val_le, mmu_idx, ra); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + return val_le >> (p->size * 8); } else { - for (i = 0; i < size; ++i) { - /* Little-endian extract. 
*/ - uint8_t val8 = val >> (i * 8); - full_stb_mmu(env, addr + i, val8, oi, retaddr); - } + return do_st_bytes_leN(p, val_le); } } -static inline void QEMU_ALWAYS_INLINE -store_helper(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr, MemOp op) +static void do_st_1(CPUArchState *env, MMULookupPageData *p, uint8_t val, + int mmu_idx, uintptr_t ra) { - const unsigned a_bits = get_alignment_bits(get_memop(oi)); - const size_t size = memop_size(op); - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index; - CPUTLBEntry *entry; - target_ulong tlb_addr; - void *haddr; - - tcg_debug_assert(mmu_idx < NB_MMU_MODES); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_STORE, - mmu_idx, retaddr); + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, MO_UB); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + *(uint8_t *)p->haddr = val; } - - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = tlb_addr_write(entry); - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE, - addr & TARGET_PAGE_MASK)) { - tlb_fill(env_cpu(env), addr, size, MMU_DATA_STORE, - mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK; - } - - /* Handle anything that isn't just a straight memory access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUTLBEntryFull *full; - bool need_swap; - - /* For anything that is unaligned, recurse through byte stores. */ - if ((addr & (size - 1)) != 0) { - goto do_unaligned_access; - } - - full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; - - /* Handle watchpoints. */ - if (unlikely(tlb_addr & TLB_WATCHPOINT)) { - /* On watchpoint hit, this will longjmp out. */ - cpu_check_watchpoint(env_cpu(env), addr, size, - full->attrs, BP_MEM_WRITE, retaddr); - } - - need_swap = size > 1 && (tlb_addr & TLB_BSWAP); - - /* Handle I/O access. */ - if (tlb_addr & TLB_MMIO) { - io_writex(env, full, mmu_idx, val, addr, retaddr, - op ^ (need_swap * MO_BSWAP)); - return; - } - - /* Ignore writes to ROM. */ - if (unlikely(tlb_addr & TLB_DISCARD_WRITE)) { - return; - } - - /* Handle clean RAM pages. */ - if (tlb_addr & TLB_NOTDIRTY) { - notdirty_write(env_cpu(env), addr, size, full, retaddr); - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - - /* - * Keep these two store_memop separate to ensure that the compiler - * is able to fold the entire function to a single instruction. - * There is a build-time assert inside to remind you of this. ;-) - */ - if (unlikely(need_swap)) { - store_memop(haddr, val, op ^ MO_BSWAP); - } else { - store_memop(haddr, val, op); - } - return; - } - - /* Handle slow unaligned access (it spans two pages or IO). 
*/ - if (size > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1 - >= TARGET_PAGE_SIZE)) { - do_unaligned_access: - store_helper_unaligned(env, addr, val, retaddr, size, - mmu_idx, memop_big_endian(op)); - return; - } - - haddr = (void *)((uintptr_t)addr + entry->addend); - store_memop(haddr, val, op); } -static void __attribute__((noinline)) -full_stb_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val, + int mmu_idx, MemOp memop, uintptr_t ra) { - validate_memop(oi, MO_UB); - store_helper(env, addr, val, oi, retaddr, MO_UB); + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + /* Swap to host endian if necessary, then store. */ + if (memop & MO_BSWAP) { + val = bswap16(val); + } + store_memop(p->haddr, val, MO_UW); + } +} + +static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val, + int mmu_idx, MemOp memop, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + /* Swap to host endian if necessary, then store. */ + if (memop & MO_BSWAP) { + val = bswap32(val); + } + store_memop(p->haddr, val, MO_UL); + } +} + +static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, + int mmu_idx, MemOp memop, uintptr_t ra) +{ + if (unlikely(p->flags & TLB_MMIO)) { + io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + /* Swap to host endian if necessary, then store. 
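+     * For example, a big-endian store of 0x0102030405060708 on a
+     * little-endian host: bswap64 reorders the value so the plain
+     * host store writes bytes 0x01..0x08 in ascending address
+     * order, as the guest expects.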
*/ + if (memop & MO_BSWAP) { + val = bswap64(val); + } + store_memop(p->haddr, val, MO_UQ); + } } void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) + MemOpIdx oi, uintptr_t ra) { - full_stb_mmu(env, addr, val, oi, retaddr); + MMULookupLocals l; + bool crosspage; + + validate_memop(oi, MO_UB); + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + tcg_debug_assert(!crosspage); + + do_st_1(env, &l.page[0], val, l.mmu_idx, ra); } -static void full_le_stw_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +static void do_st2_mmu(CPUArchState *env, target_ulong addr, uint16_t val, + MemOpIdx oi, uintptr_t ra) { - validate_memop(oi, MO_LEUW); - store_helper(env, addr, val, oi, retaddr, MO_LEUW); + MMULookupLocals l; + bool crosspage; + uint8_t a, b; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + do_st_2(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + return; + } + + if ((l.memop & MO_BSWAP) == MO_LE) { + a = val, b = val >> 8; + } else { + b = val, a = val >> 8; + } + do_st_1(env, &l.page[0], a, l.mmu_idx, ra); + do_st_1(env, &l.page[1], b, l.mmu_idx, ra); } void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_le_stw_mmu(env, addr, val, oi, retaddr); -} - -static void full_be_stw_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); - store_helper(env, addr, val, oi, retaddr, MO_BEUW); + validate_memop(oi, MO_LEUW); + do_st2_mmu(env, addr, val, oi, retaddr); } void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_be_stw_mmu(env, addr, val, oi, retaddr); + validate_memop(oi, MO_BEUW); + do_st2_mmu(env, addr, val, oi, retaddr); } -static void full_le_stl_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +static void do_st4_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) { - validate_memop(oi, MO_LEUL); - store_helper(env, addr, val, oi, retaddr, MO_LEUL); + MMULookupLocals l; + bool crosspage; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + do_st_4(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + return; + } + + /* Swap to little endian for simplicity, then store by bytes. 
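+     * For example, a big-endian 4-byte store of 0x11223344 split
+     * 3+1 across the boundary: bswap32 yields 0x44332211, the first
+     * do_st_leN writes bytes 0x11 0x22 0x33 on the first page and
+     * returns 0x44, which the second call writes to the second page.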
*/ + if ((l.memop & MO_BSWAP) != MO_LE) { + val = bswap32(val); + } + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); } void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_le_stl_mmu(env, addr, val, oi, retaddr); -} - -static void full_be_stl_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); - store_helper(env, addr, val, oi, retaddr, MO_BEUL); + validate_memop(oi, MO_LEUL); + do_st4_mmu(env, addr, val, oi, retaddr); } void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - full_be_stl_mmu(env, addr, val, oi, retaddr); + validate_memop(oi, MO_BEUL); + do_st4_mmu(env, addr, val, oi, retaddr); +} + +static void do_st8_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t ra) +{ + MMULookupLocals l; + bool crosspage; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + do_st_8(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + return; + } + + /* Swap to little endian for simplicity, then store by bytes. */ + if ((l.memop & MO_BSWAP) != MO_LE) { + val = bswap64(val); + } + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); } void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_LEUQ); - store_helper(env, addr, val, oi, retaddr, MO_LEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); } void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { validate_memop(oi, MO_BEUQ); - store_helper(env, addr, val, oi, retaddr, MO_BEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); } /* * Store Helpers for cpu_ldst.h */ -typedef void FullStoreHelper(CPUArchState *env, target_ulong addr, - uint64_t val, MemOpIdx oi, uintptr_t retaddr); - -static inline void cpu_store_helper(CPUArchState *env, target_ulong addr, - uint64_t val, MemOpIdx oi, uintptr_t ra, - FullStoreHelper *full_store) +static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi) { - full_store(env, addr, val, oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_stb_mmu); + helper_ret_stb_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stw_be_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_be_stw_mmu); + helper_be_stw_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stl_be_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_be_stl_mmu); + helper_be_stl_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stq_be_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, helper_be_stq_mmu); + helper_be_stq_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stw_le_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, 
oi, retaddr, full_le_stw_mmu); + helper_le_stw_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stl_le_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, full_le_stl_mmu); + helper_le_stl_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_stq_le_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - cpu_store_helper(env, addr, val, oi, retaddr, helper_le_stq_mmu); + helper_le_stq_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 val, From patchwork Tue Apr 25 19:30:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676869 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2884127wrs; Tue, 25 Apr 2023 12:53:05 -0700 (PDT) X-Google-Smtp-Source: AKy350YQO+7VWkmlQN3p1qKpsdrgkKkKxBsHv0flXXvBOy/dY7PbZD6qUhHUYUcbE3RFLLSxUz2+ X-Received: by 2002:ad4:5de1:0:b0:5ed:68ba:ce6b with SMTP id jn1-20020ad45de1000000b005ed68bace6bmr29229207qvb.4.1682452385362; Tue, 25 Apr 2023 12:53:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452385; cv=none; d=google.com; s=arc-20160816; b=Q6ExDJpnPsnKssx4+pdfcO/HaYVk6oO6UmYjqVg4D5nCLXJ2QnMAmbf0oF7BH5X5Yy 4/ntNKAGCr2BzwFaseVRHgTf4azqivWq8txOpMohhjkVjQq1PUlu5NzeSLVh7htsTFgI JHcPTsRA8ZLek68vEm3EnbYNKNvYAgqUmKV3RDvDmqggWacZYYdP3KRgOi4v92Bq9uRu uvMXmxadPjTLCNwgSTvs96I47ycBqhckwfTNpj3uf5O7gCvbHwDe/Cpf7L1pCPOrgozn iRpj11Lbx/J1sAKvixnniWpGAbUQK+eI+QhgRgqbzcinnbMqjXCswIaqVDLm9ObL4oop jchQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=85iJGq2CjdCU/N8O/K+MInhKNWQboniNTJgJzrqVBYI=; b=qvHLYuxoI0OaUX6gX4o+goJpzb+/ebAcWMA5OmmpBu9UbglL7tpiVbJ7MiuDU2xf4z adEEWzty7dr9DhvInLGW4JPChoWeuMh0hiafgF2hwbB97BOx8R97V2+hei6hB0ssynrG 2Uu4s4KT8h0xTkAm4s6v98Bg8IlMKxQUNEoOQpoKXQ9e+xe3s5HAohH5ck4rgNezXKQY zcPRXQaSKBFAJyU/zhxGzxens1LEDbuYsGhgvpZH4mZhAfRcfIP+rbqN8WUv4pMH0Rm4 JBNEaRD0Ic3vgxf7Wszx74T/n/jSW0rSlpmrnJuNWT1GdH+bArdiCiPnBLeV0EHpodhf 2xzA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="Y/N/8idt"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.32.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:00 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org, =?utf-8?q?Alex_Benn=C3=A9e?= Subject: [PATCH v3 06/57] accel/tcg: Honor atomicity of loads Date: Tue, 25 Apr 2023 20:30:55 +0100 Message-Id: <20230425193146.2106111-7-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::234; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x234.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Create ldst_atomicity.c.inc. Not required for user-only code loads, because we've ensured that the page is read-only before beginning to translate code. Reviewed-by: Alex Bennée Signed-off-by: Richard Henderson --- accel/tcg/cputlb.c | 170 +++++++--- accel/tcg/user-exec.c | 26 +- accel/tcg/ldst_atomicity.c.inc | 550 +++++++++++++++++++++++++++++++++ 3 files changed, 695 insertions(+), 51 deletions(-) create mode 100644 accel/tcg/ldst_atomicity.c.inc diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 99eb527278..00e5a8f879 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1663,6 +1663,9 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr, return qemu_ram_addr_from_host_nofail(p); } +/* Load/store with atomicity primitives. */ +#include "ldst_atomicity.c.inc" + #ifdef CONFIG_PLUGIN /* * Perform a TLB lookup and populate the qemu_plugin_hwaddr structure. @@ -2029,35 +2032,7 @@ static void validate_memop(MemOpIdx oi, MemOp expected) * specifically for reading instructions from system memory. It is * called by the translation loop and in some helpers where the code * is disassembled. It shouldn't be called directly by guest code. - */ - -typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); - -static inline uint64_t QEMU_ALWAYS_INLINE -load_memop(const void *haddr, MemOp op) -{ - switch (op) { - case MO_UB: - return ldub_p(haddr); - case MO_BEUW: - return lduw_be_p(haddr); - case MO_LEUW: - return lduw_le_p(haddr); - case MO_BEUL: - return (uint32_t)ldl_be_p(haddr); - case MO_LEUL: - return (uint32_t)ldl_le_p(haddr); - case MO_BEUQ: - return ldq_be_p(haddr); - case MO_LEUQ: - return ldq_le_p(haddr); - default: - qemu_build_not_reached(); - } -} - -/* + * * For the benefit of TCG generated code, we want to avoid the * complication of ABI-specific return type promotion and always * return a value extended to the register size of the host. 
This is @@ -2113,17 +2088,134 @@ static uint64_t do_ld_bytes_beN(MMULookupPageData *p, uint64_t ret_be) return ret_be; } +/** + * do_ld_parts_beN + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but atomically on each aligned part. + */ +static uint64_t do_ld_parts_beN(MMULookupPageData *p, uint64_t ret_be) +{ + void *haddr = p->haddr; + int size = p->size; + + do { + uint64_t x; + int n; + + /* + * Find minimum of alignment and size. + * This is slightly stronger than required by MO_ATOM_SUBALIGN, which + * would have only checked the low bits of addr|size once at the start, + * but is just as easy. + */ + switch (((uintptr_t)haddr | size) & 7) { + case 4: + x = cpu_to_be32(load_atomic4(haddr)); + ret_be = (ret_be << 32) | x; + n = 4; + break; + case 2: + case 6: + x = cpu_to_be16(load_atomic2(haddr)); + ret_be = (ret_be << 16) | x; + n = 2; + break; + default: + x = *(uint8_t *)haddr; + ret_be = (ret_be << 8) | x; + n = 1; + break; + case 0: + g_assert_not_reached(); + } + haddr += n; + size -= n; + } while (size != 0); + return ret_be; +} + +/** + * do_ld_parts_be4 + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but with one atomic load. + * Four aligned bytes are guaranteed to cover the load. + */ +static uint64_t do_ld_whole_be4(MMULookupPageData *p, uint64_t ret_be) +{ + int o = p->addr & 3; + uint32_t x = load_atomic4(p->haddr - o); + + x = cpu_to_be32(x); + x <<= o * 8; + x >>= (4 - p->size) * 8; + return (ret_be << (p->size * 8)) | x; +} + +/** + * do_ld_parts_be8 + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but with one atomic load. + * Eight aligned bytes are guaranteed to cover the load. + */ +static uint64_t do_ld_whole_be8(CPUArchState *env, uintptr_t ra, + MMULookupPageData *p, uint64_t ret_be) +{ + int o = p->addr & 7; + uint64_t x = load_atomic8_or_exit(env, ra, p->haddr - o); + + x = cpu_to_be64(x); + x <<= o * 8; + x >>= (8 - p->size) * 8; + return (ret_be << (p->size * 8)) | x; +} + /* * Wrapper for the above. */ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, - uint64_t ret_be, int mmu_idx, - MMUAccessType type, uintptr_t ra) + uint64_t ret_be, int mmu_idx, MMUAccessType type, + MemOp mop, uintptr_t ra) { + MemOp atmax; + if (unlikely(p->flags & TLB_MMIO)) { return do_ld_mmio_beN(env, p, ret_be, mmu_idx, type, ra); - } else { + } + + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the load as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. + */ + atmax = mop & MO_ATMAX_MASK; + if (atmax == MO_ATMAX_SIZE) { + atmax = mop & MO_SIZE; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + if (unlikely(p->size >= (1 << atmax))) { + if (!HAVE_al8_fast && p->size < 4) { + return do_ld_whole_be4(p, ret_be); + } else { + return do_ld_whole_be8(env, ra, p, ret_be); + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: return do_ld_bytes_beN(p, ret_be); + case MO_ATOM_SUBALIGN: + return do_ld_parts_beN(p, ret_be); + default: + g_assert_not_reached(); } } @@ -2147,7 +2239,7 @@ static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx, } /* Perform the load host endian, then swap if necessary. 
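 * load_atom_2 honors the atomicity that @memop requires, and may
 * restart the operation in a serial context when the host cannot
 * provide a sufficiently atomic load.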
*/ - ret = load_memop(p->haddr, MO_UW); + ret = load_atom_2(env, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap16(ret); } @@ -2164,7 +2256,7 @@ static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx, } /* Perform the load host endian. */ - ret = load_memop(p->haddr, MO_UL); + ret = load_atom_4(env, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap32(ret); } @@ -2181,7 +2273,7 @@ static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx, } /* Perform the load host endian. */ - ret = load_memop(p->haddr, MO_UQ); + ret = load_atom_8(env, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap64(ret); } @@ -2257,8 +2349,8 @@ static uint32_t do_ld4_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return do_ld_4(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); } - ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); - ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); if ((l.memop & MO_BSWAP) == MO_LE) { ret = bswap32(ret); } @@ -2291,8 +2383,8 @@ static uint64_t do_ld8_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return do_ld_8(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); } - ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, ra); - ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, ra); + ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); if ((l.memop & MO_BSWAP) == MO_LE) { ret = bswap64(ret); } diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index a7e0c3e2f4..522bafe44e 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -931,6 +931,8 @@ static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr, return ret; } +#include "ldst_atomicity.c.inc" + uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { @@ -953,10 +955,10 @@ uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_BEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = lduw_be_p(haddr); + ret = load_atom_2(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_be16(ret); } uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, @@ -967,10 +969,10 @@ uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_BEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldl_be_p(haddr); + ret = load_atom_4(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_be32(ret); } uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, @@ -981,10 +983,10 @@ uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_BEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldq_be_p(haddr); + ret = load_atom_8(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_be64(ret); } uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, @@ -995,10 +997,10 @@ uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_LEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - 
ret = lduw_le_p(haddr); + ret = load_atom_2(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_le16(ret); } uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, @@ -1009,10 +1011,10 @@ uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_LEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldl_le_p(haddr); + ret = load_atom_4(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_le32(ret); } uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, @@ -1023,10 +1025,10 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, validate_memop(oi, MO_LEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = ldq_le_p(haddr); + ret = load_atom_8(env, ra, haddr, get_memop(oi)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return ret; + return cpu_to_le64(ret); } Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc new file mode 100644 index 0000000000..5169073431 --- /dev/null +++ b/accel/tcg/ldst_atomicity.c.inc @@ -0,0 +1,550 @@ +/* + * Routines common to user and system emulation of load/store. + * + * Copyright (c) 2022 Linaro, Ltd. + * + * SPDX-License-Identifier: GPL-2.0-or-later + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#ifdef CONFIG_ATOMIC64 +# define HAVE_al8 true +#else +# define HAVE_al8 false +#endif +#define HAVE_al8_fast (ATOMIC_REG_SIZE >= 8) + +#if defined(CONFIG_ATOMIC128) +# define HAVE_al16_fast true +#else +# define HAVE_al16_fast false +#endif + +/** + * required_atomicity: + * + * Return the lg2 bytes of atomicity required by @memop for @p. + * If the operation must be split into two operations to be + * examined separately for atomicity, return -lg2. + */ +static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop) +{ + int atmax = memop & MO_ATMAX_MASK; + int size = memop & MO_SIZE; + unsigned tmp; + + if (atmax == MO_ATMAX_SIZE) { + atmax = size; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + + switch (memop & MO_ATOM_MASK) { + case MO_ATOM_IFALIGN: + tmp = (1 << atmax) - 1; + if (p & tmp) { + return MO_8; + } + break; + case MO_ATOM_NONE: + return MO_8; + case MO_ATOM_SUBALIGN: + tmp = p & -p; + if (tmp != 0 && tmp < atmax) { + atmax = tmp; + } + break; + case MO_ATOM_WITHIN16: + tmp = p & 15; + if (tmp + (1 << size) <= 16) { + atmax = size; + } else if (atmax == size) { + return MO_8; + } else if (tmp + (1 << atmax) != 16) { + /* + * Paired load/store, where the pairs aren't aligned. + * One of the two must still be handled atomically. + */ + atmax = -atmax; + } + break; + default: + g_assert_not_reached(); + } + + /* + * Here we have the architectural atomicity of the operation. + * However, when executing in a serial context, we need no extra + * host atomicity in order to avoid racing. This reduction + * avoids looping with cpu_loop_exit_atomic. + */ + if (cpu_in_serial_context(env_cpu(env))) { + return MO_8; + } + return atmax; +} + +/** + * load_atomic2: + * @pv: host address + * + * Atomically load 2 aligned bytes from @pv. 
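+ * @pv must be 2-aligned; __builtin_assume_aligned tells the compiler
+ * so, and the qatomic_read of a naturally aligned uint16_t is a
+ * single load, atomic on all supported hosts.
+ *
+ * A minimal usage sketch, assuming @haddr is a suitably aligned
+ * host address:
+ *
+ *     uint16_t v = load_atomic2(haddr);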
+ */ +static inline uint16_t load_atomic2(void *pv) +{ + uint16_t *p = __builtin_assume_aligned(pv, 2); + return qatomic_read(p); +} + +/** + * load_atomic4: + * @pv: host address + * + * Atomically load 4 aligned bytes from @pv. + */ +static inline uint32_t load_atomic4(void *pv) +{ + uint32_t *p = __builtin_assume_aligned(pv, 4); + return qatomic_read(p); +} + +/** + * load_atomic8: + * @pv: host address + * + * Atomically load 8 aligned bytes from @pv. + */ +static inline uint64_t load_atomic8(void *pv) +{ + uint64_t *p = __builtin_assume_aligned(pv, 8); + + qemu_build_assert(HAVE_al8); + return qatomic_read__nocheck(p); +} + +/** + * load_atomic16: + * @pv: host address + * + * Atomically load 16 aligned bytes from @pv. + */ +static inline Int128 load_atomic16(void *pv) +{ +#ifdef CONFIG_ATOMIC128 + __uint128_t *p = __builtin_assume_aligned(pv, 16); + Int128Alias r; + + r.u = qatomic_read__nocheck(p); + return r.s; +#else + qemu_build_not_reached(); +#endif +} + +/** + * load_atomic8_or_exit: + * @env: cpu context + * @ra: host unwind address + * @pv: host address + * + * Atomically load 8 aligned bytes from @pv. + * If this is not possible, longjmp out to restart serially. + */ +static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv) +{ + if (HAVE_al8) { + return load_atomic8(pv); + } + +#ifdef CONFIG_USER_ONLY + /* + * If the page is not writable, then assume the value is immutable + * and requires no locking. This ignores the case of MAP_SHARED with + * another process, because the fallback start_exclusive solution + * provides no protection across processes. + */ + if (!page_check_range(h2g(pv), 8, PAGE_WRITE)) { + uint64_t *p = __builtin_assume_aligned(pv, 8); + return *p; + } +#endif + + /* Ultimate fallback: re-execute in serial context. */ + cpu_loop_exit_atomic(env_cpu(env), ra); +} + +/** + * load_atomic16_or_exit: + * @env: cpu context + * @ra: host unwind address + * @pv: host address + * + * Atomically load 16 aligned bytes from @pv. + * If this is not possible, longjmp out to restart serially. + */ +static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv) +{ + Int128 *p = __builtin_assume_aligned(pv, 16); + + if (HAVE_al16_fast) { + return load_atomic16(p); + } + +#ifdef CONFIG_USER_ONLY + /* + * We can only use cmpxchg to emulate a load if the page is writable. + * If the page is not writable, then assume the value is immutable + * and requires no locking. This ignores the case of MAP_SHARED with + * another process, because the fallback start_exclusive solution + * provides no protection across processes. + */ + if (!page_check_range(h2g(p), 16, PAGE_WRITE)) { + return *p; + } +#endif + + /* + * In system mode all guest pages are writable, and for user-only + * we have just checked writability. Try cmpxchg. + */ +#if defined(CONFIG_CMPXCHG128) + /* Swap 0 with 0, with the side-effect of returning the old value. */ + { + Int128Alias r; + r.u = __sync_val_compare_and_swap_16((__uint128_t *)p, 0, 0); + return r.s; + } +#endif + + /* Ultimate fallback: re-execute in serial context. */ + cpu_loop_exit_atomic(env_cpu(env), ra); +} + +/** + * load_atom_extract_al4x2: + * @pv: host address + * + * Load 4 bytes from @p, from two sequential atomic 4-byte loads. 
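+ * For example, with @pv % 4 == 1 on a little-endian host and memory
+ * bytes 00 11 22 33 44 ... at the aligned base, sh is 8 and the
+ * result (a >> 8) | (b << 24) is 0x44332211, i.e. the four bytes
+ * starting at @pv.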
+ */ +static uint32_t load_atom_extract_al4x2(void *pv) +{ + uintptr_t pi = (uintptr_t)pv; + int sh = (pi & 3) * 8; + uint32_t a, b; + + pv = (void *)(pi & ~3); + a = load_atomic4(pv); + b = load_atomic4(pv + 4); + + if (HOST_BIG_ENDIAN) { + return (a << sh) | (b >> (-sh & 31)); + } else { + return (a >> sh) | (b << (-sh & 31)); + } +} + +/** + * load_atom_extract_al8x2: + * @pv: host address + * + * Load 8 bytes from @p, from two sequential atomic 8-byte loads. + */ +static uint64_t load_atom_extract_al8x2(void *pv) +{ + uintptr_t pi = (uintptr_t)pv; + int sh = (pi & 7) * 8; + uint64_t a, b; + + pv = (void *)(pi & ~7); + a = load_atomic8(pv); + b = load_atomic8(pv + 8); + + if (HOST_BIG_ENDIAN) { + return (a << sh) | (b >> (-sh & 63)); + } else { + return (a >> sh) | (b << (-sh & 63)); + } +} + +/** + * load_atom_extract_al8_or_exit: + * @env: cpu context + * @ra: host unwind address + * @pv: host address + * @s: object size in bytes, @s <= 4. + * + * Atomically load @s bytes from @p, when p % s != 0, and [p, p+s-1] does + * not cross an 8-byte boundary. This means that we can perform an atomic + * 8-byte load and extract. + * The value is returned in the low bits of a uint32_t. + */ +static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra, + void *pv, int s) +{ + uintptr_t pi = (uintptr_t)pv; + int o = pi & 7; + int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8; + + pv = (void *)(pi & ~7); + return load_atomic8_or_exit(env, ra, pv) >> shr; +} + +/** + * load_atom_extract_al16_or_exit: + * @env: cpu context + * @ra: host unwind address + * @p: host address + * @s: object size in bytes, @s <= 8. + * + * Atomically load @s bytes from @p, when p % 16 < 8 + * and p % 16 + s > 8. I.e. does not cross a 16-byte + * boundary, but *does* cross an 8-byte boundary. + * This is the slow version, so we must have eliminated + * any faster load_atom_extract_al8_or_exit case. + * + * If this is not possible, longjmp out to restart serially. + */ +static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra, + void *pv, int s) +{ + uintptr_t pi = (uintptr_t)pv; + int o = pi & 7; + int shr = (HOST_BIG_ENDIAN ? 16 - s - o : o) * 8; + Int128 r; + + /* + * Note constraints above: p & 8 must be clear. + * Provoke SIGBUS if possible otherwise. + */ + pv = (void *)(pi & ~7); + r = load_atomic16_or_exit(env, ra, pv); + + r = int128_urshift(r, shr); + return int128_getlo(r); +} + +/** + * load_atom_extract_al16_or_al8: + * @p: host address + * @s: object size in bytes, @s <= 8. + * + * Load @s bytes from @p, when p % s != 0. If [p, p+s-1] does not + * cross an 16-byte boundary then the access must be 16-byte atomic, + * otherwise the access must be 8-byte atomic. + */ +static inline uint64_t load_atom_extract_al16_or_al8(void *pv, int s) +{ +#if defined(CONFIG_ATOMIC128) + uintptr_t pi = (uintptr_t)pv; + int o = pi & 7; + int shr = (HOST_BIG_ENDIAN ? 16 - s - o : o) * 8; + __uint128_t r; + + pv = (void *)(pi & ~7); + if (pi & 8) { + uint64_t *p8 = __builtin_assume_aligned(pv, 16, 8); + uint64_t a = qatomic_read__nocheck(p8); + uint64_t b = qatomic_read__nocheck(p8 + 1); + + if (HOST_BIG_ENDIAN) { + r = ((__uint128_t)a << 64) | b; + } else { + r = ((__uint128_t)b << 64) | a; + } + } else { + __uint128_t *p16 = __builtin_assume_aligned(pv, 16, 0); + r = qatomic_read__nocheck(p16); + } + return r >> shr; +#else + qemu_build_not_reached(); +#endif +} + +/** + * load_atom_4_by_2: + * @pv: host address + * + * Load 4 bytes from @pv, with two 2-byte atomic loads. 
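+ * Each halfword is loaded atomically and the pair is combined
+ * according to host byte order, so the result matches a plain
+ * host-endian 4-byte load while guaranteeing 2-byte atomicity.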
+ */ +static inline uint32_t load_atom_4_by_2(void *pv) +{ + uint32_t a = load_atomic2(pv); + uint32_t b = load_atomic2(pv + 2); + + if (HOST_BIG_ENDIAN) { + return (a << 16) | b; + } else { + return (b << 16) | a; + } +} + +/** + * load_atom_8_by_2: + * @pv: host address + * + * Load 8 bytes from @pv, with four 2-byte atomic loads. + */ +static inline uint64_t load_atom_8_by_2(void *pv) +{ + uint32_t a = load_atom_4_by_2(pv); + uint32_t b = load_atom_4_by_2(pv + 4); + + if (HOST_BIG_ENDIAN) { + return ((uint64_t)a << 32) | b; + } else { + return ((uint64_t)b << 32) | a; + } +} + +/** + * load_atom_8_by_4: + * @pv: host address + * + * Load 8 bytes from @pv, with two 4-byte atomic loads. + */ +static inline uint64_t load_atom_8_by_4(void *pv) +{ + uint32_t a = load_atomic4(pv); + uint32_t b = load_atomic4(pv + 4); + + if (HOST_BIG_ENDIAN) { + return ((uint64_t)a << 32) | b; + } else { + return ((uint64_t)b << 32) | a; + } +} + +/** + * load_atom_2: + * @p: host address + * @memop: the full memory op + * + * Load 2 bytes from @p, honoring the atomicity of @memop. + */ +static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 1) == 0)) { + return load_atomic2(pv); + } + if (HAVE_al16_fast) { + return load_atom_extract_al16_or_al8(pv, 2); + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + return lduw_he_p(pv); + case MO_16: + /* The only case remaining is MO_ATOM_WITHIN16. */ + if (!HAVE_al8_fast && (pi & 3) == 1) { + /* Big or little endian, we want the middle two bytes. */ + return load_atomic4(pv - 1) >> 8; + } + if (unlikely((pi & 15) != 7)) { + return load_atom_extract_al8_or_exit(env, ra, pv, 2); + } + return load_atom_extract_al16_or_exit(env, ra, pv, 2); + default: + g_assert_not_reached(); + } +} + +/** + * load_atom_4: + * @p: host address + * @memop: the full memory op + * + * Load 4 bytes from @p, honoring the atomicity of @memop. + */ +static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 3) == 0)) { + return load_atomic4(pv); + } + if (HAVE_al16_fast) { + return load_atom_extract_al16_or_al8(pv, 4); + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + case MO_16: + case -MO_16: + /* + * For MO_ATOM_IFALIGN, this is more atomicity than required, + * but it's trivially supported on all hosts, better than 4 + * individual byte loads (when the host requires alignment), + * and overlaps with the MO_ATOM_SUBALIGN case of p % 2 == 0. + */ + return load_atom_extract_al4x2(pv); + case MO_32: + if (!(pi & 4)) { + return load_atom_extract_al8_or_exit(env, ra, pv, 4); + } + return load_atom_extract_al16_or_exit(env, ra, pv, 4); + default: + g_assert_not_reached(); + } +} + +/** + * load_atom_8: + * @p: host address + * @memop: the full memory op + * + * Load 8 bytes from @p, honoring the atomicity of @memop. + */ +static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + /* + * If the host does not support 8-byte atomics, wait until we have + * examined the atomicity parameters below. 
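+     * For example, if required_atomicity() below reports only MO_16 or
+     * MO_32, the value can still be assembled from 2- or 4-byte atomic
+     * pieces (load_atom_8_by_2 / load_atom_8_by_4) even when HAVE_al8
+     * is false.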
+ */ + if (HAVE_al8 && likely((pi & 7) == 0)) { + return load_atomic8(pv); + } + if (HAVE_al16_fast) { + return load_atom_extract_al16_or_al8(pv, 8); + } + + atmax = required_atomicity(env, pi, memop); + if (atmax == MO_64) { + if (!HAVE_al8 && (pi & 7) == 0) { + load_atomic8_or_exit(env, ra, pv); + } + return load_atom_extract_al16_or_exit(env, ra, pv, 8); + } + if (HAVE_al8_fast) { + return load_atom_extract_al8x2(pv); + } + switch (atmax) { + case MO_8: + return ldq_he_p(pv); + case MO_16: + return load_atom_8_by_2(pv); + case MO_32: + return load_atom_8_by_4(pv); + case -MO_32: + if (HAVE_al8) { + return load_atom_extract_al8x2(pv); + } + cpu_loop_exit_atomic(env_cpu(env), ra); + default: + g_assert_not_reached(); + } +} From patchwork Tue Apr 25 19:30:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676818 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2876781wrs; Tue, 25 Apr 2023 12:33:45 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5kngyyQUuqGXjHVXoNwQ6L0Wtkod9FJddQ5dpfww6KfPq9Dv35/4qcuL+lM2/wC6+n2f0s X-Received: by 2002:a05:6214:300a:b0:5e5:c0c2:c64e with SMTP id ke10-20020a056214300a00b005e5c0c2c64emr108863qvb.3.1682451225119; Tue, 25 Apr 2023 12:33:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451225; cv=none; d=google.com; s=arc-20160816; b=RJjgLCGO+VZy+wQfa0r3jXVYxxDrKgJkNoIS7iyQNMA98BAWuwj3WFZrHzfcZ2XixL XsjVawp06AJ6T6uAbTI1vYeuvc71OV5XZmQOlz33J/u2aty3WqdfwAcCaMRFJWUWo8PX /IhAZYPN/CXwcPjlFTg8wJo+hBQSzF3wZqRwbsYo0nV0+wILatwjNHV3+Xu1tzpMSCLp GugYiGWhNN05cGNw20mtgyyqLKm+qbF9geISaWX17FUOvfW4A5MldcBxY0x+F5+VHQ/l qwzfBGlNr61isiZfG9oBoYgW+aqlBgea94rQpAANRyqIOCcys5MWrnLLV0Uo4OwI4WvA S1hA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=MKAlOciC7wlOnLbSIOCzIBeWwpZaETmSW4DNwL3Bj28=; b=u5QPZ08WokgrMDIn6zQHctDofB/hbP1VZ381nUvskCzmQAwQ2Ns4bgkcrt4UkKeE9B InunHMkWxmoKicFw1FbD6XUMa/vGrOb/z7x45OPlBXnqF+JkR/20mqa56LUH1aas+Qs1 mHZEiSf0D47a7PzVTflZjH4WPFqCjzXbevxbJagJcwQuIUDwechxlajrd4HBpA+fnrgj szeXMIaiLBYlNCF1ABlfkVhThI2808jjHb7LRLULhp7zvmMUxjAAugBBYqg7bNesUR48 Yw57nuzyRnDfPn8TaU4skIyGf6Ae3QQ/e/B/c4WfsQJKxtZl6gWQTQQB2xe1VSPEBNCJ AF4w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=wqdDco+D; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:09 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 07/57] accel/tcg: Honor atomicity of stores Date: Tue, 25 Apr 2023 20:30:56 +0100 Message-Id: <20230425193146.2106111-8-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::12d; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x12d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- accel/tcg/cputlb.c | 103 +++---- accel/tcg/user-exec.c | 12 +- accel/tcg/ldst_atomicity.c.inc | 491 +++++++++++++++++++++++++++++++++ 3 files changed, 540 insertions(+), 66 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 00e5a8f879..43206437e9 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2588,36 +2588,6 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, * Store Helpers */ -static inline void QEMU_ALWAYS_INLINE -store_memop(void *haddr, uint64_t val, MemOp op) -{ - switch (op) { - case MO_UB: - stb_p(haddr, val); - break; - case MO_BEUW: - stw_be_p(haddr, val); - break; - case MO_LEUW: - stw_le_p(haddr, val); - break; - case MO_BEUL: - stl_be_p(haddr, val); - break; - case MO_LEUL: - stl_le_p(haddr, val); - break; - case MO_BEUQ: - stq_be_p(haddr, val); - break; - case MO_LEUQ: - stq_le_p(haddr, val); - break; - default: - qemu_build_not_reached(); - } -} - /** * do_st_mmio_leN: * @env: cpu context @@ -2644,38 +2614,51 @@ static uint64_t do_st_mmio_leN(CPUArchState *env, MMULookupPageData *p, return val_le; } -/** - * do_st_bytes_leN: - * @p: translation parameters - * @val_le: data to store - * - * Store @p->size bytes at @p->haddr, which is RAM. - * The bytes to store are extracted in little-endian order from @val_le; - * return the bytes of @val_le beyond @p->size that have not been stored. - */ -static uint64_t do_st_bytes_leN(MMULookupPageData *p, uint64_t val_le) -{ - uint8_t *haddr = p->haddr; - int i, size = p->size; - - for (i = 0; i < size; i++, val_le >>= 8) { - haddr[i] = val_le; - } - return val_le; -} - /* * Wrapper for the above. 
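 * As a rough guide to the dispatch below: MO_ATOM_WITHIN16 uses a masked
 * atomic insert (store_whole_le4/le8) when the bytes on this page cover a
 * complete subobject, MO_ATOM_SUBALIGN stores each aligned part atomically
 * (store_parts_leN), and the remaining cases use plain byte stores
 * (store_bytes_leN).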
*/ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, - uint64_t val_le, int mmu_idx, uintptr_t ra) + uint64_t val_le, int mmu_idx, + MemOp mop, uintptr_t ra) { + MemOp atmax; + if (unlikely(p->flags & TLB_MMIO)) { return do_st_mmio_leN(env, p, val_le, mmu_idx, ra); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { return val_le >> (p->size * 8); - } else { - return do_st_bytes_leN(p, val_le); + } + + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the load as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. + */ + atmax = mop & MO_ATMAX_MASK; + if (atmax == MO_ATMAX_SIZE) { + atmax = mop & MO_SIZE; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + if (unlikely(p->size >= (1 << atmax))) { + if (!HAVE_al8_fast && p->size <= 4) { + return store_whole_le4(p->haddr, p->size, val_le); + } else if (HAVE_al8) { + return store_whole_le8(p->haddr, p->size, val_le); + } else { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: + return store_bytes_leN(p->haddr, p->size, val_le); + case MO_ATOM_SUBALIGN: + return store_parts_leN(p->haddr, p->size, val_le); + default: + g_assert_not_reached(); } } @@ -2703,7 +2686,7 @@ static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val, if (memop & MO_BSWAP) { val = bswap16(val); } - store_memop(p->haddr, val, MO_UW); + store_atom_2(env, ra, p->haddr, memop, val); } } @@ -2719,7 +2702,7 @@ static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val, if (memop & MO_BSWAP) { val = bswap32(val); } - store_memop(p->haddr, val, MO_UL); + store_atom_4(env, ra, p->haddr, memop, val); } } @@ -2735,7 +2718,7 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, if (memop & MO_BSWAP) { val = bswap64(val); } - store_memop(p->haddr, val, MO_UQ); + store_atom_8(env, ra, p->haddr, memop, val); } } @@ -2804,8 +2787,8 @@ static void do_st4_mmu(CPUArchState *env, target_ulong addr, uint32_t val, if ((l.memop & MO_BSWAP) != MO_LE) { val = bswap32(val); } - val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); - (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, @@ -2838,8 +2821,8 @@ static void do_st8_mmu(CPUArchState *env, target_ulong addr, uint64_t val, if ((l.memop & MO_BSWAP) != MO_LE) { val = bswap64(val); } - val = do_st_leN(env, &l.page[0], val, l.mmu_idx, ra); - (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, ra); + val = do_st_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 522bafe44e..8a29dfd532 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -1086,7 +1086,7 @@ void cpu_stw_be_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, validate_memop(oi, MO_BEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stw_be_p(haddr, val); + store_atom_2(env, ra, haddr, get_memop(oi), be16_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1098,7 +1098,7 @@ void cpu_stl_be_mmu(CPUArchState *env, 
abi_ptr addr, uint32_t val, validate_memop(oi, MO_BEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stl_be_p(haddr, val); + store_atom_4(env, ra, haddr, get_memop(oi), be32_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1110,7 +1110,7 @@ void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, validate_memop(oi, MO_BEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stq_be_p(haddr, val); + store_atom_8(env, ra, haddr, get_memop(oi), be64_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1122,7 +1122,7 @@ void cpu_stw_le_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, validate_memop(oi, MO_LEUW); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stw_le_p(haddr, val); + store_atom_2(env, ra, haddr, get_memop(oi), le16_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1134,7 +1134,7 @@ void cpu_stl_le_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, validate_memop(oi, MO_LEUL); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stl_le_p(haddr, val); + store_atom_4(env, ra, haddr, get_memop(oi), le32_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1146,7 +1146,7 @@ void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, validate_memop(oi, MO_LEUQ); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - stq_le_p(haddr, val); + store_atom_8(env, ra, haddr, get_memop(oi), le64_to_cpu(val)); clear_helper_retaddr(); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 5169073431..07abbdee3f 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -21,6 +21,12 @@ #else # define HAVE_al16_fast false #endif +#if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128) +# define HAVE_al16 true +#else +# define HAVE_al16 false +#endif + /** * required_atomicity: @@ -548,3 +554,488 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra, g_assert_not_reached(); } } + +/** + * store_atomic2: + * @pv: host address + * @val: value to store + * + * Atomically store 2 aligned bytes to @pv. + */ +static inline void store_atomic2(void *pv, uint16_t val) +{ + uint16_t *p = __builtin_assume_aligned(pv, 2); + qatomic_set(p, val); +} + +/** + * store_atomic4: + * @pv: host address + * @val: value to store + * + * Atomically store 4 aligned bytes to @pv. + */ +static inline void store_atomic4(void *pv, uint32_t val) +{ + uint32_t *p = __builtin_assume_aligned(pv, 4); + qatomic_set(p, val); +} + +/** + * store_atomic8: + * @pv: host address + * @val: value to store + * + * Atomically store 8 aligned bytes to @pv. + */ +static inline void store_atomic8(void *pv, uint64_t val) +{ + uint64_t *p = __builtin_assume_aligned(pv, 8); + + qemu_build_assert(HAVE_al8); + qatomic_set__nocheck(p, val); +} + +/** + * store_atom_4x2 + */ +static inline void store_atom_4_by_2(void *pv, uint32_t val) +{ + store_atomic2(pv, val >> (HOST_BIG_ENDIAN ? 16 : 0)); + store_atomic2(pv + 2, val >> (HOST_BIG_ENDIAN ? 0 : 16)); +} + +/** + * store_atom_8_by_2 + */ +static inline void store_atom_8_by_2(void *pv, uint64_t val) +{ + store_atom_4_by_2(pv, val >> (HOST_BIG_ENDIAN ? 32 : 0)); + store_atom_4_by_2(pv + 4, val >> (HOST_BIG_ENDIAN ? 
0 : 32)); +} + +/** + * store_atom_8_by_4 + */ +static inline void store_atom_8_by_4(void *pv, uint64_t val) +{ + store_atomic4(pv, val >> (HOST_BIG_ENDIAN ? 32 : 0)); + store_atomic4(pv + 4, val >> (HOST_BIG_ENDIAN ? 0 : 32)); +} + +/** + * store_atom_insert_al4: + * @p: host address + * @val: shifted value to store + * @msk: mask for value to store + * + * Atomically store @val to @p, masked by @msk. + */ +static void store_atom_insert_al4(uint32_t *p, uint32_t val, uint32_t msk) +{ + uint32_t old, new; + + p = __builtin_assume_aligned(p, 4); + old = qatomic_read(p); + do { + new = (old & ~msk) | val; + } while (!__atomic_compare_exchange_n(p, &old, new, true, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)); +} + +/** + * store_atom_insert_al8: + * @p: host address + * @val: shifted value to store + * @msk: mask for value to store + * + * Atomically store @val to @p masked by @msk. + */ +static void store_atom_insert_al8(uint64_t *p, uint64_t val, uint64_t msk) +{ + uint64_t old, new; + + qemu_build_assert(HAVE_al8); + p = __builtin_assume_aligned(p, 8); + old = qatomic_read__nocheck(p); + do { + new = (old & ~msk) | val; + } while (!__atomic_compare_exchange_n(p, &old, new, true, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)); +} + +/** + * store_atom_insert_al16: + * @p: host address + * @val: shifted value to store + * @msk: mask for value to store + * + * Atomically store @val to @p masked by @msk. + */ +static void store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) +{ +#if defined(CONFIG_ATOMIC128) + __uint128_t *pu, old, new; + + /* With CONFIG_ATOMIC128, we can avoid the memory barriers. */ + pu = __builtin_assume_aligned(ps, 16); + old = *pu; + do { + new = (old & ~msk.u) | val.u; + } while (!__atomic_compare_exchange_n(pu, &old, new, true, + __ATOMIC_RELAXED, __ATOMIC_RELAXED)); +#elif defined(CONFIG_CMPXCHG128) + __uint128_t *pu, old, new; + + /* + * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always + * defer to libatomic, so we must use __sync_val_compare_and_swap_16 + * and accept the sequential consistency that comes with it. + */ + pu = __builtin_assume_aligned(ps, 16); + do { + old = *pu; + new = (old & ~msk.u) | val.u; + } while (!__sync_bool_compare_and_swap_16(pu, old, new)); +#else + qemu_build_not_reached(); +#endif +} + +/** + * store_bytes_leN: + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * Store @size bytes at @p. The bytes to store are extracted in little-endian order + * from @val_le; return the bytes of @val_le beyond @size that have not been stored. + */ +static uint64_t store_bytes_leN(void *pv, int size, uint64_t val_le) +{ + uint8_t *p = pv; + for (int i = 0; i < size; i++, val_le >>= 8) { + p[i] = val_le; + } + return val_le; +} + +/** + * store_parts_leN + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically on each aligned part. 
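+ *
+ * For example (sizes assumed purely for illustration): a 7-byte store
+ * beginning one byte past a 4-byte boundary is issued as a 1-byte, a
+ * 2-byte and then a 4-byte atomic store, each naturally aligned.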
+ */ +G_GNUC_UNUSED +static uint64_t store_parts_leN(void *pv, int size, uint64_t val_le) +{ + do { + int n; + + /* Find minimum of alignment and size */ + switch (((uintptr_t)pv | size) & 7) { + case 4: + store_atomic4(pv, le32_to_cpu(val_le)); + val_le >>= 32; + n = 4; + break; + case 2: + case 6: + store_atomic2(pv, le16_to_cpu(val_le)); + val_le >>= 16; + n = 2; + break; + default: + *(uint8_t *)pv = val_le; + val_le >>= 8; + n = 1; + break; + case 0: + g_assert_not_reached(); + } + pv += n; + size -= n; + } while (size != 0); + + return val_le; +} + +/** + * store_whole_le4 + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically as a whole. + * Four aligned bytes are guaranteed to cover the store. + */ +static uint64_t store_whole_le4(void *pv, int size, uint64_t val_le) +{ + int sz = size * 8; + int o = (uintptr_t)pv & 3; + int sh = o * 8; + uint32_t m = MAKE_64BIT_MASK(0, sz); + uint32_t v; + + if (HOST_BIG_ENDIAN) { + v = bswap32(val_le) >> sh; + m = bswap32(m) >> sh; + } else { + v = val_le << sh; + m <<= sh; + } + store_atom_insert_al4(pv - o, v, m); + return val_le >> sz; +} + +/** + * store_whole_le8 + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically as a whole. + * Eight aligned bytes are guaranteed to cover the store. + */ +static uint64_t store_whole_le8(void *pv, int size, uint64_t val_le) +{ + int sz = size * 8; + int o = (uintptr_t)pv & 7; + int sh = o * 8; + uint64_t m = MAKE_64BIT_MASK(0, sz); + uint64_t v; + + qemu_build_assert(HAVE_al8); + if (HOST_BIG_ENDIAN) { + v = bswap64(val_le) >> sh; + m = bswap64(m) >> sh; + } else { + v = val_le << sh; + m <<= sh; + } + store_atom_insert_al8(pv - o, v, m); + return val_le >> sz; +} + +/** + * store_whole_le16 + * @pv: host address + * @size: number of bytes to store + * @val_le: data to store + * + * As store_bytes_leN, but atomically as a whole. + * 16 aligned bytes are guaranteed to cover the store. + */ +static uint64_t store_whole_le16(void *pv, int size, Int128 val_le) +{ + int sz = size * 8; + int o = (uintptr_t)pv & 15; + int sh = o * 8; + Int128 m, v; + + qemu_build_assert(HAVE_al16); + + /* Like MAKE_64BIT_MASK(0, sz), but larger. */ + if (sz <= 64) { + m = int128_make64(MAKE_64BIT_MASK(0, sz)); + } else { + m = int128_make128(-1, MAKE_64BIT_MASK(0, sz - 64)); + } + + if (HOST_BIG_ENDIAN) { + v = int128_urshift(bswap128(val_le), sh); + m = int128_urshift(bswap128(m), sh); + } else { + v = int128_lshift(val_le, sh); + m = int128_lshift(m, sh); + } + store_atom_insert_al16(pv - o, v, m); + + /* Unused if sz <= 64. */ + return int128_gethi(val_le) >> (sz - 64); +} + +/** + * store_atom_2: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 2 bytes to @p, honoring the atomicity of @memop. + */ +static void store_atom_2(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, uint16_t val) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 1) == 0)) { + store_atomic2(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + if (atmax == MO_8) { + stw_he_p(pv, val); + return; + } + + /* + * The only case remaining is MO_ATOM_WITHIN16. + * Big or little endian, we want the middle two bytes in each test. 
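+     * For example, when (pi & 3) == 1 the two bytes occupy bits 8..23 of
+     * the containing aligned 32-bit word on either endianness, so a
+     * masked 4-byte compare-and-swap insert is sufficient; the wider
+     * misalignments below need an 8- or 16-byte insert instead.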
+ */ + if ((pi & 3) == 1) { + store_atom_insert_al4(pv - 1, (uint32_t)val << 8, MAKE_64BIT_MASK(8, 16)); + return; + } else if ((pi & 7) == 3) { + if (HAVE_al8) { + store_atom_insert_al8(pv - 3, (uint64_t)val << 24, MAKE_64BIT_MASK(24, 16)); + return; + } + } else if ((pi & 15) == 7) { + if (HAVE_al16) { + Int128 v = int128_lshift(int128_make64(val), 56); + Int128 m = int128_lshift(int128_make64(0xffff), 56); + store_atom_insert_al16(pv - 7, v, m); + return; + } + } else { + g_assert_not_reached(); + } + + cpu_loop_exit_atomic(env_cpu(env), ra); +} + +/** + * store_atom_4: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 4 bytes to @p, honoring the atomicity of @memop. + */ +static void store_atom_4(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, uint32_t val) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (likely((pi & 3) == 0)) { + store_atomic4(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + stl_he_p(pv, val); + return; + case MO_16: + store_atom_4_by_2(pv, val); + return; + case -MO_16: + { + uint32_t val_le = cpu_to_le32(val); + int s2 = pi & 3; + int s1 = 4 - s2; + + switch (s2) { + case 1: + val_le = store_whole_le4(pv, s1, val_le); + *(uint8_t *)(pv + 3) = val_le; + break; + case 3: + *(uint8_t *)pv = val_le; + store_whole_le4(pv + 1, s2, val_le >> 8); + break; + case 0: /* aligned */ + case 2: /* atmax MO_16 */ + default: + g_assert_not_reached(); + } + } + return; + case MO_32: + if ((pi & 7) < 4) { + if (HAVE_al8) { + store_whole_le8(pv, 4, cpu_to_le32(val)); + return; + } + } else { + if (HAVE_al16) { + store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val))); + return; + } + } + cpu_loop_exit_atomic(env_cpu(env), ra); + default: + g_assert_not_reached(); + } +} + +/** + * store_atom_8: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 8 bytes to @p, honoring the atomicity of @memop. + */ +static void store_atom_8(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, uint64_t val) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + + if (HAVE_al8 && likely((pi & 7) == 0)) { + store_atomic8(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + stq_he_p(pv, val); + return; + case MO_16: + store_atom_8_by_2(pv, val); + return; + case MO_32: + store_atom_8_by_4(pv, val); + return; + case -MO_32: + if (HAVE_al8) { + uint64_t val_le = cpu_to_le64(val); + int s2 = pi & 7; + int s1 = 8 - s2; + + switch (s2) { + case 1 ... 3: + val_le = store_whole_le8(pv, s1, val_le); + store_bytes_leN(pv + s1, s2, val_le); + break; + case 5 ... 
7: + val_le = store_bytes_leN(pv, s1, val_le); + store_whole_le8(pv + s1, s2, val_le); + break; + case 0: /* aligned */ + case 4: /* atmax MO_32 */ + default: + g_assert_not_reached(); + } + return; + } + break; + case MO_64: + if (HAVE_al16) { + store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val))); + return; + } + break; + default: + g_assert_not_reached(); + } + cpu_loop_exit_atomic(env_cpu(env), ra); +} From patchwork Tue Apr 25 19:30:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676828 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880471wrs; Tue, 25 Apr 2023 12:43:09 -0700 (PDT) X-Google-Smtp-Source: AKy350aKhoiremMBGaDv1vqvnkW8y6Q1z4ZyA9GQGWl7wTuLTQtUUHobp7UymjnwaqI/z0/mDGor X-Received: by 2002:a05:622a:64a:b0:3eb:9b03:b5ba with SMTP id a10-20020a05622a064a00b003eb9b03b5bamr29863813qtb.37.1682451789357; Tue, 25 Apr 2023 12:43:09 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451789; cv=none; d=google.com; s=arc-20160816; b=vut3sDZPxwJvT2a4G0nEghBZQUBO1/6g+S4mdtxu+1arW9XSkTUrPDOZF06iXCidr1 oSeQ2Aj0MhJYGSXwmFv3O41kHHpSev0LyGoVkMtSz6EWRW/fWcuW522/7pvsyfHUUo57 vw4FkJuIBqyuywgDqZoz0R7LY+l8DOQz4JHumxAA2sjZCqiu/8uuVMfhxOxMBhuOqJif +sbt8D8FSkQgHSyGNFlOiyyHPTimrwYPQJw8Vdt1lqq0oopAnG8lsyaO/i7YGUlh9VTe RrPwClVXwr4GfaghS9KgXSJLV4YswFo6wy/1NXdGnmNj+Jaq3bkK+MvRjwTm0LWinXpC FV7A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=a9PQZw7N1cpbJGxx9QlJTxfD8yUIeHJGfpj65uurq3A=; b=wW8ILeBc6ySCsi/9ETFJ7ezHu2eBD1WTbuoKqQBq2zy9vT7Zms4/iTFFbFltiXRStq gUcAEIWMdUs4hf6NaIslbklvqf0RIIZhgfy/ZUzJcM8NQZxnu38/owG39egRPqcbVTcf 89mo0o5mAPKtv8hSs9GX6qgfFiPfu5p36fcBbBrSrzKNFxiWBpF0tgNdcYpHzneNYaih Jy6sq8mQGKFv+PNLoRaWzpdJmAYym4ZQ/h91udT04NJZ5fM4jbao2hWl26yhIW9tGH93 36ky/KGyehRrtMOGROxW2C0+zPC/wqEisH48XCPwxBF3pXNWWzQZXJlv4L+iJz6zpMJf sQVg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=rdHB5RvZ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:21 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 08/57] target/loongarch: Do not include tcg-ldst.h Date: Tue, 25 Apr 2023 20:30:57 +0100 Message-Id: <20230425193146.2106111-9-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::133; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x133.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org This header is supposed to be private to tcg and in fact does not need to be included here at all. Signed-off-by: Richard Henderson Reviewed-by: Song Gao --- target/loongarch/csr_helper.c | 1 - target/loongarch/iocsr_helper.c | 1 - 2 files changed, 2 deletions(-) diff --git a/target/loongarch/csr_helper.c b/target/loongarch/csr_helper.c index 7e02787895..6526367946 100644 --- a/target/loongarch/csr_helper.c +++ b/target/loongarch/csr_helper.c @@ -15,7 +15,6 @@ #include "exec/cpu_ldst.h" #include "hw/irq.h" #include "cpu-csr.h" -#include "tcg/tcg-ldst.h" target_ulong helper_csrrd_pgd(CPULoongArchState *env) { diff --git a/target/loongarch/iocsr_helper.c b/target/loongarch/iocsr_helper.c index 505853e17b..dda9845d6c 100644 --- a/target/loongarch/iocsr_helper.c +++ b/target/loongarch/iocsr_helper.c @@ -12,7 +12,6 @@ #include "exec/helper-proto.h" #include "exec/exec-all.h" #include "exec/cpu_ldst.h" -#include "tcg/tcg-ldst.h" #define GET_MEMTXATTRS(cas) \ ((MemTxAttrs){.requester_id = env_cpu(cas)->cpu_index}) From patchwork Tue Apr 25 19:30:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676827 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880408wrs; Tue, 25 Apr 2023 12:43:00 -0700 (PDT) X-Google-Smtp-Source: AKy350agX0a4U0/GSMemP/vEb5ne2OIz+28KyJDLzHuFayjFFj0j54XrxHK6OtB0XBJO3ZY+Edvc X-Received: by 2002:a05:622a:1301:b0:3ef:68ea:e253 with SMTP id v1-20020a05622a130100b003ef68eae253mr19754325qtk.19.1682451780028; Tue, 25 Apr 2023 12:43:00 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451780; cv=none; d=google.com; s=arc-20160816; b=FvXov0UYeWcPdRjosoT0vMV6VMxU6RtpptZWCLhMPVajHSfZrRp1v8cKHBqHW6jTsp FFKhSSa5mRqOt+w//61rj9l6j4K6IpoStLuEPx/hICAHxg4njevj2ty5DtSCma6eQW/9 gheMZAkjah5uWm+2bRc3rhXEoxV9SnlRplecx0lvAEuaJcy3OmjR9OosAFe6DreYGhlk Luwu0dkUy/rvl6Q5Ug1V6wcoiKHEZ20p/cvbsmQQrjyiYE0eFdoqmcCgfxbtqyNdCuJp 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=rxZwQJHawZv5oVTAUi91mw2vikw3d8R4oKSnqfrVAdk=; b=DQ1uIrraeMkXG7UGHpwZ4qdcOHDHx/0DIzKDuERhjLeXrGlJ6kE2aLpjvT840ZV+tp j/5ixdn2cCFOiQoKwXLWLanJFjHqkoITKPepNvW1ZYw2csgw0Xq6UW0RqCoCnturK6rU uCd2t5meb58uFOq2wQwHxICnaKH+ithCx5Cec4k/vHRlsW6z7A2m5BzudAHi1mOO1R9/ ICyIQaS4MPdWrxqFaBcva1rxxMnlG7xgTnl/8Mo4RPRKUHn3eD0ctmB2Ub9x3gJIkK3u J+wke7zyBwX6y4Ev60UMrYi+hfmx6Dj8y00d565j5mimjHZYsuA1tPMcItF7/lmvW4ym zsRQ== X-Gm-Message-State: AAQBX9dcCm71hev76VcW5lI2UyV8jH2T3AJ6b95QVFA2Aij80reyFJkT gA1KO25R3eePszMWctxX6qg7kyDc+Yfqedy65x6wag== X-Received: by 2002:a2e:3517:0:b0:2a0:3f9f:fec6 with SMTP id z23-20020a2e3517000000b002a03f9ffec6mr4147172ljz.37.1682451207751; Tue, 25 Apr 2023 12:33:27 -0700 (PDT) Received: from stoup.. ([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:27 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 09/57] tcg: Unify helper_{be,le}_{ld,st}* Date: Tue, 25 Apr 2023 20:30:58 +0100 Message-Id: <20230425193146.2106111-10-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::231; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x231.google.com X-Spam_score_int: -16 X-Spam_score: -1.7 X-Spam_bar: - X-Spam_report: (-1.7 / 5.0 requ) BAYES_00=-1.9, DKIM_INVALID=0.1, DKIM_SIGNED=0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions. Hoist the qemu_{ld,st}_helpers arrays to tcg.c. Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson --- include/tcg/tcg-ldst.h | 60 ++++------ accel/tcg/cputlb.c | 190 ++++++++++--------------------- tcg/tcg.c | 21 ++++ tcg/tci.c | 61 ++++------ docs/devel/loads-stores.rst | 36 ++---- tcg/aarch64/tcg-target.c.inc | 33 ------ tcg/arm/tcg-target.c.inc | 37 ------ tcg/i386/tcg-target.c.inc | 30 +---- tcg/loongarch64/tcg-target.c.inc | 23 ---- tcg/mips/tcg-target.c.inc | 31 ----- tcg/ppc/tcg-target.c.inc | 30 +---- tcg/riscv/tcg-target.c.inc | 42 ------- tcg/s390x/tcg-target.c.inc | 31 +---- tcg/sparc64/tcg-target.c.inc | 32 +----- 14 files changed, 146 insertions(+), 511 deletions(-) diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 684e394b06..3d897ca942 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -28,51 +28,35 @@ #ifdef CONFIG_SOFTMMU /* Value zero-extended to tcg register size. 
*/ -tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_lduw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); /* Value sign-extended to tcg register size. */ -tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_le_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); -tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); +tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); /* * Value extended to at least uint32_t, so that some ABIs do not require * zero-extension from uint8_t or uint16_t. 
*/ -void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr); -void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr); +void helper_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr); +void helper_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr); +void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr); +void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t retaddr); #else diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 43206437e9..02e2d64d5e 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2006,25 +2006,6 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, cpu_loop_exit_atomic(env_cpu(env), retaddr); } -/* - * Verify that we have passed the correct MemOp to the correct function. - * - * In the case of the helper_*_mmu functions, we will have done this by - * using the MemOp to look up the helper during code generation. - * - * In the case of the cpu_*_mmu functions, this is up to the caller. - * We could present one function to target code, and dispatch based on - * the MemOp, but so far we have worked hard to avoid an indirect function - * call along the memory path. 
- */ -static void validate_memop(MemOpIdx oi, MemOp expected) -{ -#ifdef CONFIG_DEBUG_TCG - MemOp have = get_memop(oi) & (MO_SIZE | MO_BSWAP); - assert(have == expected); -#endif -} - /* * Load Helpers * @@ -2292,10 +2273,10 @@ static uint8_t do_ld1_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); } -tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_UB); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8); return do_ld1_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2323,17 +2304,10 @@ static uint16_t do_ld2_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return ret; } -tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_lduw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUW); - return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); -} - -tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2357,17 +2331,10 @@ static uint32_t do_ld4_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return ret; } -tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUL); - return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); -} - -tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2391,17 +2358,10 @@ static uint64_t do_ld8_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, return ret; } -uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUQ); - return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); -} - -uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUQ); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); } @@ -2410,35 +2370,22 @@ uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, * avoid this for 64-bit data, or for 32-bit data on 32-bit host. 
*/ - -tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - return (int8_t)helper_ret_ldub_mmu(env, addr, oi, retaddr); + return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr); } -tcg_target_ulong helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - return (int16_t)helper_le_lduw_mmu(env, addr, oi, retaddr); + return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr); } -tcg_target_ulong helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) +tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr) { - return (int16_t)helper_be_lduw_mmu(env, addr, oi, retaddr); -} - -tcg_target_ulong helper_le_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return (int32_t)helper_le_ldul_mmu(env, addr, oi, retaddr); -} - -tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return (int32_t)helper_be_ldul_mmu(env, addr, oi, retaddr); + return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr); } /* @@ -2454,7 +2401,7 @@ uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { uint8_t ret; - validate_memop(oi, MO_UB); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB); ret = do_ld1_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2465,7 +2412,7 @@ uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, { uint16_t ret; - validate_memop(oi, MO_BEUW); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUW); ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2476,7 +2423,7 @@ uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, { uint32_t ret; - validate_memop(oi, MO_BEUL); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUL); ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2487,7 +2434,7 @@ uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, { uint64_t ret; - validate_memop(oi, MO_BEUQ); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUQ); ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2498,7 +2445,7 @@ uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, { uint16_t ret; - validate_memop(oi, MO_LEUW); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUW); ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2509,7 +2456,7 @@ uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, { uint32_t ret; - validate_memop(oi, MO_LEUL); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUL); ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2520,7 +2467,7 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, { uint64_t ret; - validate_memop(oi, MO_LEUQ); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUQ); ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; @@ -2548,8 +2495,8 @@ Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, mop = (mop & ~(MO_SIZE | MO_AMASK)) | 
MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - h = helper_be_ldq_mmu(env, addr, new_oi, ra); - l = helper_be_ldq_mmu(env, addr + 8, new_oi, ra); + h = helper_ldq_mmu(env, addr, new_oi, ra); + l = helper_ldq_mmu(env, addr + 8, new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return int128_make128(l, h); @@ -2577,8 +2524,8 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - l = helper_le_ldq_mmu(env, addr, new_oi, ra); - h = helper_le_ldq_mmu(env, addr + 8, new_oi, ra); + l = helper_ldq_mmu(env, addr, new_oi, ra); + h = helper_ldq_mmu(env, addr + 8, new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return int128_make128(l, h); @@ -2722,13 +2669,13 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, } } -void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t ra) +void helper_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) { MMULookupLocals l; bool crosspage; - validate_memop(oi, MO_UB); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8); crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); tcg_debug_assert(!crosspage); @@ -2757,17 +2704,10 @@ static void do_st2_mmu(CPUArchState *env, target_ulong addr, uint16_t val, do_st_1(env, &l.page[1], b, l.mmu_idx, ra); } -void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) +void helper_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUW); - do_st2_mmu(env, addr, val, oi, retaddr); -} - -void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUW); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); do_st2_mmu(env, addr, val, oi, retaddr); } @@ -2791,17 +2731,10 @@ static void do_st4_mmu(CPUArchState *env, target_ulong addr, uint32_t val, (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } -void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) +void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUL); - do_st4_mmu(env, addr, val, oi, retaddr); -} - -void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUL); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); do_st4_mmu(env, addr, val, oi, retaddr); } @@ -2825,17 +2758,10 @@ static void do_st8_mmu(CPUArchState *env, target_ulong addr, uint64_t val, (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); } -void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) +void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t retaddr) { - validate_memop(oi, MO_LEUQ); - do_st8_mmu(env, addr, val, oi, retaddr); -} - -void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - MemOpIdx oi, uintptr_t retaddr) -{ - validate_memop(oi, MO_BEUQ); + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); do_st8_mmu(env, addr, val, oi, retaddr); } @@ -2851,49 +2777,55 @@ static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi) void 
cpu_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_ret_stb_mmu(env, addr, val, oi, retaddr); + helper_stb_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stw_be_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_be_stw_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUW); + do_st2_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stl_be_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_be_stl_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUL); + do_st4_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stq_be_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_be_stq_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_BEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stw_le_mmu(CPUArchState *env, target_ulong addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_le_stw_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUW); + do_st2_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stl_le_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_le_stl_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUL); + do_st4_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } void cpu_stq_le_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { - helper_le_stq_mmu(env, addr, val, oi, retaddr); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == MO_LEUQ); + do_st8_mmu(env, addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } @@ -2918,8 +2850,8 @@ void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 val, mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - helper_be_stq_mmu(env, addr, int128_gethi(val), new_oi, ra); - helper_be_stq_mmu(env, addr + 8, int128_getlo(val), new_oi, ra); + helper_stq_mmu(env, addr, int128_gethi(val), new_oi, ra); + helper_stq_mmu(env, addr + 8, int128_getlo(val), new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -2945,8 +2877,8 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; new_oi = make_memop_idx(mop, mmu_idx); - helper_le_stq_mmu(env, addr, int128_getlo(val), new_oi, ra); - helper_le_stq_mmu(env, addr + 8, int128_gethi(val), new_oi, ra); + helper_stq_mmu(env, addr, int128_getlo(val), new_oi, ra); + helper_stq_mmu(env, addr + 8, int128_gethi(val), new_oi, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } diff --git a/tcg/tcg.c b/tcg/tcg.c index 610df88626..5dfab02175 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -197,6 +197,27 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *l, const TCGLdstHelperParam *p) __attribute__((unused)); +#ifdef CONFIG_SOFTMMU +static void * const qemu_ld_helpers[MO_SSIZE + 1] = { + [MO_UB] = helper_ldub_mmu, + [MO_SB] = helper_ldsb_mmu, + [MO_UW] = helper_lduw_mmu, + [MO_SW] = helper_ldsw_mmu, + [MO_UL] = helper_ldul_mmu, + [MO_UQ] 
= helper_ldq_mmu, +#if TCG_TARGET_REG_BITS == 64 + [MO_SL] = helper_ldsl_mmu, +#endif +}; + +static void * const qemu_st_helpers[MO_SIZE + 1] = { + [MO_8] = helper_stb_mmu, + [MO_16] = helper_stw_mmu, + [MO_32] = helper_stl_mmu, + [MO_64] = helper_stq_mmu, +}; +#endif + TCGContext tcg_init_ctx; __thread TCGContext *tcg_ctx; diff --git a/tcg/tci.c b/tcg/tci.c index fc67e7e767..5bde2e1f2e 100644 --- a/tcg/tci.c +++ b/tcg/tci.c @@ -293,31 +293,21 @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr, uintptr_t ra = (uintptr_t)tb_ptr; #ifdef CONFIG_SOFTMMU - switch (mop & (MO_BSWAP | MO_SSIZE)) { + switch (mop & MO_SSIZE) { case MO_UB: - return helper_ret_ldub_mmu(env, taddr, oi, ra); + return helper_ldub_mmu(env, taddr, oi, ra); case MO_SB: - return helper_ret_ldsb_mmu(env, taddr, oi, ra); - case MO_LEUW: - return helper_le_lduw_mmu(env, taddr, oi, ra); - case MO_LESW: - return helper_le_ldsw_mmu(env, taddr, oi, ra); - case MO_LEUL: - return helper_le_ldul_mmu(env, taddr, oi, ra); - case MO_LESL: - return helper_le_ldsl_mmu(env, taddr, oi, ra); - case MO_LEUQ: - return helper_le_ldq_mmu(env, taddr, oi, ra); - case MO_BEUW: - return helper_be_lduw_mmu(env, taddr, oi, ra); - case MO_BESW: - return helper_be_ldsw_mmu(env, taddr, oi, ra); - case MO_BEUL: - return helper_be_ldul_mmu(env, taddr, oi, ra); - case MO_BESL: - return helper_be_ldsl_mmu(env, taddr, oi, ra); - case MO_BEUQ: - return helper_be_ldq_mmu(env, taddr, oi, ra); + return helper_ldsb_mmu(env, taddr, oi, ra); + case MO_UW: + return helper_lduw_mmu(env, taddr, oi, ra); + case MO_SW: + return helper_ldsw_mmu(env, taddr, oi, ra); + case MO_UL: + return helper_ldul_mmu(env, taddr, oi, ra); + case MO_SL: + return helper_ldsl_mmu(env, taddr, oi, ra); + case MO_UQ: + return helper_ldq_mmu(env, taddr, oi, ra); default: g_assert_not_reached(); } @@ -382,27 +372,18 @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, uintptr_t ra = (uintptr_t)tb_ptr; #ifdef CONFIG_SOFTMMU - switch (mop & (MO_BSWAP | MO_SIZE)) { + switch (mop & MO_SIZE) { case MO_UB: - helper_ret_stb_mmu(env, taddr, val, oi, ra); + helper_stb_mmu(env, taddr, val, oi, ra); break; - case MO_LEUW: - helper_le_stw_mmu(env, taddr, val, oi, ra); + case MO_UW: + helper_stw_mmu(env, taddr, val, oi, ra); break; - case MO_LEUL: - helper_le_stl_mmu(env, taddr, val, oi, ra); + case MO_UL: + helper_stl_mmu(env, taddr, val, oi, ra); break; - case MO_LEUQ: - helper_le_stq_mmu(env, taddr, val, oi, ra); - break; - case MO_BEUW: - helper_be_stw_mmu(env, taddr, val, oi, ra); - break; - case MO_BEUL: - helper_be_stl_mmu(env, taddr, val, oi, ra); - break; - case MO_BEUQ: - helper_be_stq_mmu(env, taddr, val, oi, ra); + case MO_UQ: + helper_stq_mmu(env, taddr, val, oi, ra); break; default: g_assert_not_reached(); diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst index ad5dfe133e..d2cefc77a2 100644 --- a/docs/devel/loads-stores.rst +++ b/docs/devel/loads-stores.rst @@ -297,31 +297,20 @@ swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)`` Regexes for git grep - ``\`` -``helper_*_{ld,st}*_mmu`` +``helper_{ld,st}*_mmu`` ~~~~~~~~~~~~~~~~~~~~~~~~~ These functions are intended primarily to be called by the code -generated by the TCG backend. They may also be called by target -CPU helper function code. Like the ``cpu_{ld,st}_mmuidx_ra`` functions -they perform accesses by guest virtual address, with a given ``mmuidx``. +generated by the TCG backend. 
Like the ``cpu_{ld,st}_mmu`` functions +they perform accesses by guest virtual address, with a given ``MemOpIdx``. -These functions specify an ``opindex`` parameter which encodes -(among other things) the mmu index to use for the access. This parameter -should be created by calling ``make_memop_idx()``. +They differ from ``cpu_{ld,st}_mmu`` in that they take the endianness +of the operation only from the MemOpIdx, and loads extend the return +value to the size of a host general register (``tcg_target_ulong``). -The ``retaddr`` parameter should be the result of GETPC() called directly -from the top level HELPER(foo) function (or 0 if no guest CPU state -unwinding is required). +load: ``helper_ld{sign}{size}_mmu(env, addr, opindex, retaddr)`` -**TODO** The names of these functions are a bit odd for historical -reasons because they were originally expected to be called only from -within generated code. We should rename them to bring them more in -line with the other memory access functions. The explicit endianness -is the only feature they have beyond ``*_mmuidx_ra``. - -load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)`` - -store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)`` +store: ``helper_{size}_mmu(env, addr, val, opindex, retaddr)`` ``sign`` - (empty) : for 32 or 64 bit sizes @@ -334,14 +323,9 @@ store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)`` - ``l`` : 32 bits - ``q`` : 64 bits -``endian`` - - ``le`` : little endian - - ``be`` : big endian - - ``ret`` : target endianness - Regexes for git grep - - ``\`` - - ``\`` + - ``\`` + - ``\`` ``address_space_*`` ~~~~~~~~~~~~~~~~~~~ diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 62dd22d73c..e6636c1f8b 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1587,39 +1587,6 @@ typedef struct { } HostAddress; #ifdef CONFIG_SOFTMMU -/* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * MemOpIdx oi, uintptr_t ra) - */ -static void * const qemu_ld_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_ldub_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_lduw_mmu, - [MO_32] = helper_be_ldul_mmu, - [MO_64] = helper_be_ldq_mmu, -#else - [MO_16] = helper_le_lduw_mmu, - [MO_32] = helper_le_ldul_mmu, - [MO_64] = helper_le_ldq_mmu, -#endif -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, MemOpIdx oi, - * uintptr_t ra) - */ -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_stw_mmu, - [MO_32] = helper_be_stl_mmu, - [MO_64] = helper_be_stq_mmu, -#else - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = helper_le_stq_mmu, -#endif -}; - static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 1, .tmp = { TCG_REG_TMP } }; diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index df514e56fc..8b0d526659 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1333,43 +1333,6 @@ typedef struct { } HostAddress; #ifdef CONFIG_SOFTMMU -/* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * int mmu_idx, uintptr_t ra) - */ -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_lduw_mmu, - [MO_UL] = helper_be_ldul_mmu, - [MO_UQ] = helper_be_ldq_mmu, - [MO_SW] = helper_be_ldsw_mmu, - [MO_SL] = helper_be_ldul_mmu, -#else - 
[MO_UW] = helper_le_lduw_mmu, - [MO_UL] = helper_le_ldul_mmu, - [MO_UQ] = helper_le_ldq_mmu, - [MO_SW] = helper_le_ldsw_mmu, - [MO_SL] = helper_le_ldul_mmu, -#endif -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, int mmu_idx, uintptr_t ra) - */ -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_stw_mmu, - [MO_32] = helper_be_stl_mmu, - [MO_64] = helper_be_stq_mmu, -#else - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = helper_le_stq_mmu, -#endif -}; - static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { /* We arrive at the slow path via "BLNE", so R14 contains l->raddr. */ diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index b77a4c71a6..b5bb4bf45d 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -1778,32 +1778,6 @@ typedef struct { } HostAddress; #if defined(CONFIG_SOFTMMU) -/* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * int mmu_idx, uintptr_t ra) - */ -static void * const qemu_ld_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, int mmu_idx, uintptr_t ra) - */ -static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, -}; - /* * Because i686 has no register parameters and because x86_64 has xchg * to handle addr/data register overlap, we have placed all input arguments @@ -1844,7 +1818,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) } tcg_out_ld_helper_args(s, l, &ldst_helper_param); - tcg_out_branch(s, 1, qemu_ld_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_branch(s, 1, qemu_ld_helpers[opc & MO_SIZE]); tcg_out_ld_helper_ret(s, l, false, &ldst_helper_param); tcg_out_jmp(s, l->raddr); @@ -1866,7 +1840,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) } tcg_out_st_helper_args(s, l, &ldst_helper_param); - tcg_out_branch(s, 1, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_branch(s, 1, qemu_st_helpers[opc & MO_SIZE]); tcg_out_jmp(s, l->raddr); return true; diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 83fa45c802..d1bc29826f 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -784,29 +784,6 @@ static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val, */ #if defined(CONFIG_SOFTMMU) -/* - * helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * MemOpIdx oi, uintptr_t ra) - */ -static void * const qemu_ld_helpers[4] = { - [MO_8] = helper_ret_ldub_mmu, - [MO_16] = helper_le_lduw_mmu, - [MO_32] = helper_le_ldul_mmu, - [MO_64] = helper_le_ldq_mmu, -}; - -/* - * helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, MemOpIdx oi, - * uintptr_t ra) - */ -static void * const qemu_st_helpers[4] = { - [MO_8] = helper_ret_stb_mmu, - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = 
helper_le_stq_mmu, -}; - static bool tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) { tcg_out_opc_b(s, 0); diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index 5ad9867882..7770ef46bd 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1076,37 +1076,6 @@ static void tcg_out_call(TCGContext *s, const tcg_insn_unit *arg, } #if defined(CONFIG_SOFTMMU) -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_lduw_mmu, - [MO_SW] = helper_be_ldsw_mmu, - [MO_UL] = helper_be_ldul_mmu, - [MO_SL] = helper_be_ldsl_mmu, - [MO_UQ] = helper_be_ldq_mmu, -#else - [MO_UW] = helper_le_lduw_mmu, - [MO_SW] = helper_le_ldsw_mmu, - [MO_UL] = helper_le_ldul_mmu, - [MO_UQ] = helper_le_ldq_mmu, - [MO_SL] = helper_le_ldsl_mmu, -#endif -}; - -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_UB] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_stw_mmu, - [MO_UL] = helper_be_stl_mmu, - [MO_UQ] = helper_be_stq_mmu, -#else - [MO_UW] = helper_le_stw_mmu, - [MO_UL] = helper_le_stl_mmu, - [MO_UQ] = helper_le_stq_mmu, -#endif -}; - /* We have four temps, we might as well expose three of them. */ static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 3, .tmp = { TCG_TMP0, TCG_TMP1, TCG_TMP2 } diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 5a4ec0470a..86343ea410 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -1966,32 +1966,6 @@ static const uint32_t qemu_stx_opc[(MO_SIZE + MO_BSWAP) + 1] = { }; #if defined (CONFIG_SOFTMMU) -/* helper signature: helper_ld_mmu(CPUState *env, target_ulong addr, - * int mmu_idx, uintptr_t ra) - */ -static void * const qemu_ld_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, -}; - -/* helper signature: helper_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, int mmu_idx, uintptr_t ra) - */ -static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, -}; - static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { if (arg < 0) { @@ -2020,7 +1994,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_ld_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, LK, qemu_ld_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, LK, qemu_ld_helpers[opc & MO_SIZE]); tcg_out_ld_helper_ret(s, lb, false, &ldst_helper_param); tcg_out_b(s, 0, lb->raddr); @@ -2036,7 +2010,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_st_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, LK, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, LK, qemu_st_helpers[opc & MO_SIZE]); tcg_out_b(s, 0, lb->raddr); return true; diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index d5239418dd..1f101cbf35 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -861,48 +861,6 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0) */ #if defined(CONFIG_SOFTMMU) -/* helper 
signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr, - * MemOpIdx oi, uintptr_t ra) - */ -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, -#if HOST_BIG_ENDIAN - [MO_UW] = helper_be_lduw_mmu, - [MO_SW] = helper_be_ldsw_mmu, - [MO_UL] = helper_be_ldul_mmu, -#if TCG_TARGET_REG_BITS == 64 - [MO_SL] = helper_be_ldsl_mmu, -#endif - [MO_UQ] = helper_be_ldq_mmu, -#else - [MO_UW] = helper_le_lduw_mmu, - [MO_SW] = helper_le_ldsw_mmu, - [MO_UL] = helper_le_ldul_mmu, -#if TCG_TARGET_REG_BITS == 64 - [MO_SL] = helper_le_ldsl_mmu, -#endif - [MO_UQ] = helper_le_ldq_mmu, -#endif -}; - -/* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr, - * uintxx_t val, MemOpIdx oi, - * uintptr_t ra) - */ -static void * const qemu_st_helpers[MO_SIZE + 1] = { - [MO_8] = helper_ret_stb_mmu, -#if HOST_BIG_ENDIAN - [MO_16] = helper_be_stw_mmu, - [MO_32] = helper_be_stl_mmu, - [MO_64] = helper_be_stq_mmu, -#else - [MO_16] = helper_le_stw_mmu, - [MO_32] = helper_le_stl_mmu, - [MO_64] = helper_le_stq_mmu, -#endif -}; - static void tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) { tcg_out_opc_jump(s, OPC_JAL, TCG_REG_ZERO, 0); diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index aacbaf21d5..968977be98 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -438,33 +438,6 @@ static const uint8_t tcg_cond_to_ltr_cond[] = { [TCG_COND_GEU] = S390_CC_ALWAYS, }; -#ifdef CONFIG_SOFTMMU -static void * const qemu_ld_helpers[(MO_SSIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LESW] = helper_le_ldsw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LESL] = helper_le_ldsl_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BESW] = helper_be_ldsw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BESL] = helper_be_ldsl_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, -}; - -static void * const qemu_st_helpers[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, -}; -#endif - static const tcg_insn_unit *tb_ret_addr; uint64_t s390_facilities[3]; @@ -1721,7 +1694,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_ld_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, qemu_ld_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, qemu_ld_helpers[opc & MO_SIZE]); tcg_out_ld_helper_ret(s, lb, false, &ldst_helper_param); tgen_gotoi(s, S390_CC_ALWAYS, lb->raddr); @@ -1738,7 +1711,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) } tcg_out_st_helper_args(s, lb, &ldst_helper_param); - tcg_out_call_int(s, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]); + tcg_out_call_int(s, qemu_st_helpers[opc & MO_SIZE]); tgen_gotoi(s, S390_CC_ALWAYS, lb->raddr); return true; diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 7e6466d3b6..e997db2645 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -919,33 +919,11 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0) } #ifdef CONFIG_SOFTMMU -static const tcg_insn_unit *qemu_ld_trampoline[(MO_SSIZE | MO_BSWAP) + 1]; -static const tcg_insn_unit *qemu_st_trampoline[(MO_SIZE | MO_BSWAP) + 1]; +static const tcg_insn_unit 
*qemu_ld_trampoline[MO_SSIZE + 1]; +static const tcg_insn_unit *qemu_st_trampoline[MO_SIZE + 1]; static void build_trampolines(TCGContext *s) { - static void * const qemu_ld_helpers[] = { - [MO_UB] = helper_ret_ldub_mmu, - [MO_SB] = helper_ret_ldsb_mmu, - [MO_LEUW] = helper_le_lduw_mmu, - [MO_LESW] = helper_le_ldsw_mmu, - [MO_LEUL] = helper_le_ldul_mmu, - [MO_LEUQ] = helper_le_ldq_mmu, - [MO_BEUW] = helper_be_lduw_mmu, - [MO_BESW] = helper_be_ldsw_mmu, - [MO_BEUL] = helper_be_ldul_mmu, - [MO_BEUQ] = helper_be_ldq_mmu, - }; - static void * const qemu_st_helpers[] = { - [MO_UB] = helper_ret_stb_mmu, - [MO_LEUW] = helper_le_stw_mmu, - [MO_LEUL] = helper_le_stl_mmu, - [MO_LEUQ] = helper_le_stq_mmu, - [MO_BEUW] = helper_be_stw_mmu, - [MO_BEUL] = helper_be_stl_mmu, - [MO_BEUQ] = helper_be_stq_mmu, - }; - int i; for (i = 0; i < ARRAY_SIZE(qemu_ld_helpers); ++i) { @@ -1210,9 +1188,9 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr, /* We use the helpers to extend SB and SW data, leaving the case of SL needing explicit extending below. */ if ((memop & MO_SSIZE) == MO_SL) { - func = qemu_ld_trampoline[memop & (MO_BSWAP | MO_SIZE)]; + func = qemu_ld_trampoline[MO_UL]; } else { - func = qemu_ld_trampoline[memop & (MO_BSWAP | MO_SSIZE)]; + func = qemu_ld_trampoline[memop & MO_SSIZE]; } tcg_debug_assert(func != NULL); tcg_out_call_nodelay(s, func, false); @@ -1353,7 +1331,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr, tcg_out_movext(s, (memop & MO_SIZE) == MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32, TCG_REG_O2, data_type, memop & MO_SIZE, data); - func = qemu_st_trampoline[memop & (MO_BSWAP | MO_SIZE)]; + func = qemu_st_trampoline[memop & MO_SIZE]; tcg_debug_assert(func != NULL); tcg_out_call_nodelay(s, func, false); /* delay slot */ From patchwork Tue Apr 25 19:30:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676819 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2877424wrs; Tue, 25 Apr 2023 12:35:09 -0700 (PDT) X-Google-Smtp-Source: AKy350Zey0izSYCb/iIYgoDIlh3Oi/Bs9j4xHrdl/jGcooVcsa3uIQjWB98eb6F9ukCK6NDkPWgK X-Received: by 2002:a05:622a:170b:b0:3bb:7875:1bd6 with SMTP id h11-20020a05622a170b00b003bb78751bd6mr33627278qtk.21.1682451309519; Tue, 25 Apr 2023 12:35:09 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451309; cv=none; d=google.com; s=arc-20160816; b=My9I7/IbyBSxdbaxPDaEM2MPRw2ht3R0FphXy/zj2plBtwbA+DZfIKrXKazesJSdFM GvNCFI9jCQaFVfuOTCwNMgzGMs8SKPPPSZ8yZHDvVLKvxPDGe3r35eOy9s9Vc3TABvyz 0eVxr5MoheItqQ+pM2BrAde+nR/1aZXC7FE6Ve++sa/YSC7GU0g7bA08bijhcyN8BLB+ lG76uThSw79kCmEb8kS+cj8nn+u6wAdCW0pz3BkVNP2CuuqZoUOQmNkQY6BkEklDssek sX3Tqnpj5P+9dgsm3MwHrNt3qrFpggvyys0yP8m3/qwUZaKlHTbS3M3IhaOlOvegoIy1 tnMA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=yyMjuklubFgJgURYcO2GA40rDkgKk3K+6/9AFMvD4To=; b=sAg7K+0pTrRLV6uqA/M/ZHE6ozetCpDo543qc7QTNXqgsSmG834G0q+W5EIfTq2/OF ABZj8D2VcfRv2whKrAjDayiAjBl44EyHWJp/Df+YHnmbHAijGpgo8jaNN/Bgs392LdDu 7jwXQGk/kQIBJaeYYH/oZ9RLtbERhLIg0SCPYqjCrNtjgz0W7qzWr9K1BR+M/itk/4jT 7KpKVLPalIYcEL/J9BYlcfXMtCKJ4UiKY8A4TjP5ajznVnzDh+GrPR1pMpKbnEDNzhf1 9tVSw5H/LH7TsIeHqyrZ5jlHmvensGtof7OUZSS/0fjcs96mn2SYQdjUPL2vvP1y1cgH lPew== 
12:33:32 -0700 (PDT) Received: from stoup.. ([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:32 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 10/57] accel/tcg: Implement helper_{ld, st}*_mmu for user-only Date: Tue, 25 Apr 2023 20:30:59 +0100 Message-Id: <20230425193146.2106111-11-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::22e; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x22e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01, T_SPF_HELO_TEMPERROR=0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org TCG backends may need to defer to a helper to implement the atomicity required by a given operation. Mirror the interface used in system mode. Signed-off-by: Richard Henderson --- include/tcg/tcg-ldst.h | 6 +- accel/tcg/user-exec.c | 392 ++++++++++++++++++++++++++++------------- tcg/tcg.c | 6 +- 3 files changed, 278 insertions(+), 126 deletions(-) diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 3d897ca942..57fafa14b1 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -25,8 +25,6 @@ #ifndef TCG_LDST_H #define TCG_LDST_H -#ifdef CONFIG_SOFTMMU - /* Value zero-extended to tcg register size. */ tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr); @@ -58,10 +56,10 @@ void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr); -#else +#ifdef CONFIG_USER_ONLY G_NORETURN void helper_unaligned_ld(CPUArchState *env, target_ulong addr); G_NORETURN void helper_unaligned_st(CPUArchState *env, target_ulong addr); -#endif /* CONFIG_SOFTMMU */ +#endif /* CONFIG_USER_ONLY */ #endif /* TCG_LDST_H */ diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 8a29dfd532..b6b054890d 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -889,21 +889,6 @@ void page_reset_target_data(target_ulong start, target_ulong last) { } /* The softmmu versions of these helpers are in cputlb.c. */ -/* - * Verify that we have passed the correct MemOp to the correct function. - * - * We could present one function to target code, and dispatch based on - * the MemOp, but so far we have worked hard to avoid an indirect function - * call along the memory path. 
- */ -static void validate_memop(MemOpIdx oi, MemOp expected) -{ -#ifdef CONFIG_DEBUG_TCG - MemOp have = get_memop(oi) & (MO_SIZE | MO_BSWAP); - assert(have == expected); -#endif -} - void helper_unaligned_ld(CPUArchState *env, target_ulong addr) { cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_LOAD, GETPC()); @@ -914,10 +899,9 @@ void helper_unaligned_st(CPUArchState *env, target_ulong addr) cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, GETPC()); } -static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr, - MemOpIdx oi, uintptr_t ra, MMUAccessType type) +static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra, MMUAccessType type) { - MemOp mop = get_memop(oi); int a_bits = get_alignment_bits(mop); void *ret; @@ -933,100 +917,206 @@ static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr, #include "ldst_atomicity.c.inc" -uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) +static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) { void *haddr; uint8_t ret; - validate_memop(oi, MO_UB); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); + tcg_debug_assert((mop & MO_SIZE) == MO_8); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); ret = ldub_p(haddr); clear_helper_retaddr(); + return ret; +} + +tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + return do_ld1_mmu(env, addr, get_memop(oi), ra); +} + +tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra); +} + +uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return ret; } +static uint16_t do_ld2_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) +{ + void *haddr; + uint16_t ret; + + tcg_debug_assert((mop & MO_SIZE) == MO_16); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_2(env, ra, haddr, mop); + clear_helper_retaddr(); + return ret; +} + +tcg_target_ulong helper_lduw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint16_t ret = do_ld2_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap16(ret); + } + return ret; +} + +tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + int16_t ret = do_ld2_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap16(ret); + } + return ret; +} + uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint16_t ret; - validate_memop(oi, MO_BEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_2(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld2_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_be16(ret); } -uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - uint32_t ret; - - validate_memop(oi, MO_BEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_4(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); - 
qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return cpu_to_be32(ret); -} - -uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - uint64_t ret; - - validate_memop(oi, MO_BEUQ); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_8(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return cpu_to_be64(ret); -} - uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint16_t ret; - validate_memop(oi, MO_LEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_2(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld2_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_le16(ret); } +static uint32_t do_ld4_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) +{ + void *haddr; + uint32_t ret; + + tcg_debug_assert((mop & MO_SIZE) == MO_32); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_4(env, ra, haddr, mop); + clear_helper_retaddr(); + return ret; +} + +tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint32_t ret = do_ld4_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap32(ret); + } + return ret; +} + +tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + int32_t ret = do_ld4_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap32(ret); + } + return ret; +} + +uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint32_t ret; + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld4_he_mmu(env, addr, mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); + return cpu_to_be32(ret); +} + uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint32_t ret; - validate_memop(oi, MO_LEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_4(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld4_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_le32(ret); } +static uint64_t do_ld8_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) +{ + void *haddr; + uint64_t ret; + + tcg_debug_assert((mop & MO_SIZE) == MO_64); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_8(env, ra, haddr, mop); + clear_helper_retaddr(); + return ret; +} + +uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint64_t ret = do_ld8_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap64(ret); + } + return ret; +} + +uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + uint64_t ret; + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld8_he_mmu(env, addr, mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); + return cpu_to_be64(ret); +} + 
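/*
 * Illustration only (not part of the patch above): a minimal, self-contained
 * sketch of the endianness pattern the new user-only helpers rely on.  Values
 * are loaded in *host* order by do_ld8_he_mmu() and byte-swapped exactly when
 * the MemOp's MO_BSWAP bit is set, i.e. when the requested access endianness
 * differs from the host's.  DEMO_MO_BSWAP and demo_memop_to_guest64() are
 * hypothetical names used only for this sketch; the real helpers above also
 * handle alignment, atomicity and plugin callbacks.
 */
#include <stdint.h>

#define DEMO_MO_BSWAP  8u   /* stand-in for QEMU's MO_BSWAP flag */

/* Convert a host-order 64-bit load result into the byte order the guest
 * access requested, mirroring the conditional bswap64() step performed by
 * helper_ldq_mmu() above. */
static inline uint64_t demo_memop_to_guest64(uint64_t host_val, unsigned mop)
{
    return (mop & DEMO_MO_BSWAP) ? __builtin_bswap64(host_val) : host_val;
}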
uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); uint64_t ret; - validate_memop(oi, MO_LEUQ); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - ret = load_atom_8(env, ra, haddr, get_memop(oi)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld8_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); return cpu_to_le64(ret); } @@ -1037,7 +1127,7 @@ Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, void *haddr; Int128 ret; - validate_memop(oi, MO_128 | MO_BE); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); memcpy(&ret, haddr, 16); clear_helper_retaddr(); @@ -1055,7 +1145,7 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, void *haddr; Int128 ret; - validate_memop(oi, MO_128 | MO_LE); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); memcpy(&ret, haddr, 16); clear_helper_retaddr(); @@ -1067,87 +1157,153 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, return ret; } -void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val, - MemOpIdx oi, uintptr_t ra) +static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val, + MemOp mop, uintptr_t ra) { void *haddr; - validate_memop(oi, MO_UB); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); + tcg_debug_assert((mop & MO_SIZE) == MO_8); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); stb_p(haddr, val); clear_helper_retaddr(); +} + +void helper_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + do_st1_mmu(env, addr, val, get_memop(oi), ra); +} + +void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val, + MemOpIdx oi, uintptr_t ra) +{ + do_st1_mmu(env, addr, val, get_memop(oi), ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } +static void do_st2_he_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, + MemOp mop, uintptr_t ra) +{ + void *haddr; + + tcg_debug_assert((mop & MO_SIZE) == MO_16); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_2(env, ra, haddr, mop, val); + clear_helper_retaddr(); +} + +void helper_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap16(val); + } + do_st2_he_mmu(env, addr, val, mop, ra); +} + void cpu_stw_be_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); - validate_memop(oi, MO_BEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_2(env, ra, haddr, get_memop(oi), be16_to_cpu(val)); - clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); -} - -void cpu_stl_be_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - - validate_memop(oi, MO_BEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_4(env, ra, haddr, get_memop(oi), be32_to_cpu(val)); - clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); -} - -void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, - MemOpIdx oi, uintptr_t ra) -{ - void *haddr; - - validate_memop(oi, MO_BEUQ); - haddr = 
cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_8(env, ra, haddr, get_memop(oi), be64_to_cpu(val)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + do_st2_he_mmu(env, addr, be16_to_cpu(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stw_le_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + do_st2_he_mmu(env, addr, le16_to_cpu(val), mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); +} + +static void do_st4_he_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, + MemOp mop, uintptr_t ra) { void *haddr; - validate_memop(oi, MO_LEUW); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_2(env, ra, haddr, get_memop(oi), le16_to_cpu(val)); + tcg_debug_assert((mop & MO_SIZE) == MO_32); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_4(env, ra, haddr, mop, val); clear_helper_retaddr(); +} + +void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap32(val); + } + do_st4_he_mmu(env, addr, val, mop, ra); +} + +void cpu_stl_be_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + do_st4_he_mmu(env, addr, be32_to_cpu(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stl_le_mmu(CPUArchState *env, abi_ptr addr, uint32_t val, MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + do_st4_he_mmu(env, addr, le32_to_cpu(val), mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); +} + +static void do_st8_he_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, + MemOp mop, uintptr_t ra) { void *haddr; - validate_memop(oi, MO_LEUL); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_4(env, ra, haddr, get_memop(oi), le32_to_cpu(val)); + tcg_debug_assert((mop & MO_SIZE) == MO_64); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_8(env, ra, haddr, mop, val); clear_helper_retaddr(); +} + +void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap64(val); + } + do_st8_he_mmu(env, addr, val, mop, ra); +} + +void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + do_st8_he_mmu(env, addr, cpu_to_be64(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); - validate_memop(oi, MO_LEUQ); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); - store_atom_8(env, ra, haddr, get_memop(oi), le64_to_cpu(val)); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + do_st8_he_mmu(env, addr, cpu_to_le64(val), mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } @@ -1156,7 +1312,7 @@ void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, { void *haddr; - validate_memop(oi, MO_128 | MO_BE); + 
tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); if (!HOST_BIG_ENDIAN) { val = bswap128(val); @@ -1171,7 +1327,7 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, { void *haddr; - validate_memop(oi, MO_128 | MO_LE); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); if (HOST_BIG_ENDIAN) { val = bswap128(val); diff --git a/tcg/tcg.c b/tcg/tcg.c index 5dfab02175..d7659fdc67 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -197,8 +197,7 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *l, const TCGLdstHelperParam *p) __attribute__((unused)); -#ifdef CONFIG_SOFTMMU -static void * const qemu_ld_helpers[MO_SSIZE + 1] = { +static void * const qemu_ld_helpers[MO_SSIZE + 1] __attribute__((unused)) = { [MO_UB] = helper_ldub_mmu, [MO_SB] = helper_ldsb_mmu, [MO_UW] = helper_lduw_mmu, @@ -210,13 +209,12 @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] = { #endif }; -static void * const qemu_st_helpers[MO_SIZE + 1] = { +static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = { [MO_8] = helper_stb_mmu, [MO_16] = helper_stw_mmu, [MO_32] = helper_stl_mmu, [MO_64] = helper_stq_mmu, }; -#endif TCGContext tcg_init_ctx; __thread TCGContext *tcg_ctx; From patchwork Tue Apr 25 19:31:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676826 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880338wrs; Tue, 25 Apr 2023 12:42:48 -0700 (PDT) X-Google-Smtp-Source: AKy350ZjpszDJCextCXz30OQqhqVDPGoVz8l08axI87k740AYExFoPm+El3q7s1aC5ZOJKphsnlT X-Received: by 2002:a05:6214:622:b0:5ad:45f2:4307 with SMTP id a2-20020a056214062200b005ad45f24307mr30477263qvx.11.1682451768088; Tue, 25 Apr 2023 12:42:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451768; cv=none; d=google.com; s=arc-20160816; b=mABA5jBTJPOkGYo8P8pYV6IF/SAjyc2vZczpJZSjlFdQ0nVSckAYHrigSMLSfAPPJd sjA1kGbXUfa4L+dxfMsL3nXo/8piNPd/tOGdm3DC5P5AJoQ5NqkPfUiBOyG5viKtnIwv 2pS7keIhc9fwty1eszAWFS10dDq/tMA9SSBd+yCiXAPwwsOdYhG++JCebQlV0/si+OEI +UpReCIWSTCQMRdVsdqNkaLb1XKKXxgTWa7hn+7yYE7JV6+WiqJEXTB8oqKTDUmv9nPz J6+cRfzfl9NOdA+blAsurTxICXa6Xh7OByILphHSHgNYVfDly7/b/lLyZPfPxQ3ue7Tu 6Ncw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=WeZLbg9PNWPpx3WburKOtddAsovGFs3y9uMBDN6DTFA=; b=A3/O/N4yp7K1OyfPajy0SL+LUPlpNWc7MgXCByL+ZotwRU2IlkXadkPR4lWPtD8db6 4aBh1V91xl9qbYqEnUyFPMqVQWLVGPkJ9a5LzWBVuQvvlFm0rYiuKM27WVgfvkGBKX0o K17d42O6AZ0bVVmX7bSboSGCKPzY5Iu7q9Y8zCeF6/tdmIC1pczFqpZc3hZaAUmVtC6U zvDvbxyrwXY7PMD79ETMrDoMs+8vRB/81IFzbR18ohoGXS0gpZIhqKxh3uuXrxH+yxRa q0RAyZ+w0HYaR2+kjzjGvhiT4bQe2BIIjwh1hR3LDGmxiLkSGG5t1LL+4zqRsnbRQ9gr xiZg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=CsdamwVk; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:39 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 11/57] tcg/tci: Use helper_{ld,st}*_mmu for user-only Date: Tue, 25 Apr 2023 20:31:00 +0100 Message-Id: <20230425193146.2106111-12-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::132; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x132.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01, T_SPF_HELO_TEMPERROR=0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org We can now fold these two pieces of code. Signed-off-by: Richard Henderson --- tcg/tci.c | 89 ------------------------------------------------------- 1 file changed, 89 deletions(-) diff --git a/tcg/tci.c b/tcg/tci.c index 5bde2e1f2e..15f2f8c463 100644 --- a/tcg/tci.c +++ b/tcg/tci.c @@ -292,7 +292,6 @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr, MemOp mop = get_memop(oi); uintptr_t ra = (uintptr_t)tb_ptr; -#ifdef CONFIG_SOFTMMU switch (mop & MO_SSIZE) { case MO_UB: return helper_ldub_mmu(env, taddr, oi, ra); @@ -311,58 +310,6 @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr, default: g_assert_not_reached(); } -#else - void *haddr = g2h(env_cpu(env), taddr); - unsigned a_mask = (1u << get_alignment_bits(mop)) - 1; - uint64_t ret; - - set_helper_retaddr(ra); - if (taddr & a_mask) { - helper_unaligned_ld(env, taddr); - } - switch (mop & (MO_BSWAP | MO_SSIZE)) { - case MO_UB: - ret = ldub_p(haddr); - break; - case MO_SB: - ret = ldsb_p(haddr); - break; - case MO_LEUW: - ret = lduw_le_p(haddr); - break; - case MO_LESW: - ret = ldsw_le_p(haddr); - break; - case MO_LEUL: - ret = (uint32_t)ldl_le_p(haddr); - break; - case MO_LESL: - ret = (int32_t)ldl_le_p(haddr); - break; - case MO_LEUQ: - ret = ldq_le_p(haddr); - break; - case MO_BEUW: - ret = lduw_be_p(haddr); - break; - case MO_BESW: - ret = ldsw_be_p(haddr); - break; - case MO_BEUL: - ret = (uint32_t)ldl_be_p(haddr); - break; - case MO_BESL: - ret = (int32_t)ldl_be_p(haddr); - break; - case MO_BEUQ: - ret = ldq_be_p(haddr); - break; - default: - g_assert_not_reached(); - } - clear_helper_retaddr(); - return ret; -#endif } static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, @@ -371,7 +318,6 @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, MemOp mop = get_memop(oi); uintptr_t ra = (uintptr_t)tb_ptr; -#ifdef CONFIG_SOFTMMU switch (mop & MO_SIZE) { case MO_UB: helper_stb_mmu(env, taddr, val, oi, ra); @@ -388,41 +334,6 @@ static void 
tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val, default: g_assert_not_reached(); } -#else - void *haddr = g2h(env_cpu(env), taddr); - unsigned a_mask = (1u << get_alignment_bits(mop)) - 1; - - set_helper_retaddr(ra); - if (taddr & a_mask) { - helper_unaligned_st(env, taddr); - } - switch (mop & (MO_BSWAP | MO_SIZE)) { - case MO_UB: - stb_p(haddr, val); - break; - case MO_LEUW: - stw_le_p(haddr, val); - break; - case MO_LEUL: - stl_le_p(haddr, val); - break; - case MO_LEUQ: - stq_le_p(haddr, val); - break; - case MO_BEUW: - stw_be_p(haddr, val); - break; - case MO_BEUL: - stl_be_p(haddr, val); - break; - case MO_BEUQ: - stq_be_p(haddr, val); - break; - default: - g_assert_not_reached(); - } - clear_helper_retaddr(); -#endif } #if TCG_TARGET_REG_BITS == 64 From patchwork Tue Apr 25 19:31:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676821 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2879009wrs; Tue, 25 Apr 2023 12:39:13 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7gvnnkhqi+JdY3tIJ0C4yg0dXABrtAtnFM+tgfqc31ycIvv7KuLGnrEGQ3ZptgWMaBwzPt X-Received: by 2002:a05:6214:27e8:b0:616:4c4b:c9b9 with SMTP id jt8-20020a05621427e800b006164c4bc9b9mr6335930qvb.37.1682451553158; Tue, 25 Apr 2023 12:39:13 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451553; cv=none; d=google.com; s=arc-20160816; b=CzAZlEmUU27NJ6v543UKwAPFJFmdilCEGURDW2ZFRObgqIBoHMdPEeFyL5spt68xGj 5//PtSIAikqnZf5k2xQfjL9lFg7t37skVqfBkXo7SBMDpl5w2snStcBzkn+CEviWrqE3 acTDgdNRaSMi9RrpYl92dQtDCnlTBB3GQUmga7tUmUL+LGUM8yiVs7LZdQoFqitranXi OTMG4MU9cHsBYMn6a25MkT7noV7fJ+H8idIZsCdxPq6oIMob6N17sHAU7KLzKpuCiDaO pz/Tx+jmRaAMZgFiz+mP4OMrQwd8PyezgCkl0W5bOkJb3Udss8XMzHEEqWIvPMn9e/Ev Dygg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=hjKIW3B+no0/RKzEaGGSmbf1m0iKSreYFZlDK3Kpznc=; b=oMZC/vhOA2nEv3tx06jaK2czX30jaCEWGjijzzLTVR38pvWofSTdH+wANe32QVuKzC oAAHYIdRynn+5Il3K3D/q2JkG5CSH+VO2T6y7upnmanrSbsclfiSf8EQCmKpCn9xhA+n 1p8J+IQsyFIlEf+ERvicFQG4aoaXarWfY1MVIyBRwEc+XcSJfm2hA/05Mvy8g8CZ8Lky w2zsvuovE9OSd4x1y1BAW1UFK7W+tavZY/qcvqQv4Ze4A5BKvMZoVy8A+51Cv/KFzAjt 5T2OeBNI5gCC06PXvlTvipIWX+VLPyqmre+Digp8Ba/MgImBcW6+drEjx1309+cUcJ0x tRvg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=DWkpFVbI; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:49 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 12/57] tcg: Add 128-bit guest memory primitives Date: Tue, 25 Apr 2023 20:31:01 +0100 Message-Id: <20230425193146.2106111-13-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::22b; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x22b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- accel/tcg/tcg-runtime.h | 3 + include/tcg/tcg-ldst.h | 4 + accel/tcg/cputlb.c | 392 +++++++++++++++++++++++++-------- accel/tcg/user-exec.c | 94 ++++++-- tcg/tcg-op.c | 184 +++++++++++----- accel/tcg/ldst_atomicity.c.inc | 189 ++++++++++++++++ 6 files changed, 688 insertions(+), 178 deletions(-) diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h index e141a6ab24..a7a2038901 100644 --- a/accel/tcg/tcg-runtime.h +++ b/accel/tcg/tcg-runtime.h @@ -39,6 +39,9 @@ DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env) DEF_HELPER_FLAGS_3(memset, TCG_CALL_NO_RWG, ptr, ptr, int, ptr) #endif /* IN_HELPER_PROTO */ +DEF_HELPER_FLAGS_3(ld_i128, TCG_CALL_NO_WG, i128, env, tl, i32) +DEF_HELPER_FLAGS_4(st_i128, TCG_CALL_NO_WG, void, env, tl, i128, i32) + DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG, i32, env, tl, i32, i32, i32) DEF_HELPER_FLAGS_5(atomic_cmpxchgw_be, TCG_CALL_NO_WG, diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 57fafa14b1..64f48e6990 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -34,6 +34,8 @@ tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr); uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, MemOpIdx oi, uintptr_t retaddr); +Int128 helper_ld16_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t retaddr); /* Value sign-extended to tcg register size. 
*/ tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr, @@ -55,6 +57,8 @@ void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr); void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr); +void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr); #ifdef CONFIG_USER_ONLY diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 02e2d64d5e..d6cd599f82 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -40,6 +40,7 @@ #include "qemu/plugin-memory.h" #endif #include "tcg/tcg-ldst.h" +#include "exec/helper-proto.h" /* DEBUG defines, enable DEBUG_TLB_LOG to log to the CPU_LOG_MMU target */ /* #define DEBUG_TLB */ @@ -2156,6 +2157,31 @@ static uint64_t do_ld_whole_be8(CPUArchState *env, uintptr_t ra, return (ret_be << (p->size * 8)) | x; } +/** + * do_ld_parts_be16 + * @p: translation parameters + * @ret_be: accumulated data + * + * As do_ld_bytes_beN, but with one atomic load. + * 16 aligned bytes are guaranteed to cover the load. + */ +static Int128 do_ld_whole_be16(CPUArchState *env, uintptr_t ra, + MMULookupPageData *p, uint64_t ret_be) +{ + int o = p->addr & 15; + Int128 x, y = load_atomic16_or_exit(env, ra, p->haddr - o); + int size = p->size; + + if (!HOST_BIG_ENDIAN) { + y = bswap128(y); + } + y = int128_lshift(y, o * 8); + y = int128_urshift(y, (16 - size) * 8); + x = int128_make64(ret_be); + x = int128_lshift(x, size * 8); + return int128_or(x, y); +} + /* * Wrapper for the above. */ @@ -2200,6 +2226,59 @@ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, } } +/* + * Wrapper for the above, for 8 < size < 16. + */ +static Int128 do_ld16_beN(CPUArchState *env, MMULookupPageData *p, + uint64_t a, int mmu_idx, MemOp mop, uintptr_t ra) +{ + int size = p->size; + uint64_t b; + MemOp atmax; + + if (unlikely(p->flags & TLB_MMIO)) { + p->size = size - 8; + a = do_ld_mmio_beN(env, p, a, mmu_idx, MMU_DATA_LOAD, ra); + p->addr += p->size; + p->size = 8; + b = do_ld_mmio_beN(env, p, 0, mmu_idx, MMU_DATA_LOAD, ra); + } else { + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the load as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. 
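For readers following the shift arithmetic in do_ld_whole_be16 above, here is a stand-alone sketch (editor's illustration, not part of the patch; it assumes GCC/Clang's unsigned __int128 and an invented helper name rather than QEMU's Int128/bswap128 API):

    /* Sketch: extract 'size' big-endian bytes that start 'o' bytes into an
     * aligned 16-byte block, appending them below an accumulated value. */
    #include <stdint.h>

    static unsigned __int128 extract_be(const unsigned char *aligned16,
                                        int o, int size, uint64_t ret_be)
    {
        unsigned __int128 y = 0;

        for (int i = 0; i < 16; i++) {
            y = (y << 8) | aligned16[i];   /* whole block, big-endian order */
        }
        y <<= o * 8;                       /* drop the bytes before the start */
        y >>= (16 - size) * 8;             /* keep only the 'size' wanted bytes */
        return ((unsigned __int128)ret_be << (size * 8)) | y;
    }

With o == (p->addr & 15) and size == p->size this mirrors the int128_lshift / int128_urshift sequence in the hunk above.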
+ */ + atmax = mop & MO_ATMAX_MASK; + if (atmax != MO_ATMAX_SIZE) { + atmax >>= MO_ATMAX_SHIFT; + if (unlikely(size >= (1 << atmax))) { + return do_ld_whole_be16(env, ra, p, a); + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: + p->size = size - 8; + a = do_ld_bytes_beN(p, a); + b = ldq_be_p(p->haddr + size - 8); + break; + case MO_ATOM_SUBALIGN: + p->size = size - 8; + a = do_ld_parts_beN(p, a); + p->haddr += size - 8; + p->size = 8; + b = do_ld_parts_beN(p, 0); + break; + default: + g_assert_not_reached(); + } + } + + return int128_make128(b, a); +} + static uint8_t do_ld_1(CPUArchState *env, MMULookupPageData *p, int mmu_idx, MMUAccessType type, uintptr_t ra) { @@ -2388,6 +2467,80 @@ tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr, return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr); } +static Int128 do_ld16_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MMULookupLocals l; + bool crosspage; + uint64_t a, b; + Int128 ret; + int first; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD, &l); + if (likely(!crosspage)) { + /* Perform the load host endian. */ + if (unlikely(l.page[0].flags & TLB_MMIO)) { + QEMU_IOTHREAD_LOCK_GUARD(); + a = io_readx(env, l.page[0].full, l.mmu_idx, addr, + ra, MMU_DATA_LOAD, MO_64); + b = io_readx(env, l.page[0].full, l.mmu_idx, addr + 8, + ra, MMU_DATA_LOAD, MO_64); + ret = int128_make128(HOST_BIG_ENDIAN ? b : a, + HOST_BIG_ENDIAN ? a : b); + } else { + ret = load_atom_16(env, ra, l.page[0].haddr, l.memop); + } + if (l.memop & MO_BSWAP) { + ret = bswap128(ret); + } + return ret; + } + + first = l.page[0].size; + if (first == 8) { + MemOp mop8 = (l.memop & ~MO_SIZE) | MO_64; + + a = do_ld_8(env, &l.page[0], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); + b = do_ld_8(env, &l.page[1], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); + if ((mop8 & MO_BSWAP) == MO_LE) { + ret = int128_make128(a, b); + } else { + ret = int128_make128(b, a); + } + return ret; + } + + if (first < 8) { + a = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, + MMU_DATA_LOAD, l.memop, ra); + ret = do_ld16_beN(env, &l.page[1], a, l.mmu_idx, l.memop, ra); + } else { + ret = do_ld16_beN(env, &l.page[0], 0, l.mmu_idx, l.memop, ra); + b = int128_getlo(ret); + ret = int128_lshift(ret, l.page[1].size * 8); + a = int128_gethi(ret); + b = do_ld_beN(env, &l.page[1], b, l.mmu_idx, + MMU_DATA_LOAD, l.memop, ra); + ret = int128_make128(b, a); + } + if ((l.memop & MO_BSWAP) == MO_LE) { + ret = bswap128(ret); + } + return ret; +} + +Int128 helper_ld16_mmu(CPUArchState *env, target_ulong addr, + uint32_t oi, uintptr_t retaddr) +{ + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); + return do_ld16_mmu(env, addr, oi, retaddr); +} + +Int128 helper_ld_i128(CPUArchState *env, target_ulong addr, uint32_t oi) +{ + return helper_ld16_mmu(env, addr, oi, GETPC()); +} + /* * Load helpers for cpu_ldst.h. */ @@ -2476,59 +2629,23 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - uint64_t h, l; + Int128 ret; - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_BE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_LOAD, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. 
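A concrete crosspage case may help when reading do_ld16_mmu above (editor's worked example; 4 KiB target pages are assumed here, the code itself is page-size agnostic):

    /* Sketch: a 16-byte load at guest address 0x10ff9 with 4 KiB pages.
     *   page[0] covers 0x10ff9..0x10fff  ->  l.page[0].size == 7
     *   page[1] covers 0x11000..0x11008  ->  l.page[1].size == 9
     * first == 7 takes the "first < 8" path: do_ld_beN accumulates the
     * 7 leading bytes into 'a', then do_ld16_beN handles the 9 bytes on
     * the second page (in the simple MO_ATOM_IFALIGN case its first byte
     * joins 'a' and its last 8 bytes become 'b'), and the combined Int128
     * is byte-swapped at the end if the memory op is little-endian. */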
*/ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - h = helper_ldq_mmu(env, addr, new_oi, ra); - l = helper_ldq_mmu(env, addr + 8, new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return int128_make128(l, h); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_BE|MO_128)); + ret = do_ld16_mmu(env, addr, oi, ra); + plugin_load_cb(env, addr, oi); + return ret; } Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - uint64_t h, l; + Int128 ret; - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_LE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_LOAD, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. */ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - l = helper_ldq_mmu(env, addr, new_oi, ra); - h = helper_ldq_mmu(env, addr + 8, new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - return int128_make128(l, h); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_LE|MO_128)); + ret = do_ld16_mmu(env, addr, oi, ra); + plugin_load_cb(env, addr, oi); + return ret; } /* @@ -2609,6 +2726,57 @@ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, } } +/* + * Wrapper for the above, for 8 < size < 16. + */ +static uint64_t do_st16_leN(CPUArchState *env, MMULookupPageData *p, + Int128 val_le, int mmu_idx, + MemOp mop, uintptr_t ra) +{ + int size = p->size; + MemOp atmax; + + if (unlikely(p->flags & TLB_MMIO)) { + p->size = 8; + do_st_mmio_leN(env, p, int128_getlo(val_le), mmu_idx, ra); + p->size = size - 8; + p->addr += 8; + return do_st_mmio_leN(env, p, int128_gethi(val_le), mmu_idx, ra); + } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { + return int128_gethi(val_le) >> ((size - 8) * 8); + } + + switch (mop & MO_ATOM_MASK) { + case MO_ATOM_WITHIN16: + /* + * It is a given that we cross a page and therefore there is no + * atomicity for the store as a whole, but there may be a subobject + * as defined by ATMAX which does not cross a 16-byte boundary. 
+ */ + atmax = mop & MO_ATMAX_MASK; + if (atmax != MO_ATMAX_SIZE) { + atmax >>= MO_ATMAX_SHIFT; + if (unlikely(size >= (1 << atmax))) { + if (HAVE_al16) { + return store_whole_le16(p->haddr, p->size, val_le); + } else { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + } + } + /* fall through */ + case MO_ATOM_IFALIGN: + case MO_ATOM_NONE: + stq_le_p(p->haddr, int128_getlo(val_le)); + return store_bytes_leN(p->haddr + 8, p->size - 8, int128_gethi(val_le)); + case MO_ATOM_SUBALIGN: + store_parts_leN(p->haddr, 8, int128_getlo(val_le)); + return store_parts_leN(p->haddr + 8, p->size - 8, int128_gethi(val_le)); + default: + g_assert_not_reached(); + } +} + static void do_st_1(CPUArchState *env, MMULookupPageData *p, uint8_t val, int mmu_idx, uintptr_t ra) { @@ -2765,6 +2933,80 @@ void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, do_st8_mmu(env, addr, val, oi, retaddr); } +static void do_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t ra) +{ + MMULookupLocals l; + bool crosspage; + uint64_t a, b; + int first; + + crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + if (likely(!crosspage)) { + /* Swap to host endian if necessary, then store. */ + if (l.memop & MO_BSWAP) { + val = bswap128(val); + } + if (unlikely(l.page[0].flags & TLB_MMIO)) { + QEMU_IOTHREAD_LOCK_GUARD(); + if (HOST_BIG_ENDIAN) { + b = int128_getlo(val), a = int128_gethi(val); + } else { + a = int128_getlo(val), b = int128_gethi(val); + } + io_writex(env, l.page[0].full, l.mmu_idx, a, addr, ra, MO_64); + io_writex(env, l.page[0].full, l.mmu_idx, b, addr + 8, ra, MO_64); + } else if (unlikely(l.page[0].flags & TLB_DISCARD_WRITE)) { + /* nothing */ + } else { + store_atom_16(env, ra, l.page[0].haddr, l.memop, val); + } + return; + } + + first = l.page[0].size; + if (first == 8) { + MemOp mop8 = (l.memop & ~(MO_SIZE | MO_BSWAP)) | MO_64; + + if (l.memop & MO_BSWAP) { + val = bswap128(val); + } + if (HOST_BIG_ENDIAN) { + b = int128_getlo(val), a = int128_gethi(val); + } else { + a = int128_getlo(val), b = int128_gethi(val); + } + do_st_8(env, &l.page[0], a, l.mmu_idx, mop8, ra); + do_st_8(env, &l.page[1], b, l.mmu_idx, mop8, ra); + return; + } + + if ((l.memop & MO_BSWAP) != MO_LE) { + val = bswap128(val); + } + if (first < 8) { + do_st_leN(env, &l.page[0], int128_getlo(val), l.mmu_idx, l.memop, ra); + val = int128_urshift(val, first * 8); + do_st16_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); + } else { + b = do_st16_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + do_st_leN(env, &l.page[1], b, l.mmu_idx, l.memop, ra); + } +} + +void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr) +{ + tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); + do_st16_mmu(env, addr, val, oi, retaddr); +} + +void helper_st_i128(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi) +{ + helper_st16_mmu(env, addr, val, oi, GETPC()); +} + /* * Store Helpers for cpu_ldst.h */ @@ -2829,58 +3071,20 @@ void cpu_stq_le_mmu(CPUArchState *env, target_ulong addr, uint64_t val, plugin_store_cb(env, addr, oi); } -void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 val, - MemOpIdx oi, uintptr_t ra) +void cpu_st16_be_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_BE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* 
Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_STORE, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. */ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - helper_stq_mmu(env, addr, int128_gethi(val), new_oi, ra); - helper_stq_mmu(env, addr + 8, int128_getlo(val), new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_BE|MO_128)); + do_st16_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } -void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, - MemOpIdx oi, uintptr_t ra) +void cpu_st16_le_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t retaddr) { - MemOp mop = get_memop(oi); - int mmu_idx = get_mmuidx(oi); - MemOpIdx new_oi; - unsigned a_bits; - - tcg_debug_assert((mop & (MO_BSWAP|MO_SSIZE)) == (MO_LE|MO_128)); - a_bits = get_alignment_bits(mop); - - /* Handle CPU specific unaligned behaviour */ - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_STORE, - mmu_idx, ra); - } - - /* Construct an unaligned 64-bit replacement MemOpIdx. */ - mop = (mop & ~(MO_SIZE | MO_AMASK)) | MO_64 | MO_UNALN; - new_oi = make_memop_idx(mop, mmu_idx); - - helper_stq_mmu(env, addr, int128_getlo(val), new_oi, ra); - helper_stq_mmu(env, addr + 8, int128_gethi(val), new_oi, ra); - - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); + tcg_debug_assert((get_memop(oi) & (MO_BSWAP|MO_SIZE)) == (MO_LE|MO_128)); + do_st16_mmu(env, addr, val, oi, retaddr); + plugin_store_cb(env, addr, oi); } #include "ldst_common.c.inc" diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index b6b054890d..98a24fc308 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -1121,18 +1121,45 @@ uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr, return cpu_to_le64(ret); } -Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, - MemOpIdx oi, uintptr_t ra) +static Int128 do_ld16_he_mmu(CPUArchState *env, abi_ptr addr, + MemOp mop, uintptr_t ra) { void *haddr; Int128 ret; - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - memcpy(&ret, haddr, 16); + tcg_debug_assert((mop & MO_SIZE) == MO_128); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); + ret = load_atom_16(env, ra, haddr, mop); clear_helper_retaddr(); - qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); + return ret; +} +Int128 helper_ld16_mmu(CPUArchState *env, target_ulong addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + Int128 ret = do_ld16_he_mmu(env, addr, mop, ra); + + if (mop & MO_BSWAP) { + ret = bswap128(ret); + } + return ret; +} + +Int128 helper_ld_i128(CPUArchState *env, target_ulong addr, MemOpIdx oi) +{ + return helper_ld16_mmu(env, addr, oi, GETPC()); +} + +Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + Int128 ret; + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); + ret = do_ld16_he_mmu(env, addr, mop, ra); + qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); if (!HOST_BIG_ENDIAN) { ret = bswap128(ret); } @@ -1142,15 +1169,12 @@ Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr, Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, 
uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); Int128 ret; - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD); - memcpy(&ret, haddr, 16); - clear_helper_retaddr(); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); + ret = do_ld16_he_mmu(env, addr, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R); - if (HOST_BIG_ENDIAN) { ret = bswap128(ret); } @@ -1307,33 +1331,57 @@ void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } -void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, - Int128 val, MemOpIdx oi, uintptr_t ra) +static void do_st16_he_mmu(CPUArchState *env, abi_ptr addr, Int128 val, + MemOp mop, uintptr_t ra) { void *haddr; - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); + tcg_debug_assert((mop & MO_SIZE) == MO_128); + haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE); + store_atom_16(env, ra, haddr, mop, val); + clear_helper_retaddr(); +} + +void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, + MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + if (mop & MO_BSWAP) { + val = bswap128(val); + } + do_st16_he_mmu(env, addr, val, mop, ra); +} + +void helper_st_i128(CPUArchState *env, target_ulong addr, + Int128 val, MemOpIdx oi) +{ + helper_st16_mmu(env, addr, val, oi, GETPC()); +} + +void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr, + Int128 val, MemOpIdx oi, uintptr_t ra) +{ + MemOp mop = get_memop(oi); + + tcg_debug_assert((mop & MO_BSWAP) == MO_BE); if (!HOST_BIG_ENDIAN) { val = bswap128(val); } - memcpy(haddr, &val, 16); - clear_helper_retaddr(); + do_st16_he_mmu(env, addr, val, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr, Int128 val, MemOpIdx oi, uintptr_t ra) { - void *haddr; + MemOp mop = get_memop(oi); - tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE)); - haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE); + tcg_debug_assert((mop & MO_BSWAP) == MO_LE); if (HOST_BIG_ENDIAN) { val = bswap128(val); } - memcpy(haddr, &val, 16); - clear_helper_retaddr(); + do_st16_he_mmu(env, addr, val, mop, ra); qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W); } diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 3136cef81a..9101d334b6 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -3119,6 +3119,48 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop) } } +/* + * Return true if @mop, without knowledge of the pointer alignment, + * does not require 16-byte atomicity, and it would be adventagous + * to avoid a call to a helper function. + */ +static bool use_two_i64_for_i128(MemOp mop) +{ +#ifdef CONFIG_SOFTMMU + /* Two softmmu tlb lookups is larger than one function call. */ + return false; +#else + /* + * For user-only, two 64-bit operations may well be smaller than a call. + * Determine if that would be legal for the requested atomicity. + */ + MemOp atom = mop & MO_ATOM_MASK; + MemOp atmax = mop & MO_ATMAX_MASK; + + /* In a serialized context, no atomicity is required. 
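To make the decision in use_two_i64_for_i128 concrete, a few user-only cases with CF_PARALLEL set (editor's illustration, derived from the switch below; softmmu builds simply return false because two TLB lookups outweigh one helper call):

    /* MO_128 | MO_ATOM_NONE                  -> true: no atomicity demanded,
     *                                           two 64-bit accesses suffice.
     * MO_128 | MO_ATOM_IFALIGN, ATMAX = 8    -> true: subobjects of at most
     *                                           8 bytes, i.e. atmax < MO_128.
     * MO_128 | MO_ATOM_IFALIGN, ATMAX = size -> false: full 16-byte atomicity,
     *                                           so the helper must be used.
     * MO_128 | MO_ATOM_WITHIN16              -> false unless ATMAX is MO_8.
     */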
*/ + if (!(tcg_ctx->gen_tb->cflags & CF_PARALLEL)) { + return true; + } + + if (atmax == MO_ATMAX_SIZE) { + atmax = mop & MO_SIZE; + } else { + atmax >>= MO_ATMAX_SHIFT; + } + switch (atom) { + case MO_ATOM_NONE: + return true; + case MO_ATOM_IFALIGN: + case MO_ATOM_SUBALIGN: + return atmax < MO_128; + case MO_ATOM_WITHIN16: + return atmax == MO_8; + default: + g_assert_not_reached(); + } +#endif +} + static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) { MemOp mop_1 = orig, mop_2; @@ -3164,93 +3206,113 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) ret[1] = mop_2; } +#if TARGET_LONG_BITS == 64 +#define tcg_temp_ebb_new tcg_temp_ebb_new_i64 +#else +#define tcg_temp_ebb_new tcg_temp_ebb_new_i32 +#endif + void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOp mop[2]; - TCGv addr_p8; - TCGv_i64 x, y; + MemOpIdx oi = make_memop_idx(memop, idx); - canonicalize_memop_i128_as_i64(mop, memop); + tcg_debug_assert((memop & MO_SIZE) == MO_128); + tcg_debug_assert((memop & MO_SIGN) == 0); tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); addr = plugin_prep_mem_callbacks(addr); - /* TODO: respect atomicity of the operation. */ /* TODO: allow the tcg backend to see the whole operation. */ - /* - * Since there are no global TCGv_i128, there is no visible state - * changed if the second load faults. Load directly into the two - * subwords. - */ - if ((memop & MO_BSWAP) == MO_LE) { - x = TCGV128_LOW(val); - y = TCGV128_HIGH(val); + if (use_two_i64_for_i128(memop)) { + MemOp mop[2]; + TCGv addr_p8; + TCGv_i64 x, y; + + canonicalize_memop_i128_as_i64(mop, memop); + + /* + * Since there are no global TCGv_i128, there is no visible state + * changed if the second load faults. Load directly into the two + * subwords. + */ + if ((memop & MO_BSWAP) == MO_LE) { + x = TCGV128_LOW(val); + y = TCGV128_HIGH(val); + } else { + x = TCGV128_HIGH(val); + y = TCGV128_LOW(val); + } + + gen_ldst_i64(INDEX_op_qemu_ld_i64, x, addr, mop[0], idx); + + if ((mop[0] ^ memop) & MO_BSWAP) { + tcg_gen_bswap64_i64(x, x); + } + + addr_p8 = tcg_temp_ebb_new(); + tcg_gen_addi_tl(addr_p8, addr, 8); + gen_ldst_i64(INDEX_op_qemu_ld_i64, y, addr_p8, mop[1], idx); + tcg_temp_free(addr_p8); + + if ((mop[0] ^ memop) & MO_BSWAP) { + tcg_gen_bswap64_i64(y, y); + } } else { - x = TCGV128_HIGH(val); - y = TCGV128_LOW(val); + gen_helper_ld_i128(val, cpu_env, addr, tcg_constant_i32(oi)); } - gen_ldst_i64(INDEX_op_qemu_ld_i64, x, addr, mop[0], idx); - - if ((mop[0] ^ memop) & MO_BSWAP) { - tcg_gen_bswap64_i64(x, x); - } - - addr_p8 = tcg_temp_new(); - tcg_gen_addi_tl(addr_p8, addr, 8); - gen_ldst_i64(INDEX_op_qemu_ld_i64, y, addr_p8, mop[1], idx); - tcg_temp_free(addr_p8); - - if ((mop[0] ^ memop) & MO_BSWAP) { - tcg_gen_bswap64_i64(y, y); - } - - plugin_gen_mem_callbacks(addr, make_memop_idx(memop, idx), - QEMU_PLUGIN_MEM_R); + plugin_gen_mem_callbacks(addr, oi, QEMU_PLUGIN_MEM_R); } void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOp mop[2]; - TCGv addr_p8; - TCGv_i64 x, y; + MemOpIdx oi = make_memop_idx(memop, idx); - canonicalize_memop_i128_as_i64(mop, memop); + tcg_debug_assert((memop & MO_SIZE) == MO_128); + tcg_debug_assert((memop & MO_SIGN) == 0); tcg_gen_req_mo(TCG_MO_ST_LD | TCG_MO_ST_ST); addr = plugin_prep_mem_callbacks(addr); - /* TODO: respect atomicity of the operation. */ /* TODO: allow the tcg backend to see the whole operation. 
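For context, a hypothetical front-end use of the new 128-bit load (editor's sketch; the variable names and the particular MemOp flags are illustrative only):

    TCGv_i128 t = tcg_temp_new_i128();

    /* 16-byte little-endian load; 16-byte atomicity required only when
     * the guest address turns out to be aligned. */
    tcg_gen_qemu_ld_i128(t, addr, mmu_idx, MO_LE | MO_128 | MO_ATOM_IFALIGN);
    /* ... consume t ... */
    tcg_temp_free_i128(t);

Depending on use_two_i64_for_i128 above, this expands either to two qemu_ld_i64 opcodes or to a single call to the new ld_i128 helper.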
*/ - if ((memop & MO_BSWAP) == MO_LE) { - x = TCGV128_LOW(val); - y = TCGV128_HIGH(val); + if (use_two_i64_for_i128(memop)) { + MemOp mop[2]; + TCGv addr_p8; + TCGv_i64 x, y; + + canonicalize_memop_i128_as_i64(mop, memop); + + if ((memop & MO_BSWAP) == MO_LE) { + x = TCGV128_LOW(val); + y = TCGV128_HIGH(val); + } else { + x = TCGV128_HIGH(val); + y = TCGV128_LOW(val); + } + + addr_p8 = tcg_temp_ebb_new(); + if ((mop[0] ^ memop) & MO_BSWAP) { + TCGv_i64 t = tcg_temp_ebb_new_i64(); + + tcg_gen_bswap64_i64(t, x); + gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr, mop[0], idx); + tcg_gen_bswap64_i64(t, y); + tcg_gen_addi_tl(addr_p8, addr, 8); + gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr_p8, mop[1], idx); + tcg_temp_free_i64(t); + } else { + gen_ldst_i64(INDEX_op_qemu_st_i64, x, addr, mop[0], idx); + tcg_gen_addi_tl(addr_p8, addr, 8); + gen_ldst_i64(INDEX_op_qemu_st_i64, y, addr_p8, mop[1], idx); + } + tcg_temp_free(addr_p8); } else { - x = TCGV128_HIGH(val); - y = TCGV128_LOW(val); + gen_helper_st_i128(cpu_env, addr, val, tcg_constant_i32(oi)); } - addr_p8 = tcg_temp_new(); - if ((mop[0] ^ memop) & MO_BSWAP) { - TCGv_i64 t = tcg_temp_ebb_new_i64(); - - tcg_gen_bswap64_i64(t, x); - gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr, mop[0], idx); - tcg_gen_bswap64_i64(t, y); - tcg_gen_addi_tl(addr_p8, addr, 8); - gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr_p8, mop[1], idx); - tcg_temp_free_i64(t); - } else { - gen_ldst_i64(INDEX_op_qemu_st_i64, x, addr, mop[0], idx); - tcg_gen_addi_tl(addr_p8, addr, 8); - gen_ldst_i64(INDEX_op_qemu_st_i64, y, addr_p8, mop[1], idx); - } - tcg_temp_free(addr_p8); - - plugin_gen_mem_callbacks(addr, make_memop_idx(memop, idx), - QEMU_PLUGIN_MEM_W); + plugin_gen_mem_callbacks(addr, oi, QEMU_PLUGIN_MEM_W); } static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 07abbdee3f..e61121d6bf 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -423,6 +423,21 @@ static inline uint64_t load_atom_8_by_4(void *pv) } } +/** + * load_atom_8_by_8_or_4: + * @pv: host address + * + * Load 8 bytes from aligned @pv, with at least 4-byte atomicity. + */ +static inline uint64_t load_atom_8_by_8_or_4(void *pv) +{ + if (HAVE_al8_fast) { + return load_atomic8(pv); + } else { + return load_atom_8_by_4(pv); + } +} + /** * load_atom_2: * @p: host address @@ -555,6 +570,64 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra, } } +/** + * load_atom_16: + * @p: host address + * @memop: the full memory op + * + * Load 16 bytes from @p, honoring the atomicity of @memop. + */ +static Int128 load_atom_16(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop) +{ + uintptr_t pi = (uintptr_t)pv; + int atmax; + Int128 r; + uint64_t a, b; + + /* + * If the host does not support 8-byte atomics, wait until we have + * examined the atomicity parameters below. 
+ */ + if (HAVE_al16_fast && likely((pi & 15) == 0)) { + return load_atomic16(pv); + } + + atmax = required_atomicity(env, pi, memop); + switch (atmax) { + case MO_8: + memcpy(&r, pv, 16); + return r; + case MO_16: + a = load_atom_8_by_2(pv); + b = load_atom_8_by_2(pv + 8); + break; + case MO_32: + a = load_atom_8_by_4(pv); + b = load_atom_8_by_4(pv + 8); + break; + case MO_64: + if (!HAVE_al8) { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + a = load_atomic8(pv); + b = load_atomic8(pv + 8); + break; + case -MO_64: + if (!HAVE_al8) { + cpu_loop_exit_atomic(env_cpu(env), ra); + } + a = load_atom_extract_al8x2(pv); + b = load_atom_extract_al8x2(pv + 8); + break; + case MO_128: + return load_atomic16_or_exit(env, ra, pv); + default: + g_assert_not_reached(); + } + return int128_make128(HOST_BIG_ENDIAN ? b : a, HOST_BIG_ENDIAN ? a : b); +} + /** * store_atomic2: * @pv: host address @@ -596,6 +669,40 @@ static inline void store_atomic8(void *pv, uint64_t val) qatomic_set__nocheck(p, val); } +/** + * store_atomic16: + * @pv: host address + * @val: value to store + * + * Atomically store 16 aligned bytes to @pv. + */ +static inline void store_atomic16(void *pv, Int128 val) +{ +#if defined(CONFIG_ATOMIC128) + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + Int128Alias new; + + new.s = val; + qatomic_set__nocheck(pu, new.u); +#elif defined(CONFIG_CMPXCHG128) + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + __uint128_t o; + Int128Alias n; + + /* + * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always + * defer to libatomic, so we must use __sync_val_compare_and_swap_16 + * and accept the sequential consistency that comes with it. + */ + n.s = val; + do { + o = *pu; + } while (!__sync_bool_compare_and_swap_16(pu, o, n.u)); +#else + qemu_build_not_reached(); +#endif +} + /** * store_atom_4x2 */ @@ -1039,3 +1146,85 @@ static void store_atom_8(CPUArchState *env, uintptr_t ra, } cpu_loop_exit_atomic(env_cpu(env), ra); } + +/** + * store_atom_16: + * @p: host address + * @val: the value to store + * @memop: the full memory op + * + * Store 16 bytes to @p, honoring the atomicity of @memop. + */ +static void store_atom_16(CPUArchState *env, uintptr_t ra, + void *pv, MemOp memop, Int128 val) +{ + uintptr_t pi = (uintptr_t)pv; + uint64_t a, b; + int atmax; + + if (HAVE_al16_fast && likely((pi & 15) == 0)) { + store_atomic16(pv, val); + return; + } + + atmax = required_atomicity(env, pi, memop); + + a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val); + b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val); + switch (atmax) { + case MO_8: + memcpy(pv, &val, 16); + return; + case MO_16: + store_atom_8_by_2(pv, a); + store_atom_8_by_2(pv + 8, b); + return; + case MO_32: + store_atom_8_by_4(pv, a); + store_atom_8_by_4(pv + 8, b); + return; + case MO_64: + if (HAVE_al8) { + store_atomic8(pv, a); + store_atomic8(pv + 8, b); + return; + } + break; + case -MO_64: + if (HAVE_al16) { + uint64_t val_le; + int s2 = pi & 15; + int s1 = 16 - s2; + + if (HOST_BIG_ENDIAN) { + val = bswap128(val); + } + switch (s2) { + case 1 ... 7: + val_le = store_whole_le16(pv, s1, val); + store_bytes_leN(pv + s1, s2, val_le); + break; + case 9 ... 
15: + store_bytes_leN(pv, s1, int128_getlo(val)); + val = int128_urshift(val, s1 * 8); + store_whole_le16(pv + s1, s2, val); + break; + case 0: /* aligned */ + case 8: /* atmax MO_64 */ + default: + g_assert_not_reached(); + } + return; + } + break; + case MO_128: + if (HAVE_al16) { + store_atomic16(pv, val); + return; + } + break; + default: + g_assert_not_reached(); + } + cpu_loop_exit_atomic(env_cpu(env), ra); +} From patchwork Tue Apr 25 19:31:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676820 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2878071wrs; Tue, 25 Apr 2023 12:36:47 -0700 (PDT) X-Google-Smtp-Source: AKy350b56+wFWx01GGqQVsDFOHIl/yLW/k69kENWXk6e8p81DvIPujSMGHjIbJ6ygnkPFQAVt46d X-Received: by 2002:a05:622a:1810:b0:3eb:1082:ec93 with SMTP id t16-20020a05622a181000b003eb1082ec93mr30993493qtc.41.1682451407611; Tue, 25 Apr 2023 12:36:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451407; cv=none; d=google.com; s=arc-20160816; b=0yp/TRoWa+pHxtAyI1zyJula/2JKsmj0vNF5yUymFK/cbnICFFYL4QOuvbF1hPV3f5 4brRhJoLNtDtpMlCUyPzUEuiMI76h2RFE4/MsR3Aof+/7nFybT9INMnYDhD34QRRmBlj EOScvQ38PHV1ddwIQrKCUI29AtQ7d04iT5JHQMC4c6o77dV8gd00qAcwCGe8PFWbYKoV Xm8KHw4lL7IjRIsbF+kBNl948b88gNNH1TmTlwBTUuNkgLQaSwR2S/ZiD+6MySTweEK0 9aCeW/9lPLNcRQ8M/DB4TSvu07nRL5KsKNHOdBQhXHd4qaoAkBfx31vmupTZrYK4IaNC Kmeg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=TSknonhIYWt5umpLJq71osBBG655js63QRsSvJLPEd4=; b=Tc/w3dF/fqBfrDUt5s+4XavCo21kCO5KXHu+Tdvdfo/wWQFU99WbEkyTXcJofYRmbJ AcWC96LerwSWcgr0XI0poaUN7RNf+ry6EzssdCThP80H0I8t8NCbcOZ8G0qqRlylcH6W iFntll9GuDp50ARoXH0TCN/YCHfpceMiOtnRggarZCiRzUHMcDpfZ7u8lp0piHUyxkj5 0XzhoH+RGBP6zE6nAqWh6+aK3kpmMz7WEQwzDw1f3zg7rxqnGNmX+B4Au7lUhN8RE3GB oITkDfSIHkntkrny+4iGbNJlFxMz5CUk9VZ90ezltOTHZxXE7G9cWWkzcfDk2S4P3Izx kDdQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=qCOExT+1; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:52 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 13/57] meson: Detect atomic128 support with optimization Date: Tue, 25 Apr 2023 20:31:02 +0100 Message-Id: <20230425193146.2106111-14-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::130; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x130.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org There is an edge condition prior to gcc13 for which optimization is required to generate 16-byte atomic sequences. Detect this. Signed-off-by: Richard Henderson --- accel/tcg/ldst_atomicity.c.inc | 38 ++++++++++++++++++------- meson.build | 52 ++++++++++++++++++++++------------ 2 files changed, 61 insertions(+), 29 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index e61121d6bf..c43f101ebe 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -16,6 +16,23 @@ #endif #define HAVE_al8_fast (ATOMIC_REG_SIZE >= 8) +/* + * If __alignof(unsigned __int128) < 16, GCC may refuse to inline atomics + * that are supported by the host, e.g. s390x. We can force the pointer to + * have our known alignment with __builtin_assume_aligned, however prior to + * GCC 13 that was only reliable with optimization enabled. See + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107389 + */ +#if defined(CONFIG_ATOMIC128_OPT) +# if !defined(__OPTIMIZE__) +# define ATTRIBUTE_ATOMIC128_OPT __attribute__((optimize("O1"))) +# endif +# define CONFIG_ATOMIC128 +#endif +#ifndef ATTRIBUTE_ATOMIC128_OPT +# define ATTRIBUTE_ATOMIC128_OPT +#endif + #if defined(CONFIG_ATOMIC128) # define HAVE_al16_fast true #else @@ -136,7 +153,8 @@ static inline uint64_t load_atomic8(void *pv) * * Atomically load 16 aligned bytes from @pv. */ -static inline Int128 load_atomic16(void *pv) +static inline Int128 ATTRIBUTE_ATOMIC128_OPT +load_atomic16(void *pv) { #ifdef CONFIG_ATOMIC128 __uint128_t *p = __builtin_assume_aligned(pv, 16); @@ -340,7 +358,8 @@ static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra, * cross an 16-byte boundary then the access must be 16-byte atomic, * otherwise the access must be 8-byte atomic. 
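The shape of the workaround, reduced to a stand-alone sketch (editor's example; the function name is invented, and GCC's optimize attribute plus unsigned __int128 are assumed):

    /* Force local optimization so the __atomic builtin on an assume-aligned
     * __int128 is expanded inline rather than routed through libatomic
     * (see the GCC PR 107389 reference above). */
    __attribute__((optimize("O1")))
    static inline unsigned __int128 ld16_aligned(void *pv)
    {
        unsigned __int128 *p = __builtin_assume_aligned(pv, 16);
        return __atomic_load_n(p, __ATOMIC_RELAXED);
    }

In the patch the attribute is only applied when meson has set CONFIG_ATOMIC128_OPT and the file is compiled without __OPTIMIZE__, so normal optimized builds are unchanged.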
*/ -static inline uint64_t load_atom_extract_al16_or_al8(void *pv, int s) +static inline uint64_t ATTRIBUTE_ATOMIC128_OPT +load_atom_extract_al16_or_al8(void *pv, int s) { #if defined(CONFIG_ATOMIC128) uintptr_t pi = (uintptr_t)pv; @@ -676,28 +695,24 @@ static inline void store_atomic8(void *pv, uint64_t val) * * Atomically store 16 aligned bytes to @pv. */ -static inline void store_atomic16(void *pv, Int128 val) +static inline void ATTRIBUTE_ATOMIC128_OPT +store_atomic16(void *pv, Int128Alias val) { #if defined(CONFIG_ATOMIC128) __uint128_t *pu = __builtin_assume_aligned(pv, 16); - Int128Alias new; - - new.s = val; - qatomic_set__nocheck(pu, new.u); + qatomic_set__nocheck(pu, val.u); #elif defined(CONFIG_CMPXCHG128) __uint128_t *pu = __builtin_assume_aligned(pv, 16); __uint128_t o; - Int128Alias n; /* * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always * defer to libatomic, so we must use __sync_val_compare_and_swap_16 * and accept the sequential consistency that comes with it. */ - n.s = val; do { o = *pu; - } while (!__sync_bool_compare_and_swap_16(pu, o, n.u)); + } while (!__sync_bool_compare_and_swap_16(pu, o, val.u)); #else qemu_build_not_reached(); #endif @@ -779,7 +794,8 @@ static void store_atom_insert_al8(uint64_t *p, uint64_t val, uint64_t msk) * * Atomically store @val to @p masked by @msk. */ -static void store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) +static void ATTRIBUTE_ATOMIC128_OPT +store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) { #if defined(CONFIG_ATOMIC128) __uint128_t *pu, old, new; diff --git a/meson.build b/meson.build index c44d05a13f..f71653d0c8 100644 --- a/meson.build +++ b/meson.build @@ -2241,23 +2241,21 @@ config_host_data.set('HAVE_BROKEN_SIZE_MAX', not cc.compiles(''' return printf("%zu", SIZE_MAX); }''', args: ['-Werror'])) -atomic_test = ''' +# See if 64-bit atomic operations are supported. +# Note that without __atomic builtins, we can only +# assume atomic loads/stores max at pointer size. +config_host_data.set('CONFIG_ATOMIC64', cc.links(''' #include int main(void) { - @0@ x = 0, y = 0; + uint64_t x = 0, y = 0; y = __atomic_load_n(&x, __ATOMIC_RELAXED); __atomic_store_n(&x, y, __ATOMIC_RELAXED); __atomic_compare_exchange_n(&x, &y, x, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED); __atomic_exchange_n(&x, y, __ATOMIC_RELAXED); __atomic_fetch_add(&x, y, __ATOMIC_RELAXED); return 0; - }''' - -# See if 64-bit atomic operations are supported. -# Note that without __atomic builtins, we can only -# assume atomic loads/stores max at pointer size. -config_host_data.set('CONFIG_ATOMIC64', cc.links(atomic_test.format('uint64_t'))) + }''')) has_int128 = cc.links(''' __int128_t a; @@ -2275,21 +2273,39 @@ if has_int128 # "do we have 128-bit atomics which are handled inline and specifically not # via libatomic". The reason we can't use libatomic is documented in the # comment starting "GCC is a house divided" in include/qemu/atomic128.h. - has_atomic128 = cc.links(atomic_test.format('unsigned __int128')) + # We only care about these operations on 16-byte aligned pointers, so + # force 16-byte alignment of the pointer, which may be greater than + # __alignof(unsigned __int128) for the host. 
+ atomic_test_128 = ''' + int main(int ac, char **av) { + unsigned __int128 *p = __builtin_assume_aligned(av[ac - 1], sizeof(16)); + p[1] = __atomic_load_n(&p[0], __ATOMIC_RELAXED); + __atomic_store_n(&p[2], p[3], __ATOMIC_RELAXED); + __atomic_compare_exchange_n(&p[4], &p[5], p[6], 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED); + return 0; + }''' + has_atomic128 = cc.links(atomic_test_128) config_host_data.set('CONFIG_ATOMIC128', has_atomic128) if not has_atomic128 - has_cmpxchg128 = cc.links(''' - int main(void) - { - unsigned __int128 x = 0, y = 0; - __sync_val_compare_and_swap_16(&x, y, x); - return 0; - } - ''') + # Even with __builtin_assume_aligned, the above test may have failed + # without optimization enabled. Try again with optimizations locally + # enabled for the function. See + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107389 + has_atomic128_opt = cc.links('__attribute__((optimize("O1")))' + atomic_test_128) + config_host_data.set('CONFIG_ATOMIC128_OPT', has_atomic128_opt) - config_host_data.set('CONFIG_CMPXCHG128', has_cmpxchg128) + if not has_atomic128_opt + config_host_data.set('CONFIG_CMPXCHG128', cc.links(''' + int main(void) + { + unsigned __int128 x = 0, y = 0; + __sync_val_compare_and_swap_16(&x, y, x); + return 0; + } + ''')) + endif endif endif From patchwork Tue Apr 25 19:31:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676833 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880712wrs; Tue, 25 Apr 2023 12:43:44 -0700 (PDT) X-Google-Smtp-Source: AKy350ZsqCCO+WvEa6wO84d4zfrsLBKm3NoM+fs/czCfEntBzJePay7nDdthsRE9HhFSVX86eBQ1 X-Received: by 2002:ac8:7fc2:0:b0:3ef:5993:42e0 with SMTP id b2-20020ac87fc2000000b003ef599342e0mr28637184qtk.10.1682451823879; Tue, 25 Apr 2023 12:43:43 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451823; cv=none; d=google.com; s=arc-20160816; b=AYtEIcOj5Uch6SzaZIwx6KXEZe0pCbwwBpMFRyny5RjH+haT+pPBL4Qagt9F+QObgy F8h2/XTn5S88fFzFfvQcK5RPH7VDHNQhvUA15i1eh5eyv4EskgXH8mr4zir4Z/nD+DLY a5PTy5IP6bTDNDFVaAig3w9OpmUumdm1W5Rh3axnIkSoFUvNiwIv7lGxSlL/LZQljSlH ZmcIOyXFc5fzT4KgZzO07P8vO01jPAyy0UB+rqmWohlcjDG+JlIai5VFSgmJB8LU++HL pm+6PynBKyuJO3qHSpSuwNbMI4jooRtkKdLrFny3tObWRFH4Xvf+PgD0QMGmNXxOgasa zKEA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=uLKGPSGMAiW9EHCX31qD8RhUBJV5TJE49TIwgfGSsuA=; b=mkNPkdZ10y8PaIV88UpZ2EybF/Rs/AHA4vqWDIRbqu4oAg9Godpvrb9JjzdULVmcf8 3PEzr9+6nlznGUDrZMaqZYEyk+gZqMHcVnLPYLlXTdk/4FKL2Ao2QXMgp92lbJVbp/Z2 lJODJwMVSUSUQ3QYem8noAfx1nbul5d+PV5uh2l5UMX0KLx269jyJ2ec2z8FY6y5kV1a l7ZG6mZ+CGDe1tC0p92nj/dAwEbDbvb88RPIM1ZbcTZTrUZVgMcKIunlSbrArO+qJesY eIiRUn3mLfD4p5mVlRW5/2+AH2fTzm6Nm3CS2z9tkhhZQ+WGo5LSriHOCoM38pgj2TwQ cHrg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=einTuaOs; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:54 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 14/57] tcg/i386: Add have_atomic16 Date: Tue, 25 Apr 2023 20:31:03 +0100 Message-Id: <20230425193146.2106111-15-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::136; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x136.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Notice when Intel or AMD have guaranteed that vmovdqa is atomic. The new variable will also be used in generated code. Signed-off-by: Richard Henderson --- include/qemu/cpuid.h | 18 ++++++++++++++++++ tcg/i386/tcg-target.h | 1 + tcg/i386/tcg-target.c.inc | 27 +++++++++++++++++++++++++++ 3 files changed, 46 insertions(+) diff --git a/include/qemu/cpuid.h b/include/qemu/cpuid.h index 1451e8ef2f..35325f1995 100644 --- a/include/qemu/cpuid.h +++ b/include/qemu/cpuid.h @@ -71,6 +71,24 @@ #define bit_LZCNT (1 << 5) #endif +/* + * Signatures for different CPU implementations as returned from Leaf 0. 
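The constants that follow are simply the vendor id strings viewed as little-endian 32-bit words; a quick check of the packing (editor's note, values as in the hunk below):

    /* CPUID leaf 0 returns the vendor id as 12 ASCII bytes in EBX, EDX, ECX.
     * "GenuineIntel" arrives as EBX="Genu", EDX="ineI", ECX="ntel", so e.g.
     *   'G' | 'e' << 8 | 'n' << 16 | 'u' << 24 == 0x756e6547   (ebx)
     *   'n' | 't' << 8 | 'e' << 16 | 'l' << 24 == 0x6c65746e   (ecx)
     * which is why tcg_target_init below can distinguish Intel and AMD by
     * comparing only the ECX word from __cpuid(0, ...). */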
+ */ + +#ifndef signature_INTEL_ecx +/* "Genu" "ineI" "ntel" */ +#define signature_INTEL_ebx 0x756e6547 +#define signature_INTEL_edx 0x49656e69 +#define signature_INTEL_ecx 0x6c65746e +#endif + +#ifndef signature_AMD_ecx +/* "Auth" "enti" "cAMD" */ +#define signature_AMD_ebx 0x68747541 +#define signature_AMD_edx 0x69746e65 +#define signature_AMD_ecx 0x444d4163 +#endif + static inline unsigned xgetbv_low(unsigned c) { unsigned a, d; diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index d4f2a6f8c2..0421776cb8 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -120,6 +120,7 @@ extern bool have_avx512dq; extern bool have_avx512vbmi2; extern bool have_avx512vl; extern bool have_movbe; +extern bool have_atomic16; /* optional instructions */ #define TCG_TARGET_HAS_div2_i32 1 diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index b5bb4bf45d..696c656f3b 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -185,6 +185,7 @@ bool have_avx512dq; bool have_avx512vbmi2; bool have_avx512vl; bool have_movbe; +bool have_atomic16; #ifdef CONFIG_CPUID_H static bool have_bmi2; @@ -4026,6 +4027,32 @@ static void tcg_target_init(TCGContext *s) have_avx512dq = (b7 & bit_AVX512DQ) != 0; have_avx512vbmi2 = (c7 & bit_AVX512VBMI2) != 0; } + + /* + * The Intel SDM has added: + * Processors that enumerate support for Intel® AVX + * (by setting the feature flag CPUID.01H:ECX.AVX[bit 28]) + * guarantee that the 16-byte memory operations performed + * by the following instructions will always be carried + * out atomically: + * - MOVAPD, MOVAPS, and MOVDQA. + * - VMOVAPD, VMOVAPS, and VMOVDQA when encoded with VEX.128. + * - VMOVAPD, VMOVAPS, VMOVDQA32, and VMOVDQA64 when encoded + * with EVEX.128 and k0 (masking disabled). + * Note that these instructions require the linear addresses + * of their memory operands to be 16-byte aligned. + * + * AMD has provided an even stronger guarantee that processors + * with AVX provide 16-byte atomicity for all cachable, + * naturally aligned single loads and stores, e.g. MOVDQU. 
+ * + * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104688 + */ + if (have_avx1) { + __cpuid(0, a, b, c, d); + have_atomic16 = (c == signature_INTEL_ecx || + c == signature_AMD_ecx); + } } } } From patchwork Tue Apr 25 19:31:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676859 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883182wrs; Tue, 25 Apr 2023 12:50:21 -0700 (PDT) X-Google-Smtp-Source: AKy350aJYU6sw8pkMaFVYdhDfSXRPw1LJSQ5NdAOZStyXn3h0vFaSO3EgQBTgUVreobe7JI8qNqf X-Received: by 2002:ac8:5892:0:b0:3e3:9117:66e8 with SMTP id t18-20020ac85892000000b003e3911766e8mr31812913qta.35.1682452221568; Tue, 25 Apr 2023 12:50:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452221; cv=none; d=google.com; s=arc-20160816; b=GEJbIiu0m18vuaaRtm5OQCFE3/LYmzzhly7NyYelMInyNU6JyE6R595M5V1yfPntVf Y0nd0VtM/3nU/Ht5VU7KYY8hEd7QDzQUrmlBVniUCE3bsbfUzbB6IWlmSFRKuqiPzl3+ fpZRHpeeVZ7jZi4Pqqj5dE3lQht/d1YRqJ7io4QUEv/3bMkr8vZHQ7APOjRqrMuu7ZVf +G5eagITEjwjcaDf7ctTJ20kfEI6R/VXpek5hw0E5siUuAvxQvnIBV47DZI8ycUb7OBP P6z5DCQUpM8xU3mK7I1/awKN5B++FRtXxWZQbSc1j4Sy6tLyOgtPTdx0l+irdgQh6Rq0 yC2Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=RsDonkABHa22YkU168z0UM28Gx8szOriNnOnZlCHhck=; b=JrjUSlhjvYJVuBV4sNffdw8/YXqQZ+jb0unwG58LW//nLuLXaNvMtAA0vmPD86kXj4 p51KdGBP9qqfHlCOJFxXWY4euhPno1utO3AgaVLipf/mzD3n5chqJcha9bHmo0RAB5k3 P+R3eoSqoGFcSjtMOnsZGRVvcs2RNB6J2IJdPMy8LoaEIYj7lp9pArotrFaPMw9P1n73 XPk9m7DgmqFCLdWaC/j/uIAvkemHFMUt8AjSO0sj0IBsbFfPnpymLybIRuMLE923fvmh oMXdmQG1lhOgoW/v0JucUqKvrGJofwmwAeyb0qE5wNzZsOZlFTEy1/xO1lXUu45c2CiM BbvA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=ovwdcRwZ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
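The have_atomic16 detection in the patch above ("tcg/i386: Add have_atomic16") boils down to a vendor check on CPUID leaf 0 plus the AVX feature bit. A minimal stand-alone sketch of that check, not QEMU code: it assumes GCC/Clang's <cpuid.h> and omits the OSXSAVE/XGETBV test that tcg_target_init performs before setting have_avx1; function names here are illustrative.

    /* detect_atomic16.c - sketch only; build with: cc -O2 detect_atomic16.c */
    #include <cpuid.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* CPUID leaf 0 ECX words, "ntel" and "cAMD", matching the new defines. */
    #ifndef signature_INTEL_ecx
    #define signature_INTEL_ecx 0x6c65746e
    #endif
    #ifndef signature_AMD_ecx
    #define signature_AMD_ecx   0x444d4163
    #endif

    static bool detect_atomic16(void)
    {
        unsigned a, b, c, d;

        /* Leaf 1, ECX bit 28: AVX.  (QEMU also checks OSXSAVE/XGETBV.) */
        if (!__get_cpuid(1, &a, &b, &c, &d) || !(c & bit_AVX)) {
            return false;
        }
        /* Leaf 0: vendor signature; only Intel and AMD document the
         * 16-byte atomicity guarantee for the aligned MOV instructions. */
        __get_cpuid(0, &a, &b, &c, &d);
        return c == signature_INTEL_ecx || c == signature_AMD_ecx;
    }

    int main(void)
    {
        printf("16-byte vmovdqa atomicity assumed: %s\n",
               detect_atomic16() ? "yes" : "no");
        return 0;
    }

On anything other than a genuine Intel or AMD part the sketch stays conservative and reports no guarantee, which mirrors the behaviour of the patch.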
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:33:57 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 15/57] accel/tcg: Use have_atomic16 in ldst_atomicity.c.inc Date: Tue, 25 Apr 2023 20:31:04 +0100 Message-Id: <20230425193146.2106111-16-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::230; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x230.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Hosts using Intel and AMD AVX cpus are quite common. Add fast paths through ldst_atomicity using this. Signed-off-by: Richard Henderson --- accel/tcg/ldst_atomicity.c.inc | 76 +++++++++++++++++++++++++++------- 1 file changed, 60 insertions(+), 16 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index c43f101ebe..874fde6937 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -35,6 +35,14 @@ #if defined(CONFIG_ATOMIC128) # define HAVE_al16_fast true +#elif defined(CONFIG_TCG_INTERPRETER) +/* + * FIXME: host specific detection for this is in tcg/$host/, + * but we're using tcg/tci/ instead. + */ +# define HAVE_al16_fast false +#elif defined(__x86_64__) +# define HAVE_al16_fast likely(have_atomic16) #else # define HAVE_al16_fast false #endif @@ -162,6 +170,12 @@ load_atomic16(void *pv) r.u = qatomic_read__nocheck(p); return r.s; +#elif defined(__x86_64__) + Int128Alias r; + + /* Via HAVE_al16_fast, have_atomic16 is true. */ + asm("vmovdqa %1, %0" : "=x" (r.u) : "m" (*(Int128 *)pv)); + return r.s; #else qemu_build_not_reached(); #endif @@ -383,6 +397,24 @@ load_atom_extract_al16_or_al8(void *pv, int s) r = qatomic_read__nocheck(p16); } return r >> shr; +#elif defined(__x86_64__) + uintptr_t pi = (uintptr_t)pv; + int shr = (pi & 7) * 8; + uint64_t a, b; + + /* Via HAVE_al16_fast, have_atomic16 is true. 
*/ + pv = (void *)(pi & ~7); + if (pi & 8) { + uint64_t *p8 = __builtin_assume_aligned(pv, 16, 8); + a = qatomic_read__nocheck(p8); + b = qatomic_read__nocheck(p8 + 1); + } else { + asm("vmovdqa %2, %0\n\tvpextrq $1, %0, %1" + : "=x"(a), "=r"(b) : "m" (*(__uint128_t *)pv)); + } + asm("shrd %b2, %1, %0" : "+r"(a) : "r"(b), "c"(shr)); + + return a; #else qemu_build_not_reached(); #endif @@ -699,23 +731,35 @@ static inline void ATTRIBUTE_ATOMIC128_OPT store_atomic16(void *pv, Int128Alias val) { #if defined(CONFIG_ATOMIC128) - __uint128_t *pu = __builtin_assume_aligned(pv, 16); - qatomic_set__nocheck(pu, val.u); -#elif defined(CONFIG_CMPXCHG128) - __uint128_t *pu = __builtin_assume_aligned(pv, 16); - __uint128_t o; - - /* - * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always - * defer to libatomic, so we must use __sync_val_compare_and_swap_16 - * and accept the sequential consistency that comes with it. - */ - do { - o = *pu; - } while (!__sync_bool_compare_and_swap_16(pu, o, val.u)); -#else - qemu_build_not_reached(); + { + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + qatomic_set__nocheck(pu, val.u); + return; + } #endif +#if defined(__x86_64__) + if (HAVE_al16_fast) { + asm("vmovdqa %1, %0" : "=m"(*(__uint128_t *)pv) : "x" (val.u)); + return; + } +#endif +#if defined(CONFIG_CMPXCHG128) + { + __uint128_t *pu = __builtin_assume_aligned(pv, 16); + __uint128_t o; + + /* + * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always + * defer to libatomic, so we must use __sync_val_compare_and_swap_16 + * and accept the sequential consistency that comes with it. + */ + do { + o = *pu; + } while (!__sync_bool_compare_and_swap_16(pu, o, val.u)); + return; + } +#endif + qemu_build_not_reached(); } /** From patchwork Tue Apr 25 19:31:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676848 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2882025wrs; Tue, 25 Apr 2023 12:47:07 -0700 (PDT) X-Google-Smtp-Source: AKy350YxC+4RERHhdBbC9od0Z+tyCwsQ61vJZqgxxUuQ/T9f+Srw9GABFzzHxeRT3WrkTkrzqRFO X-Received: by 2002:a05:6214:1c43:b0:5e0:ad80:6846 with SMTP id if3-20020a0562141c4300b005e0ad806846mr36781588qvb.0.1682452027693; Tue, 25 Apr 2023 12:47:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452027; cv=none; d=google.com; s=arc-20160816; b=ADRItsXQB12R6NXWb3w7VFM3WuSeUCSMhCEMZ+B5CRso3/e+u6fVSRiR49cdIsZiOH 7qDf8Qg97HhaxWATJjFjpSylW7JnMLFjQXF/jv1Fv/aIPEauLxXmSBZ8KW6XPd2Hyt0f TIK6RW3PbLOCHRrrDOTHQP5G7M4A9cb3NCrKk9MfmptfQpsHT3hIlKh070FDpKCgvPgQ oUdpDG54MbFaxpixRoXCzuO5OX6MKcpvFGUs5vtj0HvRyPuU2veqgUCMapLh+mzno4sK Na/obKHL2rKGwHzrlKQnzaqgfqa8p+5H8Q8wplR3yciecprGDFUpdgJYhcwJlr8ZLqw+ pffQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=i9vVtuysfBMlh/Al6RBdJn0LgfXo4dvluqytL552K7Y=; b=oOvjNi0ImztteQmbZqclzj0pOL3pvRxZFTIlsgXl/hbgBJSB98RycLarToEzUNWJ6Y 4ilTnjyzMoNjU7ghrwxzgBXSIUUs1ABlCEhma2/+Ki3BRLrA8oMY10Ga6QlA0hCfhChy w2vmUNr1N0MfDPp6CRHtBfo6xnQWtaHIznovwsg2KJjF5vZwiXNI4++YL+VisXZMTi2b ll7Sj5S8d5rClwONb02G9W33SIBzcKuNzLkWZYVPhie1SaLCGe0QljAOB/im7boNxJeN 8B0DUPp+ykPkQNAesk+bvZEQtr0OKinqbI3HwdGH1mkCeaJGJuHY5wAeQ9Gd5v2nwag7 e7yg== ARC-Authentication-Results: i=1; mx.google.com; 
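For reference, the x86-64 fast path added in "accel/tcg: Use have_atomic16 in ldst_atomicity.c.inc" reduces to the stand-alone sketch below. It assumes a host on which the previous patch's have_atomic16 check passed, and the pointer must be 16-byte aligned or VMOVDQA faults; the helper names are illustrative, not QEMU's.

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 u128;

    /* 16-byte single-copy-atomic load/store via VMOVDQA, as in the new asm. */
    static inline u128 load_atomic16_avx(const void *pv)
    {
        u128 r;
        asm("vmovdqa %1, %0" : "=x"(r) : "m"(*(const u128 *)pv));
        return r;
    }

    static inline void store_atomic16_avx(void *pv, u128 val)
    {
        asm("vmovdqa %1, %0" : "=m"(*(u128 *)pv) : "x"(val));
    }

    int main(void)
    {
        _Alignas(16) u128 slot = 0;
        u128 v;

        store_atomic16_avx(&slot, ((u128)0x0123456789abcdefULL << 64) | 0x42);
        v = load_atomic16_avx(&slot);
        printf("hi=%016llx lo=%016llx\n",
               (unsigned long long)(v >> 64), (unsigned long long)v);
        return 0;
    }

The aligned VMOVDQA form is exactly what the Intel SDM text and the AMD guarantee quoted in the previous patch make atomic, which is why no lock prefix or cmpxchg16b loop is needed on this path.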
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.33.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:00 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 16/57] accel/tcg: Add aarch64 specific support in ldst_atomicity Date: Tue, 25 Apr 2023 20:31:05 +0100 Message-Id: <20230425193146.2106111-17-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::232; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x232.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org We have code in atomic128.h noting that through GCC 8, there was no support for atomic operations on __uint128. This has been fixed in GCC 10. But we can still improve over any basic compare-and-swap loop using the ldxp/stxp instructions. Signed-off-by: Richard Henderson --- accel/tcg/ldst_atomicity.c.inc | 60 ++++++++++++++++++++++++++++++++-- 1 file changed, 57 insertions(+), 3 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 874fde6937..cf4a0e4a6e 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -247,7 +247,22 @@ static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv) * In system mode all guest pages are writable, and for user-only * we have just checked writability. Try cmpxchg. */ -#if defined(CONFIG_CMPXCHG128) +#if defined(__aarch64__) + /* We can do better than cmpxchg for AArch64. */ + { + uint64_t l, h; + uint32_t fail; + + /* The load must be paired with the store to guarantee not tearing. */ + asm("0: ldxp %0, %1, %3\n\t" + "stxp %w2, %0, %1, %3\n\t" + "cbnz %w2, 0b" + : "=&r"(l), "=&r"(h), "=&r"(fail) : "Q"(*p)); + + qemu_build_assert(!HOST_BIG_ENDIAN); + return int128_make128(l, h); + } +#elif defined(CONFIG_CMPXCHG128) /* Swap 0 with 0, with the side-effect of returning the old value. */ { Int128Alias r; @@ -743,7 +758,22 @@ store_atomic16(void *pv, Int128Alias val) return; } #endif -#if defined(CONFIG_CMPXCHG128) +#if defined(__aarch64__) + /* We can do better than cmpxchg for AArch64. 
*/ + { + uint64_t l, h, t; + + qemu_build_assert(!HOST_BIG_ENDIAN); + l = int128_getlo(val.s); + h = int128_gethi(val.s); + + asm("0: ldxp %0, xzr, %1\n\t" + "stxp %w0, %2, %3, %1\n\t" + "cbnz %w0, 0b" + : "=&r"(t), "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + return; + } +#elif defined(CONFIG_CMPXCHG128) { __uint128_t *pu = __builtin_assume_aligned(pv, 16); __uint128_t o; @@ -841,7 +871,31 @@ static void store_atom_insert_al8(uint64_t *p, uint64_t val, uint64_t msk) static void ATTRIBUTE_ATOMIC128_OPT store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk) { -#if defined(CONFIG_ATOMIC128) +#if defined(__aarch64__) + /* + * GCC only implements __sync* primitives for int128 on aarch64. + * We can do better without the barriers, and integrating the + * arithmetic into the load-exclusive/store-conditional pair. + */ + uint64_t tl, th, vl, vh, ml, mh; + uint32_t fail; + + qemu_build_assert(!HOST_BIG_ENDIAN); + vl = int128_getlo(val.s); + vh = int128_gethi(val.s); + ml = int128_getlo(msk.s); + mh = int128_gethi(msk.s); + + asm("0: ldxp %[l], %[h], %[mem]\n\t" + "bic %[l], %[l], %[ml]\n\t" + "bic %[h], %[h], %[mh]\n\t" + "orr %[l], %[l], %[vl]\n\t" + "orr %[h], %[h], %[vh]\n\t" + "stxp %w[f], %[l], %[h], %[mem]\n\t" + "cbnz %w[f], 0b\n" + : [mem] "+Q"(*ps), [f] "=&r"(fail), [l] "=&r"(tl), [h] "=&r"(th) + : [vl] "r"(vl), [vh] "r"(vh), [ml] "r"(ml), [mh] "r"(mh)); +#elif defined(CONFIG_ATOMIC128) __uint128_t *pu, old, new; /* With CONFIG_ATOMIC128, we can avoid the memory barriers. */ From patchwork Tue Apr 25 19:31:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676872 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2884304wrs; Tue, 25 Apr 2023 12:53:41 -0700 (PDT) X-Google-Smtp-Source: AKy350bsptQeVZPC54oExCoQpPGa+0N+HktTxDU7cH8aw7Xbjn8OYh7FG4nsDdyjl6c7gWydbKxn X-Received: by 2002:ac8:5c14:0:b0:3b9:e0b2:9a49 with SMTP id i20-20020ac85c14000000b003b9e0b29a49mr30817786qti.60.1682452420769; Tue, 25 Apr 2023 12:53:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452420; cv=none; d=google.com; s=arc-20160816; b=df1zUJwEhlGteEpBcquOQS6swB4+SQ1kjri74MH3NKD8tLdZPF6q+b+L6ZtyTjegGn 2XGz5FnkBFWkQdL0eWOuaGnONQCtc1UhL2rlJVatPxnvcSAtd9lKJMrvfFHzyO6aWnx/ NAmyZLd5qYm0TbQ1vjTFsYefbbiGxviW6jMEwxICcPkn5pDk8HNM/6lWVMQOg1m+meiL co1XChNa6cgUPectUAXUcODK78gf5HHcPjC2gDV/MvLO0qzCixODJajBW6X8JBdbNiKw CCVNWuDreyRo7wNoBanBx613lJJTiftLGh9axNNxitYo/f2x+FV3yZKMPaIvm+52rAKB TCBw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=fOANmGOo5iyv6fa4+ddDWc+mmS1WoR4SXYoPSJOnShA=; b=lBqN8/Vu07qSGSz4hBwlijfDvuQuDA0jGYxM7JUNd5dfITFDQuD6GtEcCPnT8dYc75 6pe79NdlQ7cmBylgw1B5INNWHNVOhEz4eJtCr8XGuuJOJpZcWwIegcmRLi7a5qL8KAdJ Vjqv0Du/8im9GBE5LFmkj/f302JEovM5Ff0YIcCQ6Dg4Gyev3++Fin0M9OHFgodZrxst t4piGguE5pJ8m+c9qxLHnKdfMEC7qLm/p79OlTkZWEyQHuVNTFjAWFJrKW7saFhQB8Gd b+IUJ3jk+YP1tWDodJ0wV1pIzHFvjxTuxywd82TzhTrUNcV9CZVSUfENHqMeTdtOZQVr m06Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="R/zlNyC8"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) 
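Outside of QEMU, the store path added in "accel/tcg: Add aarch64 specific support in ldst_atomicity" can be sketched as the LDXP/STXP retry loop below (assumptions: AArch64 host, GCC/Clang extended asm, little-endian as the qemu_build_assert requires; the function name is illustrative).

    #include <stdint.h>
    #include <stdio.h>

    /* 16-byte store that cannot tear: LDXP/STXP retry loop, as in the patch. */
    static inline void store16_exclusive(unsigned __int128 *pv,
                                         uint64_t lo, uint64_t hi)
    {
        uint64_t t;

        asm volatile("0: ldxp %0, xzr, %1\n\t"
                     "stxp %w0, %2, %3, %1\n\t"
                     "cbnz %w0, 0b"
                     : "=&r"(t), "=Q"(*pv)
                     : "r"(lo), "r"(hi));
    }

    int main(void)
    {
        _Alignas(16) unsigned __int128 slot = 0;

        store16_exclusive(&slot, 0x0123456789abcdefULL, 0xfedcba9876543210ULL);
        printf("lo=%016llx hi=%016llx\n",
               (unsigned long long)slot, (unsigned long long)(slot >> 64));
        return 0;
    }

The paired exclusive load and store is what makes the 16 bytes a single unit: if another observer touches the line between the LDXP and the STXP, the STXP fails and the loop retries.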
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:03 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 17/57] tcg/aarch64: Detect have_lse, have_lse2 for linux Date: Tue, 25 Apr 2023 20:31:06 +0100 Message-Id: <20230425193146.2106111-18-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::236; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x236.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Notice when the host has additional atomic instructions. The new variables will also be used in generated code. Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson --- tcg/aarch64/tcg-target.h | 3 +++ tcg/aarch64/tcg-target.c.inc | 12 ++++++++++++ 2 files changed, 15 insertions(+) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index c0b0f614ba..3c0b0d312d 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -57,6 +57,9 @@ typedef enum { #define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_EVEN #define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_NORMAL +extern bool have_lse; +extern bool have_lse2; + /* optional instructions */ #define TCG_TARGET_HAS_div_i32 1 #define TCG_TARGET_HAS_rem_i32 1 diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index e6636c1f8b..fc551a3d10 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -13,6 +13,9 @@ #include "../tcg-ldst.c.inc" #include "../tcg-pool.c.inc" #include "qemu/bitops.h" +#ifdef __linux__ +#include +#endif /* We're going to re-use TCGType in setting of the SF bit, which controls the size of the operation performed. 
If we know the values match, it @@ -71,6 +74,9 @@ static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot) return TCG_REG_X0 + slot; } +bool have_lse; +bool have_lse2; + #define TCG_REG_TMP TCG_REG_X30 #define TCG_VEC_TMP TCG_REG_V31 @@ -2899,6 +2905,12 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) static void tcg_target_init(TCGContext *s) { +#ifdef __linux__ + unsigned long hwcap = qemu_getauxval(AT_HWCAP); + have_lse = hwcap & HWCAP_ATOMICS; + have_lse2 = hwcap & HWCAP_USCAT; +#endif + tcg_target_available_regs[TCG_TYPE_I32] = 0xffffffffu; tcg_target_available_regs[TCG_TYPE_I64] = 0xffffffffu; tcg_target_available_regs[TCG_TYPE_V64] = 0xffffffff00000000ull; From patchwork Tue Apr 25 19:31:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676823 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880316wrs; Tue, 25 Apr 2023 12:42:46 -0700 (PDT) X-Google-Smtp-Source: AKy350Y1AeatI2bSC5PZNDLikIMaV00XRGGyt2pzjlVfFxL2Hc1urceKlBbxNws3jwDNGwHsE9zx X-Received: by 2002:a05:6214:230f:b0:5a1:6212:93be with SMTP id gc15-20020a056214230f00b005a1621293bemr33103265qvb.29.1682451765829; Tue, 25 Apr 2023 12:42:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451765; cv=none; d=google.com; s=arc-20160816; b=eLJtsxxKNOt2DKRefP1YOz9NxBzH0nIjiHhHKC1aNfy/oiSTNg6+JwteVoVaaDuaUY zd1tvb+6hAMpyyHaAZBlYkD4sTIrZJ3VICHlawCgKeRzMnOMH0iy/1/Jk3fN9nWbwq/r 6iumIomzgDZvyLjKSHfVSmlbeEgimNGepK+4KO3esCa1CePVLjNAb59jKD+GXrwIEtWY OXppJEhIijk8zN5dTK7+PviNF3vKN3C+OkGKNdEIO2gPFWgqS44XnRHxWJyt56nEjGku I0KXU9lixafc8iFIEWgNgip3ZVjBOA063pcaDuR7S5MZ5HSMbBYlDSVAczkKaM1qQ6Ql 04Gw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=SlwZz/TkkaqpzfKRH/vz8Q4X3wFSx4Ts4VLXAbAY/Hk=; b=qF8FC4r6vafwPd5bJ9emUMB2aW1kcMY//bdNEq+dDJ+B2uktL0PiS0BIHz01WMxpOQ RKBLE0lyqw49xw7+Go7J4v7mcQGJ90hsJxieBq5xCUjnT+9kbxeskvApWcIEsc8gV6tu KdGJX1spViwI1e1zI/YFjjGWebHWXwUiyVYXg9/FbQrGbKyk0frRFySxhySwXOcDKukM IGElLIHJLOM0UIgAltsU9WbX2IBs4fUcXiOh+Vi35RXVM+xv57e1CEqeAdFWt6UwA0dd wZsZTzH0pEUvs/VxvECKCtwo3dgd3xmBLqaps8NMaTClBFqNBuBG5JLgkkxvBFE8EoDU J1fg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=zCmv0Dgm; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
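On a Linux/AArch64 host the probe in "tcg/aarch64: Detect have_lse, have_lse2 for linux" can be reproduced in a few lines of user code. Sketch only: QEMU goes through its qemu_getauxval wrapper rather than calling getauxval directly.

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/auxv.h>    /* getauxval, AT_HWCAP */
    #include <asm/hwcap.h>   /* HWCAP_ATOMICS, HWCAP_USCAT */

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);
        bool lse  = hwcap & HWCAP_ATOMICS;  /* FEAT_LSE: CAS, SWP, LDADD, ... */
        bool lse2 = hwcap & HWCAP_USCAT;    /* FEAT_LSE2 (USCAT) */

        printf("have_lse=%d have_lse2=%d\n", lse, lse2);
        return 0;
    }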
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:07 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 18/57] tcg/aarch64: Detect have_lse, have_lse2 for darwin Date: Tue, 25 Apr 2023 20:31:07 +0100 Message-Id: <20230425193146.2106111-19-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::22d; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x22d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, T_SCC_BODY_TEXT_LINE=-0.01, T_SPF_TEMPERROR=0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org These features are present for Apple M1. Tested-by: Philippe Mathieu-Daudé Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson --- tcg/aarch64/tcg-target.c.inc | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index fc551a3d10..3adc5fd3a3 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -16,6 +16,9 @@ #ifdef __linux__ #include #endif +#ifdef CONFIG_DARWIN +#include +#endif /* We're going to re-use TCGType in setting of the SF bit, which controls the size of the operation performed. If we know the values match, it @@ -2903,6 +2906,27 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) } } +#ifdef CONFIG_DARWIN +static bool sysctl_for_bool(const char *name) +{ + int val = 0; + size_t len = sizeof(val); + + if (sysctlbyname(name, &val, &len, NULL, 0) == 0) { + return val != 0; + } + + /* + * We might in ask for properties not present in older kernels, + * but we're only asking about static properties, all of which + * should be 'int'. So we shouln't see ENOMEM (val too small), + * or any of the other more exotic errors. 
+ */ + assert(errno == ENOENT); + return false; +} +#endif + static void tcg_target_init(TCGContext *s) { #ifdef __linux__ @@ -2910,6 +2934,10 @@ static void tcg_target_init(TCGContext *s) have_lse = hwcap & HWCAP_ATOMICS; have_lse2 = hwcap & HWCAP_USCAT; #endif +#ifdef CONFIG_DARWIN + have_lse = sysctl_for_bool("hw.optional.arm.FEAT_LSE"); + have_lse2 = sysctl_for_bool("hw.optional.arm.FEAT_LSE2"); +#endif tcg_target_available_regs[TCG_TYPE_I32] = 0xffffffffu; tcg_target_available_regs[TCG_TYPE_I64] = 0xffffffffu; From patchwork Tue Apr 25 19:31:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676868 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2884113wrs; Tue, 25 Apr 2023 12:53:03 -0700 (PDT) X-Google-Smtp-Source: AKy350YUpUOEDDvoKmnr6VdewzUJM+URWu4w/Fh3DJDDRc2IbbKOpcjBFXcqZEt1V/iYP4Bhd6wq X-Received: by 2002:a05:622a:14c7:b0:3ef:4d4f:4cce with SMTP id u7-20020a05622a14c700b003ef4d4f4ccemr31646086qtx.40.1682452382766; Tue, 25 Apr 2023 12:53:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452382; cv=none; d=google.com; s=arc-20160816; b=fMFWgQhLq5BAUc4AydNWCxJrtdYxYG8xqfswBaFrHmZQ3N/8mYCSYhNVJv22HKVL8T T56j8QLwK3nSRNZJudzjd6sMaTIY098cys94LVkRyQg6v/vwWDf0Oqsmt23OFj+NVsXA leNcRaLWBtZ3nBUPMkLSCOFpeAwQOW+tErw7BlGRMJz7wJQATEz/9gX2ucSo642e74sW UDNeD0FiZAUrxP3h6P5tXyJpVugAPN3fz9437FjlvHuXwMMd0Ec8bCW5s3Rxbcw3J6Fs xgVY8k5kdof/X4jzNg8lRwpRXJXDJjgJt6L9fBT5Ev0N6lmSnh8VVWKb5N8U8lYP0ZiY iDPw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=nOmbSp7hBydjLoN7MiEgB21hH67bcjILkQNA+Q9UF3g=; b=t0UrVnILyYbaL8vpH54H5oamvivNXNiQkSv5ARj930Sz3cVfQOhZZw2m49cHwEmwZK vpeJaMbxR2eFLCfYATyIwZg1bkqXPpelyizS+ZIUxxiKTYYDfESq+rLSVtQMchpEjDvH wjkCK3quVmd1RKGcEGGDb6ldsjvuSk5YmK56cgVeVVcTNTw1BU7OQP5aVZqk5PO1sKNF +ePmySgOZ3ZKiWTCC2RVABvGBZrfJ+/lEUlAajr1bw8zRkYBzzTO1S3OAcEtIX72hDx6 oRrD1CFEfi/F723QAqZdWw53W14i+oVRSpCHi0KJuPeJvL5j8ejS3+PXRYTu75DaWuzb U8tA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="caSHb/VR"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
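The Darwin variant can likewise be exercised from a stand-alone program on Apple Silicon. Sketch only: the sysctl key names are the ones used in the patch, and a key unknown to older kernels is simply reported as absent, matching the intent of sysctl_for_bool; the helper name here is illustrative.

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/sysctl.h>

    static bool hw_optional(const char *name)
    {
        int val = 0;
        size_t len = sizeof(val);

        /* Older kernels may not know the key at all: treat that as absent. */
        if (sysctlbyname(name, &val, &len, NULL, 0) != 0) {
            return false;
        }
        return val != 0;
    }

    int main(void)
    {
        printf("FEAT_LSE:  %d\n", hw_optional("hw.optional.arm.FEAT_LSE"));
        printf("FEAT_LSE2: %d\n", hw_optional("hw.optional.arm.FEAT_LSE2"));
        return 0;
    }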
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:10 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 19/57] accel/tcg: Add have_lse2 support in ldst_atomicity Date: Tue, 25 Apr 2023 20:31:08 +0100 Message-Id: <20230425193146.2106111-20-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::236; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x236.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Add fast paths for FEAT_LSE2, using the detection in tcg. Signed-off-by: Richard Henderson --- accel/tcg/ldst_atomicity.c.inc | 37 ++++++++++++++++++++++++++++++---- 1 file changed, 33 insertions(+), 4 deletions(-) diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index cf4a0e4a6e..4c4287c8c9 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -41,6 +41,8 @@ * but we're using tcg/tci/ instead. */ # define HAVE_al16_fast false +#elif defined(__aarch64__) +# define HAVE_al16_fast likely(have_lse2) #elif defined(__x86_64__) # define HAVE_al16_fast likely(have_atomic16) #else @@ -48,6 +50,8 @@ #endif #if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128) # define HAVE_al16 true +#elif defined(__aarch64__) +# define HAVE_al16 true #else # define HAVE_al16 false #endif @@ -170,6 +174,14 @@ load_atomic16(void *pv) r.u = qatomic_read__nocheck(p); return r.s; +#elif defined(__aarch64__) + uint64_t l, h; + + /* Via HAVE_al16_fast, FEAT_LSE2 is present: LDP becomes atomic. */ + asm("ldp %0, %1, %2" : "=r"(l), "=r"(h) : "m"(*(__uint128_t *)pv)); + + qemu_build_assert(!HOST_BIG_ENDIAN); + return int128_make128(l, h); #elif defined(__x86_64__) Int128Alias r; @@ -412,6 +424,18 @@ load_atom_extract_al16_or_al8(void *pv, int s) r = qatomic_read__nocheck(p16); } return r >> shr; +#elif defined(__aarch64__) + /* + * Via HAVE_al16_fast, FEAT_LSE2 is present. + * LDP becomes single-copy atomic if 16-byte aligned, and + * single-copy atomic on the parts if 8-byte aligned. 
+ */ + uintptr_t pi = (uintptr_t)pv; + int shr = (pi & 7) * 8; + uint64_t l, h; + + asm("ldp %0, %1, %2" : "=r"(l), "=r"(h) : "m"(*(__uint128_t *)(pi & ~7))); + return (l >> shr) | (h << (-shr & 63)); #elif defined(__x86_64__) uintptr_t pi = (uintptr_t)pv; int shr = (pi & 7) * 8; @@ -767,10 +791,15 @@ store_atomic16(void *pv, Int128Alias val) l = int128_getlo(val.s); h = int128_gethi(val.s); - asm("0: ldxp %0, xzr, %1\n\t" - "stxp %w0, %2, %3, %1\n\t" - "cbnz %w0, 0b" - : "=&r"(t), "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + if (HAVE_al16_fast) { + /* Via HAVE_al16_fast, FEAT_LSE2 is present: STP becomes atomic. */ + asm("stp %1, %2, %0" : "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + } else { + asm("0: ldxp %0, xzr, %1\n\t" + "stxp %w0, %2, %3, %1\n\t" + "cbnz %w0, 0b" + : "=&r"(t), "=Q"(*(__uint128_t *)pv) : "r"(l), "r"(h)); + } return; } #elif defined(CONFIG_CMPXCHG128) From patchwork Tue Apr 25 19:31:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676844 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2881619wrs; Tue, 25 Apr 2023 12:46:02 -0700 (PDT) X-Google-Smtp-Source: AKy350Yt7kiY3xVcHo4IyBda4Srz6r1rMA3f5tt5nCPy+IfGBvNW304gN/0Sa1sXwljyiZfiaGOe X-Received: by 2002:ac8:5c46:0:b0:3ef:58f5:9ff7 with SMTP id j6-20020ac85c46000000b003ef58f59ff7mr30142354qtj.53.1682451962484; Tue, 25 Apr 2023 12:46:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451962; cv=none; d=google.com; s=arc-20160816; b=zvtYB6fAST9JGM+PZuowIJ+W1lLDJqaB57HPxeCw8T2y/c4TZxPyhAJQmH577v6Uvz QmxhQuCPs0LPnWHRAFGG64wgz7jpuHCIZDWlPPz/v8X++ICfoVdXLdtIslnlyBXBdH+U zm6sEOJB+mR8zAhNOekFsHkGHYt9WeXZ9uS0Aihh6edzS8MeUreIBHR9H8MYXQdSbpdn AG43p/9T9DQjKCdjiIas6CpMhvvN9sHRcX8WmCZejJvpomc4X1C7NpCKCMC252D4onCn vA4u5+Mpbh7nyfsIrufhAl5v/qtrTSxWOmE+hZ3sAdXvCwFmvA8dGoidiSVfY459FmOt vDLQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=oGeKBjwjj2E8w/f9+TfXbJ+6j6aLsf9VhcvO9oNW35I=; b=Cc8u81Gm4JcPdl53cBkFgpQ8/2RGHCt5ZMEm1c7ZyG8alng0o2+kW9Y4oSXEtycOXJ sVJhHnFr7E1ngaX8kNTR5mLGAjI8JE9iGC8lXsD7scwLxAGWFu04Z1biFg3ZgsldyNUK 8g0ZVY/N8RTx7WzDB0L8DtlgYG3BDdpBj97/Kz9csjjI2B/l40+O1SSCfnuUS8Rq5M3T XH69mafDfWspntO2izkFyStisD8gB+rddXkLY/rat+JqYl7G5fjhI1m1yEEJFAMfUU/h 6k4D0oPRhhSZMvcEMoaCTTr0A99ewkU7clmte9w5mYkQOYj04nMFxO4WLYhNUyu01Equ C/jA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=NrntvLZJ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
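Stripped of the surrounding dispatch, the FEAT_LSE2 fast path added here is just an ordinary LDP/STP on an aligned address. Sketch only: it is meaningful solely on a CPU where the earlier have_lse2 probe succeeded, assumes a little-endian host, and uses illustrative helper names.

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 u128;

    /* With FEAT_LSE2, an aligned LDP/STP pair is single-copy atomic. */
    static inline u128 load16_lse2(const u128 *pv)
    {
        uint64_t lo, hi;

        asm("ldp %0, %1, %2" : "=r"(lo), "=r"(hi) : "m"(*pv));
        return ((u128)hi << 64) | lo;
    }

    static inline void store16_lse2(u128 *pv, uint64_t lo, uint64_t hi)
    {
        asm("stp %1, %2, %0" : "=Q"(*pv) : "r"(lo), "r"(hi));
    }

    int main(void)
    {
        _Alignas(16) u128 slot = 0;

        store16_lse2(&slot, 0x1111111111111111ULL, 0x2222222222222222ULL);
        u128 v = load16_lse2(&slot);
        printf("lo=%016llx hi=%016llx\n",
               (unsigned long long)v, (unsigned long long)(v >> 64));
        return 0;
    }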
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
 qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org
Subject: [PATCH v3 20/57] tcg: Introduce TCG_OPF_TYPE_MASK
Date: Tue, 25 Apr 2023 20:31:09 +0100
Message-Id: <20230425193146.2106111-21-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Reorg TCG_OPF_64BIT and TCG_OPF_VECTOR into a two-bit field so
that we can add TCG_OPF_128BIT without requiring another bit.

Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 include/tcg/tcg.h            | 22 ++++++++++++----------
 tcg/optimize.c               | 15 ++++++++++++---
 tcg/tcg.c                    |  4 ++--
 tcg/aarch64/tcg-target.c.inc |  8 +++++---
 tcg/tci/tcg-target.c.inc     |  3 ++-
 5 files changed, 33 insertions(+), 19 deletions(-)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index b19e167e1d..efbd891f87 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -932,24 +932,26 @@ typedef struct TCGArgConstraint {
 /* Bits for TCGOpDef->flags, 8 bits available, all used. */
 enum {
+    /* Two bits describing the output type. */
+    TCG_OPF_TYPE_MASK    = 0x03,
+    TCG_OPF_32BIT        = 0x00,
+    TCG_OPF_64BIT        = 0x01,
+    TCG_OPF_VECTOR       = 0x02,
+    TCG_OPF_128BIT       = 0x03,
     /* Instruction exits the translation block. */
-    TCG_OPF_BB_EXIT      = 0x01,
+    TCG_OPF_BB_EXIT      = 0x04,
     /* Instruction defines the end of a basic block. */
-    TCG_OPF_BB_END       = 0x02,
+    TCG_OPF_BB_END       = 0x08,
     /* Instruction clobbers call registers and potentially update globals. */
-    TCG_OPF_CALL_CLOBBER = 0x04,
+    TCG_OPF_CALL_CLOBBER = 0x10,
     /* Instruction has side effects: it cannot be removed if its outputs
        are not used, and might trigger exceptions. */
-    TCG_OPF_SIDE_EFFECTS = 0x08,
-    /* Instruction operands are 64-bits (otherwise 32-bits). */
-    TCG_OPF_64BIT        = 0x10,
+    TCG_OPF_SIDE_EFFECTS = 0x20,
     /* Instruction is optional and not implemented by the host, or insn
        is generic and should not be implemened by the host. */
-    TCG_OPF_NOT_PRESENT  = 0x20,
-    /* Instruction operands are vectors. */
-    TCG_OPF_VECTOR       = 0x40,
+    TCG_OPF_NOT_PRESENT  = 0x40,
     /* Instruction is a conditional branch. */
-    TCG_OPF_COND_BRANCH  = 0x80
+    TCG_OPF_COND_BRANCH  = 0x80,
 };
 
 typedef struct TCGOpDef {
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 9614fa3638..37d46f2a1f 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -2051,12 +2051,21 @@ void tcg_optimize(TCGContext *s)
         copy_propagate(&ctx, op, def->nb_oargs, def->nb_iargs);
 
         /* Pre-compute the type of the operation. */
-        if (def->flags & TCG_OPF_VECTOR) {
+        switch (def->flags & TCG_OPF_TYPE_MASK) {
+        case TCG_OPF_VECTOR:
             ctx.type = TCG_TYPE_V64 + TCGOP_VECL(op);
-        } else if (def->flags & TCG_OPF_64BIT) {
+            break;
+        case TCG_OPF_128BIT:
+            ctx.type = TCG_TYPE_I128;
+            break;
+        case TCG_OPF_64BIT:
             ctx.type = TCG_TYPE_I64;
-        } else {
+            break;
+        case TCG_OPF_32BIT:
             ctx.type = TCG_TYPE_I32;
+            break;
+        default:
+            qemu_build_not_reached();
         }
 
         /* Assume all bits affected, no bits known zero, no sign reps. */
diff --git a/tcg/tcg.c b/tcg/tcg.c
index d7659fdc67..8216855810 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2294,7 +2294,7 @@ static void tcg_dump_ops(TCGContext *s, FILE *f, bool have_prefs)
             nb_iargs = def->nb_iargs;
             nb_cargs = def->nb_cargs;
 
-            if (def->flags & TCG_OPF_VECTOR) {
+            if ((def->flags & TCG_OPF_TYPE_MASK) == TCG_OPF_VECTOR) {
                 col += ne_fprintf(f, "v%d,e%d,", 64 << TCGOP_VECL(op),
                                   8 << TCGOP_VECE(op));
             }
@@ -4782,7 +4782,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
         tcg_out_extrl_i64_i32(s, new_args[0], new_args[1]);
         break;
     default:
-        if (def->flags & TCG_OPF_VECTOR) {
+        if ((def->flags & TCG_OPF_TYPE_MASK) == TCG_OPF_VECTOR) {
             tcg_out_vec_op(s, op->opc, TCGOP_VECL(op), TCGOP_VECE(op),
                            new_args, const_args);
         } else {
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 3adc5fd3a3..43acb4fbcb 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -1921,9 +1921,11 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
                        const TCGArg args[TCG_MAX_OP_ARGS],
                        const int const_args[TCG_MAX_OP_ARGS])
 {
-    /* 99% of the time, we can signal the use of extension registers
-       by looking to see if the opcode handles 64-bit data. */
-    TCGType ext = (tcg_op_defs[opc].flags & TCG_OPF_64BIT) != 0;
+    /*
+     * 99% of the time, we can signal the use of extension registers
+     * by looking to see if the opcode handles 32-bit data or not.
+     */
+    TCGType ext = (tcg_op_defs[opc].flags & TCG_OPF_TYPE_MASK) != TCG_OPF_32BIT;
 
     /* Hoist the loads of the most common arguments. */
     TCGArg a0 = args[0];
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index 4cf03a579c..e31640d109 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -790,7 +790,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
     CASE_32_64(sextract) /* Optional (TCG_TARGET_HAS_sextract_*). */
         {
             TCGArg pos = args[2], len = args[3];
-            TCGArg max = tcg_op_defs[opc].flags & TCG_OPF_64BIT ? 64 : 32;
+            TCGArg max = ((tcg_op_defs[opc].flags & TCG_OPF_TYPE_MASK)
+                          == TCG_OPF_32BIT ? 32 : 64);
 
             tcg_debug_assert(pos < max);
             tcg_debug_assert(pos + len <= max);
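As a rough illustration (an editorial sketch, not part of the series), the standalone C program below decodes the new two-bit type field the same way the tcg_optimize() hunk above does. The TCG_OPF_* values are copied from the tcg.h hunk; op_type_name() and the sample flags value are illustrative only.

/*
 * Sketch only: decoding the two-bit type field from TCGOpDef.flags.
 * Enum values match the hunk above; everything else is a placeholder.
 */
#include <stdint.h>
#include <stdio.h>

enum {
    TCG_OPF_TYPE_MASK = 0x03,
    TCG_OPF_32BIT     = 0x00,
    TCG_OPF_64BIT     = 0x01,
    TCG_OPF_VECTOR    = 0x02,
    TCG_OPF_128BIT    = 0x03,
};

static const char *op_type_name(uint8_t flags)
{
    /* One switch replaces the old independent 64BIT/VECTOR bit tests. */
    switch (flags & TCG_OPF_TYPE_MASK) {
    case TCG_OPF_32BIT:  return "i32";
    case TCG_OPF_64BIT:  return "i64";
    case TCG_OPF_VECTOR: return "vector";
    case TCG_OPF_128BIT: return "i128";
    }
    return "unreachable";
}

int main(void)
{
    /* 0x21 = TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT under the new layout. */
    printf("%s\n", op_type_name(0x21));   /* prints "i64" */
    return 0;
}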
From patchwork Tue Apr 25 19:31:10 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
 qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org
Subject: [PATCH v3 21/57] tcg/i386: Use full load/store helpers in user-only mode
Date: Tue, 25 Apr 2023 20:31:10 +0100
Message-Id: <20230425193146.2106111-22-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
---
 tcg/i386/tcg-target.c.inc | 48 ++-------------------------------------
 1 file changed, 2 insertions(+), 46 deletions(-)

diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index 696c656f3b..32ef9ad4e4 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -1778,7 +1778,6 @@ typedef struct {
     int seg;
 } HostAddress;
 
-#if defined(CONFIG_SOFTMMU)
 /*
  * Because i686 has no register parameters and because x86_64 has xchg
  * to handle addr/data register overlap, we have placed all input arguments
@@ -1846,51 +1845,8 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_jmp(s, l->raddr);
     return true;
 }
-#else
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    /* resolve label address */
-    tcg_patch32(l->label_ptr[0], s->code_ptr - l->label_ptr[0] - 4);
-
-    if (TCG_TARGET_REG_BITS == 32) {
-        int ofs = 0;
-
-        tcg_out_st(s, TCG_TYPE_PTR, TCG_AREG0, TCG_REG_ESP, ofs);
-        ofs += 4;
-
-        tcg_out_st(s, TCG_TYPE_I32, l->addrlo_reg, TCG_REG_ESP, ofs);
-        ofs += 4;
-        if (TARGET_LONG_BITS == 64) {
-            tcg_out_st(s, TCG_TYPE_I32, l->addrhi_reg, TCG_REG_ESP, ofs);
-            ofs += 4;
-        }
-
-        tcg_out_pushi(s, (uintptr_t)l->raddr);
-    } else {
-        tcg_out_mov(s, TCG_TYPE_TL, tcg_target_call_iarg_regs[1],
-                    l->addrlo_reg);
-        tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_AREG0);
-
-        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_RAX, (uintptr_t)l->raddr);
-        tcg_out_push(s, TCG_REG_RAX);
-    }
-
-    /* "Tail call" to the helper, with the return address back inline. */
-    tcg_out_jmp(s, (const void *)(l->is_ld ? helper_unaligned_ld
-                                  : helper_unaligned_st));
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
+#ifndef CONFIG_SOFTMMU
 static HostAddress x86_guest_base = {
     .index = -1
 };
@@ -1922,7 +1878,7 @@ static inline int setup_guest_base_seg(void)
     return 0;
 }
 #endif /* setup_guest_base_seg */
-#endif /* SOFTMMU */
+#endif /* !SOFTMMU */
 
 /*
  * For softmmu, perform the TLB load and compare.
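A minimal conceptual model (editorial sketch, not QEMU code) of what this conversion changes for user-only builds: the old slow path could only tail-call an "unaligned access" helper that raises a trap, whereas reusing the full load/store helper lets the slow path complete the access (and, later in the series, enforce atomicity) before deciding whether anything must be raised. All names below are placeholders.

/* Sketch only: old vs. new user-only slow-path behaviour, modelled in C. */
#include <stdint.h>
#include <stdio.h>

static uint32_t full_ld32_helper(const uint8_t *p)
{
    /* Models the full helper: it can emulate a misaligned access itself. */
    return (uint32_t)p[0] | p[1] << 8 | p[2] << 16 | (uint32_t)p[3] << 24;
}

static uint32_t old_unaligned_ld_helper(const uint8_t *p)
{
    (void)p;
    fprintf(stderr, "alignment trap\n");   /* old path: always a trap */
    return 0;
}

int main(void)
{
    uint8_t buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    /* Old user-only slow path: a misaligned access can only trap. */
    old_unaligned_ld_helper(buf + 1);

    /* New slow path: the same helper as softmmu completes the access. */
    printf("0x%08x\n", full_ld32_helper(buf + 1));
    return 0;
}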
From patchwork Tue Apr 25 19:31:11 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
 qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org
Subject: [PATCH v3 22/57] tcg/aarch64: Use full load/store helpers in user-only mode
Date: Tue, 25 Apr 2023 20:31:11 +0100
Message-Id: <20230425193146.2106111-23-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
---
 tcg/aarch64/tcg-target.c.inc | 35 -----------------------------------
 1 file changed, 35 deletions(-)

diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 43acb4fbcb..09c9ecad0f 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -1595,7 +1595,6 @@ typedef struct {
     TCGType index_ext;
 } HostAddress;
 
-#ifdef CONFIG_SOFTMMU
 static const TCGLdstHelperParam ldst_helper_param = {
     .ntmp = 1, .tmp = { TCG_REG_TMP }
 };
@@ -1628,40 +1627,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     tcg_out_goto(s, lb->raddr);
     return true;
 }
-#else
-static void tcg_out_adr(TCGContext *s, TCGReg rd, const void *target)
-{
-    ptrdiff_t offset = tcg_pcrel_diff(s, target);
-    tcg_debug_assert(offset == sextract64(offset, 0, 21));
-    tcg_out_insn(s, 3406, ADR, rd, offset);
-}
-
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    if (!reloc_pc19(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
-        return false;
-    }
-
-    tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_X1, l->addrlo_reg);
-    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_X0, TCG_AREG0);
-
-    /* "Tail call" to the helper, with the return address back inline. */
-    tcg_out_adr(s, TCG_REG_LR, l->raddr);
-    tcg_out_goto_long(s, (const void *)(l->is_ld ? helper_unaligned_ld
-                                        : helper_unaligned_st));
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-#endif /* CONFIG_SOFTMMU */
 
 /*
  * For softmmu, perform the TLB load and compare.
From patchwork Tue Apr 25 19:31:12 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
 qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org
Subject: [PATCH v3 23/57] tcg/ppc: Use full load/store helpers in user-only mode
Date: Tue, 25 Apr 2023 20:31:12 +0100
Message-Id: <20230425193146.2106111-24-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
---
 tcg/ppc/tcg-target.c.inc | 44 ----------------------------------------
 1 file changed, 44 deletions(-)

diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 86343ea410..94a9f70e17 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -1965,7 +1965,6 @@ static const uint32_t qemu_stx_opc[(MO_SIZE + MO_BSWAP) + 1] = {
     [MO_BSWAP | MO_UQ] = STDBRX,
 };
 
-#if defined (CONFIG_SOFTMMU)
 static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg)
 {
     if (arg < 0) {
@@ -2015,49 +2014,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     tcg_out_b(s, 0, lb->raddr);
     return true;
 }
-#else
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    if (!reloc_pc14(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
-        return false;
-    }
-
-    if (TCG_TARGET_REG_BITS < TARGET_LONG_BITS) {
-        TCGReg arg = TCG_REG_R4;
-
-        arg |= (TCG_TARGET_CALL_ARG_I64 == TCG_CALL_ARG_EVEN);
-        if (l->addrlo_reg != arg) {
-            tcg_out_mov(s, TCG_TYPE_I32, arg, l->addrhi_reg);
-            tcg_out_mov(s, TCG_TYPE_I32, arg + 1, l->addrlo_reg);
-        } else if (l->addrhi_reg != arg + 1) {
-            tcg_out_mov(s, TCG_TYPE_I32, arg + 1, l->addrlo_reg);
-            tcg_out_mov(s, TCG_TYPE_I32, arg, l->addrhi_reg);
-        } else {
-            tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_R0, arg);
-            tcg_out_mov(s, TCG_TYPE_I32, arg, arg + 1);
-            tcg_out_mov(s, TCG_TYPE_I32, arg + 1, TCG_REG_R0);
-        }
-    } else {
-        tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_R4, l->addrlo_reg);
-    }
-    tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_R3, TCG_AREG0);
-
-    /* "Tail call" to the helper, with the return address back inline. */
-    tcg_out_call_int(s, 0, (const void *)(l->is_ld ? helper_unaligned_ld
-                                          : helper_unaligned_st));
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-#endif /* SOFTMMU */
 
 typedef struct {
     TCGReg base;

From patchwork Tue Apr 25 19:31:13 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
 qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org
Subject: [PATCH v3 24/57] tcg/loongarch64: Use full load/store helpers in user-only mode
Date: Tue, 25 Apr 2023 20:31:13 +0100
Message-Id: <20230425193146.2106111-25-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
---
 tcg/loongarch64/tcg-target.c.inc | 30 ------------------------------
 1 file changed, 30 deletions(-)

diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index d1bc29826f..e651ec5c71 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -783,7 +783,6 @@ static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
  * Load/store helpers for SoftMMU, and qemu_ld/st implementations
  */
 
-#if defined(CONFIG_SOFTMMU)
 static bool tcg_out_goto(TCGContext *s, const tcg_insn_unit *target)
 {
     tcg_out_opc_b(s, 0);
@@ -822,35 +821,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_call_int(s, qemu_st_helpers[opc & MO_SIZE], false);
     return tcg_out_goto(s, l->raddr);
 }
-#else
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    /* resolve label address */
-    if (!reloc_br_sk16(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
-        return false;
-    }
-
-    tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_A1, l->addrlo_reg);
-    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A0, TCG_AREG0);
-
-    /* tail call, with the return address back inline. */
-    tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_RA, (uintptr_t)l->raddr);
-    tcg_out_call_int(s, (const void *)(l->is_ld ? helper_unaligned_ld
-                                       : helper_unaligned_st), true);
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-#endif /* CONFIG_SOFTMMU */
 
 typedef struct {
     TCGReg base;

From patchwork Tue Apr 25 19:31:14 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
 qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org
Subject: [PATCH v3 25/57] tcg/riscv: Use full load/store helpers in user-only mode
Date: Tue, 25 Apr 2023 20:31:14 +0100
Message-Id: <20230425193146.2106111-26-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Instead of using helper_unaligned_{ld,st}, use the full load/store
helpers.  This will allow the fast path to increase alignment to
implement atomicity while not immediately raising an alignment
exception.

Signed-off-by: Richard Henderson
---
 tcg/riscv/tcg-target.c.inc | 29 -----------------------------
 1 file changed, 29 deletions(-)

diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
index 1f101cbf35..8522561a28 100644
--- a/tcg/riscv/tcg-target.c.inc
+++ b/tcg/riscv/tcg-target.c.inc
@@ -860,7 +860,6 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0)
  * Load/store and TLB
  */
 
-#if defined(CONFIG_SOFTMMU)
 static void tcg_out_goto(TCGContext *s, const tcg_insn_unit *target)
 {
     tcg_out_opc_jump(s, OPC_JAL, TCG_REG_ZERO, 0);
@@ -907,34 +906,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_goto(s, l->raddr);
     return true;
 }
-#else
-static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    /* resolve label address */
-    if (!reloc_sbimm12(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
-        return false;
-    }
-
-    tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_A1, l->addrlo_reg);
-    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A0, TCG_AREG0);
-
-    /* tail call, with the return address back inline. */
-    tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_RA, (uintptr_t)l->raddr);
-    tcg_out_call_int(s, (const void *)(l->is_ld ? helper_unaligned_ld
-                                       : helper_unaligned_st), true);
-    return true;
-}
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
-{
-    return tcg_out_fail_alignment(s, l);
-}
-#endif /* CONFIG_SOFTMMU */
 
 /*
  * For softmmu, perform the TLB load and compare.
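Across the backends converted above, the slow path now shares one shape visible in the context lines: pick the full helper by operand size with (opc & MO_SIZE), call it, then branch back to l->raddr. The following standalone C table is a simplified stand-in (editorial sketch, not the real qemu_ld_helpers[]), assuming MO_SIZE masks the two size bits.

/* Sketch only: size-indexed helper dispatch, modelled with a function table. */
#include <stdio.h>

enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3, MO_SIZE = 3 };

typedef void (*ld_helper_fn)(void);

static void ld8(void)  { puts("ld8 helper");  }
static void ld16(void) { puts("ld16 helper"); }
static void ld32(void) { puts("ld32 helper"); }
static void ld64(void) { puts("ld64 helper"); }

static const ld_helper_fn ld_helpers_model[MO_SIZE + 1] = {
    [MO_8] = ld8, [MO_16] = ld16, [MO_32] = ld32, [MO_64] = ld64,
};

int main(void)
{
    unsigned opc = 0x10 | MO_32;           /* some opcode bits plus the size */
    ld_helpers_model[opc & MO_SIZE]();     /* "call" the size-specific helper */
    /* ...generated code would then jump back to l->raddr. */
    return 0;
}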
From patchwork Tue Apr 25 19:31:15 2023
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org,
 qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org
Subject: [PATCH v3 26/57] tcg/arm: Adjust constraints on qemu_ld/st
Date: Tue, 25 Apr 2023 20:31:15 +0100
Message-Id: <20230425193146.2106111-27-richard.henderson@linaro.org>
In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org>
References: <20230425193146.2106111-1-richard.henderson@linaro.org>

Always reserve r3 for tlb softmmu lookup.  Fix a bug in user-only
ALL_QLDST_REGS, in that r14 is clobbered by the BLNE that leads
to the misaligned trap.

Signed-off-by: Richard Henderson
---
 tcg/arm/tcg-target-con-set.h | 16 ++++++++--------
 tcg/arm/tcg-target-con-str.h |  5 ++---
 tcg/arm/tcg-target.c.inc     | 23 ++++++++---------------
 3 files changed, 18 insertions(+), 26 deletions(-)

diff --git a/tcg/arm/tcg-target-con-set.h b/tcg/arm/tcg-target-con-set.h
index b8849b2478..229ae258ac 100644
--- a/tcg/arm/tcg-target-con-set.h
+++ b/tcg/arm/tcg-target-con-set.h
@@ -12,19 +12,19 @@ C_O0_I1(r)
 C_O0_I2(r, r)
 C_O0_I2(r, rIN)
-C_O0_I2(s, s)
+C_O0_I2(q, q)
 C_O0_I2(w, r)
-C_O0_I3(s, s, s)
-C_O0_I3(S, p, s)
+C_O0_I3(q, q, q)
+C_O0_I3(Q, p, q)
 C_O0_I4(r, r, rI, rI)
-C_O0_I4(S, p, s, s)
-C_O1_I1(r, l)
+C_O0_I4(Q, p, q, q)
+C_O1_I1(r, q)
 C_O1_I1(r, r)
 C_O1_I1(w, r)
 C_O1_I1(w, w)
 C_O1_I1(w, wr)
 C_O1_I2(r, 0, rZ)
-C_O1_I2(r, l, l)
+C_O1_I2(r, q, q)
 C_O1_I2(r, r, r)
 C_O1_I2(r, r, rI)
 C_O1_I2(r, r, rIK)
@@ -39,8 +39,8 @@ C_O1_I2(w, w, wZ)
 C_O1_I3(w, w, w, w)
 C_O1_I4(r, r, r, rI, rI)
 C_O1_I4(r, r, rIN, rIK, 0)
-C_O2_I1(e, p, l)
-C_O2_I2(e, p, l, l)
+C_O2_I1(e, p, q)
+C_O2_I2(e, p, q, q)
 C_O2_I2(r, r, r, r)
 C_O2_I4(r, r, r, r, rIN, rIK)
 C_O2_I4(r, r, rI, rI, rIN, rIK)
diff --git a/tcg/arm/tcg-target-con-str.h b/tcg/arm/tcg-target-con-str.h
index 24b4b59feb..f83f1d3919 100644
--- a/tcg/arm/tcg-target-con-str.h
+++ b/tcg/arm/tcg-target-con-str.h
@@ -10,9 +10,8 @@
  */
 REGS('e', ALL_GENERAL_REGS & 0x5555)  /* even regs */
 REGS('r', ALL_GENERAL_REGS)
-REGS('l', ALL_QLOAD_REGS)
-REGS('s', ALL_QSTORE_REGS)
-REGS('S', ALL_QSTORE_REGS & 0x5555)   /* even qstore */
+REGS('q', ALL_QLDST_REGS)
+REGS('Q', ALL_QLDST_REGS & 0x5555)    /* even qldst */
 REGS('w', ALL_VECTOR_REGS)
 
 /*
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 8b0d526659..a02804dd69 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -353,23 +353,16 @@ static bool patch_reloc(tcg_insn_unit *code_ptr, int type,
 #define ALL_VECTOR_REGS  0xffff0000u
 
 /*
- * r0-r2 will be overwritten when reading the tlb entry (softmmu only)
- * and r0-r1 doing the byte swapping, so don't use these.
- * r3 is removed for softmmu to avoid clashes with helper arguments.
+ * r0-r3 will be overwritten when reading the tlb entry (softmmu only);
+ * r14 will be overwritten by the BLNE branching to the slow path.
  */
 #ifdef CONFIG_SOFTMMU
-#define ALL_QLOAD_REGS \
+#define ALL_QLDST_REGS \
     (ALL_GENERAL_REGS & ~((1 << TCG_REG_R0) | (1 << TCG_REG_R1) | \
                           (1 << TCG_REG_R2) | (1 << TCG_REG_R3) | \
                           (1 << TCG_REG_R14)))
-#define ALL_QSTORE_REGS \
-    (ALL_GENERAL_REGS & ~((1 << TCG_REG_R0) | (1 << TCG_REG_R1) | \
-                          (1 << TCG_REG_R2) | (1 << TCG_REG_R14) | \
-                          ((TARGET_LONG_BITS == 64) << TCG_REG_R3)))
 #else
-#define ALL_QLOAD_REGS   ALL_GENERAL_REGS
-#define ALL_QSTORE_REGS \
-    (ALL_GENERAL_REGS & ~((1 << TCG_REG_R0) | (1 << TCG_REG_R1)))
+#define ALL_QLDST_REGS   (ALL_GENERAL_REGS & ~(1 << TCG_REG_R14))
 #endif
 
 /*
@@ -2203,13 +2196,13 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
         return C_O1_I4(r, r, r, rI, rI);
 
     case INDEX_op_qemu_ld_i32:
-        return TARGET_LONG_BITS == 32 ? C_O1_I1(r, l) : C_O1_I2(r, l, l);
+        return TARGET_LONG_BITS == 32 ? C_O1_I1(r, q) : C_O1_I2(r, q, q);
     case INDEX_op_qemu_ld_i64:
-        return TARGET_LONG_BITS == 32 ? C_O2_I1(e, p, l) : C_O2_I2(e, p, l, l);
+        return TARGET_LONG_BITS == 32 ? C_O2_I1(e, p, q) : C_O2_I2(e, p, q, q);
     case INDEX_op_qemu_st_i32:
-        return TARGET_LONG_BITS == 32 ? C_O0_I2(s, s) : C_O0_I3(s, s, s);
+        return TARGET_LONG_BITS == 32 ? C_O0_I2(q, q) : C_O0_I3(q, q, q);
     case INDEX_op_qemu_st_i64:
-        return TARGET_LONG_BITS == 32 ? C_O0_I3(S, p, s) : C_O0_I4(S, p, s, s);
+        return TARGET_LONG_BITS == 32 ? C_O0_I3(Q, p, q) : C_O0_I4(Q, p, q, q);
 
     case INDEX_op_st_vec:
         return C_O0_I2(w, r);
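A small sketch of what the new constraint allows, computed from the ALL_QLDST_REGS definitions in the hunk above. It assumes ALL_GENERAL_REGS covers r0-r15 as the mask 0xffff; the printed masks follow directly from that assumption and are editorial, not taken from the patch.

/* Sketch only: the register masks implied by the new ALL_QLDST_REGS. */
#include <stdio.h>

enum { R0 = 0, R1 = 1, R2 = 2, R3 = 3, R14 = 14 };
#define ALL_GENERAL_REGS 0xffffu   /* assumption: r0-r15 */

int main(void)
{
    unsigned softmmu_qldst = ALL_GENERAL_REGS &
        ~((1u << R0) | (1u << R1) | (1u << R2) | (1u << R3) | (1u << R14));
    unsigned user_qldst = ALL_GENERAL_REGS & ~(1u << R14);

    printf("softmmu ALL_QLDST_REGS = 0x%04x\n", softmmu_qldst); /* 0xbff0 */
    printf("user    ALL_QLDST_REGS = 0x%04x\n", user_qldst);    /* 0xbfff */
    return 0;
}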
12:34:36 -0700 (PDT) Received: from stoup.. ([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:36 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 27/57] tcg/arm: Use full load/store helpers in user-only mode Date: Tue, 25 Apr 2023 20:31:16 +0100 Message-Id: <20230425193146.2106111-28-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::22b; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x22b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson --- tcg/arm/tcg-target.c.inc | 45 ---------------------------------------- 1 file changed, 45 deletions(-) diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index a02804dd69..eb0542f32e 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1325,7 +1325,6 @@ typedef struct { bool index_scratch; } HostAddress; -#ifdef CONFIG_SOFTMMU static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { /* We arrive at the slow path via "BLNE", so R14 contains l->raddr. */ @@ -1368,50 +1367,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) tcg_out_goto(s, COND_AL, qemu_st_helpers[opc & MO_SIZE]); return true; } -#else -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - if (!reloc_pc24(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) { - return false; - } - - if (TARGET_LONG_BITS == 64) { - /* 64-bit target address is aligned into R2:R3. */ - TCGMovExtend ext[2] = { - { .dst = TCG_REG_R2, .dst_type = TCG_TYPE_I32, - .src = l->addrlo_reg, - .src_type = TCG_TYPE_I32, .src_ext = MO_UL }, - { .dst = TCG_REG_R3, .dst_type = TCG_TYPE_I32, - .src = l->addrhi_reg, - .src_type = TCG_TYPE_I32, .src_ext = MO_UL }, - }; - tcg_out_movext2(s, &ext[0], &ext[1], TCG_REG_TMP); - } else { - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_R1, l->addrlo_reg); - } - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_R0, TCG_AREG0); - - /* - * Tail call to the helper, with the return address back inline, - * just for the clarity of the debugging traceback -- the helper - * cannot return. We have used BLNE to arrive here, so LR is - * already set. - */ - tcg_out_goto(s, COND_AL, (const void *) - (l->is_ld ? 
helper_unaligned_ld : helper_unaligned_st)); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} -#endif /* SOFTMMU */ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TCGReg addrlo, TCGReg addrhi, From patchwork Tue Apr 25 19:31:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676842 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2881273wrs; Tue, 25 Apr 2023 12:45:15 -0700 (PDT) X-Google-Smtp-Source: AKy350b2k4BG2SDmgiPPFfgJ0C6nyDJQzvD2z7vm9jgkngO3aBKJtmH4xXSEo7qz8euEMr0s03Jt X-Received: by 2002:ad4:5fc5:0:b0:5e6:1bf5:1af2 with SMTP id jq5-20020ad45fc5000000b005e61bf51af2mr31861888qvb.18.1682451915163; Tue, 25 Apr 2023 12:45:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451915; cv=none; d=google.com; s=arc-20160816; b=v1p3r6j17Uf2lmv7UOkTQ+LmXYULpfXVWvUtvYsSVyqpKLguqPRHt1rFRffKzelSGQ xk9qO/NdSvkC5Pmb2ABRRTLr2LapThRnEJKzlLv58KDQpmV8kw4RTz41Gg6BoldMLIzg jHt1J6RMHg/SGONZbP3QR4aL61PRCqjJUjWYh//ItFpgbnHOiFJmMEGlFfkrtxrm1z/B F2I1d39yG0yRg9z5mq4QAOBZHhVPlaMvf86Qzk9qoI7JG7UYHLDw7j1jdga484ek2voh JhGDlh7p3P02Yy2zZKE2dllHx3cOiYHJY7gTIQRAivV1A8Tlx33BNPWELSCD6GfqcV3U TIOQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=RTRyUJJ//8McRlEJCpdcC3CCDoyJJ5ltYvB50/bCFGE=; b=nSPrTtW83Chg0kJKGwG/4VyT98W++i++53utWZrwKxS3Wv08TxAl7VSrxR4W4W0Imd evrjl0FvQGCT8qoICuUppa4amTbnvNyxtzFdmnIo0Ux+MYAboG2g0CBrKB2zL9O7tRkx HplFZBZ1CBp3Ay3t29/oQSxn2+OEBylG4oK8xAxzB/uGUT0XJzoBEOlkY1lr9Dui0MxE m9Mx/fyYZ6lnbKoO4TIuFSdamfV0vZgyixrygD3QdUvCTbfGdw1x68kTyS26naodKTyV LkKxvcqTdu4TcP+PDx2t0NeLEvoU21WO7TrQPzHDattku/xynyBi8A/JgMqmA/EDmjaV RK5A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=JucbdGKM; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:39 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 28/57] tcg/mips: Use full load/store helpers in user-only mode Date: Tue, 25 Apr 2023 20:31:17 +0100 Message-Id: <20230425193146.2106111-29-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::22a; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x22a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson --- tcg/mips/tcg-target.c.inc | 57 ++------------------------------------- 1 file changed, 2 insertions(+), 55 deletions(-) diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index 7770ef46bd..fa0f334e8d 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1075,7 +1075,6 @@ static void tcg_out_call(TCGContext *s, const tcg_insn_unit *arg, tcg_out_nop(s); } -#if defined(CONFIG_SOFTMMU) /* We have four temps, we might as well expose three of them. */ static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 3, .tmp = { TCG_TMP0, TCG_TMP1, TCG_TMP2 } @@ -1088,8 +1087,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) /* resolve label address */ if (!reloc_pc16(l->label_ptr[0], tgt_rx) - || (TCG_TARGET_REG_BITS < TARGET_LONG_BITS - && !reloc_pc16(l->label_ptr[1], tgt_rx))) { + || (l->label_ptr[1] && !reloc_pc16(l->label_ptr[1], tgt_rx))) { return false; } @@ -1118,8 +1116,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) /* resolve label address */ if (!reloc_pc16(l->label_ptr[0], tgt_rx) - || (TCG_TARGET_REG_BITS < TARGET_LONG_BITS - && !reloc_pc16(l->label_ptr[1], tgt_rx))) { + || (l->label_ptr[1] && !reloc_pc16(l->label_ptr[1], tgt_rx))) { return false; } @@ -1139,56 +1136,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) return true; } -#else -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - void *target; - - if (!reloc_pc16(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) { - return false; - } - - if (TCG_TARGET_REG_BITS < TARGET_LONG_BITS) { - /* A0 is env, A1 is skipped, A2:A3 is the uint64_t address. */ - TCGReg a2 = MIPS_BE ? l->addrhi_reg : l->addrlo_reg; - TCGReg a3 = MIPS_BE ? 
l->addrlo_reg : l->addrhi_reg; - - if (a3 != TCG_REG_A2) { - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A2, a2); - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A3, a3); - } else if (a2 != TCG_REG_A3) { - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A3, a3); - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A2, a2); - } else { - tcg_out_mov(s, TCG_TYPE_I32, TCG_TMP0, TCG_REG_A2); - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A2, TCG_REG_A3); - tcg_out_mov(s, TCG_TYPE_I32, TCG_REG_A3, TCG_TMP0); - } - } else { - tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_A1, l->addrlo_reg); - } - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A0, TCG_AREG0); - - /* - * Tail call to the helper, with the return address back inline. - * We have arrived here via BNEL, so $31 is already set. - */ - target = (l->is_ld ? helper_unaligned_ld : helper_unaligned_st); - tcg_out_call_int(s, target, true); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} -#endif /* SOFTMMU */ - typedef struct { TCGReg base; MemOp align; From patchwork Tue Apr 25 19:31:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676843 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2881387wrs; Tue, 25 Apr 2023 12:45:29 -0700 (PDT) X-Google-Smtp-Source: AKy350bgjUUGPy5Osy15e2BFIHb/k7DiNRv9vedbC8UAYUAavu2h0zKJY+e+0z1D7NWhfwdFfv+O X-Received: by 2002:a05:6214:248d:b0:56c:37a:58b2 with SMTP id gi13-20020a056214248d00b0056c037a58b2mr36352661qvb.15.1682451929081; Tue, 25 Apr 2023 12:45:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451929; cv=none; d=google.com; s=arc-20160816; b=CXTMkBW5vTOwjzgM0YcMGbcn/2c9C5gVLknRDQBy3KcKDiL7iG2vtCRUEIEsgZJKRz ISHax+kgv2OHP3UT6gkDTt9htAIUbrEMsnSBTKVjUp6ItRkopErI+fFgZClGqqXARCvD 4Z9+GeKjzwv7xkIgvNff9PC8yojS5GRIBN2kC6PAPFgml9Yaf1LimqwmDyLe9UfDexsQ ywlYpjopZYGR2PIOFt9lRKN4+Pq7BmlyIIxc6jqal8cneeu95A8e0rA8KAHG4iCrwtta u44hWEbpvva2cp4bPbNiiY40NDOa9gSLWnCf0h9EDRykhY5nnllVhOGVvtFejqyM436J 2rdQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=ZBFIqUFc75JDtrlEORyx+gi0WPPCYnPmt/oOy5jLScI=; b=baSVv4tem1kcROOJAU9rfbdqMytYL9QbwBB0jen55WmWgoLQ2jVLrmiGbXiQbIOBxR Is3mx0j27D6At4qYtZynCICAhv+QlxISI2bmnpRj0xjUzSLW+QGkG5ZR62HpWdvqUsz/ dnOtd17p2WBXI8l9fo9zH72KifwFpuWxdgGURvvSsgDsp/zot/hzmS0vbTudOQpbIZRF KzP5ryJIXNSFAVbkhcE/Kchin7c+Ls6RsaL+AfDu6bjS3uAmYDDe7piMiLDwv0uAgwk2 AjZ+CbUjb4H7yBdSl70SLzKkBl2Z7pczUrV8sA7ssdFRZygPN7/kbyp3JoXPlr0UIqd5 W10g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=IwkSO3oQ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:42 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 29/57] tcg/s390x: Use full load/store helpers in user-only mode Date: Tue, 25 Apr 2023 20:31:18 +0100 Message-Id: <20230425193146.2106111-30-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::236; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x236.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, T_SCC_BODY_TEXT_LINE=-0.01, T_SPF_TEMPERROR=0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Signed-off-by: Richard Henderson --- tcg/s390x/tcg-target.c.inc | 29 ----------------------------- 1 file changed, 29 deletions(-) diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index 968977be98..de8aed5f77 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -1679,7 +1679,6 @@ static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data, } } -#if defined(CONFIG_SOFTMMU) static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 1, .tmp = { TCG_TMP0 } }; @@ -1716,34 +1715,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) tgen_gotoi(s, S390_CC_ALWAYS, lb->raddr); return true; } -#else -static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l) -{ - if (!patch_reloc(l->label_ptr[0], R_390_PC16DBL, - (intptr_t)tcg_splitwx_to_rx(s->code_ptr), 2)) { - return false; - } - - tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_R3, l->addrlo_reg); - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_R2, TCG_AREG0); - - /* "Tail call" to the helper, with the return address back inline. */ - tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R14, (uintptr_t)l->raddr); - tgen_gotoi(s, S390_CC_ALWAYS, (const void *)(l->is_ld ? helper_unaligned_ld - : helper_unaligned_st)); - return true; -} - -static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} - -static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) -{ - return tcg_out_fail_alignment(s, l); -} -#endif /* CONFIG_SOFTMMU */ /* * For softmmu, perform the TLB load and compare. 
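The three patches above (tcg/arm, tcg/mips, tcg/s390x) make the same change: in user-only mode the qemu_ld/st slow path no longer tail-calls helper_unaligned_{ld,st}, but falls through to the same full load/store helpers used under softmmu, so a misaligned access can be completed instead of trapping immediately. The standalone C sketch below is not QEMU code; the names toy_qemu_ld32, toy_full_ld32 and toy_unaligned_ld are invented for illustration, and it only models the control-flow difference being described: an aligned fast path, plus a slow path that either aborts (old user-only behaviour) or completes the misaligned load (full-helper behaviour).

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Old-style user-only slow path: all it could do was report the fault. */
static uint32_t toy_unaligned_ld(const uint8_t *addr)
{
    fprintf(stderr, "alignment fault at %p\n", (const void *)addr);
    abort();
}

/* Full-helper style slow path: completes the access regardless of
 * alignment (memcpy is alignment-safe), so a misaligned access is
 * handled rather than being immediately fatal. */
static uint32_t toy_full_ld32(const uint8_t *addr)
{
    uint32_t val;
    memcpy(&val, addr, sizeof(val));
    return val;
}

/* Fast path for 4-byte-aligned addresses, slow path otherwise. */
static uint32_t toy_qemu_ld32(const uint8_t *addr, int old_style)
{
    if (((uintptr_t)addr & 3) == 0) {
        return *(const uint32_t *)addr;
    }
    return old_style ? toy_unaligned_ld(addr) : toy_full_ld32(addr);
}

int main(void)
{
    /* The union guarantees 4-byte alignment of b[0]; b + 1 is misaligned. */
    union { uint32_t w[2]; uint8_t b[8]; } buf = { .b = { 1, 2, 3, 4, 5, 6, 7, 8 } };

    printf("aligned:    0x%08" PRIx32 "\n", toy_qemu_ld32(buf.b, 0));
    printf("misaligned: 0x%08" PRIx32 "\n", toy_qemu_ld32(buf.b + 1, 0));
    return 0;
}

Built with any C99 compiler this prints both values; changing the last argument to 1 reproduces the old behaviour of aborting on the misaligned case.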
From patchwork Tue Apr 25 19:31:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676841 Delivered-To: patch@linaro.org
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:45 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 30/57] tcg/sparc64: Allocate %g2 as a third temporary Date: Tue, 25 Apr 2023 20:31:19 +0100 Message-Id: <20230425193146.2106111-31-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::12f; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x12f.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target.c.inc | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index e997db2645..64464ab363 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -83,9 +83,10 @@ static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { #define ALL_GENERAL_REGS MAKE_64BIT_MASK(0, 32) #define ALL_QLDST_REGS (ALL_GENERAL_REGS & ~SOFTMMU_RESERVE_REGS) -/* Define some temporary registers. T2 is used for constant generation. */ +/* Define some temporary registers. T3 is used for constant generation. */ #define TCG_REG_T1 TCG_REG_G1 -#define TCG_REG_T2 TCG_REG_O7 +#define TCG_REG_T2 TCG_REG_G2 +#define TCG_REG_T3 TCG_REG_O7 #ifndef CONFIG_SOFTMMU # define TCG_GUEST_BASE_REG TCG_REG_I5 @@ -110,7 +111,6 @@ static const int tcg_target_reg_alloc_order[] = { TCG_REG_I4, TCG_REG_I5, - TCG_REG_G2, TCG_REG_G3, TCG_REG_G4, TCG_REG_G5, @@ -492,8 +492,8 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, static void tcg_out_movi(TCGContext *s, TCGType type, TCGReg ret, tcg_target_long arg) { - tcg_debug_assert(ret != TCG_REG_T2); - tcg_out_movi_int(s, type, ret, arg, false, TCG_REG_T2); + tcg_debug_assert(ret != TCG_REG_T3); + tcg_out_movi_int(s, type, ret, arg, false, TCG_REG_T3); } static void tcg_out_ext8s(TCGContext *s, TCGType type, TCGReg rd, TCGReg rs) @@ -885,10 +885,8 @@ static void tcg_out_jmpl_const(TCGContext *s, const tcg_insn_unit *dest, { uintptr_t desti = (uintptr_t)dest; - /* Be careful not to clobber %o7 for a tail call. */ tcg_out_movi_int(s, TCG_TYPE_PTR, TCG_REG_T1, - desti & ~0xfff, in_prologue, - tail_call ? TCG_REG_G2 : TCG_REG_O7); + desti & ~0xfff, in_prologue, TCG_REG_T2); tcg_out_arithi(s, tail_call ? 
TCG_REG_G0 : TCG_REG_O7, TCG_REG_T1, desti & 0xfff, JMPL); } @@ -1856,6 +1854,7 @@ static void tcg_target_init(TCGContext *s) tcg_regset_set_reg(s->reserved_regs, TCG_REG_O6); /* stack pointer */ tcg_regset_set_reg(s->reserved_regs, TCG_REG_T1); /* for internal use */ tcg_regset_set_reg(s->reserved_regs, TCG_REG_T2); /* for internal use */ + tcg_regset_set_reg(s->reserved_regs, TCG_REG_T3); /* for internal use */ } #define ELF_HOST_MACHINE EM_SPARCV9 From patchwork Tue Apr 25 19:31:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676850 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2882440wrs; Tue, 25 Apr 2023 12:48:16 -0700 (PDT) X-Google-Smtp-Source: AKy350YU0YFoIGELeaWaK6C+CJ17bnzAILCVsx5q8e0mxc0thNHDqvpRBZ1hUlmyLn5PqE3smBJS X-Received: by 2002:a05:622a:303:b0:3ef:59e2:b9b5 with SMTP id q3-20020a05622a030300b003ef59e2b9b5mr25839567qtw.47.1682452096629; Tue, 25 Apr 2023 12:48:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452096; cv=none; d=google.com; s=arc-20160816; b=hD/HES26Pu0VoMt6U1dccwD+6WS1CAr8ihnTPNF0bFEB8EsMrC0O6IB5Jv+wMaUZka mJjnAwJWOcPQpp98FptZq6ED87W5z6glrcEw2kRuJXq15efOKtjcX4dwj8197OAYkFxz Gn8NwMuuzg47tpav2LrtH+NgnjkjHur/6CwggdRO4k5QgZkiGCHSgUy59hz47+h6AIff FkcUrfbQipl46wSGwop0U/y6ClTdXZX7sJO1KSfxs4WtVmIs+PWSzv9Kc2sKtpfZvra+ /ljwyKPl8sGB+4ZROyYF3frihX5FQfd8JAK6CH0ZE1wfo8Zo/AwNRmxVThrFpNGozKfo MFSQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=szr01PFwF0Du6v147zPw0I7dg80IAC207oSuL39I73s=; b=MY2PKOU6RB4zUyBmxMmuEHU8I1e0mrkwAdN1h8Su7snl8peE9RcE+HQdXddGRCNVPj IktIknd+ZT6F8cBMfqwOQzH2vA2BYGiwQ4T9owY2NcMc/k7YsHtNzVTfwzBlyFn+DD34 o2H1lnFn+xGRCcqjArw2mCTLdeAb+XHTxy1cBdQjd+6nLLECaWAQckdGBi3O2fyzevfD V2S8YhzXPqSfoVHvhomLPRocw4qmzF2SpOnfmSa3S6GmXxQXWTdLpBWFw45f2qAsROB4 yG23aCo02n+Yl/hLXQQ1nF+3w/n5vM8Asz7kvUZOnQ99ZvwR1ofOUCFY73YDhbU4QcI0 /imQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="VrveoN/r"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:48 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 31/57] tcg/sparc64: Rename tcg_out_movi_imm13 to tcg_out_movi_s13 Date: Tue, 25 Apr 2023 20:31:20 +0100 Message-Id: <20230425193146.2106111-32-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::229; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x229.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Emphasize that the constant is signed. Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target.c.inc | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 64464ab363..2e6127d506 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -399,7 +399,7 @@ static void tcg_out_sethi(TCGContext *s, TCGReg ret, uint32_t arg) tcg_out32(s, SETHI | INSN_RD(ret) | ((arg & 0xfffffc00) >> 10)); } -static void tcg_out_movi_imm13(TCGContext *s, TCGReg ret, int32_t arg) +static void tcg_out_movi_s13(TCGContext *s, TCGReg ret, int32_t arg) { tcg_out_arithi(s, ret, TCG_REG_G0, arg, ARITH_OR); } @@ -408,7 +408,7 @@ static void tcg_out_movi_imm32(TCGContext *s, TCGReg ret, int32_t arg) { if (check_fit_i32(arg, 13)) { /* A 13-bit constant sign-extended to 64-bits. */ - tcg_out_movi_imm13(s, ret, arg); + tcg_out_movi_s13(s, ret, arg); } else { /* A 32-bit constant zero-extended to 64 bits. */ tcg_out_sethi(s, ret, arg); @@ -425,15 +425,15 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, tcg_target_long hi, lo = (int32_t)arg; tcg_target_long test, lsb; - /* A 32-bit constant, or 32-bit zero-extended to 64-bits. */ - if (type == TCG_TYPE_I32 || arg == (uint32_t)arg) { - tcg_out_movi_imm32(s, ret, arg); + /* A 13-bit constant sign-extended to 64-bits. */ + if (check_fit_tl(arg, 13)) { + tcg_out_movi_s13(s, ret, arg); return; } - /* A 13-bit constant sign-extended to 64-bits. */ - if (check_fit_tl(arg, 13)) { - tcg_out_movi_imm13(s, ret, arg); + /* A 32-bit constant, or 32-bit zero-extended to 64-bits. 
*/ + if (type == TCG_TYPE_I32 || arg == (uint32_t)arg) { + tcg_out_movi_imm32(s, ret, arg); return; } @@ -767,7 +767,7 @@ static void tcg_out_setcond_i32(TCGContext *s, TCGCond cond, TCGReg ret, default: tcg_out_cmp(s, c1, c2, c2const); - tcg_out_movi_imm13(s, ret, 0); + tcg_out_movi_s13(s, ret, 0); tcg_out_movcc(s, cond, MOVCC_ICC, ret, 1, 1); return; } @@ -803,11 +803,11 @@ static void tcg_out_setcond_i64(TCGContext *s, TCGCond cond, TCGReg ret, /* For 64-bit signed comparisons vs zero, we can avoid the compare if the input does not overlap the output. */ if (c2 == 0 && !is_unsigned_cond(cond) && c1 != ret) { - tcg_out_movi_imm13(s, ret, 0); + tcg_out_movi_s13(s, ret, 0); tcg_out_movr(s, cond, ret, c1, 1, 1); } else { tcg_out_cmp(s, c1, c2, c2const); - tcg_out_movi_imm13(s, ret, 0); + tcg_out_movi_s13(s, ret, 0); tcg_out_movcc(s, cond, MOVCC_XCC, ret, 1, 1); } } @@ -844,7 +844,7 @@ static void tcg_out_addsub2_i64(TCGContext *s, TCGReg rl, TCGReg rh, if (use_vis3_instructions && !is_sub) { /* Note that ADDXC doesn't accept immediates. */ if (bhconst && bh != 0) { - tcg_out_movi_imm13(s, TCG_REG_T2, bh); + tcg_out_movi_s13(s, TCG_REG_T2, bh); bh = TCG_REG_T2; } tcg_out_arith(s, rh, ah, bh, ARITH_ADDXC); @@ -866,7 +866,7 @@ static void tcg_out_addsub2_i64(TCGContext *s, TCGReg rl, TCGReg rh, * so the adjustment fits 12 bits. */ if (bhconst) { - tcg_out_movi_imm13(s, TCG_REG_T2, bh + (is_sub ? -1 : 1)); + tcg_out_movi_s13(s, TCG_REG_T2, bh + (is_sub ? -1 : 1)); } else { tcg_out_arithi(s, TCG_REG_T2, bh, 1, is_sub ? ARITH_SUB : ARITH_ADD); @@ -1036,7 +1036,7 @@ static void tcg_target_qemu_prologue(TCGContext *s) tcg_code_gen_epilogue = tcg_splitwx_to_rx(s->code_ptr); tcg_out_arithi(s, TCG_REG_G0, TCG_REG_I7, 8, RETURN); /* delay slot */ - tcg_out_movi_imm13(s, TCG_REG_O0, 0); + tcg_out_movi_s13(s, TCG_REG_O0, 0); build_trampolines(s); } @@ -1430,7 +1430,7 @@ static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0) { if (check_fit_ptr(a0, 13)) { tcg_out_arithi(s, TCG_REG_G0, TCG_REG_I7, 8, RETURN); - tcg_out_movi_imm13(s, TCG_REG_O0, a0); + tcg_out_movi_s13(s, TCG_REG_O0, a0); return; } else { intptr_t tb_diff = tcg_tbrel_diff(s, (void *)a0); From patchwork Tue Apr 25 19:31:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676835 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880813wrs; Tue, 25 Apr 2023 12:43:56 -0700 (PDT) X-Google-Smtp-Source: AKy350brXe33Ej/bb/rJxMCmd21dirXYATBQI84Bs8OxwuppLIGwwYKwQePJc4WQRU0HgouAs1Mf X-Received: by 2002:ac8:5f49:0:b0:3e6:4ff0:ef0a with SMTP id y9-20020ac85f49000000b003e64ff0ef0amr32476790qta.20.1682451835947; Tue, 25 Apr 2023 12:43:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451835; cv=none; d=google.com; s=arc-20160816; b=hn+xiaYJxqENEHaL84lsVYK074Kc6jzC8sQxCoSifqEccm0z5bo4GNTUP6gvtgSCEu uIOIxTZ8sKhvsf3gitX1mkW7RaGmSZ9hNvAYvaNJMXlwO4vxXuln23uhL/Z4dWxNBrrW HW98HitNv9eO4nfZwJrzGhZqhORbwGP1gVEyezZA+SW2qlURWWulh9EBKQElTI+DT8Dz aAy4XYtKErOMTx8fUOCnlMIXWO1DweGJ+wGGFjU5As7V+fEdezT9N2vDTIcMKStJ6SjL tqo6tTiCgcY2cUUvhFUpMOL/1TZgh56W9GvBHENYNaVEDTo69FaoRXO9wVrhQAAxzJRG EI9g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; 
U9TixIUp78BsNjetT+yf8T1rddgKvzZsmhqFqtco1q27JGlqV/Mk6xwWsqr8Wvm5NiJN 4FMDAHeZMgqBedtr2VJzjtH3TVzH96owOFsSRR9QI9XcSx+tqA4LuRI9cSMUpC7TxfGB O1LA== X-Gm-Message-State: AAQBX9e26LtxvyXsHX1ZP5NanW1AWUAzgcq9kJxfbShu4iLv9VFQyKcc gv9hOkK6iwwAcfRt2XOTy+Sqcz7SzHeuGCOCRRW6iA== X-Received: by 2002:a2e:a17a:0:b0:2a7:7d70:6bb with SMTP id u26-20020a2ea17a000000b002a77d7006bbmr3790456ljl.2.1682451291014; Tue, 25 Apr 2023 12:34:51 -0700 (PDT) Received: from stoup.. ([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:50 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 32/57] tcg/sparc64: Rename tcg_out_movi_imm32 to tcg_out_movi_u32 Date: Tue, 25 Apr 2023 20:31:21 +0100 Message-Id: <20230425193146.2106111-33-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::234; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x234.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Emphasize that the constant is unsigned. Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target.c.inc | 24 ++++++++++-------------- 1 file changed, 10 insertions(+), 14 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 2e6127d506..e244209890 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -399,22 +399,18 @@ static void tcg_out_sethi(TCGContext *s, TCGReg ret, uint32_t arg) tcg_out32(s, SETHI | INSN_RD(ret) | ((arg & 0xfffffc00) >> 10)); } +/* A 13-bit constant sign-extended to 64 bits. */ static void tcg_out_movi_s13(TCGContext *s, TCGReg ret, int32_t arg) { tcg_out_arithi(s, ret, TCG_REG_G0, arg, ARITH_OR); } -static void tcg_out_movi_imm32(TCGContext *s, TCGReg ret, int32_t arg) +/* A 32-bit constant zero-extended to 64 bits. */ +static void tcg_out_movi_u32(TCGContext *s, TCGReg ret, uint32_t arg) { - if (check_fit_i32(arg, 13)) { - /* A 13-bit constant sign-extended to 64-bits. */ - tcg_out_movi_s13(s, ret, arg); - } else { - /* A 32-bit constant zero-extended to 64 bits. */ - tcg_out_sethi(s, ret, arg); - if (arg & 0x3ff) { - tcg_out_arithi(s, ret, ret, arg & 0x3ff, ARITH_OR); - } + tcg_out_sethi(s, ret, arg); + if (arg & 0x3ff) { + tcg_out_arithi(s, ret, ret, arg & 0x3ff, ARITH_OR); } } @@ -433,7 +429,7 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, /* A 32-bit constant, or 32-bit zero-extended to 64-bits. 
*/ if (type == TCG_TYPE_I32 || arg == (uint32_t)arg) { - tcg_out_movi_imm32(s, ret, arg); + tcg_out_movi_u32(s, ret, arg); return; } @@ -477,13 +473,13 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, /* A 64-bit constant decomposed into 2 32-bit pieces. */ if (check_fit_i32(lo, 13)) { hi = (arg - lo) >> 32; - tcg_out_movi_imm32(s, ret, hi); + tcg_out_movi_u32(s, ret, hi); tcg_out_arithi(s, ret, ret, 32, SHIFT_SLLX); tcg_out_arithi(s, ret, ret, lo, ARITH_ADD); } else { hi = arg >> 32; - tcg_out_movi_imm32(s, ret, hi); - tcg_out_movi_imm32(s, scratch, lo); + tcg_out_movi_u32(s, ret, hi); + tcg_out_movi_u32(s, scratch, lo); tcg_out_arithi(s, ret, ret, 32, SHIFT_SLLX); tcg_out_arith(s, ret, ret, scratch, ARITH_OR); } From patchwork Tue Apr 25 19:31:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676864 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883913wrs; Tue, 25 Apr 2023 12:52:26 -0700 (PDT) X-Google-Smtp-Source: AKy350b/hv6XisSo/PuEr1DVoZh13Xrcw14q2t+tH3O8SuGIiHPGg5xG40mQ/+y+dT83Mz3fvZ+e X-Received: by 2002:a05:6214:300b:b0:5c9:a0ce:df25 with SMTP id ke11-20020a056214300b00b005c9a0cedf25mr23452031qvb.20.1682452346230; Tue, 25 Apr 2023 12:52:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452346; cv=none; d=google.com; s=arc-20160816; b=kdJ2zNvZaO+WaomIZ9RykKVjUmhjP/vne3pk9BrCB/FJFLAm3A1dVIrzkglged2SeW OH2kjxxU6aPgRW+rW/6GFmOASA7f+J+lNrZewYRy9W/NQFRg9zo75pf1HJpF2I1DrE5y u34KCW3VpaEAfrOQGsWI79ElVkNnmO+0df8DlI+KyP3n2JX7hnEGhWhrdbmR/uJP80Pq NWYZHgQAdoAYIZa8xDjHBMMCoikQvTjz3kwZ+ZvH0d3QoMRu04jqDahI5fU4OHTFqTUc 20jc8/JB3m2/ejFuktcY49xFxHHNt8YhxnMFXRYfvWXIHN/kZ7xf1S4Y8zKKNKGiov/y eAzg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=6s7rQstWrDxaC1BjVuEYZobw8VSYDLiLkliwIltqh+E=; b=NDhpHdmPcT98GRPjicbDT47XxXsBQEFcmCxUT7XwFG8RkrjYas/fn51+kvQb9twEpB 4muxkMJkAQVNd0p07ukvQBKvddpFLkXC/v9FnEwGLKlqbS7jGqCcXHd68EiN1whnL10S Z+SpmR9ldALoxVkKQyWSWdsOChmMIO34AGJPKbLtbXfRYR74NSdbGvFkNMfEcN2OEYAw FvJE2VaMZMNb064cDEDph3awDXIHXUxYTWP9ans/0S1FaqF3EfxfvNeJli3iDFaWq4t0 BRt3C8dTqK5NxHm8RZMNDjYYCf1qEQDA6oF0Qx4RiU8sW4HTsbZmrWP/UQGLfeAfNDJB pZNg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=cejMgGVX; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id gh6-20020a05621429c600b005efa7722749si9142558qvb.201.2023.04.25.12.52.26 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Tue, 25 Apr 2023 12:52:26 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=cejMgGVX; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1prOS7-00068Q-93; Tue, 25 Apr 2023 15:35:51 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1prORX-0005Fv-R0 for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:15 -0400 Received: from mail-lj1-x232.google.com ([2a00:1450:4864:20::232]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1prORJ-00059j-1i for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:15 -0400 Received: by mail-lj1-x232.google.com with SMTP id 38308e7fff4ca-2a7ac89b82dso59898671fa.1 for ; Tue, 25 Apr 2023 12:34:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1682451297; x=1685043297; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=6s7rQstWrDxaC1BjVuEYZobw8VSYDLiLkliwIltqh+E=; b=cejMgGVXMpZVF2BRpBxRtwq9/Bb7QnOR7SFG5M7gpFh3egQXp6cDTaQvbEM6hfq9Ky NyaWgjRDly73YzNOMMN2wJxpM+/XMJhTTT1BP/yDCoxk5o8AAEBWw8ZqQQAFtnrggIjA ELapJLD/PlQO7k+4b8y2oqG2itcqrNdxy7N4w2StrwHVVxd3FA75NSsU+wWeseP6WyCn +uNIHTFTcsM7fmbyK4IqJuV4CdMzscTFihejmdSt02LR9eaw2Rd61AmGlYYSNGVP2Lwx LN0eTFWWcYFO4q3FVxwH/Tm1ZYSdccNoI6pR+ckhWTpJMVdzSJe9ouO8ldkO2KjmdmSl tNFQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682451297; x=1685043297; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=6s7rQstWrDxaC1BjVuEYZobw8VSYDLiLkliwIltqh+E=; b=W0HK6dikT3K7FnCvIV/VTZGrbcUyE6EEkgcsHYYlr7Lt9z7I2Wu5yXpsKIArm7l1PT cJgugHgzELeeq8SlO0rFBy0EYuwnGPNxeV9xUTVG9jrwKEXQ2JbgKPrmO0cTcFOjlLR6 Dxi3IXqN/pHbrJdTft/g/LbMSQQJTfBTcbt+fbi1FPDbcHBjVS4cGeH/mBlXGtAJsNQA 16v9yshNgm4e88WvjJuileKH+e46qIzW5K8T0sK65Tkb4FdG0H5najYFo/Ni8FcYalRH iSbb2h06MYKCjLj/uy30AyK19fORjVYA/gxDFOwN5P0fpr9siFl3jRwVrKuPc+YFiByp k4lg== X-Gm-Message-State: AAQBX9dLmdrYseaSNVuthFGh0movAPmzM1wyFLdg5YNoGWzV8VTzkVLm oyG863YSF+gSafvRJKxknNDOomjcnPqsAy+7RJSZ/A== X-Received: by 2002:a2e:3803:0:b0:2a8:b995:ffe5 with SMTP id f3-20020a2e3803000000b002a8b995ffe5mr3771287lja.25.1682451296951; Tue, 25 Apr 2023 12:34:56 -0700 (PDT) Received: from stoup.. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:34:56 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 33/57] tcg/sparc64: Split out tcg_out_movi_s32 Date: Tue, 25 Apr 2023 20:31:22 +0100 Message-Id: <20230425193146.2106111-34-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::232; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x232.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target.c.inc | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index e244209890..4375a06377 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -405,6 +405,13 @@ static void tcg_out_movi_s13(TCGContext *s, TCGReg ret, int32_t arg) tcg_out_arithi(s, ret, TCG_REG_G0, arg, ARITH_OR); } +/* A 32-bit constant sign-extended to 64 bits. */ +static void tcg_out_movi_s32(TCGContext *s, TCGReg ret, int32_t arg) +{ + tcg_out_sethi(s, ret, ~arg); + tcg_out_arithi(s, ret, ret, (arg & 0x3ff) | -0x400, ARITH_XOR); +} + /* A 32-bit constant zero-extended to 64 bits. */ static void tcg_out_movi_u32(TCGContext *s, TCGReg ret, uint32_t arg) { @@ -444,8 +451,7 @@ static void tcg_out_movi_int(TCGContext *s, TCGType type, TCGReg ret, /* A 32-bit constant sign-extended to 64-bits. 
*/ if (arg == lo) { - tcg_out_sethi(s, ret, ~arg); - tcg_out_arithi(s, ret, ret, (arg & 0x3ff) | -0x400, ARITH_XOR); + tcg_out_movi_s32(s, ret, arg); return; } From patchwork Tue Apr 25 19:31:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676865 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883928wrs; Tue, 25 Apr 2023 12:52:29 -0700 (PDT) X-Google-Smtp-Source: AKy350aB0i4EEjuQmFhjdTe0RI7sOh3fzjBqQLCeNxeKaQtGKe0v3eGbI3n2qKmCfFxMOL/BMot1 X-Received: by 2002:ad4:5fcb:0:b0:5e7:56cc:c04a with SMTP id jq11-20020ad45fcb000000b005e756ccc04amr30018764qvb.47.1682452348882; Tue, 25 Apr 2023 12:52:28 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452348; cv=none; d=google.com; s=arc-20160816; b=tVTNVQiuR/fsPA7hlhgv4Xhatu0SvCgz9cM5/QSt4XgCilqvT++soVVUE9UBTuRm3A 6b+xST+a6K6gexOxMDhnDSn7V30uXylroOpKtKk/rI0QVqpCyEH0jOGYT+wJvklloBEl QECUJCYGcf/2jJqsGapy5wHw4OvGk+mkxOJFsXAEtBr+cy6Uafhr24XEDr8/HwYnFniJ jq8FBWzEt+ffkGz1QEoKqwG3DeobP6lN0vrIRP/DM2htuRj8ybGpLTLQXtmlV7/VGKBX d1RYlfv5HtqsbNf/eBno8FGjqilQboFIGC0t838NH64IrXbTY4t+hf2hpD9y4XHGP74L yJ5w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=8KAB/H13Cm0D6bHkoWgDoGCikvqnWURt7Ewb7NyDsZI=; b=BpWssvZHg1QCe33ApYH2ptRvSdTg42r1zqicUzbp05yh0WxpOGqoehlxyn0mG4M/Bq x+yPisR5Y2iJ30ntIIUV+H4+FWZIDw7xcH/xgLiQcqdznYoBFZsa64JPaQKuv97LaN5p vZoyAQxy1wqqV3tj1WRPVDkzzX/81Sx1qMDL9c+bWIsz/qktb2BWFOmtJETTfzEPcsFC QTyLFHuOu71abLGxZP24AjyDXrZ7JqU3USjwmTe6Yi0eWtr22XCK5UKx4m/X9pXv4Fwd M37chFbWwleqBnkzZUKfp0KDfFLMSGdUwH0WUoYvAEKrP3/9NNuufNPq2DffdzKXoMuJ zWEg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Ne9UO96R; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id kd7-20020a056214400700b005ef665a190asi8934876qvb.428.2023.04.25.12.52.28 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Tue, 25 Apr 2023 12:52:28 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Ne9UO96R; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1prORf-0005Rr-DV; Tue, 25 Apr 2023 15:35:23 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1prORa-0005Oe-9t for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:18 -0400 Received: from mail-lf1-x129.google.com ([2a00:1450:4864:20::129]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1prORL-0005Am-FI for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:18 -0400 Received: by mail-lf1-x129.google.com with SMTP id 2adb3069b0e04-4efe9a98736so3894765e87.1 for ; Tue, 25 Apr 2023 12:35:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1682451301; x=1685043301; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=8KAB/H13Cm0D6bHkoWgDoGCikvqnWURt7Ewb7NyDsZI=; b=Ne9UO96Rm2ak6T6Gm74G8opSOPHc2dH89usifEEJZUk7Yzg2NPpTT1digHd+qVPDu1 +6WSIbfsLUgX1A6/fclLbd1gvSsaOT06x3ZvrV5q66WPQvpQPm5kAkanmtogA0fPjf+W 1LjdhilzDZmzZeXRHaBGdUpUbANA7HeF9XQ5QJSzokV4JOkEcCTgRzU/CJ8MKw4PVbG5 9bMwvu/IoqzOk5wVU6zh9nQiyWsJcVaighpMANQwAWMvdZbpmdw53mL2lWj8f2qvyJvW ndteh6z08sorxn/KYMICDwqvmWhYW7y58M1ECqoVI3sEthrY4Bfxnb9HD2+WRywpecT2 VaWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682451301; x=1685043301; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=8KAB/H13Cm0D6bHkoWgDoGCikvqnWURt7Ewb7NyDsZI=; b=Kd0J0hPQaEau8CQ0h5tFqr7cm8/WDUZAt1ka97I6pS0WVzswsJoSpbIyfDQSw5lbvv R4zwmo+UARIyTcw3zS04QOhzjle17GBBXF0T2MpKB28kPGLSY/+42QXPjAio0kdrltcB r7p0zw633BSjzDrma0VAMLLKT+T80G0EnMdNeXm+fV3Ny1LEGD7gJ4EV1xggUYJE2v9F OZDqKeNMkd7qrvisQzB5bqaMwYzeQSInv8XH4RPONi/fs/RqSG8olMoWvMzwM0Iqh403 GXmfuXhfzi3gMUgmlG+gs49+ru3HL3MTsPzSVlUQHByxjGvdd6wlQznZLrMKcBjJylOl JdqA== X-Gm-Message-State: AAQBX9drI9zZX8n48qKwgUKrSFVUgsh3euTYWOfBDhuz+hdZKpLl4EI8 UcpIZHKwdwnRk+cFqQxzX7dmw8Q9wholNvzEZyAYSg== X-Received: by 2002:ac2:50c4:0:b0:4ed:cb37:7d8c with SMTP id h4-20020ac250c4000000b004edcb377d8cmr4690621lfm.67.1682451301105; Tue, 25 Apr 2023 12:35:01 -0700 (PDT) Received: from stoup.. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.34.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:35:00 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 34/57] tcg/sparc64: Use standard slow path for softmmu Date: Tue, 25 Apr 2023 20:31:23 +0100 Message-Id: <20230425193146.2106111-35-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::129; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x129.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Drop the target-specific trampolines for the standard slow path. This lets us use tcg_out_helper_{ld,st}_args, and handles the new atomicity bits within MemOp. At the same time, use the full load/store helpers for user-only mode. Drop inline unaligned access support for user-only mode, as it does not handle atomicity. Use TCG_REG_T[1-3] in the tlb lookup, instead of TCG_REG_O[0-2]. This allows the constraints to be simplified. 
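As a rough standalone sketch of what the inline TLB fast path described above computes (the real emitted code is prepare_host_addr in the diff below), the load-and-compare can be modeled in plain C. This is not QEMU code: the constants PAGE_BITS and TLB_ENTRY_BITS, the TlbEntry/TlbFast structures, and the tlb_fast_path function are stand-ins assumed only for illustration.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_BITS       12   /* stand-in for TARGET_PAGE_BITS */
    #define PAGE_MASK       (~(((uint64_t)1 << PAGE_BITS) - 1))
    #define TLB_ENTRY_BITS  5    /* stand-in for CPU_TLB_ENTRY_BITS */

    typedef struct {
        uint64_t addr_read;  /* page-aligned tag; alignment bits are zero */
        uint64_t addend;     /* host_page_address - guest_page_address */
    } TlbEntry;

    typedef struct {
        uint64_t mask;       /* (n_entries - 1) << TLB_ENTRY_BITS */
        TlbEntry *table;
    } TlbFast;

    /* Mirror of the emitted fast path: true on a hit, filling *host. */
    static bool tlb_fast_path(const TlbFast *f, uint64_t addr,
                              unsigned a_mask, void **host)
    {
        /* Extract the page index, shifted into a byte offset into the table. */
        uint64_t off = (addr >> (PAGE_BITS - TLB_ENTRY_BITS)) & f->mask;
        const TlbEntry *e = (const TlbEntry *)((uintptr_t)f->table + off);

        /* Mask out the page offset, except for the required alignment bits. */
        uint64_t cmp = addr & (PAGE_MASK | a_mask);

        if (cmp != e->addr_read) {
            return false;    /* would branch to the out-of-line slow path */
        }
        *host = (void *)(uintptr_t)(e->addend + addr);
        return true;
    }

    int main(void)
    {
        TlbEntry ent = { .addr_read = 0x4000,
                         .addend = 0x7f0000000000ull - 0x4000 };
        TlbFast f = { .mask = 0, .table = &ent };   /* single-entry tlb */
        void *host;

        printf("aligned 8-byte access, same page: %d\n",
               tlb_fast_path(&f, 0x4008, 7, &host));
        printf("unaligned access (slow path):     %d\n",
               tlb_fast_path(&f, 0x4009, 7, &host));
        printf("different page (slow path):       %d\n",
               tlb_fast_path(&f, 0x5008, 7, &host));
        return 0;
    }

On a mismatch the generated code branches to the out-of-line slow path recorded via the ldst label, which then calls the common load/store helpers; in the actual backend the index, mask, and comparator come from CPUTLBDescFast and CPUTLBEntry rather than the toy structures above.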
Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target-con-set.h | 2 - tcg/sparc64/tcg-target-con-str.h | 1 - tcg/sparc64/tcg-target.h | 1 + tcg/sparc64/tcg-target.c.inc | 610 +++++++++---------------------- 4 files changed, 182 insertions(+), 432 deletions(-) diff --git a/tcg/sparc64/tcg-target-con-set.h b/tcg/sparc64/tcg-target-con-set.h index 31e6fea1fc..434bf25072 100644 --- a/tcg/sparc64/tcg-target-con-set.h +++ b/tcg/sparc64/tcg-target-con-set.h @@ -12,8 +12,6 @@ C_O0_I1(r) C_O0_I2(rZ, r) C_O0_I2(rZ, rJ) -C_O0_I2(sZ, s) -C_O1_I1(r, s) C_O1_I1(r, r) C_O1_I2(r, r, r) C_O1_I2(r, rZ, rJ) diff --git a/tcg/sparc64/tcg-target-con-str.h b/tcg/sparc64/tcg-target-con-str.h index 8f5c7aef97..0577ec4942 100644 --- a/tcg/sparc64/tcg-target-con-str.h +++ b/tcg/sparc64/tcg-target-con-str.h @@ -9,7 +9,6 @@ * REGS(letter, register_mask) */ REGS('r', ALL_GENERAL_REGS) -REGS('s', ALL_QLDST_REGS) /* * Define constraint letters for constants: diff --git a/tcg/sparc64/tcg-target.h b/tcg/sparc64/tcg-target.h index ffe22b1d21..7434cc99d4 100644 --- a/tcg/sparc64/tcg-target.h +++ b/tcg/sparc64/tcg-target.h @@ -155,6 +155,7 @@ extern bool use_vis3_instructions; #define TCG_TARGET_DEFAULT_MO (0) #define TCG_TARGET_HAS_MEMORY_BSWAP 1 +#define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS #endif diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 4375a06377..0237188d65 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -27,6 +27,7 @@ #error "unsupported code generation mode" #endif +#include "../tcg-ldst.c.inc" #include "../tcg-pool.c.inc" #ifdef CONFIG_DEBUG_TCG @@ -70,18 +71,7 @@ static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { #define TCG_CT_CONST_S13 0x200 #define TCG_CT_CONST_ZERO 0x400 -/* - * For softmmu, we need to avoid conflicts with the first 3 - * argument registers to perform the tlb lookup, and to call - * the helper function. - */ -#ifdef CONFIG_SOFTMMU -#define SOFTMMU_RESERVE_REGS MAKE_64BIT_MASK(TCG_REG_O0, 3) -#else -#define SOFTMMU_RESERVE_REGS 0 -#endif -#define ALL_GENERAL_REGS MAKE_64BIT_MASK(0, 32) -#define ALL_QLDST_REGS (ALL_GENERAL_REGS & ~SOFTMMU_RESERVE_REGS) +#define ALL_GENERAL_REGS MAKE_64BIT_MASK(0, 32) /* Define some temporary registers. T3 is used for constant generation. */ #define TCG_REG_T1 TCG_REG_G1 @@ -918,82 +908,6 @@ static void tcg_out_mb(TCGContext *s, TCGArg a0) tcg_out32(s, MEMBAR | (a0 & TCG_MO_ALL)); } -#ifdef CONFIG_SOFTMMU -static const tcg_insn_unit *qemu_ld_trampoline[MO_SSIZE + 1]; -static const tcg_insn_unit *qemu_st_trampoline[MO_SIZE + 1]; - -static void build_trampolines(TCGContext *s) -{ - int i; - - for (i = 0; i < ARRAY_SIZE(qemu_ld_helpers); ++i) { - if (qemu_ld_helpers[i] == NULL) { - continue; - } - - /* May as well align the trampoline. */ - while ((uintptr_t)s->code_ptr & 15) { - tcg_out_nop(s); - } - qemu_ld_trampoline[i] = tcg_splitwx_to_rx(s->code_ptr); - - /* Set the retaddr operand. */ - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_O3, TCG_REG_O7); - /* Tail call. */ - tcg_out_jmpl_const(s, qemu_ld_helpers[i], true, true); - /* delay slot -- set the env argument */ - tcg_out_mov_delay(s, TCG_REG_O0, TCG_AREG0); - } - - for (i = 0; i < ARRAY_SIZE(qemu_st_helpers); ++i) { - if (qemu_st_helpers[i] == NULL) { - continue; - } - - /* May as well align the trampoline. */ - while ((uintptr_t)s->code_ptr & 15) { - tcg_out_nop(s); - } - qemu_st_trampoline[i] = tcg_splitwx_to_rx(s->code_ptr); - - /* Set the retaddr operand. 
*/ - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_O4, TCG_REG_O7); - - /* Tail call. */ - tcg_out_jmpl_const(s, qemu_st_helpers[i], true, true); - /* delay slot -- set the env argument */ - tcg_out_mov_delay(s, TCG_REG_O0, TCG_AREG0); - } -} -#else -static const tcg_insn_unit *qemu_unalign_ld_trampoline; -static const tcg_insn_unit *qemu_unalign_st_trampoline; - -static void build_trampolines(TCGContext *s) -{ - for (int ld = 0; ld < 2; ++ld) { - void *helper; - - while ((uintptr_t)s->code_ptr & 15) { - tcg_out_nop(s); - } - - if (ld) { - helper = helper_unaligned_ld; - qemu_unalign_ld_trampoline = tcg_splitwx_to_rx(s->code_ptr); - } else { - helper = helper_unaligned_st; - qemu_unalign_st_trampoline = tcg_splitwx_to_rx(s->code_ptr); - } - - /* Tail call. */ - tcg_out_jmpl_const(s, helper, true, true); - /* delay slot -- set the env argument */ - tcg_out_mov_delay(s, TCG_REG_O0, TCG_AREG0); - } -} -#endif - /* Generate global QEMU prologue and epilogue code */ static void tcg_target_qemu_prologue(TCGContext *s) { @@ -1039,8 +953,6 @@ static void tcg_target_qemu_prologue(TCGContext *s) tcg_out_arithi(s, TCG_REG_G0, TCG_REG_I7, 8, RETURN); /* delay slot */ tcg_out_movi_s13(s, TCG_REG_O0, 0); - - build_trampolines(s); } static void tcg_out_nop_fill(tcg_insn_unit *p, int count) @@ -1051,381 +963,224 @@ static void tcg_out_nop_fill(tcg_insn_unit *p, int count) } } -#if defined(CONFIG_SOFTMMU) +static const TCGLdstHelperParam ldst_helper_param = { + .ntmp = 1, .tmp = { TCG_REG_T1 } +}; -/* We expect to use a 13-bit negative offset from ENV. */ -QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) > 0); -QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12)); - -/* Perform the TLB load and compare. - - Inputs: - ADDRLO and ADDRHI contain the possible two parts of the address. - - MEM_INDEX and S_BITS are the memory context and log2 size of the load. - - WHICH is the offset into the CPUTLBEntry structure of the slot to read. - This should be offsetof addr_read or addr_write. - - The result of the TLB comparison is in %[ix]cc. The sanitized address - is in the returned register, maybe %o0. The TLB addend is in %o1. */ - -static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index, - MemOp opc, int which) +static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) { + MemOp opc = get_memop(lb->oi); + MemOp sgn; + + if (!patch_reloc(lb->label_ptr[0], R_SPARC_WDISP19, + (intptr_t)tcg_splitwx_to_rx(s->code_ptr), 0)) { + return false; + } + + /* Use inline tcg_out_ext32s; otherwise let the helper sign-extend. */ + sgn = (opc & MO_SIZE) < MO_32 ? MO_SIGN : 0; + + tcg_out_ld_helper_args(s, lb, &ldst_helper_param); + tcg_out_call(s, qemu_ld_helpers[opc & (MO_SIZE | sgn)], NULL); + tcg_out_ld_helper_ret(s, lb, sgn, &ldst_helper_param); + + tcg_out_bpcc0(s, COND_A, BPCC_A | BPCC_PT, 0); + return patch_reloc(s->code_ptr - 1, R_SPARC_WDISP19, + (intptr_t)lb->raddr, 0); +} + +static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) +{ + MemOp opc = get_memop(lb->oi); + + if (!patch_reloc(lb->label_ptr[0], R_SPARC_WDISP19, + (intptr_t)tcg_splitwx_to_rx(s->code_ptr), 0)) { + return false; + } + + tcg_out_st_helper_args(s, lb, &ldst_helper_param); + tcg_out_call(s, qemu_st_helpers[opc & MO_SIZE], NULL); + + tcg_out_bpcc0(s, COND_A, BPCC_A | BPCC_PT, 0); + return patch_reloc(s->code_ptr - 1, R_SPARC_WDISP19, + (intptr_t)lb->raddr, 0); +} + +typedef struct { + TCGReg base; + TCGReg index; +} HostAddress; + +/* + * For softmmu, perform the TLB load and compare. 
+ * For useronly, perform any required alignment tests. + * In both cases, return a TCGLabelQemuLdst structure if the slow path + * is required and fill in @h with the host address for the fast path. + */ +static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, + TCGReg addr_reg, MemOpIdx oi, + bool is_ld) +{ + TCGLabelQemuLdst *ldst = NULL; + MemOp opc = get_memop(oi); + unsigned a_bits = get_alignment_bits(opc); + unsigned s_bits = opc & MO_SIZE; + unsigned a_mask; + + /* We don't support unaligned accesses. */ + a_bits = MAX(a_bits, s_bits); + a_mask = (1u << a_bits) - 1; + +#ifdef CONFIG_SOFTMMU + int mem_index = get_mmuidx(oi); int fast_off = TLB_MASK_TABLE_OFS(mem_index); int mask_off = fast_off + offsetof(CPUTLBDescFast, mask); int table_off = fast_off + offsetof(CPUTLBDescFast, table); - const TCGReg r0 = TCG_REG_O0; - const TCGReg r1 = TCG_REG_O1; - const TCGReg r2 = TCG_REG_O2; - unsigned s_bits = opc & MO_SIZE; - unsigned a_bits = get_alignment_bits(opc); - tcg_target_long compare_mask; + int cmp_off = is_ld ? offsetof(CPUTLBEntry, addr_read) + : offsetof(CPUTLBEntry, addr_write); + int add_off = offsetof(CPUTLBEntry, addend); + int compare_mask; + int cc; /* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */ - tcg_out_ld(s, TCG_TYPE_PTR, r0, TCG_AREG0, mask_off); - tcg_out_ld(s, TCG_TYPE_PTR, r1, TCG_AREG0, table_off); + QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) > 0); + QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12)); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_T2, TCG_AREG0, mask_off); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_T3, TCG_AREG0, table_off); /* Extract the page index, shifted into place for tlb index. */ - tcg_out_arithi(s, r2, addr, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS, - SHIFT_SRL); - tcg_out_arith(s, r2, r2, r0, ARITH_AND); + tcg_out_arithi(s, TCG_REG_T1, addr_reg, + TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS, SHIFT_SRL); + tcg_out_arith(s, TCG_REG_T1, TCG_REG_T1, TCG_REG_T2, ARITH_AND); /* Add the tlb_table pointer, creating the CPUTLBEntry address into R2. */ - tcg_out_arith(s, r2, r2, r1, ARITH_ADD); + tcg_out_arith(s, TCG_REG_T1, TCG_REG_T1, TCG_REG_T3, ARITH_ADD); - /* Load the tlb comparator and the addend. */ - tcg_out_ld(s, TCG_TYPE_TL, r0, r2, which); - tcg_out_ld(s, TCG_TYPE_PTR, r1, r2, offsetof(CPUTLBEntry, addend)); + /* Load the tlb comparator and the addend. */ + tcg_out_ld(s, TCG_TYPE_TL, TCG_REG_T2, TCG_REG_T1, cmp_off); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_T1, TCG_REG_T1, add_off); + h->base = TCG_REG_T1; - /* Mask out the page offset, except for the required alignment. - We don't support unaligned accesses. */ - if (a_bits < s_bits) { - a_bits = s_bits; - } - compare_mask = (tcg_target_ulong)TARGET_PAGE_MASK | ((1 << a_bits) - 1); + /* Mask out the page offset, except for the required alignment. */ + compare_mask = TARGET_PAGE_MASK | a_mask; if (check_fit_tl(compare_mask, 13)) { - tcg_out_arithi(s, r2, addr, compare_mask, ARITH_AND); + tcg_out_arithi(s, TCG_REG_T3, addr_reg, compare_mask, ARITH_AND); } else { - tcg_out_movi(s, TCG_TYPE_TL, r2, compare_mask); - tcg_out_arith(s, r2, addr, r2, ARITH_AND); + tcg_out_movi_s32(s, TCG_REG_T3, compare_mask); + tcg_out_arith(s, TCG_REG_T3, addr_reg, TCG_REG_T3, ARITH_AND); } - tcg_out_cmp(s, r0, r2, 0); + tcg_out_cmp(s, TCG_REG_T2, TCG_REG_T3, 0); - /* If the guest address must be zero-extended, do so now. 
*/ + ldst = new_ldst_label(s); + ldst->is_ld = is_ld; + ldst->oi = oi; + ldst->addrlo_reg = addr_reg; + ldst->label_ptr[0] = s->code_ptr; + + /* bne,pn %[xi]cc, label0 */ + cc = TARGET_LONG_BITS == 64 ? BPCC_XCC : BPCC_ICC; + tcg_out_bpcc0(s, COND_NE, BPCC_PN | cc, 0); +#else + if (a_bits != s_bits) { + /* + * Test for at least natural alignment, and defer + * everything else to the helper functions. + */ + tcg_debug_assert(check_fit_tl(a_mask, 13)); + tcg_out_arithi(s, TCG_REG_G0, addr_reg, a_mask, ARITH_ANDCC); + + ldst = new_ldst_label(s); + ldst->is_ld = is_ld; + ldst->oi = oi; + ldst->addrlo_reg = addr_reg; + ldst->label_ptr[0] = s->code_ptr; + + /* bne,pn %icc, label0 */ + tcg_out_bpcc0(s, COND_NE, BPCC_PN | BPCC_ICC, 0); + } + h->base = guest_base ? TCG_GUEST_BASE_REG : TCG_REG_G0; +#endif + + /* If the guest address must be zero-extended, do in the delay slot. */ if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, r0, addr); - return r0; + tcg_out_ext32u(s, TCG_REG_T2, addr_reg); + h->index = TCG_REG_T2; + } else { + if (ldst) { + tcg_out_nop(s); + } + h->index = addr_reg; } - return addr; + return ldst; } -#endif /* CONFIG_SOFTMMU */ - -static const int qemu_ld_opc[(MO_SSIZE | MO_BSWAP) + 1] = { - [MO_UB] = LDUB, - [MO_SB] = LDSB, - [MO_UB | MO_LE] = LDUB, - [MO_SB | MO_LE] = LDSB, - - [MO_BEUW] = LDUH, - [MO_BESW] = LDSH, - [MO_BEUL] = LDUW, - [MO_BESL] = LDSW, - [MO_BEUQ] = LDX, - [MO_BESQ] = LDX, - - [MO_LEUW] = LDUH_LE, - [MO_LESW] = LDSH_LE, - [MO_LEUL] = LDUW_LE, - [MO_LESL] = LDSW_LE, - [MO_LEUQ] = LDX_LE, - [MO_LESQ] = LDX_LE, -}; - -static const int qemu_st_opc[(MO_SIZE | MO_BSWAP) + 1] = { - [MO_UB] = STB, - - [MO_BEUW] = STH, - [MO_BEUL] = STW, - [MO_BEUQ] = STX, - - [MO_LEUW] = STH_LE, - [MO_LEUL] = STW_LE, - [MO_LEUQ] = STX_LE, -}; static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr, MemOpIdx oi, TCGType data_type) { - MemOp memop = get_memop(oi); - tcg_insn_unit *label_ptr; + static const int ld_opc[(MO_SSIZE | MO_BSWAP) + 1] = { + [MO_UB] = LDUB, + [MO_SB] = LDSB, + [MO_UB | MO_LE] = LDUB, + [MO_SB | MO_LE] = LDSB, -#ifdef CONFIG_SOFTMMU - unsigned memi = get_mmuidx(oi); - TCGReg addrz; - const tcg_insn_unit *func; + [MO_BEUW] = LDUH, + [MO_BESW] = LDSH, + [MO_BEUL] = LDUW, + [MO_BESL] = LDSW, + [MO_BEUQ] = LDX, + [MO_BESQ] = LDX, - addrz = tcg_out_tlb_load(s, addr, memi, memop, - offsetof(CPUTLBEntry, addr_read)); + [MO_LEUW] = LDUH_LE, + [MO_LESW] = LDSH_LE, + [MO_LEUL] = LDUW_LE, + [MO_LESL] = LDSW_LE, + [MO_LEUQ] = LDX_LE, + [MO_LESQ] = LDX_LE, + }; - /* The fast path is exactly one insn. Thus we can perform the - entire TLB Hit in the (annulled) delay slot of the branch - over the TLB Miss case. */ + TCGLabelQemuLdst *ldst; + HostAddress h; - /* beq,a,pt %[xi]cc, label0 */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT - | (TARGET_LONG_BITS == 64 ? BPCC_XCC : BPCC_ICC), 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addrz, TCG_REG_O1, - qemu_ld_opc[memop & (MO_BSWAP | MO_SSIZE)]); + ldst = prepare_host_addr(s, &h, addr, oi, true); - /* TLB Miss. */ + tcg_out_ldst_rr(s, data, h.base, h.index, + ld_opc[get_memop(oi) & (MO_BSWAP | MO_SSIZE)]); - tcg_out_mov(s, TCG_TYPE_REG, TCG_REG_O1, addrz); - - /* We use the helpers to extend SB and SW data, leaving the case - of SL needing explicit extending below. 
*/ - if ((memop & MO_SSIZE) == MO_SL) { - func = qemu_ld_trampoline[MO_UL]; - } else { - func = qemu_ld_trampoline[memop & MO_SSIZE]; + if (ldst) { + ldst->type = data_type; + ldst->datalo_reg = data; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); } - tcg_debug_assert(func != NULL); - tcg_out_call_nodelay(s, func, false); - /* delay slot */ - tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_O2, oi); - - /* We let the helper sign-extend SB and SW, but leave SL for here. */ - if ((memop & MO_SSIZE) == MO_SL) { - tcg_out_ext32s(s, data, TCG_REG_O0); - } else { - tcg_out_mov(s, TCG_TYPE_REG, data, TCG_REG_O0); - } - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#else - TCGReg index = (guest_base ? TCG_GUEST_BASE_REG : TCG_REG_G0); - unsigned a_bits = get_alignment_bits(memop); - unsigned s_bits = memop & MO_SIZE; - unsigned t_bits; - - if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, TCG_REG_T1, addr); - addr = TCG_REG_T1; - } - - /* - * Normal case: alignment equal to access size. - */ - if (a_bits == s_bits) { - tcg_out_ldst_rr(s, data, addr, index, - qemu_ld_opc[memop & (MO_BSWAP | MO_SSIZE)]); - return; - } - - /* - * Test for at least natural alignment, and assume most accesses - * will be aligned -- perform a straight load in the delay slot. - * This is required to preserve atomicity for aligned accesses. - */ - t_bits = MAX(a_bits, s_bits); - tcg_debug_assert(t_bits < 13); - tcg_out_arithi(s, TCG_REG_G0, addr, (1u << t_bits) - 1, ARITH_ANDCC); - - /* beq,a,pt %icc, label */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT | BPCC_ICC, 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addr, index, - qemu_ld_opc[memop & (MO_BSWAP | MO_SSIZE)]); - - if (a_bits >= s_bits) { - /* - * Overalignment: A successful alignment test will perform the memory - * operation in the delay slot, and failure need only invoke the - * handler for SIGBUS. - */ - tcg_out_call_nodelay(s, qemu_unalign_ld_trampoline, false); - /* delay slot -- move to low part of argument reg */ - tcg_out_mov_delay(s, TCG_REG_O1, addr); - } else { - /* Underalignment: load by pieces of minimum alignment. */ - int ld_opc, a_size, s_size, i; - - /* - * Force full address into T1 early; avoids problems with - * overlap between @addr and @data. 
- */ - tcg_out_arith(s, TCG_REG_T1, addr, index, ARITH_ADD); - - a_size = 1 << a_bits; - s_size = 1 << s_bits; - if ((memop & MO_BSWAP) == MO_BE) { - ld_opc = qemu_ld_opc[a_bits | MO_BE | (memop & MO_SIGN)]; - tcg_out_ldst(s, data, TCG_REG_T1, 0, ld_opc); - ld_opc = qemu_ld_opc[a_bits | MO_BE]; - for (i = a_size; i < s_size; i += a_size) { - tcg_out_ldst(s, TCG_REG_T2, TCG_REG_T1, i, ld_opc); - tcg_out_arithi(s, data, data, a_size, SHIFT_SLLX); - tcg_out_arith(s, data, data, TCG_REG_T2, ARITH_OR); - } - } else if (a_bits == 0) { - ld_opc = LDUB; - tcg_out_ldst(s, data, TCG_REG_T1, 0, ld_opc); - for (i = a_size; i < s_size; i += a_size) { - if ((memop & MO_SIGN) && i == s_size - a_size) { - ld_opc = LDSB; - } - tcg_out_ldst(s, TCG_REG_T2, TCG_REG_T1, i, ld_opc); - tcg_out_arithi(s, TCG_REG_T2, TCG_REG_T2, i * 8, SHIFT_SLLX); - tcg_out_arith(s, data, data, TCG_REG_T2, ARITH_OR); - } - } else { - ld_opc = qemu_ld_opc[a_bits | MO_LE]; - tcg_out_ldst_rr(s, data, TCG_REG_T1, TCG_REG_G0, ld_opc); - for (i = a_size; i < s_size; i += a_size) { - tcg_out_arithi(s, TCG_REG_T1, TCG_REG_T1, a_size, ARITH_ADD); - if ((memop & MO_SIGN) && i == s_size - a_size) { - ld_opc = qemu_ld_opc[a_bits | MO_LE | MO_SIGN]; - } - tcg_out_ldst_rr(s, TCG_REG_T2, TCG_REG_T1, TCG_REG_G0, ld_opc); - tcg_out_arithi(s, TCG_REG_T2, TCG_REG_T2, i * 8, SHIFT_SLLX); - tcg_out_arith(s, data, data, TCG_REG_T2, ARITH_OR); - } - } - } - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#endif /* CONFIG_SOFTMMU */ } static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr, MemOpIdx oi, TCGType data_type) { - MemOp memop = get_memop(oi); - tcg_insn_unit *label_ptr; + static const int st_opc[(MO_SIZE | MO_BSWAP) + 1] = { + [MO_UB] = STB, -#ifdef CONFIG_SOFTMMU - unsigned memi = get_mmuidx(oi); - TCGReg addrz; - const tcg_insn_unit *func; + [MO_BEUW] = STH, + [MO_BEUL] = STW, + [MO_BEUQ] = STX, - addrz = tcg_out_tlb_load(s, addr, memi, memop, - offsetof(CPUTLBEntry, addr_write)); + [MO_LEUW] = STH_LE, + [MO_LEUL] = STW_LE, + [MO_LEUQ] = STX_LE, + }; - /* The fast path is exactly one insn. Thus we can perform the entire - TLB Hit in the (annulled) delay slot of the branch over TLB Miss. */ - /* beq,a,pt %[xi]cc, label0 */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT - | (TARGET_LONG_BITS == 64 ? BPCC_XCC : BPCC_ICC), 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addrz, TCG_REG_O1, - qemu_st_opc[memop & (MO_BSWAP | MO_SIZE)]); + TCGLabelQemuLdst *ldst; + HostAddress h; - /* TLB Miss. */ + ldst = prepare_host_addr(s, &h, addr, oi, false); - tcg_out_mov(s, TCG_TYPE_REG, TCG_REG_O1, addrz); - tcg_out_movext(s, (memop & MO_SIZE) == MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32, - TCG_REG_O2, data_type, memop & MO_SIZE, data); + tcg_out_ldst_rr(s, data, h.base, h.index, + st_opc[get_memop(oi) & (MO_BSWAP | MO_SIZE)]); - func = qemu_st_trampoline[memop & MO_SIZE]; - tcg_debug_assert(func != NULL); - tcg_out_call_nodelay(s, func, false); - /* delay slot */ - tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_O3, oi); - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#else - TCGReg index = (guest_base ? 
TCG_GUEST_BASE_REG : TCG_REG_G0); - unsigned a_bits = get_alignment_bits(memop); - unsigned s_bits = memop & MO_SIZE; - unsigned t_bits; - - if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, TCG_REG_T1, addr); - addr = TCG_REG_T1; + if (ldst) { + ldst->type = data_type; + ldst->datalo_reg = data; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); } - - /* - * Normal case: alignment equal to access size. - */ - if (a_bits == s_bits) { - tcg_out_ldst_rr(s, data, addr, index, - qemu_st_opc[memop & (MO_BSWAP | MO_SIZE)]); - return; - } - - /* - * Test for at least natural alignment, and assume most accesses - * will be aligned -- perform a straight store in the delay slot. - * This is required to preserve atomicity for aligned accesses. - */ - t_bits = MAX(a_bits, s_bits); - tcg_debug_assert(t_bits < 13); - tcg_out_arithi(s, TCG_REG_G0, addr, (1u << t_bits) - 1, ARITH_ANDCC); - - /* beq,a,pt %icc, label */ - label_ptr = s->code_ptr; - tcg_out_bpcc0(s, COND_E, BPCC_A | BPCC_PT | BPCC_ICC, 0); - /* delay slot */ - tcg_out_ldst_rr(s, data, addr, index, - qemu_st_opc[memop & (MO_BSWAP | MO_SIZE)]); - - if (a_bits >= s_bits) { - /* - * Overalignment: A successful alignment test will perform the memory - * operation in the delay slot, and failure need only invoke the - * handler for SIGBUS. - */ - tcg_out_call_nodelay(s, qemu_unalign_st_trampoline, false); - /* delay slot -- move to low part of argument reg */ - tcg_out_mov_delay(s, TCG_REG_O1, addr); - } else { - /* Underalignment: store by pieces of minimum alignment. */ - int st_opc, a_size, s_size, i; - - /* - * Force full address into T1 early; avoids problems with - * overlap between @addr and @data. - */ - tcg_out_arith(s, TCG_REG_T1, addr, index, ARITH_ADD); - - a_size = 1 << a_bits; - s_size = 1 << s_bits; - if ((memop & MO_BSWAP) == MO_BE) { - st_opc = qemu_st_opc[a_bits | MO_BE]; - for (i = 0; i < s_size; i += a_size) { - TCGReg d = data; - int shift = (s_size - a_size - i) * 8; - if (shift) { - d = TCG_REG_T2; - tcg_out_arithi(s, d, data, shift, SHIFT_SRLX); - } - tcg_out_ldst(s, d, TCG_REG_T1, i, st_opc); - } - } else if (a_bits == 0) { - tcg_out_ldst(s, data, TCG_REG_T1, 0, STB); - for (i = 1; i < s_size; i++) { - tcg_out_arithi(s, TCG_REG_T2, data, i * 8, SHIFT_SRLX); - tcg_out_ldst(s, TCG_REG_T2, TCG_REG_T1, i, STB); - } - } else { - /* Note that ST*A with immediate asi must use indexed address. 
*/ - st_opc = qemu_st_opc[a_bits + MO_LE]; - tcg_out_ldst_rr(s, data, TCG_REG_T1, TCG_REG_G0, st_opc); - for (i = a_size; i < s_size; i += a_size) { - tcg_out_arithi(s, TCG_REG_T2, data, i * 8, SHIFT_SRLX); - tcg_out_arithi(s, TCG_REG_T1, TCG_REG_T1, a_size, ARITH_ADD); - tcg_out_ldst_rr(s, TCG_REG_T2, TCG_REG_T1, TCG_REG_G0, st_opc); - } - } - } - - *label_ptr |= INSN_OFF19(tcg_ptr_byte_diff(s->code_ptr, label_ptr)); -#endif /* CONFIG_SOFTMMU */ } static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0) @@ -1744,6 +1499,8 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_extu_i32_i64: case INDEX_op_extrl_i64_i32: case INDEX_op_extrh_i64_i32: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: return C_O1_I1(r, r); case INDEX_op_st8_i32: @@ -1753,6 +1510,8 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_st_i32: case INDEX_op_st32_i64: case INDEX_op_st_i64: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: return C_O0_I2(rZ, r); case INDEX_op_add_i32: @@ -1802,13 +1561,6 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_muluh_i64: return C_O1_I2(r, r, r); - case INDEX_op_qemu_ld_i32: - case INDEX_op_qemu_ld_i64: - return C_O1_I1(r, s); - case INDEX_op_qemu_st_i32: - case INDEX_op_qemu_st_i64: - return C_O0_I2(sZ, s); - default: g_assert_not_reached(); } From patchwork Tue Apr 25 19:31:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676853 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2882842wrs; Tue, 25 Apr 2023 12:49:25 -0700 (PDT) X-Google-Smtp-Source: AKy350bQuKYY491Dk9IAdsr7Um4xW0RUGo8kE/QITuOpK/LzZXIFkNZ+TbnkfwIXL6LqViWFCIx4 X-Received: by 2002:a05:622a:30d:b0:3ef:3542:4469 with SMTP id q13-20020a05622a030d00b003ef35424469mr30481350qtw.46.1682452165280; Tue, 25 Apr 2023 12:49:25 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452165; cv=none; d=google.com; s=arc-20160816; b=azZqDJKpLOgWyWvYNLUO+V9rMsKYtE1dw9pDQsyd2ab19dQE42ekD30ToUGcZZZWeF aMj65Z7oeQFcuVsZd3o9u9l/8y9AuiDRtdxTBP0EOdMsoGd+C4+qU+xj/+MF3HB5AR6m sua7daU3XvsnV4zciEmBC5PUzm4zI9K3t8Iu0JftpfuilAMeEMOqUqHm4ioesItSEbUl +KJVddyW0lovN4sGokQXCF8P14v7Rd3UFrJ4miJgq2SpkNN8jN63yJbJc/dzrHm9EUVF 70zCLCOgOf0aahvyPqLT39r4ajzv645I2UDGS+uVJ0caaHBM7I0ewnAvita3jO/jv6YN 30Eg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=eK0NEMONjmg/1ucpaSNt26amgB3shibpw1tARmPQjiE=; b=Qrc5e4DBjvSWLBvnXd+Ys9FOxuYtKudXJcgW83DDMec8xQO9KbSQM68KcPBy9GphK9 9JxTWXfLt3cc8GxqOYrSDW6MPCHiZ0VBLRSPynxDOEJ9sr8zzdWKu3KnJgnVHPCK30ul HORHj4Y8vH1/pT1C6KibhCz2p1dGfN8Ay5vVwJCFQuSvmxu/x7w5Crm1BnwwedePfypX 7j1rRUNWp0QyVqlOSJCr8MJ/h136aRzWkIG37o6gwUiQJqFRCiaqW7VRndhz7rgZE/68 hYzqj9Ez3zXXsZ/VvKKrDNT+1BZAoZadsPxuhQw+vG47kWIFpHDPLwZamdmk0INvLc3v aDeg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=JAGwvaaf; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id k5-20020ac85fc5000000b003b84b93117csi9236327qta.554.2023.04.25.12.49.25 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Tue, 25 Apr 2023 12:49:25 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=JAGwvaaf; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1prOS7-0006AV-RF; Tue, 25 Apr 2023 15:35:51 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1prORh-0005Tp-JF for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:25 -0400 Received: from mail-lf1-x133.google.com ([2a00:1450:4864:20::133]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1prORP-0005Lp-Ec for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:25 -0400 Received: by mail-lf1-x133.google.com with SMTP id 2adb3069b0e04-4eed6ddcae1so26825484e87.0 for ; Tue, 25 Apr 2023 12:35:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1682451304; x=1685043304; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=eK0NEMONjmg/1ucpaSNt26amgB3shibpw1tARmPQjiE=; b=JAGwvaafapQwmMsl+A0SFwjFn3gsSFSfYOWnUb3D7UsviPVpuhCvdkNJ6UTlTfHX/8 /AQMXamibfOmKon7evVEWelZMa7kzv7WRavTmhs2nzKbLTtsbQNoxt+4TAtUrCPE3WUH Rb95VAhzQ83X+JFR6kQaBcIeTNSNPQ71hXj616XLUqs9oyUjwlelwMFC/QDEpM5C2DUr YwyjeVf4zVUbecdM7ski4KztZXetMOC+9QUKoKSuVEXqo+IEjOhF4i7bSrjkjLufPss3 m8vxBsmh+oxF3e7omK5GOXUuEnu5x+Gb+9lU/pNF5tMzFlKVNqJANLcm5lKxGud+38UX PWsQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682451304; x=1685043304; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=eK0NEMONjmg/1ucpaSNt26amgB3shibpw1tARmPQjiE=; b=RxRMCe90aWw2Bs2Xk46pZIZLTZjDBlCzMu4ijig7WbzU9gF0cdPiO+VZQfXhRqDMgp iC3pueAAe8Bil8FadhHX9sqUgjr3+RvOYewP2qmRE+3HkOPo16Qj/aHsL7DWFHgdxl8W +2GP6ubpq9LRZ9J74aFJa8Xnx6uNmGdNBrNMDpy0ejN3UTGvIegauMulaylATUQz9d3p x1msQ7ZcrIyoxY3TG45kOhTkgft+KHnkUY7QPKefArsUa43v75jtwEJpJufrXSPfc4uQ hqDww3pgwDr/LxkqXAM05AQw8l7qirbCUsrISb7Yis7PqB3Pn4MeXZVsTfyisL8N7ofV bjgQ== X-Gm-Message-State: AAQBX9c2vxpW6rcJnB/pSLMpf9uTJXY8uMFnI6+XXlBoqmYf8J2sJWBk 3KdgcEFkA6unEA7JiEzAEW6dDncNWLexeQjoh2Ns0g== X-Received: by 2002:a2e:aaa3:0:b0:2a8:ea1f:428c with SMTP id bj35-20020a2eaaa3000000b002a8ea1f428cmr4079149ljb.23.1682451303827; Tue, 25 Apr 2023 12:35:03 -0700 (PDT) Received: from stoup.. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.35.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:35:03 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 35/57] accel/tcg: Remove helper_unaligned_{ld,st} Date: Tue, 25 Apr 2023 20:31:24 +0100 Message-Id: <20230425193146.2106111-36-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::133; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x133.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org These functions are now unused. Signed-off-by: Richard Henderson --- include/tcg/tcg-ldst.h | 6 ------ accel/tcg/user-exec.c | 10 ---------- 2 files changed, 16 deletions(-) diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h index 64f48e6990..7dd57013e9 100644 --- a/include/tcg/tcg-ldst.h +++ b/include/tcg/tcg-ldst.h @@ -60,10 +60,4 @@ void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, void helper_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val, MemOpIdx oi, uintptr_t retaddr); -#ifdef CONFIG_USER_ONLY - -G_NORETURN void helper_unaligned_ld(CPUArchState *env, target_ulong addr); -G_NORETURN void helper_unaligned_st(CPUArchState *env, target_ulong addr); - -#endif /* CONFIG_USER_ONLY */ #endif /* TCG_LDST_H */ diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index 98a24fc308..b6b9112118 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -889,16 +889,6 @@ void page_reset_target_data(target_ulong start, target_ulong last) { } /* The softmmu versions of these helpers are in cputlb.c. 
*/ -void helper_unaligned_ld(CPUArchState *env, target_ulong addr) -{ - cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_LOAD, GETPC()); -} - -void helper_unaligned_st(CPUArchState *env, target_ulong addr) -{ - cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, GETPC()); -} - static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr, MemOp mop, uintptr_t ra, MMUAccessType type) { From patchwork Tue Apr 25 19:31:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676851 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2882570wrs; Tue, 25 Apr 2023 12:48:40 -0700 (PDT) X-Google-Smtp-Source: AKy350ZhrYHkGGQtI82sDa1AF2A8AXNno/49ugGbK51JYna4Wv1+dIBA+AUkH5/qFORcDghz8qOD X-Received: by 2002:a05:6214:e84:b0:5fd:7701:88c5 with SMTP id hf4-20020a0562140e8400b005fd770188c5mr34292432qvb.6.1682452120007; Tue, 25 Apr 2023 12:48:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452119; cv=none; d=google.com; s=arc-20160816; b=reM+/R81Y/ZgOLGLznvODzYnUHd2+FTQ3BjKE4eixCpKNS+kL8C/CEwwuoi24wMkVP 7k+0WtkYULtlgbKi2NNcqgwHRCNEFtFmEI1qx9WOAzzzJc/pv+tDIlcRJDKxox8811PT /tfW6CCKNygPX0PDcrsSvmMprvw/8peZgevj+lVINcjm+W61y5uOuncqTXrLq+DLBQsX +WLQbNaXVEcf6K/bjraEvwYlt7nAvP5YqHJeENKrtPVuF/cetsH8o/1d8dtKEcJuIXvC kt7xGVuJtkMuVQePIg+5QWtXfp35gdFmi7mYavXH7dxAOy2y5YrBrBUqiWXRpvmKwYwO vp/Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=g0XhypHJuKT2RLCHGyOOS8cpna/MQwPu76GFexENMHE=; b=Vn3FEVtezkyMw/FFQu8gyHnKHzA/JArwzLSgdZQSUlmFDPfwV67JvKZwcfrCfsmGoF 9aekgiIp5JxuC0UNXTb2ZG7eO5+CBdhu/mHXv3ihz80aVNzuwjdsh4z3gTcW27blxedO dybhuote65UprdW836x+YK6lQfJ02USP3B3iydPpTSnSlXVj2Ho+ezp67++p7MYSHqnR JGDRqWhzo5L1ogvQMC/2DMdPMkxaXJ/RirVDTkrZR724I0QlwFetudgCZHaXbonDC05h 0NHsvF16+pZCkjsJXZNftR8hep+HApI2hPyyCXvi7v3tYptVbOhFVUMFmGYJL6CS4OKs 4w6w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=jY9Pccdy; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id k14-20020a05620a414e00b0074e325ed7f8si8845775qko.395.2023.04.25.12.48.39 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Tue, 25 Apr 2023 12:48:39 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=jY9Pccdy; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1prOS8-0006Ew-L1; Tue, 25 Apr 2023 15:35:52 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1prORp-0005f5-HE for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:35 -0400 Received: from mail-lf1-x132.google.com ([2a00:1450:4864:20::132]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1prORQ-0005NK-Fj for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:32 -0400 Received: by mail-lf1-x132.google.com with SMTP id 2adb3069b0e04-4ec8eca56cfso6845302e87.0 for ; Tue, 25 Apr 2023 12:35:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1682451307; x=1685043307; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=g0XhypHJuKT2RLCHGyOOS8cpna/MQwPu76GFexENMHE=; b=jY9PccdyiHArO3/JJSCICJ7FGAdsOcrBmmU33RDGy7T3AeAsX3mCVdJYeUHy052XTT FXFJ1d4O7nCxSQCJFC3BMWdxW/D07gQEnomYuof4BHfOriO25C7GurU1Rjo5gi6KzoZF IhIEEdPgF823nEyzy2LX9rYKaoJMtXWGPO1so+BqeX+x56jAXG0w7Dz7KqI+wdKSupwF NtVkltM8XWCQ3bXiUdEm7UCNGNGBvOZ2OGnH8CoLNg6EnWMqXJlZmxp/hp6MVZ0HOUPv 7DutXNFNajw+SHefo8lE4zbiapraLb0hEosaF0Fw+hIc9eHWDS9cXIzGviPHkTuGqAwA vmPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682451307; x=1685043307; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=g0XhypHJuKT2RLCHGyOOS8cpna/MQwPu76GFexENMHE=; b=Y/4NRA6EAV8SB0D133MlzBXWLRsNHZ/1EmNhmsBw8vTnsmjVLE3vsWEiU6bBUtjqcQ tFXn21D4UKymd/NmcmVnikSGAe+XDZZTGq/uKT86D36VkNnRj+el9l+SAiBqVzYSmahG Ajym/cxwmCJF5EYVHvlHXYaFvvFdfKa+SrE3Cm607/QPB3fnK/lX4IULyzrShSGvDJO0 Ol5ENPrtBnQvP48EvGDPiLxypEYkEU1UBft4UOCFDpdHGFTey2CuWI76UQVuaURjxZMe XCwuITQxI3xukYMB5qH0TK4aRgFWq4y1FJDiTnec+WIdQsIAmjzWHjYdEgl0wOqB/DWH SrpQ== X-Gm-Message-State: AAQBX9cPRkRXraKYDkVqECJlqf7bThiIPd7Bik3yIwUPB+11FdUsfW3J 1LS9xYP0SI8Yd6mSrdz34ca8cCWwJZJ+6oYLkpsbiQ== X-Received: by 2002:a05:6512:374b:b0:4b6:eca8:f6ca with SMTP id a11-20020a056512374b00b004b6eca8f6camr5131335lfs.67.1682451306802; Tue, 25 Apr 2023 12:35:06 -0700 (PDT) Received: from stoup.. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.35.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:35:06 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 36/57] tcg/loongarch64: Assert the host supports unaligned accesses Date: Tue, 25 Apr 2023 20:31:25 +0100 Message-Id: <20230425193146.2106111-37-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::132; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x132.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org This should be true of all server class loongarch64. Signed-off-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index e651ec5c71..ccc13ffdb4 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -30,6 +30,7 @@ */ #include "../tcg-ldst.c.inc" +#include #ifdef CONFIG_DEBUG_TCG static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { @@ -1674,6 +1675,11 @@ static void tcg_target_qemu_prologue(TCGContext *s) static void tcg_target_init(TCGContext *s) { + unsigned long hwcap = qemu_getauxval(AT_HWCAP); + + /* All server class loongarch have UAL; only embedded do not. 
*/ + assert(hwcap & HWCAP_LOONGARCH_UAL); + tcg_target_available_regs[TCG_TYPE_I32] = ALL_GENERAL_REGS; tcg_target_available_regs[TCG_TYPE_I64] = ALL_GENERAL_REGS; From patchwork Tue Apr 25 19:31:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676867 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2884009wrs; Tue, 25 Apr 2023 12:52:39 -0700 (PDT) X-Google-Smtp-Source: AKy350YrAUD+neI4goYLEh3LnEpmNAsVNKB/idS0/orZ7wYl/SSbkGvMUa+AHkH96Eh4CUr4TN6o X-Received: by 2002:ac8:5a54:0:b0:3e4:eb1c:479b with SMTP id o20-20020ac85a54000000b003e4eb1c479bmr29421584qta.44.1682452359264; Tue, 25 Apr 2023 12:52:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452359; cv=none; d=google.com; s=arc-20160816; b=cvQ8liIXfv5PVv9F5eqohc7fdP8zKWN1DXw6UdfzbBgJMss2uSrz9uyz7DCVS2TT4T 8ADIQVnBXs5rmPZISETqFQkG5eGSAVzevsL7opBXoR2JCxaIiNX7GoqgJiqRyOgbS8lF XLWMKxVLCgWj+aog0C088wFpwbYuEu6tW3eKeo++Vw1FjHfuDrH2hqPLBaqLIZAAKBBo /90AzpwTW5gDB0ZBnoUwB5OO0pR5P5pA74pHb7ts4jp5AhxBR0oegr1FejDnroi67SV3 Anr9wDvUovx2AN5o0mHKZLwvf/77m/p90ta/usQrfe9Y3QMNmK5Ph4iJnJJI4m9oaGSu FcXA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=KTISKKo85DNTfZp4wzvl7urwXJJu/e57aqD8BfD1Ntg=; b=NeqYl6x//r6cSLRRa/tc4iSrdhf45IXaCCvQCoWMqq3cS2yS7/l4Vzb4NvZ5C334gK sfslPXHIGHVZUMnkBe8+sNGobkG3d0e6ws3njo41faKTZD0shZV4U0E4tdztc0qxWHas NI7n042ktEl0AchL8TkSEB15odhqKML8cBeW7Bjyzs0sZtbs4ZZ13j6wdYhQC8i8g6NA XzaL1vABiy3RBD+Dak9MJU4k20ZPjEzwCk8rO/+S9h6fTC+z6dItVixJbzB3dZ7CJTb/ 32s8hvojxLCucz+2ISbpx0W2ROjZzpCChm5VpfP921hoO8VCCU6JkF4kX+klNex8+FHg +9sA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=msNiZHsQ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id bm17-20020a05620a199100b00745f6c539fbsi9116789qkb.705.2023.04.25.12.52.39 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Tue, 25 Apr 2023 12:52:39 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=msNiZHsQ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1prOSA-0006In-5c; Tue, 25 Apr 2023 15:35:54 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1prORt-0005jf-MU for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:39 -0400 Received: from mail-lj1-x22d.google.com ([2a00:1450:4864:20::22d]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1prORZ-0005Qb-GY for qemu-devel@nongnu.org; Tue, 25 Apr 2023 15:35:35 -0400 Received: by mail-lj1-x22d.google.com with SMTP id 38308e7fff4ca-2a7ac8a2c8bso59541031fa.3 for ; Tue, 25 Apr 2023 12:35:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1682451316; x=1685043316; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KTISKKo85DNTfZp4wzvl7urwXJJu/e57aqD8BfD1Ntg=; b=msNiZHsQwkL3+7+U2Iq/8mJaLMFCrUEZnfnUJzrNxTtSiS0xbW8K6VYvLs8xR0VuRF 93grbI7VA7sl2qJogZJ+lJMzM+Kii067MOTWpRexUY+3uhQGaB28ScEIc/YOBr1m7qMt ODOPYAAyi4zWfjBfH9B1e0VOZymORCphdd6TndMXf2zZy9aZDMYFYS0ivSUMxmwNhMAu Z7Tzx2N8uT4R57krGzya8eGs3LduQnGtKmwGmyGhtI2kLpryRJ9rCRx7lZHj7TCVEWn5 sbIDqCTvAV4XmEYVN+ny5xjcet6G6p1VFlf5Z8ARKLx+x4b3Zq1YjFKE/Yo+hN7IvDIY O4oQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682451316; x=1685043316; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KTISKKo85DNTfZp4wzvl7urwXJJu/e57aqD8BfD1Ntg=; b=IcEoX3F8JMVckJEr3e1e12Xu9iwXKkbuaok4ipLl1ndHj3j4RAG+3RQsXTXD+XPfC7 p0ugAVJtTQZrMzZ1S2iUUlFyDENX/COdN9uhmuKC2GBsM7ee6bPafWzDvatCUtBMxHxi PTHB2zHI6518ICpxvjUvuz4yc1fZZOcfwJulrxyVebrIeZ2TpP5H/i9DvtuRDpOZJfBW NZbn5jel0OAVdd8A6VLszKJzHR1M8iiTMcnh2skgOdIFhk0Zfb/w3RJIVt1oJrH7ML5y /xZ1i9vCtTTB+YVqDmlz+8xHThVtzYRBTiTfpNEdvwNu+dU8x+ZB3zyHRxHFIndvZOF8 CKuA== X-Gm-Message-State: AAQBX9f052fnRatByK+IeNpyGIC3sri6sQ20wesiHqsvWTq8v432z8YF MhBiZA3gJGbLunxcR8S4v+D2kojs70pO84HM2BpDqA== X-Received: by 2002:a2e:3216:0:b0:2a7:7ff7:9215 with SMTP id y22-20020a2e3216000000b002a77ff79215mr3931707ljy.10.1682451315875; Tue, 25 Apr 2023 12:35:15 -0700 (PDT) Received: from stoup.. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.35.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:35:15 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 37/57] tcg/loongarch64: Support softmmu unaligned accesses Date: Tue, 25 Apr 2023 20:31:26 +0100 Message-Id: <20230425193146.2106111-38-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::22d; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x22d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Test the final byte of an unaligned access. Use BSTRINS.D to clear the range of bits, rather than AND. Signed-off-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index ccc13ffdb4..20cb21b264 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -848,7 +848,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, int fast_ofs = TLB_MASK_TABLE_OFS(mem_index); int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask); int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table); - tcg_target_long compare_mask; ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -872,14 +871,20 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP2, offsetof(CPUTLBEntry, addend)); - /* We don't support unaligned accesses. */ + /* + * For aligned accesses, we check the first byte and include the alignment + * bits within the address. For unaligned access, we check that we don't + * cross pages using the address of the last byte of the access. + */ if (a_bits < s_bits) { - a_bits = s_bits; + unsigned a_mask = (1u << a_bits) - 1; + unsigned s_mask = (1u << s_bits) - 1; + tcg_out_addi(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg, s_mask - a_mask); + } else { + tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg); } - /* Clear the non-page, non-alignment bits from the address. */ - compare_mask = (tcg_target_long)TARGET_PAGE_MASK | ((1 << a_bits) - 1); - tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask); - tcg_out_opc_and(s, TCG_REG_TMP1, TCG_REG_TMP1, addr_reg); + tcg_out_opc_bstrins_d(s, TCG_REG_TMP1, TCG_REG_ZERO, + a_bits, TARGET_PAGE_BITS - 1); /* Compare masked address with the TLB entry. 
*/ ldst->label_ptr[0] = s->code_ptr; From patchwork Tue Apr 25 19:31:27 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676834 Received: from stoup..
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.35.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:35:18 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 38/57] tcg/riscv: Support softmmu unaligned accesses Date: Tue, 25 Apr 2023 20:31:27 +0100 Message-Id: <20230425193146.2106111-39-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::132; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x132.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org The system is required to emulate unaligned accesses, even if the hardware does not support it. The resulting trap may or may not be more efficient than the qemu slow path. There are linux kernel patches in flight to allow userspace to query hardware support; we can re-evaluate whether to enable this by default after that. In the meantime, softmmu now matches useronly, where we already assumed that unaligned accesses are supported. 
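In outline, this is the same check already used for LoongArch64 earlier in the series: bias the address towards the last byte of the access, then compare the page number together with the alignment bits against the TLB entry. The self-contained helper below is only a sketch of that computation; the function and parameter names are illustrative and not taken from the patch.

#include <stdint.h>

/*
 * Sketch of the value compared against the TLB entry.
 * page_bits, a_bits and s_bits are log2 of the page size, the required
 * alignment and the access size; all names here are illustrative.
 */
static uint64_t tlb_compare_value(uint64_t addr, unsigned page_bits,
                                  unsigned a_bits, unsigned s_bits)
{
    uint64_t a_mask = (UINT64_C(1) << a_bits) - 1;
    uint64_t s_mask = (UINT64_C(1) << s_bits) - 1;
    uint64_t page_mask = ~((UINT64_C(1) << page_bits) - 1);

    if (a_bits < s_bits) {
        /*
         * Unaligned accesses are allowed: bias the address towards the
         * last byte of the access so that crossing into the next page
         * changes the page number.  The low a_bits are left intact for
         * the alignment check below.
         */
        addr += s_mask - a_mask;
    }
    /*
     * Keep the page number plus the alignment bits.  The TLB entry holds
     * the page address with the low bits clear, so a misaligned or
     * page-crossing access fails the comparison and takes the slow path.
     */
    return addr & (page_mask | a_mask);
}

On RISC-V the final masking is an ANDI (or a movi plus AND when the mask does not fit in 12 bits); the LoongArch64 patch clears bits [a_bits, TARGET_PAGE_BITS - 1] in one go with BSTRINS.D against the zero register.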
Signed-off-by: Richard Henderson --- tcg/riscv/tcg-target.c.inc | 48 ++++++++++++++++++++++---------------- 1 file changed, 28 insertions(+), 20 deletions(-) diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 8522561a28..3e4c91cce7 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -924,12 +924,13 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; + unsigned s_mask = (1u << s_bits) - 1; int mem_index = get_mmuidx(oi); int fast_ofs = TLB_MASK_TABLE_OFS(mem_index); int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask); int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table); - TCGReg mask_base = TCG_AREG0, table_base = TCG_AREG0; - tcg_target_long compare_mask; + int compare_mask; + TCGReg addr_adj; ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -938,14 +939,33 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) > 0); QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 11)); - tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, mask_base, mask_ofs); - tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, table_base, table_ofs); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, TCG_AREG0, mask_ofs); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, table_ofs); tcg_out_opc_imm(s, OPC_SRLI, TCG_REG_TMP2, addr_reg, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS); tcg_out_opc_reg(s, OPC_AND, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP0); tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP1); + /* + * For aligned accesses, we check the first byte and include the alignment + * bits within the address. For unaligned access, we check that we don't + * cross pages using the address of the last byte of the access. + */ + addr_adj = addr_reg; + if (a_bits < s_bits) { + addr_adj = TCG_REG_TMP0; + tcg_out_opc_imm(s, TARGET_LONG_BITS == 32 ? OPC_ADDIW : OPC_ADDI, + addr_adj, addr_reg, s_mask - a_mask); + } + compare_mask = TARGET_PAGE_MASK | a_mask; + if (compare_mask == sextreg(compare_mask, 0, 12)) { + tcg_out_opc_imm(s, OPC_ANDI, TCG_REG_TMP1, addr_adj, compare_mask); + } else { + tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask); + tcg_out_opc_reg(s, OPC_AND, TCG_REG_TMP1, TCG_REG_TMP1, addr_adj); + } + /* Load the tlb comparator and the addend. */ tcg_out_ld(s, TCG_TYPE_TL, TCG_REG_TMP0, TCG_REG_TMP2, is_ld ? offsetof(CPUTLBEntry, addr_read) @@ -953,29 +973,17 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP2, offsetof(CPUTLBEntry, addend)); - /* We don't support unaligned accesses. */ - if (a_bits < s_bits) { - a_bits = s_bits; - } - /* Clear the non-page, non-alignment bits from the address. */ - compare_mask = (tcg_target_long)TARGET_PAGE_MASK | a_mask; - if (compare_mask == sextreg(compare_mask, 0, 12)) { - tcg_out_opc_imm(s, OPC_ANDI, TCG_REG_TMP1, addr_reg, compare_mask); - } else { - tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask); - tcg_out_opc_reg(s, OPC_AND, TCG_REG_TMP1, TCG_REG_TMP1, addr_reg); - } - /* Compare masked address with the TLB entry. */ ldst->label_ptr[0] = s->code_ptr; tcg_out_opc_branch(s, OPC_BNE, TCG_REG_TMP0, TCG_REG_TMP1, 0); /* TLB Hit - translate address using addend. 
*/ + addr_adj = addr_reg; if (TARGET_LONG_BITS == 32) { - tcg_out_ext32u(s, TCG_REG_TMP0, addr_reg); - addr_reg = TCG_REG_TMP0; + addr_adj = TCG_REG_TMP0; + tcg_out_ext32u(s, addr_adj, addr_reg); } - tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP0, TCG_REG_TMP2, addr_reg); + tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP0, TCG_REG_TMP2, addr_adj); *pbase = TCG_REG_TMP0; #else if (a_mask) { From patchwork Tue Apr 25 19:31:28 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676840 Received: from stoup..
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id z23-20020a2e8857000000b002a8c271de33sm2160484ljj.67.2023.04.25.12.35.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:35:21 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 39/57] tcg: Introduce tcg_target_has_memory_bswap Date: Tue, 25 Apr 2023 20:31:28 +0100 Message-Id: <20230425193146.2106111-40-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::233; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x233.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro with a function with a memop argument. Signed-off-by: Richard Henderson --- tcg/aarch64/tcg-target.h | 1 - tcg/arm/tcg-target.h | 1 - tcg/i386/tcg-target.h | 3 --- tcg/loongarch64/tcg-target.h | 2 -- tcg/mips/tcg-target.h | 2 -- tcg/ppc/tcg-target.h | 1 - tcg/riscv/tcg-target.h | 2 -- tcg/s390x/tcg-target.h | 2 -- tcg/sparc64/tcg-target.h | 1 - tcg/tcg-internal.h | 2 ++ tcg/tci/tcg-target.h | 2 -- tcg/tcg-op.c | 20 +++++++++++--------- tcg/aarch64/tcg-target.c.inc | 5 +++++ tcg/arm/tcg-target.c.inc | 5 +++++ tcg/i386/tcg-target.c.inc | 5 +++++ tcg/loongarch64/tcg-target.c.inc | 5 +++++ tcg/mips/tcg-target.c.inc | 5 +++++ tcg/ppc/tcg-target.c.inc | 5 +++++ tcg/riscv/tcg-target.c.inc | 5 +++++ tcg/s390x/tcg-target.c.inc | 5 +++++ tcg/sparc64/tcg-target.c.inc | 5 +++++ tcg/tci/tcg-target.c.inc | 5 +++++ 22 files changed, 63 insertions(+), 26 deletions(-) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index 3c0b0d312d..378e01d9d8 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -154,7 +154,6 @@ extern bool have_lse2; #define TCG_TARGET_HAS_cmpsel_vec 0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h index def2a189e6..4c2d3332d5 100644 --- a/tcg/arm/tcg-target.h +++ b/tcg/arm/tcg-target.h @@ -150,7 +150,6 @@ extern bool use_neon_instructions; #define TCG_TARGET_HAS_cmpsel_vec 0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index 0421776cb8..8fe6958abd 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -240,9 +240,6 @@ extern bool have_atomic16; #include "tcg/tcg-mo.h" #define TCG_TARGET_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD) - -#define TCG_TARGET_HAS_MEMORY_BSWAP have_movbe - 
#define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 17b8193aa5..75c3d80ed2 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -173,6 +173,4 @@ typedef enum { #define TCG_TARGET_NEED_LDST_LABELS -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 - #endif /* LOONGARCH_TCG_TARGET_H */ diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h index 42bd7fff01..47088af9cb 100644 --- a/tcg/mips/tcg-target.h +++ b/tcg/mips/tcg-target.h @@ -205,8 +205,6 @@ extern bool use_mips32r2_instructions; #endif #define TCG_TARGET_DEFAULT_MO 0 -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 - #define TCG_TARGET_NEED_LDST_LABELS #endif diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h index af81c5a57f..d55f0266bb 100644 --- a/tcg/ppc/tcg-target.h +++ b/tcg/ppc/tcg-target.h @@ -179,7 +179,6 @@ extern bool have_vsx; #define TCG_TARGET_HAS_cmpsel_vec 0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/riscv/tcg-target.h b/tcg/riscv/tcg-target.h index dddf2486c1..dece3b3c27 100644 --- a/tcg/riscv/tcg-target.h +++ b/tcg/riscv/tcg-target.h @@ -168,6 +168,4 @@ typedef enum { #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS -#define TCG_TARGET_HAS_MEMORY_BSWAP 0 - #endif diff --git a/tcg/s390x/tcg-target.h b/tcg/s390x/tcg-target.h index a05b473117..fe05680124 100644 --- a/tcg/s390x/tcg-target.h +++ b/tcg/s390x/tcg-target.h @@ -172,8 +172,6 @@ extern uint64_t s390_facilities[3]; #define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_BY_REF #define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_BY_REF -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 - #define TCG_TARGET_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD) #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/sparc64/tcg-target.h b/tcg/sparc64/tcg-target.h index 7434cc99d4..f6cd86975a 100644 --- a/tcg/sparc64/tcg-target.h +++ b/tcg/sparc64/tcg-target.h @@ -154,7 +154,6 @@ extern bool use_vis3_instructions; #define TCG_AREG0 TCG_REG_I0 #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 #define TCG_TARGET_NEED_LDST_LABELS #define TCG_TARGET_NEED_POOL_LABELS diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h index 0f1ba01a9a..67b698bd5c 100644 --- a/tcg/tcg-internal.h +++ b/tcg/tcg-internal.h @@ -126,4 +126,6 @@ static inline TCGv_i64 TCGV128_HIGH(TCGv_i128 t) return temp_tcgv_i64(tcgv_i128_temp(t) + o); } +bool tcg_target_has_memory_bswap(MemOp memop); + #endif /* TCG_INTERNAL_H */ diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h index 7140a76a73..364012e4d2 100644 --- a/tcg/tci/tcg-target.h +++ b/tcg/tci/tcg-target.h @@ -176,6 +176,4 @@ typedef enum { We prefer consistency across hosts on this. */ #define TCG_TARGET_DEFAULT_MO (0) -#define TCG_TARGET_HAS_MEMORY_BSWAP 1 - #endif /* TCG_TARGET_H */ diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 9101d334b6..85f22458c9 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -2959,7 +2959,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop) oi = make_memop_idx(memop, idx); orig_memop = memop; - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { memop &= ~MO_BSWAP; /* The bswap primitive benefits from zero-extended input. 
*/ if ((memop & MO_SSIZE) == MO_SW) { @@ -2996,7 +2996,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop) memop = tcg_canonicalize_memop(memop, 0, 1); oi = make_memop_idx(memop, idx); - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { swap = tcg_temp_ebb_new_i32(); switch (memop & MO_SIZE) { case MO_16: @@ -3045,7 +3045,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop) oi = make_memop_idx(memop, idx); orig_memop = memop; - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { memop &= ~MO_BSWAP; /* The bswap primitive benefits from zero-extended input. */ if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_64) { @@ -3091,7 +3091,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop) memop = tcg_canonicalize_memop(memop, 1, 1); oi = make_memop_idx(memop, idx); - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { swap = tcg_temp_ebb_new_i64(); switch (memop & MO_SIZE) { case MO_16: @@ -3168,11 +3168,6 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) tcg_debug_assert((orig & MO_SIZE) == MO_128); tcg_debug_assert((orig & MO_SIGN) == 0); - /* Use a memory ordering implemented by the host. */ - if (!TCG_TARGET_HAS_MEMORY_BSWAP && (orig & MO_BSWAP)) { - mop_1 &= ~MO_BSWAP; - } - /* Reduce the size to 64-bit. */ mop_1 = (mop_1 & ~MO_SIZE) | MO_64; @@ -3202,6 +3197,13 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) default: g_assert_not_reached(); } + + /* Use a memory ordering implemented by the host. */ + if ((orig & MO_BSWAP) && !tcg_target_has_memory_bswap(mop_1)) { + mop_1 &= ~MO_BSWAP; + mop_2 &= ~MO_BSWAP; + } + ret[0] = mop_1; ret[1] = mop_2; } diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 09c9ecad0f..8e5f3d3688 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1595,6 +1595,11 @@ typedef struct { TCGType index_ext; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 1, .tmp = { TCG_REG_TMP } }; diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index eb0542f32e..e5aed03247 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1325,6 +1325,11 @@ typedef struct { bool index_scratch; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg) { /* We arrive at the slow path via "BLNE", so R14 contains l->raddr. 
*/ diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 32ef9ad4e4..8c0902844a 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -1778,6 +1778,11 @@ typedef struct { int seg; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return have_movbe; +} + /* * Because i686 has no register parameters and because x86_64 has xchg * to handle addr/data register overlap, we have placed all input arguments diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 20cb21b264..62bf823084 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -828,6 +828,11 @@ typedef struct { TCGReg index; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index fa0f334e8d..cd0254a0d7 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1141,6 +1141,11 @@ typedef struct { MemOp align; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 94a9f70e17..c799d7c52a 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2020,6 +2020,11 @@ typedef struct { TCGReg index; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return true; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 3e4c91cce7..5193998865 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -867,6 +867,11 @@ static void tcg_out_goto(TCGContext *s, const tcg_insn_unit *target) tcg_debug_assert(ok); } +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + /* We have three temps, we might as well expose them. */ static const TCGLdstHelperParam ldst_helper_param = { .ntmp = 3, .tmp = { TCG_REG_TMP0, TCG_REG_TMP1, TCG_REG_TMP2 } diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index de8aed5f77..22f0206b5a 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -1574,6 +1574,11 @@ typedef struct { int disp; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return true; +} + static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data, HostAddress h) { diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 0237188d65..bb23038529 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -1011,6 +1011,11 @@ typedef struct { TCGReg index; } HostAddress; +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return true; +} + /* * For softmmu, perform the TLB load and compare. * For useronly, perform any required alignment tests. 
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc index e31640d109..89f693050c 100644 --- a/tcg/tci/tcg-target.c.inc +++ b/tcg/tci/tcg-target.c.inc @@ -964,3 +964,8 @@ static void tcg_target_init(TCGContext *s) static inline void tcg_target_qemu_prologue(TCGContext *s) { } + +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return true; +} From patchwork Tue Apr 25 19:31:29 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676862 Received: from stoup..
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.38.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:38:55 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 40/57] tcg: Add INDEX_op_qemu_{ld,st}_i128 Date: Tue, 25 Apr 2023 20:31:29 +0100 Message-Id: <20230425193146.2106111-41-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::131; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x131.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Add opcodes for backend support for 128-bit memory operations. Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Richard Henderson --- include/tcg/tcg-opc.h | 8 +++++ tcg/aarch64/tcg-target.h | 2 ++ tcg/arm/tcg-target.h | 2 ++ tcg/i386/tcg-target.h | 2 ++ tcg/loongarch64/tcg-target.h | 1 + tcg/mips/tcg-target.h | 2 ++ tcg/ppc/tcg-target.h | 2 ++ tcg/riscv/tcg-target.h | 2 ++ tcg/s390x/tcg-target.h | 2 ++ tcg/sparc64/tcg-target.h | 2 ++ tcg/tci/tcg-target.h | 2 ++ tcg/tcg-op.c | 69 ++++++++++++++++++++++++++++++++---- tcg/tcg.c | 4 +++ docs/devel/tcg-ops.rst | 11 +++--- 14 files changed, 101 insertions(+), 10 deletions(-) diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h index dd444734d9..94cf7c5d6a 100644 --- a/include/tcg/tcg-opc.h +++ b/include/tcg/tcg-opc.h @@ -213,6 +213,14 @@ DEF(qemu_st8_i32, 0, TLADDR_ARGS + 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS | IMPL(TCG_TARGET_HAS_qemu_st8_i32)) +/* Only for 64-bit hosts at the moment. */ +DEF(qemu_ld_i128, 2, 1, 1, + TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT | + IMPL(TCG_TARGET_HAS_qemu_ldst_i128)) +DEF(qemu_st_i128, 0, 3, 1, + TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT | + IMPL(TCG_TARGET_HAS_qemu_ldst_i128)) + /* Host vector support. 
*/ #define IMPLVEC TCG_OPF_VECTOR | IMPL(TCG_TARGET_MAYBE_vec) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index 378e01d9d8..74ee2ed255 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -129,6 +129,8 @@ extern bool have_lse2; #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_HAS_v64 1 #define TCG_TARGET_HAS_v128 1 #define TCG_TARGET_HAS_v256 0 diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h index 4c2d3332d5..65efc538f4 100644 --- a/tcg/arm/tcg-target.h +++ b/tcg/arm/tcg-target.h @@ -125,6 +125,8 @@ extern bool use_neon_instructions; #define TCG_TARGET_HAS_rem_i32 0 #define TCG_TARGET_HAS_qemu_st8_i32 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_HAS_v64 use_neon_instructions #define TCG_TARGET_HAS_v128 use_neon_instructions #define TCG_TARGET_HAS_v256 0 diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index 8fe6958abd..943af6775e 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -194,6 +194,8 @@ extern bool have_atomic16; #define TCG_TARGET_HAS_qemu_st8_i32 1 #endif +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + /* We do not support older SSE systems, only beginning with AVX1. */ #define TCG_TARGET_HAS_v64 have_avx1 #define TCG_TARGET_HAS_v128 have_avx1 diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 75c3d80ed2..482901ac15 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -168,6 +168,7 @@ typedef enum { #define TCG_TARGET_HAS_muls2_i64 0 #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 #define TCG_TARGET_DEFAULT_MO (0) diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h index 47088af9cb..7277a117ef 100644 --- a/tcg/mips/tcg-target.h +++ b/tcg/mips/tcg-target.h @@ -204,6 +204,8 @@ extern bool use_mips32r2_instructions; #define TCG_TARGET_HAS_ext16u_i64 0 /* andi rt, rs, 0xffff */ #endif +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_DEFAULT_MO 0 #define TCG_TARGET_NEED_LDST_LABELS diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h index d55f0266bb..0914380bd7 100644 --- a/tcg/ppc/tcg-target.h +++ b/tcg/ppc/tcg-target.h @@ -149,6 +149,8 @@ extern bool have_vsx; #define TCG_TARGET_HAS_mulsh_i64 1 #endif +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + /* * While technically Altivec could support V64, it has no 64-bit store * instruction and substituting two 32-bit stores makes the generated diff --git a/tcg/riscv/tcg-target.h b/tcg/riscv/tcg-target.h index dece3b3c27..494c986b49 100644 --- a/tcg/riscv/tcg-target.h +++ b/tcg/riscv/tcg-target.h @@ -163,6 +163,8 @@ typedef enum { #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_DEFAULT_MO (0) #define TCG_TARGET_NEED_LDST_LABELS diff --git a/tcg/s390x/tcg-target.h b/tcg/s390x/tcg-target.h index fe05680124..170007bea5 100644 --- a/tcg/s390x/tcg-target.h +++ b/tcg/s390x/tcg-target.h @@ -140,6 +140,8 @@ extern uint64_t s390_facilities[3]; #define TCG_TARGET_HAS_muluh_i64 0 #define TCG_TARGET_HAS_mulsh_i64 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_TARGET_HAS_v64 HAVE_FACILITY(VECTOR) #define TCG_TARGET_HAS_v128 HAVE_FACILITY(VECTOR) #define TCG_TARGET_HAS_v256 0 diff --git a/tcg/sparc64/tcg-target.h b/tcg/sparc64/tcg-target.h index f6cd86975a..31c5537379 100644 --- a/tcg/sparc64/tcg-target.h +++ b/tcg/sparc64/tcg-target.h @@ -151,6 +151,8 @@ 
extern bool use_vis3_instructions; #define TCG_TARGET_HAS_muluh_i64 use_vis3_instructions #define TCG_TARGET_HAS_mulsh_i64 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + #define TCG_AREG0 TCG_REG_I0 #define TCG_TARGET_DEFAULT_MO (0) diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h index 364012e4d2..28dc6d5cfc 100644 --- a/tcg/tci/tcg-target.h +++ b/tcg/tci/tcg-target.h @@ -127,6 +127,8 @@ #define TCG_TARGET_HAS_mulu2_i32 1 #endif /* TCG_TARGET_REG_BITS == 64 */ +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + /* Number of registers available. */ #define TCG_TARGET_NB_REGS 16 diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 85f22458c9..06d3181fd0 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -3216,7 +3216,7 @@ static void canonicalize_memop_i128_as_i64(MemOp ret[2], MemOp orig) void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOpIdx oi = make_memop_idx(memop, idx); + const MemOpIdx oi = make_memop_idx(memop, idx); tcg_debug_assert((memop & MO_SIZE) == MO_128); tcg_debug_assert((memop & MO_SIGN) == 0); @@ -3224,9 +3224,36 @@ void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); addr = plugin_prep_mem_callbacks(addr); - /* TODO: allow the tcg backend to see the whole operation. */ + /* TODO: For now, force 32-bit hosts to use the helper. */ + if (TCG_TARGET_HAS_qemu_ldst_i128 && TCG_TARGET_REG_BITS == 64) { + TCGv_i64 lo, hi; + TCGArg addr_arg; + MemOpIdx adj_oi; + bool need_bswap = false; - if (use_two_i64_for_i128(memop)) { + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { + lo = TCGV128_HIGH(val); + hi = TCGV128_LOW(val); + adj_oi = make_memop_idx(memop & ~MO_BSWAP, idx); + need_bswap = true; + } else { + lo = TCGV128_LOW(val); + hi = TCGV128_HIGH(val); + adj_oi = oi; + } + +#if TARGET_LONG_BITS == 32 + addr_arg = tcgv_i32_arg(addr); +#else + addr_arg = tcgv_i64_arg(addr); +#endif + tcg_gen_op4ii_i64(INDEX_op_qemu_ld_i128, lo, hi, addr_arg, adj_oi); + + if (need_bswap) { + tcg_gen_bswap64_i64(lo, lo); + tcg_gen_bswap64_i64(hi, hi); + } + } else if (use_two_i64_for_i128(memop)) { MemOp mop[2]; TCGv addr_p8; TCGv_i64 x, y; @@ -3269,7 +3296,7 @@ void tcg_gen_qemu_ld_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) { - MemOpIdx oi = make_memop_idx(memop, idx); + const MemOpIdx oi = make_memop_idx(memop, idx); tcg_debug_assert((memop & MO_SIZE) == MO_128); tcg_debug_assert((memop & MO_SIGN) == 0); @@ -3277,9 +3304,39 @@ void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop) tcg_gen_req_mo(TCG_MO_ST_LD | TCG_MO_ST_ST); addr = plugin_prep_mem_callbacks(addr); - /* TODO: allow the tcg backend to see the whole operation. */ + /* TODO: For now, force 32-bit hosts to use the helper. 
*/ - if (use_two_i64_for_i128(memop)) { + if (TCG_TARGET_HAS_qemu_ldst_i128 && TCG_TARGET_REG_BITS == 64) { + TCGv_i64 lo, hi; + TCGArg addr_arg; + MemOpIdx adj_oi; + bool need_bswap = false; + + if ((memop & MO_BSWAP) && !tcg_target_has_memory_bswap(memop)) { + lo = tcg_temp_new_i64(); + hi = tcg_temp_new_i64(); + tcg_gen_bswap64_i64(lo, TCGV128_HIGH(val)); + tcg_gen_bswap64_i64(hi, TCGV128_LOW(val)); + adj_oi = make_memop_idx(memop & ~MO_BSWAP, idx); + need_bswap = true; + } else { + lo = TCGV128_LOW(val); + hi = TCGV128_HIGH(val); + adj_oi = oi; + } + +#if TARGET_LONG_BITS == 32 + addr_arg = tcgv_i32_arg(addr); +#else + addr_arg = tcgv_i64_arg(addr); +#endif + tcg_gen_op4ii_i64(INDEX_op_qemu_st_i128, lo, hi, addr_arg, adj_oi); + + if (need_bswap) { + tcg_temp_free_i64(lo); + tcg_temp_free_i64(hi); + } + } else if (use_two_i64_for_i128(memop)) { MemOp mop[2]; TCGv addr_p8; TCGv_i64 x, y; diff --git a/tcg/tcg.c b/tcg/tcg.c index 8216855810..07a9489b71 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -1718,6 +1718,10 @@ bool tcg_op_supported(TCGOpcode op) case INDEX_op_qemu_st8_i32: return TCG_TARGET_HAS_qemu_st8_i32; + case INDEX_op_qemu_ld_i128: + case INDEX_op_qemu_st_i128: + return TCG_TARGET_HAS_qemu_ldst_i128; + case INDEX_op_mov_i32: case INDEX_op_setcond_i32: case INDEX_op_brcond_i32: diff --git a/docs/devel/tcg-ops.rst b/docs/devel/tcg-ops.rst index f3f451b77f..6a166c5665 100644 --- a/docs/devel/tcg-ops.rst +++ b/docs/devel/tcg-ops.rst @@ -672,19 +672,20 @@ QEMU specific operations | This operation is optional. If the TCG backend does not implement the goto_ptr opcode, emitting this op is equivalent to emitting exit_tb(0). - * - qemu_ld_i32/i64 *t0*, *t1*, *flags*, *memidx* + * - qemu_ld_i32/i64/i128 *t0*, *t1*, *flags*, *memidx* - qemu_st_i32/i64 *t0*, *t1*, *flags*, *memidx* + qemu_st_i32/i64/i128 *t0*, *t1*, *flags*, *memidx* qemu_st8_i32 *t0*, *t1*, *flags*, *memidx* - | Load data at the guest address *t1* into *t0*, or store data in *t0* at guest - address *t1*. The _i32/_i64 size applies to the size of the input/output + address *t1*. The _i32/_i64/_i128 size applies to the size of the input/output register *t0* only. The address *t1* is always sized according to the guest, and the width of the memory operation is controlled by *flags*. | | Both *t0* and *t1* may be split into little-endian ordered pairs of registers - if dealing with 64-bit quantities on a 32-bit host. + if dealing with 64-bit quantities on a 32-bit host, or 128-bit quantities on + a 64-bit host. | | The *memidx* selects the qemu tlb index to use (e.g. user or kernel access). The flags are the MemOp bits, selecting the sign, width, and endianness @@ -693,6 +694,8 @@ QEMU specific operations | For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a 64-bit memory access specified in *flags*. | + | For qemu_ld/st_i128, these are only supported for a 64-bit host. + | | For i386, qemu_st8_i32 is exactly like qemu_st_i32, except the size of the memory operation is known to be 8-bit. This allows the backend to provide a different set of register constraints. 
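Together with tcg_target_has_memory_bswap() from the previous patch, the expansion above ensures a backend only ever sees a byte order it supports: when MO_BSWAP cannot be handled natively, tcg-op.c binds lo/hi to the opposite halves of the TCGv_i128, emits the opcode without MO_BSWAP, and adds the two 64-bit byte swaps around it. The model below sketches the intended result of qemu_ld_i128 as seen on a little-endian host; it is an illustration, not QEMU code.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* The two 64-bit outputs of the opcode; lo is the least-significant half. */
typedef struct {
    uint64_t lo;
    uint64_t hi;
} I128Parts;

/*
 * Model of INDEX_op_qemu_ld_i128 for a little-endian host.  'mem' stands
 * for the host address after TLB translation; 'big_endian' reflects the
 * MO_BSWAP part of the MemOpIdx.
 */
static I128Parts model_qemu_ld_i128(const uint8_t *mem, bool big_endian)
{
    uint64_t first, second;
    I128Parts r;

    memcpy(&first, mem, 8);      /* bytes 0..7 of the guest access */
    memcpy(&second, mem + 8, 8); /* bytes 8..15 of the guest access */

    if (big_endian) {
        /* The first memory word is the most-significant half. */
        r.hi = __builtin_bswap64(first);
        r.lo = __builtin_bswap64(second);
    } else {
        r.lo = first;
        r.hi = second;
    }
    return r;
}

The store direction mirrors this: tcg_gen_qemu_st_i128() byte-swaps copies of the two halves into temporaries before emitting the opcode, so the backend itself still performs a plain 16-byte store.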
From patchwork Tue Apr 25 19:31:30 2023 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676829 Received: from stoup..
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.38.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:39:09 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 41/57] tcg: Support TCG_TYPE_I128 in tcg_out_{ld, st}_helper_{args, ret} Date: Tue, 25 Apr 2023 20:31:30 +0100 Message-Id: <20230425193146.2106111-42-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::130; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x130.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/tcg.c | 174 ++++++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 148 insertions(+), 26 deletions(-) diff --git a/tcg/tcg.c b/tcg/tcg.c index 07a9489b71..c5a0cfd846 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -206,6 +206,7 @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] __attribute__((unused)) = { [MO_UQ] = helper_ldq_mmu, #if TCG_TARGET_REG_BITS == 64 [MO_SL] = helper_ldsl_mmu, + [MO_128] = helper_ld16_mmu, #endif }; @@ -214,6 +215,9 @@ static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = { [MO_16] = helper_stw_mmu, [MO_32] = helper_stl_mmu, [MO_64] = helper_stq_mmu, +#if TCG_TARGET_REG_BITS == 64 + [MO_128] = helper_st16_mmu, +#endif }; TCGContext tcg_init_ctx; @@ -773,6 +777,15 @@ static TCGHelperInfo info_helper_ld64_mmu = { | dh_typemask(ptr, 4) /* uintptr_t ra */ }; +static TCGHelperInfo info_helper_ld128_mmu = { + .flags = TCG_CALL_NO_WG, + .typemask = dh_typemask(i128, 0) /* return Int128 */ + | dh_typemask(env, 1) + | dh_typemask(tl, 2) /* target_ulong addr */ + | dh_typemask(i32, 3) /* unsigned oi */ + | dh_typemask(ptr, 4) /* uintptr_t ra */ +}; + static TCGHelperInfo info_helper_st32_mmu = { .flags = TCG_CALL_NO_WG, .typemask = dh_typemask(void, 0) @@ -793,6 +806,16 @@ static TCGHelperInfo info_helper_st64_mmu = { | dh_typemask(ptr, 5) /* uintptr_t ra */ }; +static TCGHelperInfo info_helper_st128_mmu = { + .flags = TCG_CALL_NO_WG, + .typemask = dh_typemask(void, 0) + | dh_typemask(env, 1) + | dh_typemask(tl, 2) /* target_ulong addr */ + | dh_typemask(i128, 3) /* Int128 data */ + | dh_typemask(i32, 4) /* unsigned oi */ + | dh_typemask(ptr, 5) /* uintptr_t ra */ +}; + #ifdef CONFIG_TCG_INTERPRETER static ffi_type *typecode_to_ffi(int argmask) { @@ -1206,8 +1229,10 @@ static void tcg_context_init(unsigned max_cpus) init_call_layout(&info_helper_ld32_mmu); init_call_layout(&info_helper_ld64_mmu); + init_call_layout(&info_helper_ld128_mmu); init_call_layout(&info_helper_st32_mmu); 
init_call_layout(&info_helper_st64_mmu); + init_call_layout(&info_helper_st128_mmu); #ifdef CONFIG_TCG_INTERPRETER init_ffi_layouts(); @@ -5366,6 +5391,9 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, case MO_64: info = &info_helper_ld64_mmu; break; + case MO_128: + info = &info_helper_ld128_mmu; + break; default: g_assert_not_reached(); } @@ -5380,8 +5408,33 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, tcg_out_helper_load_slots(s, nmov, mov, parm->ntmp, parm->tmp); - /* No special attention for 32 and 64-bit return values. */ - tcg_debug_assert(info->out_kind == TCG_CALL_RET_NORMAL); + switch (info->out_kind) { + case TCG_CALL_RET_NORMAL: + case TCG_CALL_RET_BY_VEC: + break; + case TCG_CALL_RET_BY_REF: + /* + * The return reference is in the first argument slot. + * We need memory in which to return: re-use the top of stack. + */ + { + int ofs_slot0 = arg_slot_stk_ofs(0); + + if (arg_slot_reg_p(0)) { + tcg_out_addi_ptr(s, tcg_target_call_iarg_regs[0], + TCG_REG_CALL_STACK, ofs_slot0); + } else { + tcg_debug_assert(parm->ntmp != 0); + tcg_out_addi_ptr(s, parm->tmp[0], + TCG_REG_CALL_STACK, ofs_slot0); + tcg_out_st(s, TCG_TYPE_PTR, parm->tmp[0], + TCG_REG_CALL_STACK, ofs_slot0); + } + } + break; + default: + g_assert_not_reached(); + } tcg_out_helper_load_common_args(s, ldst, parm, info, next_arg); } @@ -5390,11 +5443,18 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst, bool load_sign, const TCGLdstHelperParam *parm) { + MemOp mop = get_memop(ldst->oi); TCGMovExtend mov[2]; + int ofs_slot0; - if (ldst->type <= TCG_TYPE_REG) { - MemOp mop = get_memop(ldst->oi); + switch (ldst->type) { + case TCG_TYPE_I64: + if (TCG_TARGET_REG_BITS == 32) { + break; + } + /* fall through */ + case TCG_TYPE_I32: mov[0].dst = ldst->datalo_reg; mov[0].src = tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, 0); mov[0].dst_type = ldst->type; @@ -5420,25 +5480,49 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst, mov[0].src_ext = mop & MO_SSIZE; } tcg_out_movext1(s, mov); - } else { - assert(TCG_TARGET_REG_BITS == 32); + return; - mov[0].dst = ldst->datalo_reg; - mov[0].src = - tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, HOST_BIG_ENDIAN); - mov[0].dst_type = TCG_TYPE_I32; - mov[0].src_type = TCG_TYPE_I32; - mov[0].src_ext = MO_32; + case TCG_TYPE_I128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + ofs_slot0 = arg_slot_stk_ofs(0); + switch (TCG_TARGET_CALL_RET_I128) { + case TCG_CALL_RET_NORMAL: + break; + case TCG_CALL_RET_BY_VEC: + tcg_out_st(s, TCG_TYPE_V128, + tcg_target_call_oarg_reg(TCG_CALL_RET_BY_VEC, 0), + TCG_REG_CALL_STACK, ofs_slot0); + /* fall through */ + case TCG_CALL_RET_BY_REF: + tcg_out_ld(s, TCG_TYPE_I64, ldst->datalo_reg, + TCG_REG_CALL_STACK, ofs_slot0 + 8 * HOST_BIG_ENDIAN); + tcg_out_ld(s, TCG_TYPE_I64, ldst->datahi_reg, + TCG_REG_CALL_STACK, ofs_slot0 + 8 * !HOST_BIG_ENDIAN); + return; + default: + g_assert_not_reached(); + } + break; - mov[1].dst = ldst->datahi_reg; - mov[1].src = - tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, !HOST_BIG_ENDIAN); - mov[1].dst_type = TCG_TYPE_REG; - mov[1].src_type = TCG_TYPE_REG; - mov[1].src_ext = MO_32; - - tcg_out_movext2(s, mov, mov + 1, parm->ntmp ? 
parm->tmp[0] : -1); + default: + g_assert_not_reached(); } + + mov[0].dst = ldst->datalo_reg; + mov[0].src = + tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, HOST_BIG_ENDIAN); + mov[0].dst_type = TCG_TYPE_I32; + mov[0].src_type = TCG_TYPE_I32; + mov[0].src_ext = TCG_TARGET_REG_BITS == 32 ? MO_32 : MO_64; + + mov[1].dst = ldst->datahi_reg; + mov[1].src = + tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, !HOST_BIG_ENDIAN); + mov[1].dst_type = TCG_TYPE_REG; + mov[1].src_type = TCG_TYPE_REG; + mov[1].src_ext = TCG_TARGET_REG_BITS == 32 ? MO_32 : MO_64; + + tcg_out_movext2(s, mov, mov + 1, parm->ntmp ? parm->tmp[0] : -1); } static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, @@ -5462,6 +5546,10 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, info = &info_helper_st64_mmu; data_type = TCG_TYPE_I64; break; + case MO_128: + info = &info_helper_st128_mmu; + data_type = TCG_TYPE_I128; + break; default: g_assert_not_reached(); } @@ -5479,13 +5567,47 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, /* Handle data argument. */ loc = &info->in[next_arg]; - n = tcg_out_helper_add_mov(mov + nmov, loc, data_type, ldst->type, - ldst->datalo_reg, ldst->datahi_reg); - next_arg += n; - nmov += n; - tcg_debug_assert(nmov <= ARRAY_SIZE(mov)); + switch (loc->kind) { + case TCG_CALL_ARG_NORMAL: + case TCG_CALL_ARG_EXTEND_U: + case TCG_CALL_ARG_EXTEND_S: + n = tcg_out_helper_add_mov(mov + nmov, loc, data_type, ldst->type, + ldst->datalo_reg, ldst->datahi_reg); + next_arg += n; + nmov += n; + tcg_out_helper_load_slots(s, nmov, mov, parm->ntmp, parm->tmp); + break; + + case TCG_CALL_ARG_BY_REF: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_debug_assert(data_type == TCG_TYPE_I128); + tcg_out_st(s, TCG_TYPE_I64, + HOST_BIG_ENDIAN ? ldst->datahi_reg : ldst->datalo_reg, + TCG_REG_CALL_STACK, arg_slot_stk_ofs(loc[0].ref_slot)); + tcg_out_st(s, TCG_TYPE_I64, + HOST_BIG_ENDIAN ? 
ldst->datalo_reg : ldst->datahi_reg, + TCG_REG_CALL_STACK, arg_slot_stk_ofs(loc[1].ref_slot)); + + tcg_out_helper_load_slots(s, nmov, mov, parm->ntmp, parm->tmp); + + if (arg_slot_reg_p(loc->arg_slot)) { + tcg_out_addi_ptr(s, tcg_target_call_iarg_regs[loc->arg_slot], + TCG_REG_CALL_STACK, + arg_slot_stk_ofs(loc->ref_slot)); + } else { + tcg_debug_assert(parm->ntmp != 0); + tcg_out_addi_ptr(s, parm->tmp[0], TCG_REG_CALL_STACK, + arg_slot_stk_ofs(loc->ref_slot)); + tcg_out_st(s, TCG_TYPE_PTR, parm->tmp[0], + TCG_REG_CALL_STACK, arg_slot_stk_ofs(loc->arg_slot)); + } + next_arg += 2; + break; + + default: + g_assert_not_reached(); + } - tcg_out_helper_load_slots(s, nmov, mov, parm->ntmp, parm->tmp); tcg_out_helper_load_common_args(s, ldst, parm, info, next_arg); } From patchwork Tue Apr 25 19:31:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676849 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2882296wrs; Tue, 25 Apr 2023 12:47:51 -0700 (PDT) X-Google-Smtp-Source: AKy350YPEu82vcwwmoOeJ9RpIMcXN5vBUZXjLlV0HmHEoWAuDYkDK2getETA/kFMSw3kxikDNiC+ X-Received: by 2002:ad4:5969:0:b0:5e8:979f:2e49 with SMTP id eq9-20020ad45969000000b005e8979f2e49mr31281698qvb.41.1682452071674; Tue, 25 Apr 2023 12:47:51 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452071; cv=none; d=google.com; s=arc-20160816; b=NvwAuR8u+pSV+Je9/pgbvIK3kKLDv6Ve75dQzLf7/uS+yn1rE1EOom22kdwyGd/6FN KzuymEvcX1aFclYRUZoJtlS2EXs/cbCiJ5Eg66oS/S46XGbafXttTV3MJN88Xoz272cM MUtNfwx+pu0m4WmsJCdwowKHtLSoOy6Ooga76071YfH8fi5w7BtCZzKsuGVSOduPu3sQ jcz4xvm0EeI114nVWdS4Rapjoq+05tFrRwDWchMEUVJFdhvAFJRBIf1WFy1h7+MuZaFo i/8o/gdyHfJGzqXUz3NbgBvykPzQqgqhSmztBG+UZtP7thwuMARlS3Cp74ho00peDlYW ZdYg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=XMwAK8+LCKklM46g5obMNkVRLVOSMtjwgv/TpGWw4pA=; b=pygqZCmGO2rngZXBXqPkPF0dYMt5S/UkXG9CzyWWVVhUCIbFtMi1ctBKPSmBsjmdcl WBSr2fy831CR8cuRvxmUn54dw4mmS3VtfPmt4R/JwoNotodqgarx8xmrNGuCfBylzKCD SYdObN+HmiUgc1jHnj+RE6gtp2ZZBSNTe4bgWecoQ1TMWa8ywx+lndjbxMxKSoYwcgYs WGYtyMrg719DwBmeNrjol0Lgn4N7KaW68XxKsrE85Z5ERFKjYct48xMIgzFIkzF2JvAG A2SWfwMOp5ZRYv3wQqIyuhf2tVjtxEIqoPpm6mKS1UtcxtkIa8AZIB830ZnLlAsASzwf vI3g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=tquWj1KN; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
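For readers following the TCG_CALL_RET_BY_REF path added in the I128 patch above: the generated code reserves a 16-byte slot at the top of the stack, passes its address as the first helper argument, and after the call reloads the two 64-bit halves, with the low half at byte offset 8 * HOST_BIG_ENDIAN. The standalone C sketch below only illustrates that convention; it is not QEMU code, and fake_helper_ld16 and EXAMPLE_BIG_ENDIAN are invented for the example.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define EXAMPLE_BIG_ENDIAN 1
#else
#define EXAMPLE_BIG_ENDIAN 0
#endif

/* Stand-in for a 128-bit load helper that returns its result through memory. */
static void fake_helper_ld16(void *ret_slot, uint64_t lo, uint64_t hi)
{
    uint64_t halves[2];
    halves[EXAMPLE_BIG_ENDIAN ? 1 : 0] = lo;   /* low half of the Int128 */
    halves[EXAMPLE_BIG_ENDIAN ? 0 : 1] = hi;   /* high half of the Int128 */
    memcpy(ret_slot, halves, sizeof(halves));
}

int main(void)
{
    uint64_t slot[2];   /* plays the role of the re-used stack slot */
    uint64_t datalo, datahi;

    /* "Call" the helper with the slot address as the first argument. */
    fake_helper_ld16(slot, 0x1122334455667788ull, 0x99aabbccddeeff00ull);

    /* What the code emitted by tcg_out_ld_helper_ret then does. */
    memcpy(&datalo, (char *)slot + 8 * EXAMPLE_BIG_ENDIAN, 8);
    memcpy(&datahi, (char *)slot + 8 * !EXAMPLE_BIG_ENDIAN, 8);

    printf("lo=%016llx hi=%016llx\n",
           (unsigned long long)datalo, (unsigned long long)datahi);
    return 0;
}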
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.39.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:40:25 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 42/57] tcg: Introduce atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:31 +0100 Message-Id: <20230425193146.2106111-43-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::133; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x133.google.com X-Spam_score_int: -16 X-Spam_score: -1.7 X-Spam_bar: - X-Spam_report: (-1.7 / 5.0 requ) BAYES_00=-1.9, DKIM_INVALID=0.1, DKIM_SIGNED=0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Examine MemOp for atomicity and alignment, adjusting alignment as required to implement atomicity on the host. Signed-off-by: Richard Henderson --- tcg/tcg.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 69 insertions(+) diff --git a/tcg/tcg.c b/tcg/tcg.c index c5a0cfd846..d7ff96fd1d 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -220,6 +220,11 @@ static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = { #endif }; +static MemOp atom_and_align_for_opc(TCGContext *s, MemOp *p_atom_a, + MemOp *p_atom_u, MemOp opc, + MemOp host_atom, bool allow_two_ops) + __attribute__((unused)); + TCGContext tcg_init_ctx; __thread TCGContext *tcg_ctx; @@ -5123,6 +5128,70 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op) } } +/* + * Return the alignment and atomicity to use for the inline fast path + * for the given memory operation. The alignment may be larger than + * that specified in @opc, and the correct alignment will be diagnosed + * by the slow path helper. + */ +static MemOp atom_and_align_for_opc(TCGContext *s, MemOp *p_atom_a, + MemOp *p_atom_u, MemOp opc, + MemOp host_atom, bool allow_two_ops) +{ + MemOp align = get_alignment_bits(opc); + MemOp atom, atmax, atmin, size = opc & MO_SIZE; + + /* When serialized, no further atomicity required. */ + if (s->gen_tb->cflags & CF_PARALLEL) { + atom = opc & MO_ATOM_MASK; + } else { + atom = MO_ATOM_NONE; + } + + atmax = opc & MO_ATMAX_MASK; + if (atmax == MO_ATMAX_SIZE) { + atmax = size; + } else { + atmax = atmax >> MO_ATMAX_SHIFT; + } + + switch (atom) { + case MO_ATOM_NONE: + /* The operation requires no specific atomicity. */ + atmax = atmin = MO_8; + break; + case MO_ATOM_IFALIGN: + /* If unaligned, the subobjects are bytes. */ + atmin = MO_8; + break; + case MO_ATOM_WITHIN16: + /* If unaligned, there are subobjects if atmax < size. */ + atmin = (atmax < size ? atmax : MO_8); + atmax = size; + break; + case MO_ATOM_SUBALIGN: + /* If unaligned but not odd, there are subobjects up to atmax - 1. */ + atmin = (atmax == MO_8 ? 
MO_8 : atmax - 1); + break; + default: + g_assert_not_reached(); + } + + /* + * If there are subobjects, and the host model does not match, then we + * need to raise the initial alignment check. If the backend is prepared + * to double-check alignment and issue two half size ops, we need not + * raise initial alignment beyond half. + */ + if (atmin > MO_8 && host_atom != atom) { + align = MAX(align, size - allow_two_ops); + } + + *p_atom_a = atmax; + *p_atom_u = atmin; + return align; +} + /* * Similarly for qemu_ld/st slow path helpers. * We must re-implement tcg_gen_callN and tcg_reg_alloc_call simultaneously, From patchwork Tue Apr 25 19:31:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676854 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2882852wrs; Tue, 25 Apr 2023 12:49:26 -0700 (PDT) X-Google-Smtp-Source: AKy350Y+TCnc2pB4Aze2UBnuo+iKXEht8+JjAb/NLNMzQ7RhG66P3YK5GK6GxKifLEhDT3Cu9UyN X-Received: by 2002:a05:622a:181d:b0:3ec:47d5:ec65 with SMTP id t29-20020a05622a181d00b003ec47d5ec65mr27926053qtc.60.1682452166382; Tue, 25 Apr 2023 12:49:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452166; cv=none; d=google.com; s=arc-20160816; b=E8/VVO2eLR7JwWxsTb4ZTSFU0jPYep4WbZsEZpEGt1eu55z1xQ4SzT8nNMdXMq/NK1 d4LaQO8vSmxsSyBPOg9lI3cHgjphk1PrTaZTzT/85fYXRIf2RHJXaZzBDfKJncvwI0+S yRsr68BRY2OmywUouNCqhRUcQpClvS8k0dasQTajXJsdE6QZLTSp67p2N2nDto13RVbg 4MSQ/jLQL2WLH/aBJtQpBGJT+6KLqv/Z+JESwsB8TRBwWEDj7ouqJQjpDldsSVKxLbrF umABVcyjZWiBtVISW62Q4WWWupXBoZjIYEeJ35/z2n9BMm4teohcCDL+F6Uk4gWqFBlv FKrw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=voYrBLtybOoDO/WS2tn3zAHmuD0tWE6cjSLyscFlUk8=; b=KWyII2FSUidyqrUHrFiVXTJ3Q2/dh8FaiQSHoPqRQubqu/5DP+EAm40CGJXgUk1ljK muDE6Io8QyzQb8V6eUAo5aTGnE+lqdrWbbbTkvJWNGqJEu5gClJLminDcdKJ63GgFKBw ZuhIBI/psQ3fPB8SpFn+ZS2pn5DdojlyEbLsRB3yM+nzays/YFyHf1Jmbr2ocvklFeHM zyV35FujONdmkU72dXKJARbkgElpz3nqnhKg01sGWJktX9fkqdfpTRtaTw18AuSgKfCZ Yla0J7wT9n2BiPbX8+R01gGUEwExnO/7Ah3eaeDkoWp3Z9V6ODOKF1tcfOUrC+Q7jqJW QR1A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=hbpbm7l8; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
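The central rule of atom_and_align_for_opc introduced above is the final adjustment: when the operation requires atomic subobjects larger than a byte and the host memory model does not already provide them, the inline fast path must raise its alignment check, by one step less if the backend is willing to issue two half-size operations. The sketch below is a simplified stand-alone restatement of just that rule, with invented names and the MemOp size codes reduced to plain log2 values; it is not the QEMU implementation.

#include <stdio.h>

enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3, MO_128 = 4 };

/* Raise the required alignment (log2 bytes) when atomicity demands it. */
static int align_for_atomicity(int req_align, int size_log2, int atmin,
                               int host_matches, int allow_two_ops)
{
    /*
     * Subobjects only matter when the guest wants atomic pieces larger
     * than a byte and the host model does not already guarantee them.
     */
    if (atmin > MO_8 && !host_matches) {
        int needed = size_log2 - (allow_two_ops ? 1 : 0);
        if (needed > req_align) {
            req_align = needed;
        }
    }
    return req_align;
}

int main(void)
{
    /* 8-byte access with 4-byte atomic subobjects on a mismatched host: */
    printf("one op:  align = %d (8-byte)\n",
           align_for_atomicity(0, MO_64, MO_32, 0, 0));
    printf("two ops: align = %d (4-byte)\n",
           align_for_atomicity(0, MO_64, MO_32, 0, 1));
    return 0;
}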
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.40.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:40:48 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 43/57] tcg/i386: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:32 +0100 Message-Id: <20230425193146.2106111-44-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::129; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x129.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org No change to the ultimate load/store routines yet, so some atomicity conditions not yet honored, but plumbs the change to alignment through the relevant functions. Signed-off-by: Richard Henderson --- tcg/i386/tcg-target.c.inc | 34 ++++++++++++++++++++++------------ 1 file changed, 22 insertions(+), 12 deletions(-) diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 8c0902844a..6a492bb9e7 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -1776,6 +1776,8 @@ typedef struct { int index; int ofs; int seg; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1897,8 +1899,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1 << a_bits) - 1; + MemOp atom_u; + unsigned a_mask; + + h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU int cmp_ofs = is_ld ? offsetof(CPUTLBEntry, addr_read) @@ -1943,10 +1949,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TLB_MASK_TABLE_OFS(mem_index) + offsetof(CPUTLBDescFast, table)); - /* If the required alignment is at least as large as the access, simply - copy the address and mask. For lesser alignments, check that we don't - cross pages for the complete access. */ - if (a_bits >= s_bits) { + /* + * If the required alignment is at least as large as the access, simply + * copy the address and mask. For lesser alignments, check that we don't + * cross pages for the complete access. 
+ */ + if (a_mask >= s_mask) { tcg_out_mov(s, ttype, TCG_REG_L1, addrlo); } else { tcg_out_modrm_offset(s, OPC_LEA + trexw, TCG_REG_L1, @@ -1978,12 +1986,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_L0, TCG_REG_L0, offsetof(CPUTLBEntry, addend)); - *h = (HostAddress) { - .base = addrlo, - .index = TCG_REG_L0, - }; + h->base = addrlo; + h->index = TCG_REG_L0; + h->ofs = 0; + h->seg = 0; #else - if (a_bits) { + if (a_mask) { ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -1998,8 +2006,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, s->code_ptr += 4; } - *h = x86_guest_base; h->base = addrlo; + h->index = x86_guest_base.index; + h->ofs = x86_guest_base.ofs; + h->seg = x86_guest_base.seg; #endif return ldst; From patchwork Tue Apr 25 19:31:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676846 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2881760wrs; Tue, 25 Apr 2023 12:46:23 -0700 (PDT) X-Google-Smtp-Source: AKy350bY4vbW8e9cWlaaqbsuIbbXN4JuQU2+2nLKCQaCkFxKnVXPkGxWT7NP03lnY99cURwRbO5J X-Received: by 2002:ad4:5f49:0:b0:5ef:5144:9d2f with SMTP id p9-20020ad45f49000000b005ef51449d2fmr31192040qvg.20.1682451982929; Tue, 25 Apr 2023 12:46:22 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451982; cv=none; d=google.com; s=arc-20160816; b=aWUDu4LOLOBFSccRNRZnZBVI/18qStF3jHcGrZk+vvHCCP8qCcxYnTeO5KDc1zr20q lCCuYkW15wnyj7BnN9jTFNwTZ28FeykB7oLPrvtNas/cZ6vbLOIHWyHJ/JvmsyyjgUrH JCUOya6QZuAO/UzIRwI3lQq1xrEMlhC0KT1etKtbsecvHj2Ofi4n4qtZOsH9C4MzUv17 J1XP+bFgFj2L78qkhQMwJtAGH1pDY4BUbjQbFwArZDL0StsZTwfFXq0rp6VXNLMiOWy6 Q85Vp6pYSWJIORrEjpMPRmn1lQdlAwtqOW8c4aBgFuwxt23dwKk1bZkXs72K/E0P/FZR jyZg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=tw0I+ADtSsoTUZP6PXChoGX/kwSbfoaMV4Di5FGXG0I=; b=rUaKQ1ulupvSNVLMt3A0ixUBBnSpusr9UlowRYer8YgBKd1H3Xet1vIDS/pp1GCgUd ajPx/4amIi0ZApsI8wBrsIMF2uGnY1RMRECBcCV9G7czhFU7v2sKpuvG5vgWeaRn/FSN /HnxT93zfCMrEQfqjmfN2ZdYLFn/YJsUf1UiqaqkW4l0cr6enBt8j9/YXsmdXTTrR30f RpiMlXzAxeO4Ic3+nHlMSbk90Gcd5GRn81DzdfSKuItxZfhxY/xuCxi+p/H93J9IKiPF 9o88eLdpUH6CTZnH0ksI7tqNjbHqDbh8WaHt4VLl9NgLZKWkMvj7h6umIxOJN7dgbX58 mtnQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=qodv4JEN; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
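The rewritten comparison in the i386 patch above (a_mask >= s_mask instead of a_bits >= s_bits) is equivalent and keeps the same idea: when the guaranteed alignment covers the whole access, the access can never straddle a page, so the TLB comparison can use the address directly; otherwise the last byte of the access must be checked as well. Below is a minimal sketch of that reasoning only, assuming 4 KiB pages and using invented helper names rather than the real backend code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12   /* assumed 4 KiB pages for this example */

/* An extra page-cross check is needed only if alignment < access size. */
static bool needs_page_cross_check(unsigned a_mask, unsigned s_mask)
{
    return a_mask < s_mask;
}

/* Does an access of size s_mask + 1 starting at addr cross a page? */
static bool crosses_page(uint64_t addr, unsigned s_mask)
{
    uint64_t last = addr + s_mask;   /* address of the last byte */
    return (addr >> PAGE_BITS) != (last >> PAGE_BITS);
}

int main(void)
{
    unsigned a_mask = (1u << 2) - 1;   /* 4-byte alignment guaranteed */
    unsigned s_mask = (1u << 3) - 1;   /* 8-byte access */

    if (needs_page_cross_check(a_mask, s_mask)) {
        printf("0xffc: crosses = %d\n", crosses_page(0xffc, s_mask)); /* 1 */
        printf("0xff0: crosses = %d\n", crosses_page(0xff0, s_mask)); /* 0 */
    }
    return 0;
}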
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.40.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:40:57 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 44/57] tcg/aarch64: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:33 +0100 Message-Id: <20230425193146.2106111-45-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::133; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x133.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/aarch64/tcg-target.c.inc | 38 +++++++++++++++++++----------------- 1 file changed, 20 insertions(+), 18 deletions(-) diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 8e5f3d3688..1d6d382edd 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1593,6 +1593,8 @@ typedef struct { TCGReg base; TCGReg index; TCGType index_ext; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1646,8 +1648,14 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TCGType addr_type = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32; TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1u << a_bits) - 1; + MemOp atom_u; + unsigned a_mask; + + h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + have_lse2 ? MO_ATOM_WITHIN16 + : MO_ATOM_IFALIGN, + false); + a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; @@ -1693,7 +1701,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * bits within the address. For unaligned access, we check that we don't * cross pages using the address of the last byte of the access. 
*/ - if (a_bits >= s_bits) { + if (a_mask >= s_mask) { x3 = addr_reg; } else { tcg_out_insn(s, 3401, ADDI, TARGET_LONG_BITS == 64, @@ -1713,11 +1721,9 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->label_ptr[0] = s->code_ptr; tcg_out_insn(s, 3202, B_C, TCG_COND_NE, 0); - *h = (HostAddress){ - .base = TCG_REG_X1, - .index = addr_reg, - .index_ext = addr_type - }; + h->base = TCG_REG_X1, + h->index = addr_reg; + h->index_ext = addr_type; #else if (a_mask) { ldst = new_ldst_label(s); @@ -1735,17 +1741,13 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, } if (USE_GUEST_BASE) { - *h = (HostAddress){ - .base = TCG_REG_GUEST_BASE, - .index = addr_reg, - .index_ext = addr_type - }; + h->base = TCG_REG_GUEST_BASE; + h->index = addr_reg; + h->index_ext = addr_type; } else { - *h = (HostAddress){ - .base = addr_reg, - .index = TCG_REG_XZR, - .index_ext = TCG_TYPE_I64 - }; + h->base = addr_reg; + h->index = TCG_REG_XZR; + h->index_ext = TCG_TYPE_I64; } #endif From patchwork Tue Apr 25 19:31:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676839 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880963wrs; Tue, 25 Apr 2023 12:44:21 -0700 (PDT) X-Google-Smtp-Source: AKy350YHfin1C2XXoGNsWUczz+xrjjPgsUh47T5baE/QxLhEOmpQyyhhfE9toNzsv8/FTu9g/Rzb X-Received: by 2002:a05:6214:d45:b0:5bf:ba9d:8726 with SMTP id 5-20020a0562140d4500b005bfba9d8726mr31513937qvr.10.1682451861738; Tue, 25 Apr 2023 12:44:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451861; cv=none; d=google.com; s=arc-20160816; b=nE5QNJ58FpYIZ3G7xsS99wJ8gia3nVM3B4bf+rautv9UNNNlNGKVGLOuhqQjPVCb76 +S8CpR1vwECBWQo+6DVBs1f4lNI3mHI5zo8NigxNpFLII9SACS6SYyBn+x37xJyHQNsx yan0pWXKFQwcN3E9hpd2Znn93WTcmsn3YFuY/BHcRbTueQyumGEAoFJcVmd48GjzUOLH rBaROo/2y4asEOxs/kdJ6jTFECm2ZyCiX/qUxBYUzxSz0BiZQnrKftdhw+ig3P4LRIuS Da0wKh4xU/FK6/o/rrzKW4d3ShFwi5TSpweG3l7cK6zssC7U2KUbbrN4N4RrfMCI2057 IW2A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=xNYQj2Zg5TENytieXxeMxebRkQ90woYbMY4x/xns2OM=; b=n/Wvn11BcdltqJzVDoiZC3fHeQr+M8sgpLjQw+usWvChvxcSBsAPaOLzRh8PTWd1E0 YyDd/F8EEtOypRrwq+vfe/Z60p9avkhM4ocB3ZuK4GqgBXXFeMqx1NnNyI3JnaSjTagu 1tD1XYqYrMI/ejy26Hc8wY0+DEQo/qDJt23CMABz78hLDvdITzEiruHe6E7D1ftkhlxU 8LvzxvxdeKgLyGmkpyVOaOeXFRhNUEHUeymcgBjZfrQvAKQEvy9x00vEvQuaGK7DaFrv m39EI09j4+A92bBH0CoSk4aFv76qPjULGqiCEwtZV3x525ElIxvsVIW+ODjb+f1jk0LL uKWg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=PMMjqVVq; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
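What changes per backend is only the host atomicity model fed into atom_and_align_for_opc: the aarch64 patch above passes MO_ATOM_WITHIN16 when FEAT_LSE2 is available, since that feature makes any access contained in an aligned 16-byte granule single-copy atomic, and falls back to MO_ATOM_IFALIGN otherwise. A trivial hedged sketch of that selection, with invented enum and function names:

#include <stdbool.h>
#include <stdio.h>

typedef enum { HOST_ATOM_IFALIGN, HOST_ATOM_WITHIN16 } HostAtomModel;

/* Pick the host guarantee handed to the generic alignment logic. */
static HostAtomModel host_atom_model(bool have_lse2)
{
    return have_lse2 ? HOST_ATOM_WITHIN16 : HOST_ATOM_IFALIGN;
}

int main(void)
{
    printf("with LSE2: %d, without: %d\n",
           host_atom_model(true), host_atom_model(false));
    return 0;
}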
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.40.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:41:14 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 45/57] tcg/arm: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:34 +0100 Message-Id: <20230425193146.2106111-46-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::135; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x135.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org No change to the ultimate load/store routines yet, so some atomicity conditions not yet honored, but plumbs the change to alignment through the relevant functions. Signed-off-by: Richard Henderson --- tcg/arm/tcg-target.c.inc | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index e5aed03247..edd995e04f 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1323,6 +1323,8 @@ typedef struct { TCGReg base; int index; bool index_scratch; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1379,8 +1381,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp a_bits = get_alignment_bits(opc); - unsigned a_mask = (1 << a_bits) - 1; + MemOp a_bits, atom_a, atom_u; + unsigned a_mask; + + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1 << a_bits) - 1; #ifdef CONFIG_SOFTMMU int mem_index = get_mmuidx(oi); @@ -1498,6 +1504,9 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, }; #endif + h->align = a_bits; + h->atom = atom_a; + return ldst; } From patchwork Tue Apr 25 19:31:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676836 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880842wrs; Tue, 25 Apr 2023 12:44:02 -0700 (PDT) X-Google-Smtp-Source: AKy350aqIdg44roJZEatFxlzClZrPH5SmI0CmHnEXAYs0m9pYaWVWrCx08kSjXa8VM2a9TZCli9W X-Received: by 2002:ac8:5916:0:b0:3ef:35e2:ade9 with SMTP id 22-20020ac85916000000b003ef35e2ade9mr27822537qty.33.1682451842548; Tue, 25 Apr 2023 12:44:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451842; cv=none; d=google.com; s=arc-20160816; b=cGotKsKhehfty2XWvENlMKOJWFL82BURar5gsBXt5lEiZjjNfyHQhZ6e7wYwEq1pGJ 
3mPFqnX/i1dhPI2+SlNanUC99J1xe7tc8lx1DI/3crRiiDcqVpTD2ihCNIaqFOSD1NZd L4jg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682451686; x=1685043686; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VhQ4va+L9W76HQPa92crHAm/u67MMsKKHsaZ1OacYxI=; b=DszPUjS9D6AGG7NBIXTdcAfzduLV5Z2AvhbIWY8eYkzZoyYJhzDq/OauCYxTWktm9Q V0OcuOqYBAP21V6sOuA8cg7ehHAyW0qvmHdSJUcbgDsp29+ALUIfQmrNFJRYi2iVddJG v4hwDOXFlqKzRzmTaWZW/MrYeD0Wksb8eMoPVMh5O+/bipIPvOVYR+eXXS4l8jtjUyYt GmmWkIpQoh30GN5W9I3iv9l/ubHpvq4aec0Jl8Vr7UrK/2qAQB3EljUse+gkpEKmajFT N0r7IBohtd+o2ny33NFAk8i23BvJyCNX6YlKy+v1FMXFPTj8VOhGq9pIb5Aq3WI/5ssc VqYw== X-Gm-Message-State: AAQBX9cHgxRakoLnXqcpIY1hDPyXPKQlKI61BdDspNG7dFYOopg0y0Sv gKuo2odKmMqXMo7TjActWXgaiyiVrulx16q3KeO5zg== X-Received: by 2002:ac2:4563:0:b0:4ec:8b57:b018 with SMTP id k3-20020ac24563000000b004ec8b57b018mr4377425lfm.33.1682451686253; Tue, 25 Apr 2023 12:41:26 -0700 (PDT) Received: from stoup.. ([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.41.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:41:25 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 46/57] tcg/loongarch64: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:35 +0100 Message-Id: <20230425193146.2106111-47-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::132; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x132.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 62bf823084..43341524f2 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -826,6 +826,8 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) typedef struct { TCGReg base; TCGReg index; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -845,7 +847,11 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); + MemOp a_bits, atom_u; + + a_bits = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + h->align = a_bits; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; From patchwork Tue Apr 25 19:31:36 2023 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.41.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:41:30 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 47/57] tcg/mips: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:36 +0100 Message-Id: <20230425193146.2106111-48-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::12d; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x12d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/mips/tcg-target.c.inc | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index cd0254a0d7..43a8ffac17 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1139,6 +1139,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) typedef struct { TCGReg base; MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1158,11 +1159,16 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); + MemOp a_bits, atom_u; unsigned s_bits = opc & MO_SIZE; - unsigned a_mask = (1 << a_bits) - 1; + unsigned a_mask; TCGReg base; + a_bits = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + h->align = a_bits; + a_mask = (1 << a_bits) - 1; + #ifdef CONFIG_SOFTMMU unsigned s_mask = (1 << s_bits) - 1; int mem_index = get_mmuidx(oi); @@ -1281,7 +1287,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, #endif h->base = base; - h->align = a_bits; return ldst; } From patchwork Tue Apr 25 19:31:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676866 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2884000wrs; Tue, 25 Apr 2023 12:52:38 -0700 (PDT) X-Google-Smtp-Source: AKy350b/eIePO9LZVFRjQGEOVr/jjGfv5eX4fyUvgfgJVKraGeuJiZsnvc8dQhDsMtjILF9W1c8x X-Received: by 2002:ac8:5e51:0:b0:3ef:3730:6829 with SMTP id i17-20020ac85e51000000b003ef37306829mr30497668qtx.36.1682452358591; Tue, 25 Apr 2023 12:52:38 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452358; cv=none; d=google.com; s=arc-20160816; b=r0qliMbAw/zrpMRTicoKxs9WshY7Fuq29g+yF2EGjubGuZ5kRyaScS4C1kjHgcme4E 
VCLkcVWiV3VO6dZeX310wwvPtdDvsCP793stlDDSSbPtQK2m6hbPqB8h4tvtR2Fg/TxK Rrbg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682451696; x=1685043696; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4sDcHMsV6UnaIVOp+Xo/707fzlwAwqK9fjYxSZbgJwQ=; b=kTTHdfzbd0gzV+lFMaJuEgxAQI52NE3d8yfYP4lKRnt2z0TPN+UoJVG8UnDbbbQQGC eudatMvILIPYq5cazzlMv1m7mA58RQ7yUoEhl97jt8EWccHQnkkSzjZD2AEYXUc+1s8v zX0dNpRUNLZMvhZa7NjI8kmVMmadhIwQiCvZ19+igSky+lBf6tqJ9iCrFyi744mei8/7 pSnWpTU8FZwpfwoII8sPJqfcETauBp3OIa5o/UtFJOjvo2LtQtS6lu3r96WfyihO8X0S ICcLREEdyBZ7QhlR0Jm/gwzrl4EX7Enjx8AQiZHU0gJyOXuihrsJ6gUevL5dfV3C0mt3 F2ug== X-Gm-Message-State: AAQBX9dMpzLQPibn5iLYYiuUwpjNlEXllplkZTYXdtTVywYwGEufrSar rNCZaOFLM3wU7No7eLFcvQeCVueCq5pkOmjIIL8kqg== X-Received: by 2002:ac2:5975:0:b0:4ed:d1d6:c595 with SMTP id h21-20020ac25975000000b004edd1d6c595mr4841448lfp.55.1682451696540; Tue, 25 Apr 2023 12:41:36 -0700 (PDT) Received: from stoup.. ([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.41.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:41:36 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 48/57] tcg/ppc: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:37 +0100 Message-Id: <20230425193146.2106111-49-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::134; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x134.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/ppc/tcg-target.c.inc | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index c799d7c52a..743a452981 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2037,7 +2037,22 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); + MemOp a_bits, atom_a, atom_u; + + /* + * Book II, Section 1.4, Single-Copy Atomicity, specifies: + * + * Before 3.0, "An access that is not atomic is performed as a set of + * smaller disjoint atomic accesses. In general, the number and alignment + * of these accesses are implementation-dependent." Thus MO_ATOM_IFALIGN. + * + * As of 3.0, "the non-atomic access is performed as described in + * the corresponding list", which matches MO_ATOM_SUBALIGN. 
+ */ + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + have_isa_3_00 ? MO_ATOM_SUBALIGN + : MO_ATOM_IFALIGN, + false); #ifdef CONFIG_SOFTMMU int mem_index = get_mmuidx(oi); From patchwork Tue Apr 25 19:31:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676863 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883811wrs; Tue, 25 Apr 2023 12:52:09 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6nRNRH01LhhhgwU/WIYW09L6FMDoS8Scrfu9Kcli8xC+o5Mm+55fFKdIjZtNK/ySfLPgYX X-Received: by 2002:a05:6214:e6a:b0:616:5215:42ff with SMTP id jz10-20020a0562140e6a00b00616521542ffmr3490172qvb.28.1682452329022; Tue, 25 Apr 2023 12:52:09 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452329; cv=none; d=google.com; s=arc-20160816; b=JutGm0OgAM0HMynlYa6e7l3W4JAqamHtLD+abAsxq60L/cJZyfR/d6zC6Su8XReFIU nUoFX4+DFPu3wlBhjMtQLfvC1uv88hLGZn8g8PPDyaqplvLD5YVfywrUnskyJTfnIjUT apZHqwmmhJmaTmsEMWC2m9okjei5Ju6ZAbKqCRQ680IqgeAQGIW576abJPUa9yyo8OTv /CxZgHz9rwOdRbBBPdrTak5RPvKw48bLVoy3r03mXSWyFMjtVj4YCPwzRHq30NOwt+pk BKHvmy4CNPkxCtuc4Y1nkIB1VJ64f+P/ITaDF0MwBLKXv3UDdgx10XL3apJccA0NIxc5 LgJQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=IwfQsco1hxF3XHjRYzbNyV4rrRWNhq8it3E+wHc4wxU=; b=BMebXb20ltwkFDpS+weW0RX3ehjvv7DkMM+lsU6pOr7miyjZe9SutU+tdGEZsfvq62 cXOJkFFoMyqDUIS1m7R5hKpzj/pKFjsJqCvZi+2AFf1mIJLBmvLYKJqJ2QBHkVxW+uNF vyuaZ6QD5f70BBsbdz7zxuxaPR+S78bheZwnkQe8iKW0tvQhLwpwqmYsk/g4x+h8abHG 5a1kcWfGqkKkXf6c85/7PoYcxwXKPp0mD3lyoCXD3vy/R58vEkSeUluG6Q59Xmk9WypQ 4ChE9f2hKD/9ELH1AGgSAzMc8Fd9Ju9w+XwlC3Dh4Eq/azCXVwi/X/MpEfkAJxtBlANx NPVQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=mvExPJBC; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
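To make the IFALIGN/SUBALIGN distinction above concrete: under an IFALIGN-style policy the whole access is guaranteed single-copy atomic only when it is aligned to its own size, whereas under a SUBALIGN-style policy an unaligned access behaves as a sequence of naturally aligned pieces, each atomic on its own. The sketch below is an illustration of one plausible greedy decomposition, not QEMU code; the function name and the splitting rule are invented for the example and it uses GCC/Clang builtins.

#include <inttypes.h>
#include <stdio.h>

/*
 * Illustration only: split an access of 'len' bytes at 'addr' into
 * naturally aligned pieces, each of which a SUBALIGN-style rule treats
 * as single-copy atomic.  An IFALIGN-style rule instead guarantees
 * atomicity only when the whole access is aligned to its size.
 */
static void subalign_pieces(uint64_t addr, unsigned len)
{
    while (len != 0) {
        unsigned align = addr ? (unsigned)__builtin_ctzll(addr) : 63;
        unsigned size_log = 31 - (unsigned)__builtin_clz(len);
        unsigned piece = 1u << (align < size_log ? align : size_log);

        printf("  %u byte(s) at 0x%" PRIx64 "\n", piece, addr);
        addr += piece;
        len -= piece;
    }
}

int main(void)
{
    /* An 8-byte access at offset 4: two atomic 4-byte pieces. */
    subalign_pieces(0x1004, 8);
    return 0;
}

For an 8-byte access at offset 4, the pieces are two 4-byte halves; that is the kind of guarantee the 3.0 wording quoted above provides, and that the pre-3.0 wording leaves implementation-dependent.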
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.41.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:41:42 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 49/57] tcg/riscv: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:38 +0100 Message-Id: <20230425193146.2106111-50-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::232; envelope-from=richard.henderson@linaro.org; helo=mail-lj1-x232.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/riscv/tcg-target.c.inc | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 5193998865..aae0512cbf 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -924,8 +924,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1u << a_bits) - 1; + MemOp a_bits, atom_a, atom_u; + unsigned a_mask; + + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1u << a_bits) - 1; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; From patchwork Tue Apr 25 19:31:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676856 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883050wrs; Tue, 25 Apr 2023 12:50:01 -0700 (PDT) X-Google-Smtp-Source: AKy350ZAglw/I8Y6Ym61J2lLivaAtRWlPYtKjhYKlWbIZn85R30S3L9T3wqt7x9L25AKDQke5QL9 X-Received: by 2002:ac8:5f0b:0:b0:3e8:62da:5d18 with SMTP id x11-20020ac85f0b000000b003e862da5d18mr28237144qta.25.1682452201226; Tue, 25 Apr 2023 12:50:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452201; cv=none; d=google.com; s=arc-20160816; b=U56u3lJRQdiWckAE86L84Ddug/LtiP0JNwMynYJrjDFUdWiet5mw//Zg75pMDKGUsV igWJIGv4H9hzPbsr2Nf7hfoNkfop05GEa+kBZieTbaW9SpdFFgAqngpjKclYc/+sClML EkBf5nqEEkePrRHTNOY9KSCsyY48Hmq1sLl3bbcbdgaheXhV8RCCyqRXfUaIfzIJPTJL Cdff6WuatQcjp1Uh6Yy7CWkN3JtG+OEEamhlkGAQ4y2vXZNpQCxjlfQrfeeBRE8zhYXN d3ItEY3Wj3iWQxTKo7eWvxOueLCV0YpHuGrz/+vAdV1ANytJPn17hzTaAng1liBazx8+ pwiA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding 
hrLPQBYu5I82A7mWWKNMkR03vorFzEGj9WZyb3pmfISSHIT14TuBXOqgmN2fIuFZ86Gm u7khf/GN/1yceBXt6piRafDRvZT92f5cjXtPyd0jB5MaiGgBDC/VvErOSGxan51uyiJO s2crgn9uHprtDEO2TtGnOoRlWK6mmhLQz6QRVtG8LIf1v/MwQtYOUDZRU2TJYqS5lpy9 22XCEubA0NKjDznlGaIVRQ3NvBbVGwu5KxHHGw6A89FL/lnZwVz/biNP+E7W8eR0OCRE pRiw== X-Gm-Message-State: AC+VfDxCLULueTXsCgyi20l7n7rdr72V50/k9BHuaYwfenPbfGzkFHNE Vydqts9Ufp+dkvRY1m9lRcT0mDDnza9+KFo3gqXSAQ== X-Received: by 2002:ac2:4181:0:b0:4f0:441:71a4 with SMTP id z1-20020ac24181000000b004f0044171a4mr2767lfh.35.1682451707772; Tue, 25 Apr 2023 12:41:47 -0700 (PDT) Received: from stoup.. ([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.41.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:41:47 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 50/57] tcg/s390x: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:39 +0100 Message-Id: <20230425193146.2106111-51-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::12a; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x12a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/s390x/tcg-target.c.inc | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index 22f0206b5a..ddd9860a6a 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -1572,6 +1572,8 @@ typedef struct { TCGReg base; TCGReg index; int disp; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) @@ -1733,8 +1735,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned a_mask = (1u << a_bits) - 1; + MemOp atom_u; + unsigned a_mask; + + h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, + MO_ATOM_IFALIGN, false); + a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU unsigned s_bits = opc & MO_SIZE; @@ -1764,7 +1770,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * bits within the address. For unaligned access, we check that we don't * cross pages using the address of the last byte of the access. */ - a_off = (a_bits >= s_bits ? 0 : s_mask - a_mask); + a_off = (a_mask >= s_mask ? 
0 : s_mask - a_mask); tlb_mask = (uint64_t)TARGET_PAGE_MASK | a_mask; if (a_off == 0) { tgen_andi_risbg(s, TCG_REG_R0, addr_reg, tlb_mask); @@ -1806,7 +1812,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->addrlo_reg = addr_reg; /* We are expecting a_bits to max out at 7, much lower than TMLL. */ - tcg_debug_assert(a_bits < 16); + tcg_debug_assert(a_mask <= 0xffff); tcg_out_insn(s, RI, TMLL, addr_reg, a_mask); tcg_out16(s, RI_BRC | (7 << 4)); /* CC in {1,2,3} */ From patchwork Tue Apr 25 19:31:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676830 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880568wrs; Tue, 25 Apr 2023 12:43:23 -0700 (PDT) X-Google-Smtp-Source: AKy350Z8mWvxFzr9IyeVaAZm2zhngTEXrL/jZGCy73jZRgaeq5b/Ww4PSGu9N4KuJzZ1conYQsuq X-Received: by 2002:a05:6214:e8d:b0:5ef:85b5:3a4a with SMTP id hf13-20020a0562140e8d00b005ef85b53a4amr33933307qvb.30.1682451803333; Tue, 25 Apr 2023 12:43:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451803; cv=none; d=google.com; s=arc-20160816; b=dBpUBH9BwGOCzmB20YKHqgyBqWFn73wGWjiYLbalc348V/m5hqhj+c7r4B0FYuspJ8 /iFn83p2oD/bhiVU7xZZTOqUm+84Sq40lbYHJQUW2RdGBY6ypuMRwk4OpTU8MZWTpv/4 +NowGzz6x6CqFfPKH+SOzlZpFnlXm+hDJk93bCOYhYCOaj/qHpXSTuRHhn3C9xndua9L XSjOqXXyAMvWhDVNTvGqmSPe2ttkJ8fnx2OFUCvFFE7H65nmyUNQpX446sJ2lbCiIINn 25ovghiSyyOALJK8jiPTjaYVjHmPkzPTIC6F4m8D9dskJAVgFQ+LltGf9Zq7kY6DdMCa Itpw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=5+Rgd6h8ypNJWz++9VNujvYl5qeiVkqzvRz2suad2+I=; b=XypJJOix31U4B2LgpIxvD9gNgdqkqTtIzl4vJ7vRYskxEKa5atpR4vGPfAjEfiOG8R wL1y62kOjNK7XgCjy2XBu04In1m7uUUHn1jf2edMR4eF84R2frsZ4a9hCh0LgoFL8O7J WUnSW94MLTRHe5YDkn3TTXcbXfkAmhK6Mo+QQN+DredBUHyNoFNxagrWNaMw2pULoAub WQ4hl70TEPgyS45Md+W6CIRH6m4JePvGZvc9iH57R3xtr3PR1IKUGjUAKaxUCGDxXujl 21moUwqaf1vLeHLfCGC5iFP6BxE4wlA8FkNiwfxobQnowckO0rjMS3SLyn8+0zYCgTiF xSPw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="xxnsD/5i"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
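The a_off adjustment above implements the two cases in the comment: when the required alignment already covers the access size, the alignment bits of the address itself are checked; when a smaller alignment is allowed, an offset is added so the compare effectively uses the last byte of the access, and a page-crossing access therefore fails the compare and takes the slow path. A simplified, non-authoritative sketch of the idea (example page size and invented helper name, not the real softmmu fast path):

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_PAGE_MASK  (~(uint64_t)0xfff)   /* assume 4 KiB pages */

/* Illustration only: the masked compare the fast path performs. */
static bool fast_path_hit(uint64_t addr, uint64_t tlb_page,
                          unsigned a_mask, unsigned s_mask)
{
    unsigned a_off = (a_mask >= s_mask ? 0 : s_mask - a_mask);
    uint64_t tlb_mask = EXAMPLE_PAGE_MASK | a_mask;

    return ((addr + a_off) & tlb_mask) == tlb_page;
}

With a_mask = 0 and s_mask = 7, for instance, an 8-byte access whose last byte spills into the next page changes the masked value and misses, while any in-page address still matches.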
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.41.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:41:56 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 51/57] tcg/sparc64: Use atom_and_align_for_opc Date: Tue, 25 Apr 2023 20:31:40 +0100 Message-Id: <20230425193146.2106111-52-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::129; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x129.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/sparc64/tcg-target.c.inc | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index bb23038529..4f9ec02b1f 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -1028,11 +1028,13 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - unsigned a_bits = get_alignment_bits(opc); - unsigned s_bits = opc & MO_SIZE; + MemOp s_bits = opc & MO_SIZE; + MemOp a_bits, atom_a, atom_u; unsigned a_mask; /* We don't support unaligned accesses. 
*/ + a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + MO_ATOM_IFALIGN, false); a_bits = MAX(a_bits, s_bits); a_mask = (1u << a_bits) - 1; From patchwork Tue Apr 25 19:31:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676861 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883463wrs; Tue, 25 Apr 2023 12:51:12 -0700 (PDT) X-Google-Smtp-Source: AKy350YISEz0z6iNBqy6flBPzFaMrOifh3Pp0a05Fw++mCwVrMNYriRnQbYxPscCujAIJAstbZpG X-Received: by 2002:ad4:5aae:0:b0:5f1:5f73:aed7 with SMTP id u14-20020ad45aae000000b005f15f73aed7mr34730255qvg.27.1682452272090; Tue, 25 Apr 2023 12:51:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452272; cv=none; d=google.com; s=arc-20160816; b=0KxUqQttPe9n9nbX0Ngb+0NKKZL8vRgKnVZxZVWRrQuPDaQUrwQrCCy/YWxJTkKuyQ GL75rZ9PqJpHKs09hPAHrD8+etQgD5GFl/vMpnmVsOCVvDBJ1crF2RX3zLJY7HTy2e1s i+mj6dfqTFNQLPu3ehVDc1RtE3G0i71qYiO6GzOg8C5pSTcN+tHQ9zf/x3sIk7HtpUu2 wgqilOujGTZRXve6FWNX7PXeklFY77goIzLF5+EmC9iLwMtrGH9vaWepIIyHAIOWwWCG ueFdptsvv6En8f7gLjdBAeHdiOqFpus8EQY02n5GJjkyBc1sdBmk/A18+y8fXy/SoZGl XWQw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=bVx5ipNFndzzTu/iPLxsCUzCu2+By2v8JyYAdLF1dlA=; b=wc5Y6gUe0qaPswWtmU31HKE03iKgmV11fdrLWL0pl9iV00bS+CiUaJ119mUVWrPFsN fAj7Kx86f71HlDTreeWMb8jLVZznhMYBDJI0n305q5HO7LDSEgBwik8DpaKWrzvF/26O k7rNVt++Y3iBTuD9EX3Kf7qeJ8zGC6eXPlzOFD//mrY/kZdZ8cwE11JObQVog4sGbNtH MmWVxTMMO80EMXakcHIEbDfy/0olxdt1OzpTdEuoxDRNNc3N+MJ/96kUcyF04GUt8R6c R03cTy8vOedIB9tJN5iRz0LkA85i4383GJ8f8HDmt9sn6lwlLKf3o80EjpRV+f3v3WxS vYyg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=NYRqaN9Z; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.41.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:42:07 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 52/57] tcg/i386: Honor 64-bit atomicity in 32-bit mode Date: Tue, 25 Apr 2023 20:31:41 +0100 Message-Id: <20230425193146.2106111-53-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::129; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x129.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use the fpu to perform 64-bit loads and stores. Signed-off-by: Richard Henderson --- tcg/i386/tcg-target.c.inc | 44 +++++++++++++++++++++++++++++++++------ 1 file changed, 38 insertions(+), 6 deletions(-) diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 6a492bb9e7..671937ff5d 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -468,6 +468,10 @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct) #define OPC_GRP5 (0xff) #define OPC_GRP14 (0x73 | P_EXT | P_DATA16) +#define OPC_ESCDF (0xdf) +#define ESCDF_FILD_m64 5 +#define ESCDF_FISTP_m64 7 + /* Group 1 opcode extensions for 0x80-0x83. These are also used as modifiers for OPC_ARITH. */ #define ARITH_ADD 0 @@ -2093,7 +2097,20 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, datalo = datahi; datahi = t; } - if (h.base == datalo || h.index == datalo) { + if (h.atom == MO_64) { + /* + * Atomicity requires that we use use a single 8-byte load. + * For simplicity and code size, always use the FPU for this. + * Similar insns using SSE/AVX are merely larger. + * Load from memory in one go, then store back to the stack, + * from whence we can load into the correct integer regs. 
+ */ + tcg_out_modrm_sib_offset(s, OPC_ESCDF + h.seg, ESCDF_FILD_m64, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_offset(s, OPC_ESCDF, ESCDF_FISTP_m64, TCG_REG_ESP, 0); + tcg_out_modrm_offset(s, movop, datalo, TCG_REG_ESP, 0); + tcg_out_modrm_offset(s, movop, datahi, TCG_REG_ESP, 4); + } else if (h.base == datalo || h.index == datalo) { tcg_out_modrm_sib_offset(s, OPC_LEA, datahi, h.base, h.index, 0, h.ofs); tcg_out_modrm_offset(s, movop + h.seg, datalo, datahi, 0); @@ -2163,12 +2180,27 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, if (TCG_TARGET_REG_BITS == 64) { tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo, h.base, h.index, 0, h.ofs); + break; + } + if (use_movbe) { + TCGReg t = datalo; + datalo = datahi; + datahi = t; + } + if (h.atom == MO_64) { + /* + * Atomicity requires that we use use one 8-byte store. + * For simplicity, and code size, always use the FPU for this. + * Similar insns using SSE/AVX are merely larger. + * Assemble the 8-byte quantity in required endianness + * on the stack, load to coproc unit, and store. + */ + tcg_out_modrm_offset(s, movop, datalo, TCG_REG_ESP, 0); + tcg_out_modrm_offset(s, movop, datahi, TCG_REG_ESP, 4); + tcg_out_modrm_offset(s, OPC_ESCDF, ESCDF_FILD_m64, TCG_REG_ESP, 0); + tcg_out_modrm_sib_offset(s, OPC_ESCDF + h.seg, ESCDF_FISTP_m64, + h.base, h.index, 0, h.ofs); } else { - if (use_movbe) { - TCGReg t = datalo; - datalo = datahi; - datahi = t; - } tcg_out_modrm_sib_offset(s, movop + h.seg, datalo, h.base, h.index, 0, h.ofs); tcg_out_modrm_sib_offset(s, movop + h.seg, datahi, From patchwork Tue Apr 25 19:31:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676858 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883146wrs; Tue, 25 Apr 2023 12:50:14 -0700 (PDT) X-Google-Smtp-Source: AKy350b5fwIWAjkE4v7H2aG2XfDyX6Y4Qy9Z4l230oqsTf8p9gY08MYjRMWCeUV6tX86oKSpOwh8 X-Received: by 2002:ad4:5ceb:0:b0:5b3:e172:b63e with SMTP id iv11-20020ad45ceb000000b005b3e172b63emr33561043qvb.22.1682452213893; Tue, 25 Apr 2023 12:50:13 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452213; cv=none; d=google.com; s=arc-20160816; b=PVrYcPF7GiF6pqpzz9H7DLv5bsPgMJryR5V1NfMwNcQBzZPSiNGP3qmNSInNw8WnAv W65xbRqZLVu+Rf81l/Utiza+6eDHatk9EZzwXvWa9oGvvWhwbtJpn2sDBtM0pI23MiKM t/9DpMdhr+Ag4QmpaiQMgc5Oj+j5pFC8guWmsyx/2zRLD9GPO8zeLem2WishkVpmohww ALg2o+1tNn7FmORBtYKjiP4thkvMuPwp3yq6kWrAiusZkKmP6c/wk22LZpt9cQ2+G4Ry j7tFhP9CXAmVBGvRRTG8cCXg35XWDc/4tjbgHza9KPa1tcgBsu2sTrguyk6e3JuMTuKF oECw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=wyklooo0uraLqVOx9ne4PdT+xcmPv9LAwlA941QlXUc=; b=DtEG8JQE/SGrK8QW4hPe2ArRND0pGL0BX223XuUwjM7TEd0pRL9YbF31fWZ+OV74MC kdFS6FbGLmPSbFN8mlxasyOFhqakzsb55vge0E7/J6hdTbbCRP4bG3tvVTTkbAz3IcbC 3aowxI72DAvByJ1DdmzAED5/UMyv3/JdvTMRixyW3Z+ksUEu18ek9igBc+mlmWBg8F7Y c5gOE0ZptxVumK8z83z6veF82i/mV+3evGRJq5SchSf36aWfCsesEJ350K2sE4owRGeG 0H9WRVmhfvF3NEYFVnvm7GPV9dlBeivfXpgMkpwMNhOxkTZMdBOESoxYP3eqc25GOg8U jAGA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=ufC1TUAc; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 
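The FILD m64/FISTP m64 pair above is what lets a 32-bit host perform the guest's 64-bit access as one 8-byte memory operation. A rough standalone illustration of the same trick, in C with GCC/Clang inline assembly for a 32-bit x86 host (the helper name is invented and this is not the QEMU code path, which emits the instructions directly):

#include <stdint.h>

/*
 * Illustration only: single-copy-atomic 64-bit load on 32-bit x86 by
 * bouncing the value through the x87 stack.  fildll reads the 8 bytes
 * in one access; fistpll writes them back to a local in one access,
 * after which the two 32-bit halves can be read normally.
 */
static inline uint64_t load64_via_x87(const uint64_t *p)
{
    uint64_t tmp;

    __asm__ __volatile__("fildll %1\n\t"
                         "fistpll %0"
                         : "=m"(tmp)
                         : "m"(*p)
                         : "st");
    return tmp;
}

The store direction is the mirror image, as in the patch: assemble the two halves in a stack slot, fildll from the slot, and fistpll to the target address.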
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.42.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:42:24 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 53/57] tcg/i386: Support 128-bit load/store with have_atomic16 Date: Tue, 25 Apr 2023 20:31:42 +0100 Message-Id: <20230425193146.2106111-54-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::136; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x136.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Signed-off-by: Richard Henderson --- tcg/i386/tcg-target.h | 3 +- tcg/i386/tcg-target.c.inc | 184 +++++++++++++++++++++++++++++++++++++- 2 files changed, 182 insertions(+), 5 deletions(-) diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index 943af6775e..7f69997e30 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -194,7 +194,8 @@ extern bool have_atomic16; #define TCG_TARGET_HAS_qemu_st8_i32 1 #endif -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 \ + (TCG_TARGET_REG_BITS == 64 && have_atomic16) /* We do not support older SSE systems, only beginning with AVX1. */ #define TCG_TARGET_HAS_v64 have_avx1 diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 671937ff5d..f3904d06f5 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -91,6 +91,8 @@ static const int tcg_target_reg_alloc_order[] = { #endif }; +#define TCG_TMP_VEC TCG_REG_XMM5 + static const int tcg_target_call_iarg_regs[] = { #if TCG_TARGET_REG_BITS == 64 #if defined(_WIN64) @@ -347,6 +349,8 @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct) #define OPC_PCMPGTW (0x65 | P_EXT | P_DATA16) #define OPC_PCMPGTD (0x66 | P_EXT | P_DATA16) #define OPC_PCMPGTQ (0x37 | P_EXT38 | P_DATA16) +#define OPC_PEXTRD (0x16 | P_EXT3A | P_DATA16) +#define OPC_PINSRD (0x22 | P_EXT3A | P_DATA16) #define OPC_PMAXSB (0x3c | P_EXT38 | P_DATA16) #define OPC_PMAXSW (0xee | P_EXT | P_DATA16) #define OPC_PMAXSD (0x3d | P_EXT38 | P_DATA16) @@ -1786,7 +1790,22 @@ typedef struct { bool tcg_target_has_memory_bswap(MemOp memop) { - return have_movbe; + MemOp atom_a, atom_u; + + if (!have_movbe) { + return false; + } + if ((memop & MO_SIZE) <= MO_64) { + return true; + } + + /* + * Reject 16-byte memop with 16-byte atomicity, i.e. VMOVDQA, + * but do allow a pair of 64-bit operations, i.e. MOVBEQ. 
+ */ + (void)atom_and_align_for_opc(tcg_ctx, &atom_a, &atom_u, memop, + MO_ATOM_IFALIGN, true); + return atom_a <= MO_64; } /* @@ -1814,6 +1833,30 @@ static const TCGLdstHelperParam ldst_helper_param = { static const TCGLdstHelperParam ldst_helper_param = { }; #endif +static void tcg_out_vec_to_pair(TCGContext *s, TCGType type, + TCGReg l, TCGReg h, TCGReg v) +{ + int rexw = type == TCG_TYPE_I32 ? 0 : P_REXW; + + /* vpmov{d,q} %v, %l */ + tcg_out_vex_modrm(s, OPC_MOVD_EyVy + rexw, v, 0, l); + /* vpextr{d,q} $1, %v, %h */ + tcg_out_vex_modrm(s, OPC_PEXTRD + rexw, v, 0, h); + tcg_out8(s, 1); +} + +static void tcg_out_pair_to_vec(TCGContext *s, TCGType type, + TCGReg v, TCGReg l, TCGReg h) +{ + int rexw = type == TCG_TYPE_I32 ? 0 : P_REXW; + + /* vmov{d,q} %l, %v */ + tcg_out_vex_modrm(s, OPC_MOVD_VyEy + rexw, v, 0, l); + /* vpinsr{d,q} $1, %h, %v, %v */ + tcg_out_vex_modrm(s, OPC_PINSRD + rexw, v, v, h); + tcg_out8(s, 1); +} + /* * Generate code for the slow path for a load at the end of block */ @@ -1903,11 +1946,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp atom_u; + MemOp atom_u, s_bits; unsigned a_mask; + s_bits = opc & MO_SIZE; h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, - MO_ATOM_IFALIGN, false); + MO_ATOM_IFALIGN, s_bits == MO_128); a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU @@ -1917,7 +1961,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TCGType tlbtype = TCG_TYPE_I32; int trexw = 0, hrexw = 0, tlbrexw = 0; unsigned mem_index = get_mmuidx(oi); - unsigned s_bits = opc & MO_SIZE; unsigned s_mask = (1 << s_bits) - 1; target_ulong tlb_mask; @@ -2122,6 +2165,69 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, h.base, h.index, 0, h.ofs + 4); } break; + + case MO_128: + { + TCGLabel *l1 = NULL, *l2 = NULL; + bool use_pair = h.align < MO_128; + + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + + if (!use_pair) { + tcg_debug_assert(!use_movbe); + /* + * Atomicity requires that we use use VMOVDQA. + * If we've already checked for 16-byte alignment, that's all + * we need. If we arrive here with lesser alignment, then we + * have determined that less than 16-byte alignment can be + * satisfied with two 8-byte loads. 
+ */ + if (h.align < MO_128) { + use_pair = true; + l1 = gen_new_label(); + l2 = gen_new_label(); + + tcg_out_testi(s, h.base, 15); + tcg_out_jxx(s, JCC_JNE, l2, true); + } + + tcg_out_vex_modrm_sib_offset(s, OPC_MOVDQA_VxWx + h.seg, + TCG_TMP_VEC, 0, + h.base, h.index, 0, h.ofs); + tcg_out_vec_to_pair(s, TCG_TYPE_I64, datalo, + datahi, TCG_TMP_VEC); + + if (use_pair) { + tcg_out_jxx(s, JCC_JMP, l1, true); + tcg_out_label(s, l2); + } + } + if (use_pair) { + if (use_movbe) { + TCGReg t = datalo; + datalo = datahi; + datahi = t; + } + if (h.base == datalo || h.index == datalo) { + tcg_out_modrm_sib_offset(s, OPC_LEA, datahi, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_offset(s, movop + P_REXW + h.seg, + datalo, datahi, 0); + tcg_out_modrm_offset(s, movop + P_REXW + h.seg, + datahi, datahi, 8); + } else { + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datahi, + h.base, h.index, 0, h.ofs + 8); + } + } + if (l1) { + tcg_out_label(s, l1); + } + } + break; + default: g_assert_not_reached(); } @@ -2207,6 +2313,60 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, h.base, h.index, 0, h.ofs + 4); } break; + + case MO_128: + { + TCGLabel *l1 = NULL, *l2 = NULL; + bool use_pair = h.align < MO_128; + + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + + if (!use_pair) { + tcg_debug_assert(!use_movbe); + /* + * Atomicity requires that we use use VMOVDQA. + * If we've already checked for 16-byte alignment, that's all + * we need. If we arrive here with lesser alignment, then we + * have determined that less that 16-byte alignment can be + * satisfied with two 8-byte loads. + */ + if (h.align < MO_128) { + use_pair = true; + l1 = gen_new_label(); + l2 = gen_new_label(); + + tcg_out_testi(s, h.base, 15); + tcg_out_jxx(s, JCC_JNE, l2, true); + } + + tcg_out_pair_to_vec(s, TCG_TYPE_I64, TCG_TMP_VEC, + datalo, datahi); + tcg_out_vex_modrm_sib_offset(s, OPC_MOVDQA_WxVx + h.seg, + TCG_TMP_VEC, 0, + h.base, h.index, 0, h.ofs); + + if (use_pair) { + tcg_out_jxx(s, JCC_JMP, l1, true); + tcg_out_label(s, l2); + } + } + if (use_pair) { + if (use_movbe) { + TCGReg t = datalo; + datalo = datahi; + datahi = t; + } + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo, + h.base, h.index, 0, h.ofs); + tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datahi, + h.base, h.index, 0, h.ofs + 8); + } + if (l1) { + tcg_out_label(s, l1); + } + } + break; + default: g_assert_not_reached(); } @@ -2530,6 +2690,10 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, tcg_out_qemu_ld(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_ld_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); + break; case INDEX_op_qemu_st_i32: case INDEX_op_qemu_st8_i32: if (TCG_TARGET_REG_BITS >= TARGET_LONG_BITS) { @@ -2547,6 +2711,10 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, tcg_out_qemu_st(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_st_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); + break; OP_32_64(mulu2): tcg_out_modrm(s, OPC_GRP3_Ev + rexw, EXT3_MUL, args[3]); @@ -3241,6 +3409,13 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) : TARGET_LONG_BITS <= TCG_TARGET_REG_BITS ? 
C_O0_I3(L, L, L) : C_O0_I4(L, L, L, L)); + case INDEX_op_qemu_ld_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + return C_O2_I1(r, r, L); + case INDEX_op_qemu_st_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + return C_O0_I3(L, L, L); + case INDEX_op_brcond2_i32: return C_O0_I4(r, r, ri, ri); @@ -4097,6 +4272,7 @@ static void tcg_target_init(TCGContext *s) s->reserved_regs = 0; tcg_regset_set_reg(s->reserved_regs, TCG_REG_CALL_STACK); + tcg_regset_set_reg(s->reserved_regs, TCG_TMP_VEC); #ifdef _WIN64 /* These are call saved, and we don't save them, so don't use them. */ tcg_regset_set_reg(s->reserved_regs, TCG_REG_XMM6); From patchwork Tue Apr 25 19:31:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676847 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2881811wrs; Tue, 25 Apr 2023 12:46:29 -0700 (PDT) X-Google-Smtp-Source: AKy350Yx8mrqpOpmBV4ykn2q5+KRu44kI8YInGbRa8N8PIqNy40nibRfYGVY3q5Ny5CovQzJTUJm X-Received: by 2002:ac8:5a54:0:b0:3ef:2d98:ecdf with SMTP id o20-20020ac85a54000000b003ef2d98ecdfmr28069051qta.55.1682451989211; Tue, 25 Apr 2023 12:46:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451989; cv=none; d=google.com; s=arc-20160816; b=RT65kFwU7AKmTQW9jRMxQtrGqVVdE66PMxNVMFqjEDF882kIq5fwHiqxhTQIR0B6AV ef+z9KSjGf3ddYGMts2bVb7RPjr+XLFtMDepkkYblMF9UGE3+FaeTTyAs1d5Q8/J2kjn wq6MNASawZIKe0rQwv25vWqAH7h06A1kG9whEd89QdIQvoMh0fHiflj+VKc738RjXLQr KyoyH/IgmXcZBIILJTXmhNfXRSaM7Su8Pek1qCbJlH6Mlo9XOBi0D4FRZXjbhJHgGnwq tZMw+c5m5jCLQU6uSvcyGkOtlhotrstj7/SSIn4u4Dsl//ZVfE8oZnWWbfS25Dw3iOlD twVA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=c7cEeUCT+DqTBIQlkyakmBiJxoFYBo2GhPEulivzSlM=; b=KNNbAhVveMAZOBv+EkfB5h733+6P3RaYmnoAdQ0dTkpYOPg0A5FxnMSk2/3j+wpmrf 2pVR3Ykqs4y1mrsJ3NARum+nPI2DbbO/NOC6mk4fM62OcTjPwdKVDLjzY3joeH4r/PJJ NYvz5mRwpqv2KlDr+jjuEcvw/maJI2NCxezVIgX255+fGjcWpenqgAZ3Tevq7YhlIMXu i9wv4vkxE8IevlaLS1+penv3CkBY7RHRDsylEQr5XS1ZgwasMtaQYC7cCcR66VAb6lUl 8gwgTzMSJXIWoDi++I2JDf3P6aB3yzNbBwlFCBsejDYy7Cvlz1rqg3t9WBoSfdEMJCOU OYJg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=JDreiGso; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
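The MO_128 paths above come down to a run-time dispatch: when the address is 16-byte aligned, issue a single VMOVDQA and move the halves between the vector register and the integer pair; otherwise the earlier atom_and_align_for_opc analysis has already established that a pair of 8-byte accesses satisfies the required atomicity. A C-level sketch of that shape using SSE2 intrinsics (illustration only, not QEMU code; on a 64-bit host each memcpy below is expected to compile to one 8-byte load):

#include <emmintrin.h>
#include <stdint.h>
#include <string.h>

/*
 * Illustration only: one aligned 16-byte load when possible, otherwise
 * a pair of 8-byte loads, mirroring the dispatch the patch emits.
 */
static void load128(const void *p, uint64_t *lo, uint64_t *hi)
{
    if (((uintptr_t)p & 15) == 0) {
        __m128i v = _mm_load_si128((const __m128i *)p);   /* VMOVDQA */
        uint64_t pair[2];

        _mm_storeu_si128((__m128i *)pair, v);   /* vector -> integer pair */
        *lo = pair[0];
        *hi = pair[1];
    } else {
        memcpy(lo, p, 8);                           /* two 8-byte accesses */
        memcpy(hi, (const uint8_t *)p + 8, 8);
    }
}

The store side is symmetric: pack the pair into a vector and store it with VMOVDQA when aligned, else issue two 8-byte stores, which is the shape of the tcg_out_qemu_st_direct hunk above.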
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.42.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:42:51 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 54/57] tcg/aarch64: Rename temporaries Date: Tue, 25 Apr 2023 20:31:43 +0100 Message-Id: <20230425193146.2106111-55-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::12b; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x12b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org We will need to allocate a second general-purpose temporary. Rename the existing temps to add a distinguishing number. Signed-off-by: Richard Henderson --- tcg/aarch64/tcg-target.c.inc | 50 ++++++++++++++++++------------------ 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 1d6d382edd..76a6bfd202 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -80,8 +80,8 @@ static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot) bool have_lse; bool have_lse2; -#define TCG_REG_TMP TCG_REG_X30 -#define TCG_VEC_TMP TCG_REG_V31 +#define TCG_REG_TMP0 TCG_REG_X30 +#define TCG_VEC_TMP0 TCG_REG_V31 #ifndef CONFIG_SOFTMMU /* Note that XZR cannot be encoded in the address base register slot, @@ -998,7 +998,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece, static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece, TCGReg r, TCGReg base, intptr_t offset) { - TCGReg temp = TCG_REG_TMP; + TCGReg temp = TCG_REG_TMP0; if (offset < -0xffffff || offset > 0xffffff) { tcg_out_movi(s, TCG_TYPE_PTR, temp, offset); @@ -1150,8 +1150,8 @@ static void tcg_out_ldst(TCGContext *s, AArch64Insn insn, TCGReg rd, } /* Worst-case scenario, move offset to temp register, use reg offset. 
*/ - tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, offset); - tcg_out_ldst_r(s, insn, rd, rn, TCG_TYPE_I64, TCG_REG_TMP); + tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP0, offset); + tcg_out_ldst_r(s, insn, rd, rn, TCG_TYPE_I64, TCG_REG_TMP0); } static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg) @@ -1367,8 +1367,8 @@ static void tcg_out_call_int(TCGContext *s, const tcg_insn_unit *target) if (offset == sextract64(offset, 0, 26)) { tcg_out_insn(s, 3206, BL, offset); } else { - tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, (intptr_t)target); - tcg_out_insn(s, 3207, BLR, TCG_REG_TMP); + tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP0, (intptr_t)target); + tcg_out_insn(s, 3207, BLR, TCG_REG_TMP0); } } @@ -1505,7 +1505,7 @@ static void tcg_out_addsub2(TCGContext *s, TCGType ext, TCGReg rl, AArch64Insn insn; if (rl == ah || (!const_bh && rl == bh)) { - rl = TCG_REG_TMP; + rl = TCG_REG_TMP0; } if (const_bl) { @@ -1522,7 +1522,7 @@ static void tcg_out_addsub2(TCGContext *s, TCGType ext, TCGReg rl, possibility of adding 0+const in the low part, and the immediate add instructions encode XSP not XZR. Don't try anything more elaborate here than loading another zero. */ - al = TCG_REG_TMP; + al = TCG_REG_TMP0; tcg_out_movi(s, ext, al, 0); } tcg_out_insn_3401(s, insn, ext, rl, al, bl); @@ -1563,7 +1563,7 @@ static void tcg_out_cltz(TCGContext *s, TCGType ext, TCGReg d, { TCGReg a1 = a0; if (is_ctz) { - a1 = TCG_REG_TMP; + a1 = TCG_REG_TMP0; tcg_out_insn(s, 3507, RBIT, ext, a1, a0); } if (const_b && b == (ext ? 64 : 32)) { @@ -1572,7 +1572,7 @@ static void tcg_out_cltz(TCGContext *s, TCGType ext, TCGReg d, AArch64Insn sel = I3506_CSEL; tcg_out_cmp(s, ext, a0, 0, 1); - tcg_out_insn(s, 3507, CLZ, ext, TCG_REG_TMP, a1); + tcg_out_insn(s, 3507, CLZ, ext, TCG_REG_TMP0, a1); if (const_b) { if (b == -1) { @@ -1585,7 +1585,7 @@ static void tcg_out_cltz(TCGContext *s, TCGType ext, TCGReg d, b = d; } } - tcg_out_insn_3506(s, sel, ext, d, TCG_REG_TMP, b, TCG_COND_NE); + tcg_out_insn_3506(s, sel, ext, d, TCG_REG_TMP0, b, TCG_COND_NE); } } @@ -1603,7 +1603,7 @@ bool tcg_target_has_memory_bswap(MemOp memop) } static const TCGLdstHelperParam ldst_helper_param = { - .ntmp = 1, .tmp = { TCG_REG_TMP } + .ntmp = 1, .tmp = { TCG_REG_TMP0 } }; static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) @@ -1864,7 +1864,7 @@ static void tcg_out_goto_tb(TCGContext *s, int which) set_jmp_insn_offset(s, which); tcg_out32(s, I3206_B); - tcg_out_insn(s, 3207, BR, TCG_REG_TMP); + tcg_out_insn(s, 3207, BR, TCG_REG_TMP0); set_jmp_reset_offset(s, which); } @@ -1883,7 +1883,7 @@ void tb_target_set_jmp_target(const TranslationBlock *tb, int n, ptrdiff_t i_offset = i_addr - jmp_rx; /* Note that we asserted this in range in tcg_out_goto_tb. 
*/ - insn = deposit32(I3305_LDR | TCG_REG_TMP, 5, 19, i_offset >> 2); + insn = deposit32(I3305_LDR | TCG_REG_TMP0, 5, 19, i_offset >> 2); } qatomic_set((uint32_t *)jmp_rw, insn); flush_idcache_range(jmp_rx, jmp_rw, 4); @@ -2079,13 +2079,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_rem_i64: case INDEX_op_rem_i32: - tcg_out_insn(s, 3508, SDIV, ext, TCG_REG_TMP, a1, a2); - tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP, a2, a1); + tcg_out_insn(s, 3508, SDIV, ext, TCG_REG_TMP0, a1, a2); + tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP0, a2, a1); break; case INDEX_op_remu_i64: case INDEX_op_remu_i32: - tcg_out_insn(s, 3508, UDIV, ext, TCG_REG_TMP, a1, a2); - tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP, a2, a1); + tcg_out_insn(s, 3508, UDIV, ext, TCG_REG_TMP0, a1, a2); + tcg_out_insn(s, 3509, MSUB, ext, a0, TCG_REG_TMP0, a2, a1); break; case INDEX_op_shl_i64: @@ -2129,8 +2129,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, if (c2) { tcg_out_rotl(s, ext, a0, a1, a2); } else { - tcg_out_insn(s, 3502, SUB, 0, TCG_REG_TMP, TCG_REG_XZR, a2); - tcg_out_insn(s, 3508, RORV, ext, a0, a1, TCG_REG_TMP); + tcg_out_insn(s, 3502, SUB, 0, TCG_REG_TMP0, TCG_REG_XZR, a2); + tcg_out_insn(s, 3508, RORV, ext, a0, a1, TCG_REG_TMP0); } break; @@ -2532,8 +2532,8 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, break; } } - tcg_out_dupi_vec(s, type, MO_8, TCG_VEC_TMP, 0); - a2 = TCG_VEC_TMP; + tcg_out_dupi_vec(s, type, MO_8, TCG_VEC_TMP0, 0); + a2 = TCG_VEC_TMP0; } if (is_scalar) { insn = cmp_scalar_insn[cond]; @@ -2942,9 +2942,9 @@ static void tcg_target_init(TCGContext *s) s->reserved_regs = 0; tcg_regset_set_reg(s->reserved_regs, TCG_REG_SP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_FP); - tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_X18); /* platform register */ - tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP); + tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP0); + tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP0); } /* Saving pairs: (X19, X20) .. (X27, X28), (X29(fp), X30(lr)). 
*/ From patchwork Tue Apr 25 19:31:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676837 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2880893wrs; Tue, 25 Apr 2023 12:44:09 -0700 (PDT) X-Google-Smtp-Source: AKy350bSrMUtJJmAEpETUgt0Lih9tMNwj/k+fmPqOX7rhTr+E4YF1ehLyoG1VflLYph8p0BbafJt X-Received: by 2002:ad4:576b:0:b0:56e:a96b:a3a1 with SMTP id r11-20020ad4576b000000b0056ea96ba3a1mr31807278qvx.7.1682451848801; Tue, 25 Apr 2023 12:44:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682451848; cv=none; d=google.com; s=arc-20160816; b=bSqsDnxk3bLGtWwxzc5ZxdeoVl5BKgdHC/h3pETTpht6yEqCMgR6nHnsclQdynR3de yFIR9/7+FS4MyBEICCRSzS+xNaMQxrIBzCFOnWgptnkqLVmlwYL4W8qYw5D1TICZpMka bIL43rLMzrwycwyZFLtGF3Hr/WHOICfqPoQKZoDzNZM150nyANm6FGlOrUF31vPJqEhz m4TUuFFa/X3F0NUZSwyggdbH7R18Vl8OAFCp4OlDQIBtRg/um34RUV/6PbPqDy4yGYWH +02s5W7GX5JgK3oEIt6mVuqY7nyiPe010PeQ+XVX3zBXHsIgVh//CI5ZdTqYHY3/N+tX gtng== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=kSvmWvewT6DAwHW7Dk3CYO+nDkMqwiVa8dlhAxhzxaQ=; b=zxll1MNAlh3v9xnyD9NQNY+6ddUcLlqsn4TYKdp7+B5EavKYDjc2fcau/5zbj1vFxY F13cVCoZX2YzOP/cMCYUxcm1WzjIPJTKegGH3OFe0H9n3r/EOuXupGPiSKp2Anjtqwbg KA4IaVJKxi+TRBroYG7e1vyTGFJKE494dcO4uJ4IESspPWsU5zqg8nSKDOshsxSzHIck 77iiagT009kB3wLzuWs+rTTX7MA3bt4/qNbgvIypZgVcT7R3RdMf4Ne9+JFNyd3xP2u4 Me1TOMlRZCZSgtBjxO7PON7FLa/tbPUQSoKqiGn/c2ePvwbc6Uu29PvByemtJSTALpur lOCA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=TQ0ZAeQb; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.42.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:42:56 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 55/57] tcg/aarch64: Support 128-bit load/store Date: Tue, 25 Apr 2023 20:31:44 +0100 Message-Id: <20230425193146.2106111-56-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::12b; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x12b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use LDXP+STXP when LSE2 is not present and 16-byte atomicity is required, and LDP/STP otherwise. This requires allocating a second general-purpose temporary, as Rs cannot overlap Rn in STXP. Signed-off-by: Richard Henderson --- tcg/aarch64/tcg-target-con-set.h | 2 + tcg/aarch64/tcg-target.h | 2 +- tcg/aarch64/tcg-target.c.inc | 181 ++++++++++++++++++++++++++++++- 3 files changed, 181 insertions(+), 4 deletions(-) diff --git a/tcg/aarch64/tcg-target-con-set.h b/tcg/aarch64/tcg-target-con-set.h index d6c6866878..74065c7098 100644 --- a/tcg/aarch64/tcg-target-con-set.h +++ b/tcg/aarch64/tcg-target-con-set.h @@ -14,6 +14,7 @@ C_O0_I2(lZ, l) C_O0_I2(r, rA) C_O0_I2(rZ, r) C_O0_I2(w, r) +C_O0_I3(lZ, lZ, l) C_O1_I1(r, l) C_O1_I1(r, r) C_O1_I1(w, r) @@ -33,4 +34,5 @@ C_O1_I2(w, w, wO) C_O1_I2(w, w, wZ) C_O1_I3(w, w, w, w) C_O1_I4(r, r, rA, rZ, rZ) +C_O2_I1(r, r, l) C_O2_I4(r, r, rZ, rZ, rA, rMZ) diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h index 74ee2ed255..fa6af9746f 100644 --- a/tcg/aarch64/tcg-target.h +++ b/tcg/aarch64/tcg-target.h @@ -129,7 +129,7 @@ extern bool have_lse2; #define TCG_TARGET_HAS_muluh_i64 1 #define TCG_TARGET_HAS_mulsh_i64 1 -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 1 #define TCG_TARGET_HAS_v64 1 #define TCG_TARGET_HAS_v128 1 diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 76a6bfd202..f1627cb96d 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -81,6 +81,7 @@ bool have_lse; bool have_lse2; #define TCG_REG_TMP0 TCG_REG_X30 +#define TCG_REG_TMP1 TCG_REG_X17 #define TCG_VEC_TMP0 TCG_REG_V31 #ifndef CONFIG_SOFTMMU @@ -404,6 +405,10 @@ typedef enum { I3305_LDR_v64 = 0x5c000000, I3305_LDR_v128 = 0x9c000000, + /* Load/store exclusive. */ + I3306_LDXP = 0xc8600000, + I3306_STXP = 0xc8200000, + /* Load/store register. Described here as 3.3.12, but the helper that emits them can transform to 3.3.10 or 3.3.13. 
*/ I3312_STRB = 0x38000000 | LDST_ST << 22 | MO_8 << 30, @@ -468,6 +473,9 @@ typedef enum { I3406_ADR = 0x10000000, I3406_ADRP = 0x90000000, + /* Add/subtract extended register instructions. */ + I3501_ADD = 0x0b200000, + /* Add/subtract shifted register instructions (without a shift). */ I3502_ADD = 0x0b000000, I3502_ADDS = 0x2b000000, @@ -638,6 +646,12 @@ static void tcg_out_insn_3305(TCGContext *s, AArch64Insn insn, tcg_out32(s, insn | (imm19 & 0x7ffff) << 5 | rt); } +static void tcg_out_insn_3306(TCGContext *s, AArch64Insn insn, TCGReg rs, + TCGReg rt, TCGReg rt2, TCGReg rn) +{ + tcg_out32(s, insn | rs << 16 | rt2 << 10 | rn << 5 | rt); +} + static void tcg_out_insn_3201(TCGContext *s, AArch64Insn insn, TCGType ext, TCGReg rt, int imm19) { @@ -720,6 +734,14 @@ static void tcg_out_insn_3406(TCGContext *s, AArch64Insn insn, tcg_out32(s, insn | (disp & 3) << 29 | (disp & 0x1ffffc) << (5 - 2) | rd); } +static inline void tcg_out_insn_3501(TCGContext *s, AArch64Insn insn, + TCGType sf, TCGReg rd, TCGReg rn, + TCGReg rm, int opt, int imm3) +{ + tcg_out32(s, insn | sf << 31 | rm << 16 | opt << 13 | + imm3 << 10 | rn << 5 | rd); +} + /* This function is for both 3.5.2 (Add/Subtract shifted register), for the rare occasion when we actually want to supply a shift amount. */ static inline void tcg_out_insn_3502S(TCGContext *s, AArch64Insn insn, @@ -1648,17 +1670,17 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, TCGType addr_type = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32; TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp atom_u; + MemOp atom_u, s_bits; unsigned a_mask; + s_bits = opc & MO_SIZE; h->align = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, have_lse2 ? MO_ATOM_WITHIN16 : MO_ATOM_IFALIGN, - false); + s_bits == MO_128); a_mask = (1 << h->align) - 1; #ifdef CONFIG_SOFTMMU - unsigned s_bits = opc & MO_SIZE; unsigned s_mask = (1u << s_bits) - 1; unsigned mem_index = get_mmuidx(oi); TCGReg x3; @@ -1839,6 +1861,148 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg, } } +static TCGLabelQemuLdst * +prepare_host_addr_base_only(TCGContext *s, HostAddress *h, TCGReg addr_reg, + MemOpIdx oi, bool is_ld) +{ + TCGLabelQemuLdst *ldst; + + ldst = prepare_host_addr(s, h, addr_reg, oi, true); + + /* Compose the final address, as LDP/STP have no indexing. */ + if (h->index != TCG_REG_XZR) { + tcg_out_insn(s, 3501, ADD, TCG_TYPE_I64, TCG_REG_TMP0, + h->base, h->index, + h->index_ext == TCG_TYPE_I32 ? MO_32 : MO_64, 0); + h->base = TCG_REG_TMP0; + h->index = TCG_REG_XZR; + h->index_ext = TCG_TYPE_I64; + } + + return ldst; +} + +static void tcg_out_qemu_ld128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + + ldst = prepare_host_addr_base_only(s, &h, addr_reg, oi, true); + + if (h.atom < MO_128 || have_lse2) { + tcg_out_insn(s, 3314, LDP, datalo, datahi, h.base, 0, 0, 0); + } else { + TCGLabel *l0, *l1 = NULL; + + /* + * 16-byte atomicity without LSE2 requires LDXP+STXP loop: + * 1: ldxp lo,hi,[addr] + * stxp tmp1,lo,hi,[addr] + * cbnz tmp1, 1b + * + * If we have already checked for 16-byte alignment, that's all + * we need. Otherwise we have determined that misaligned atomicity + * may be handled with two 8-byte loads. + */ + if (h.align < MO_128) { + /* + * TODO: align should be MO_64, so we only need test bit 3, + * which means we could use TBNZ instead of AND+CBNE. 
+ */ + l1 = gen_new_label(); + tcg_out_logicali(s, I3404_ANDI, 0, TCG_REG_TMP1, addr_reg, 15); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, + TCG_REG_TMP1, 0, 1, l1); + } + + l0 = gen_new_label(); + tcg_out_label(s, l0); + + tcg_out_insn(s, 3306, LDXP, TCG_REG_XZR, datalo, datahi, h.base); + tcg_out_insn(s, 3306, STXP, TCG_REG_TMP1, datalo, datahi, h.base); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, TCG_REG_TMP1, 0, 1, l0); + + if (l1) { + TCGLabel *l2 = gen_new_label(); + tcg_out_goto_label(s, l2); + + tcg_out_label(s, l1); + tcg_out_insn(s, 3314, LDP, datalo, datahi, h.base, 0, 0, 0); + + tcg_out_label(s, l2); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + +static void tcg_out_qemu_st128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + + ldst = prepare_host_addr_base_only(s, &h, addr_reg, oi, false); + + if (h.atom < MO_128 || have_lse2) { + tcg_out_insn(s, 3314, STP, datalo, datahi, h.base, 0, 0, 0); + } else { + TCGLabel *l0, *l1 = NULL; + + /* + * 16-byte atomicity without LSE2 requires LDXP+STXP loop: + * 1: ldxp xzr,tmp1,[addr] + * stxp tmp1,lo,hi,[addr] + * cbnz tmp1, 1b + * + * If we have already checked for 16-byte alignment, that's all + * we need. Otherwise we have determined that misaligned atomicity + * may be handled with two 8-byte stores. + */ + if (h.align < MO_128) { + /* + * TODO: align should be MO_64, so we only need test bit 3, + * which means we could use TBNZ instead of AND+CBNE. + */ + l1 = gen_new_label(); + tcg_out_logicali(s, I3404_ANDI, 0, TCG_REG_TMP1, addr_reg, 15); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, + TCG_REG_TMP1, 0, 1, l1); + } + + l0 = gen_new_label(); + tcg_out_label(s, l0); + + tcg_out_insn(s, 3306, LDXP, TCG_REG_XZR, + TCG_REG_XZR, TCG_REG_TMP1, h.base); + tcg_out_insn(s, 3306, STXP, TCG_REG_TMP1, datalo, datahi, h.base); + tcg_out_brcond(s, TCG_TYPE_I32, TCG_COND_NE, TCG_REG_TMP1, 0, 1, l0); + + if (l1) { + TCGLabel *l2 = gen_new_label(); + tcg_out_goto_label(s, l2); + + tcg_out_label(s, l1); + tcg_out_insn(s, 3314, STP, datalo, datahi, h.base, 0, 0, 0); + + tcg_out_label(s, l2); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + static const tcg_insn_unit *tb_ret_addr; static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0) @@ -2176,6 +2340,12 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, REG0(0), a1, a2, ext); break; + case INDEX_op_qemu_ld_i128: + tcg_out_qemu_ld128(s, a0, a1, a2, args[3]); + break; + case INDEX_op_qemu_st_i128: + tcg_out_qemu_st128(s, REG0(0), REG0(1), a2, args[3]); + break; case INDEX_op_bswap64_i64: tcg_out_rev(s, TCG_TYPE_I64, MO_64, a0, a1); @@ -2813,9 +2983,13 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_qemu_ld_i32: case INDEX_op_qemu_ld_i64: return C_O1_I1(r, l); + case INDEX_op_qemu_ld_i128: + return C_O2_I1(r, r, l); case INDEX_op_qemu_st_i32: case INDEX_op_qemu_st_i64: return C_O0_I2(lZ, l); + case INDEX_op_qemu_st_i128: + return C_O0_I3(lZ, lZ, l); case INDEX_op_deposit_i32: case INDEX_op_deposit_i64: @@ -2944,6 +3118,7 @@ static void tcg_target_init(TCGContext *s) tcg_regset_set_reg(s->reserved_regs, TCG_REG_FP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_X18); /* platform register */ 
tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP0); + tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP1); tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP0); } From patchwork Tue Apr 25 19:31:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676855 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2882855wrs; Tue, 25 Apr 2023 12:49:26 -0700 (PDT) X-Google-Smtp-Source: AKy350YnNDfJ7Xl1lkDO05gjXgIgT2jo0GS9P6HkMA6r53HzjyF9Jgi/GICq0KQhlrqYtofQi6dO X-Received: by 2002:ac8:5a03:0:b0:3ef:499a:dd99 with SMTP id n3-20020ac85a03000000b003ef499add99mr29499882qta.66.1682452166529; Tue, 25 Apr 2023 12:49:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452166; cv=none; d=google.com; s=arc-20160816; b=tV/iUToqrsAja8FSldg1OGut4/W/1z8XJDej9XjxHUp56IzF6nhFaLrvCcNIeQVL0h fy9uw1vk0w2H0xSsyeb6feLAKQMoOk0gHPLX4eiwFjtXWktj/3075dZyveWkA/5BGPwe iWB3lhAk/rL6XK6XTl+XL4Qw2o8Hq3pVjsOs7bGQrh/frzhu+PWM9HVPmGawlKvLWj4a Uxadka3XBtYYcHzlyKwVQdVF0PllbbHDM8M0G2pny9jUOqGb8+fUI7PCC5qtttaDARpQ FIyS41f1Rlj0WQObeh695UcxD5M3wQF2xwe5n8uGLrzetNCyrsWXpsGVuzsdPuVQZ0EL SQqQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=52kj+hgoN8ybqKBx/aqOlA/DoCWot5p7JjaerE/ZwRE=; b=v2EL1WMZAQNu41RmA9d6KSRh0KMoVdZkJLQKnGsIP7c8JMRu5HydhkSSqaeSRvKN+D HhSZOxZbz2nzPRH3R1TLxeaMEzK1LM9sxopcJPE7E9qZqMZIoXu0ZgQjBGKLP/bD7fVt a8F5ajF1WSnwwBoCGhnCOR5TCYOG61O69sL2e1zdOmkOg9QUwwQUlh959X2idnKcqLoi Dc78PnGeGPIlkGVRyVV7DMyrIYI99Ry8lUTHpKueN7Gr1uvp2pHltTEgtt2gUIhedpQe 1LypGAWY02+zHR+l4ZA8tuSF+7eJRaDcuZy9e9z2PuMR8i5eoamkI/wbzHl++HVGL7pS crkg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=IAK13Nq6; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
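A side note on the AArch64 patch just above: the LDXP/STXP retry loop quoted in its block comments is the standard pre-LSE2 idiom for a single-copy-atomic 16-byte access, and storing back the value just read is what makes even the load atomic. The following is a hedged, stand-alone sketch of the same idiom (AArch64-only, illustrative; the function name and calling convention are invented, and QEMU emits these instructions directly through tcg_out_insn rather than via inline asm):

/*
 * Sketch of the LDXP/STXP loop for an atomic 16-byte load without
 * FEAT_LSE2.  Illustrative only; not how QEMU itself is written.
 */
#include <stdint.h>

static inline void atomic16_load_ldxp(void *addr, uint64_t *lo, uint64_t *hi)
{
    uint64_t l, h;
    uint32_t fail;

    __asm__ volatile(
        "0: ldxp    %[l], %[h], [%[a]]\n"        /* load-exclusive pair     */
        "   stxp    %w[f], %[l], %[h], [%[a]]\n" /* store the same pair...  */
        "   cbnz    %w[f], 0b\n"                 /* ...retry until it holds */
        : [l] "=&r" (l), [h] "=&r" (h), [f] "=&r" (fail)
        : [a] "r" (addr)
        : "memory");

    *lo = l;
    *hi = h;
}

The store variant is the same loop with the LDXP results discarded and the new pair fed to STXP; since Rs cannot overlap Rn in STXP, the patch needs the second general-purpose temporary introduced by the preceding rename patch.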
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.42.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:43:14 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 56/57] tcg/ppc: Support 128-bit load/store Date: Tue, 25 Apr 2023 20:31:45 +0100 Message-Id: <20230425193146.2106111-57-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::12b; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x12b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use LQ/STQ with ISA v2.07, and 16-byte atomicity is required. Note that these instructions do not require 16-byte alignment. Signed-off-by: Richard Henderson --- tcg/ppc/tcg-target-con-set.h | 2 + tcg/ppc/tcg-target-con-str.h | 1 + tcg/ppc/tcg-target.h | 3 +- tcg/ppc/tcg-target.c.inc | 173 +++++++++++++++++++++++++++++++---- 4 files changed, 158 insertions(+), 21 deletions(-) diff --git a/tcg/ppc/tcg-target-con-set.h b/tcg/ppc/tcg-target-con-set.h index f206b29205..bbd7b21247 100644 --- a/tcg/ppc/tcg-target-con-set.h +++ b/tcg/ppc/tcg-target-con-set.h @@ -14,6 +14,7 @@ C_O0_I2(r, r) C_O0_I2(r, ri) C_O0_I2(v, r) C_O0_I3(r, r, r) +C_O0_I3(o, m, r) C_O0_I4(r, r, ri, ri) C_O0_I4(r, r, r, r) C_O1_I1(r, r) @@ -34,6 +35,7 @@ C_O1_I3(v, v, v, v) C_O1_I4(r, r, ri, rZ, rZ) C_O1_I4(r, r, r, ri, ri) C_O2_I1(r, r, r) +C_O2_I1(o, m, r) C_O2_I2(r, r, r, r) C_O2_I4(r, r, rI, rZM, r, r) C_O2_I4(r, r, r, r, rI, rZM) diff --git a/tcg/ppc/tcg-target-con-str.h b/tcg/ppc/tcg-target-con-str.h index 9dcbc3df50..a01cfa6f84 100644 --- a/tcg/ppc/tcg-target-con-str.h +++ b/tcg/ppc/tcg-target-con-str.h @@ -9,6 +9,7 @@ * REGS(letter, register_mask) */ REGS('r', ALL_GENERAL_REGS) +REGS('o', ALL_GENERAL_REGS & 0xAAAAAAAAu) /* odd registers */ REGS('v', ALL_VECTOR_REGS) /* diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h index 0914380bd7..204b70f86a 100644 --- a/tcg/ppc/tcg-target.h +++ b/tcg/ppc/tcg-target.h @@ -149,7 +149,8 @@ extern bool have_vsx; #define TCG_TARGET_HAS_mulsh_i64 1 #endif -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 \ + (TCG_TARGET_REG_BITS == 64 && have_isa_2_07) /* * While technically Altivec could support V64, it has no 64-bit store diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 743a452981..f61c036115 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -298,25 +298,27 @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct) #define B OPCD( 18) #define BC OPCD( 16) + #define LBZ OPCD( 34) #define LHZ OPCD( 40) #define LHA OPCD( 42) 
#define LWZ OPCD( 32) #define LWZUX XO31( 55) -#define STB OPCD( 38) -#define STH OPCD( 44) -#define STW OPCD( 36) - -#define STD XO62( 0) -#define STDU XO62( 1) -#define STDX XO31(149) - #define LD XO58( 0) #define LDX XO31( 21) #define LDU XO58( 1) #define LDUX XO31( 53) #define LWA XO58( 2) #define LWAX XO31(341) +#define LQ OPCD( 56) + +#define STB OPCD( 38) +#define STH OPCD( 44) +#define STW OPCD( 36) +#define STD XO62( 0) +#define STDU XO62( 1) +#define STDX XO31(149) +#define STQ XO62( 2) #define ADDIC OPCD( 12) #define ADDI OPCD( 14) @@ -2018,11 +2020,25 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) typedef struct { TCGReg base; TCGReg index; + MemOp align; + MemOp atom; } HostAddress; bool tcg_target_has_memory_bswap(MemOp memop) { - return true; + MemOp atom_a, atom_u; + + if ((memop & MO_SIZE) <= MO_64) { + return true; + } + + /* + * Reject 16-byte memop with 16-byte atomicity, + * but do allow a pair of 64-bit operations. + */ + (void)atom_and_align_for_opc(tcg_ctx, &atom_a, &atom_u, memop, + MO_ATOM_IFALIGN, true); + return atom_a <= MO_64; } /* @@ -2037,7 +2053,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); - MemOp a_bits, atom_a, atom_u; + MemOp a_bits, atom_u, s_bits; /* * Book II, Section 1.4, Single-Copy Atomicity, specifies: @@ -2049,10 +2065,19 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * As of 3.0, "the non-atomic access is performed as described in * the corresponding list", which matches MO_ATOM_SUBALIGN. */ - a_bits = atom_and_align_for_opc(s, &atom_a, &atom_u, opc, + s_bits = opc & MO_SIZE; + a_bits = atom_and_align_for_opc(s, &h->atom, &atom_u, opc, have_isa_3_00 ? MO_ATOM_SUBALIGN : MO_ATOM_IFALIGN, - false); + s_bits == MO_128); + + if (TCG_TARGET_REG_BITS == 32) { + /* We don't support unaligned accesses on 32-bits. */ + if (a_bits < s_bits) { + a_bits = s_bits; + } + } + h->align = a_bits; #ifdef CONFIG_SOFTMMU int mem_index = get_mmuidx(oi); @@ -2061,7 +2086,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, int fast_off = TLB_MASK_TABLE_OFS(mem_index); int mask_off = fast_off + offsetof(CPUTLBDescFast, mask); int table_off = fast_off + offsetof(CPUTLBDescFast, table); - unsigned s_bits = opc & MO_SIZE; ldst = new_ldst_label(s); ldst->is_ld = is_ld; @@ -2111,13 +2135,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, /* Clear the non-page, non-alignment bits from the address in R0. */ if (TCG_TARGET_REG_BITS == 32) { - /* We don't support unaligned accesses on 32-bits. - * Preserve the bottom bits and thus trigger a comparison - * failure on unaligned accesses. - */ - if (a_bits < s_bits) { - a_bits = s_bits; - } tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0, (32 - a_bits) & 31, 31 - TARGET_PAGE_BITS); } else { @@ -2302,6 +2319,108 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi, } } +static TCGLabelQemuLdst * +prepare_host_addr_index_only(TCGContext *s, HostAddress *h, TCGReg addr_reg, + MemOpIdx oi, bool is_ld) +{ + TCGLabelQemuLdst *ldst; + + ldst = prepare_host_addr(s, h, addr_reg, -1, oi, true); + + /* Compose the final address, as LQ/STQ have no indexing. 
*/ + if (h->base != 0) { + tcg_out32(s, ADD | TAB(TCG_REG_TMP1, h->base, h->index)); + h->index = TCG_REG_TMP1; + h->base = 0; + } + + return ldst; +} + +static void tcg_out_qemu_ld128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + bool need_bswap; + + ldst = prepare_host_addr_index_only(s, &h, addr_reg, oi, true); + need_bswap = get_memop(oi) & MO_BSWAP; + + if (h.atom == MO_128) { + tcg_debug_assert(!need_bswap); + tcg_debug_assert(datalo & 1); + tcg_debug_assert(datahi == datalo - 1); + tcg_out32(s, LQ | TAI(datahi, h.index, 0)); + } else { + TCGReg d1, d2; + + if (HOST_BIG_ENDIAN ^ need_bswap) { + d1 = datahi, d2 = datalo; + } else { + d1 = datalo, d2 = datahi; + } + + if (need_bswap) { + tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, 8); + tcg_out32(s, LDBRX | TAB(d1, 0, h.index)); + tcg_out32(s, LDBRX | TAB(d2, h.index, TCG_REG_R0)); + } else { + tcg_out32(s, LD | TAI(d1, h.index, 0)); + tcg_out32(s, LD | TAI(d2, h.index, 8)); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + +static void tcg_out_qemu_st128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi) +{ + TCGLabelQemuLdst *ldst; + HostAddress h; + bool need_bswap; + + ldst = prepare_host_addr_index_only(s, &h, addr_reg, oi, false); + need_bswap = get_memop(oi) & MO_BSWAP; + + if (h.atom == MO_128) { + tcg_debug_assert(!need_bswap); + tcg_debug_assert(datalo & 1); + tcg_debug_assert(datahi == datalo - 1); + tcg_out32(s, STQ | TAI(datahi, h.index, 0)); + } else { + TCGReg d1, d2; + + if (HOST_BIG_ENDIAN ^ need_bswap) { + d1 = datahi, d2 = datalo; + } else { + d1 = datalo, d2 = datahi; + } + + if (need_bswap) { + tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, 8); + tcg_out32(s, STDBRX | TAB(d1, 0, h.index)); + tcg_out32(s, STDBRX | TAB(d2, h.index, TCG_REG_R0)); + } else { + tcg_out32(s, STD | TAI(d1, h.index, 0)); + tcg_out32(s, STD | TAI(d2, h.index, 8)); + } + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + static void tcg_out_nop_fill(tcg_insn_unit *p, int count) { int i; @@ -2852,6 +2971,11 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_ld_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_ld128(s, args[0], args[1], args[2], args[3]); + break; + case INDEX_op_qemu_st_i32: if (TCG_TARGET_REG_BITS >= TARGET_LONG_BITS) { tcg_out_qemu_st(s, args[0], -1, args[1], -1, @@ -2873,6 +2997,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, args[4], TCG_TYPE_I64); } break; + case INDEX_op_qemu_st_i128: + tcg_debug_assert(TCG_TARGET_REG_BITS == 64); + tcg_out_qemu_st128(s, args[0], args[1], args[2], args[3]); + break; case INDEX_op_setcond_i32: tcg_out_setcond(s, TCG_TYPE_I32, args[3], args[0], args[1], args[2], @@ -3708,6 +3836,11 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) : TARGET_LONG_BITS == 32 ? 
C_O0_I3(r, r, r) : C_O0_I4(r, r, r, r)); + case INDEX_op_qemu_ld_i128: + return C_O2_I1(o, m, r); + case INDEX_op_qemu_st_i128: + return C_O0_I3(o, m, r); + case INDEX_op_add_vec: case INDEX_op_sub_vec: case INDEX_op_mul_vec: From patchwork Tue Apr 25 19:31:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 676860 Delivered-To: patch@linaro.org Received: by 2002:a5d:4a41:0:0:0:0:0 with SMTP id v1csp2883337wrs; Tue, 25 Apr 2023 12:50:50 -0700 (PDT) X-Google-Smtp-Source: AKy350abrI0MjuzNDzmM+ibaJ/A6dQrrUSMkNUaDNlMRaBoh2xlGufugEstHI5aHYTw2Qhh0cqRb X-Received: by 2002:ac8:5c56:0:b0:3e7:e69f:4762 with SMTP id j22-20020ac85c56000000b003e7e69f4762mr32767167qtj.20.1682452250726; Tue, 25 Apr 2023 12:50:50 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682452250; cv=none; d=google.com; s=arc-20160816; b=ikwp9MdiIe01rwMTiuJPQFuCfjRcfr3z2woiYZz8rdolqtIz2OIIhyJ7Sz2YJ6SXX2 hayZ/IC0trzwVZwiDxbN33WuBOL2nz2DKtLdHVCnCG2nMccUlqFDIUjfS7tsYl6JFute tdlPfYwmXZBxDkdvf7cFqTaPeOZU/XkwzOMynDl+YjPFioe/z73DnhxEIhl3lh/BhJUM QuNiPsGfJAIav+SyqW7b+9tmoZLK6QgP9cRkzPp0QuiX/wTQ/MNoKI/kGhrOmEU4g3ig /f35RlB6xLwOwGhTYYBaOtOO1dSBrLpGygltNd0ueQ1zcLs0sgTVsSJoWVXsj1eeAs0W atIg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=255MRdCL5yMgAYzZU5GCo8Mz4wWBTy7IHHcpHB/YBDg=; b=dE0glbHWEiQ/sxuGQkQcAarZOBL8ap4yIXqzKAazKjIqa6IoPHR1JJwRooHwlGYEDC /ioilFYeECPHK1EydoE7jECf6HM961f1BV8Vv43n3qf+0xIaLrZ+imNK+AqtKmMhQn6V YTmUrXe/SLm2HelGeasEqjFDTHFUe9c7EjRPfTFmjKzbNM6MU8MinNkpucS2oEGqISHD uI/jz4DmCS7NgbZNRQVoF5mekktDdehysolGkdco5rZSiJVKE98mh0DaEBuXViKyx1kq fIgfLT+6IPS/LpR/QNbd5eIXl0wOFBu68PAklQ99sA3NYxIbI1CUJkActsGfuc0mEQuZ L40w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=sbTDdgmk; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
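One detail of the ppc patch above that is easy to miss: LQ and STQ operate on an even/odd GPR pair, which is why the new 'o' constraint is defined as ALL_GENERAL_REGS & 0xAAAAAAAAu and why tcg_out_qemu_ld128/st128 assert datalo & 1 and datahi == datalo - 1. A small hedged sketch of that pairing rule follows (the mask mirrors the constraint from the patch; the helper name is invented for illustration):

/*
 * Sketch of the even/odd register-pair rule behind the 'o' constraint.
 * Not QEMU code; lq_stq_pair_ok is an invented name.
 */
#include <stdbool.h>

#define ODD_GPR_MASK 0xAAAAAAAAu   /* bit n set <=> GPR n is odd-numbered */

static bool lq_stq_pair_ok(unsigned datalo, unsigned datahi)
{
    return datalo < 32
        && ((ODD_GPR_MASK >> datalo) & 1)   /* low half in an odd register */
        && datahi == datalo - 1;            /* high half in the even mate  */
}

When the pair cannot be used, because 16-byte atomicity is not required or the access is byte-swapped, the code falls back to two 8-byte operations (LD/STD, or LDBRX/STDBRX with the offset 8 loaded into R0 for the swapped case) through the address summed into TCG_REG_TMP1.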
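Both the ppc patch above and the s390x patch that follows adjust tcg_target_has_memory_bswap() in the same way: a byte-swapped 16-byte access is only accepted when at most 8-byte atomicity is required, because the byte-reversing memory instructions on those hosts (LDBRX/STDBRX, LRVG/STRVG) exist only up to 8 bytes. A hedged sketch of that policy, with the MemOp size codes spelled out locally so the example is self-contained:

/*
 * Sketch of the 16-byte bswap policy.  MO_64/MO_128 are defined locally
 * for illustration; in QEMU they come from the MemOp enumeration, and the
 * real check obtains the required atomicity via atom_and_align_for_opc().
 */
#include <stdbool.h>

enum { MO_64 = 3, MO_128 = 4 };   /* log2 of the access size in bytes */

static bool bswap_memop_ok(int size, int required_atom)
{
    if (size <= MO_64) {
        return true;   /* byte-reversing loads/stores exist at this size */
    }
    /*
     * A byte-swapped 16-byte access has to be split into two byte-reversed
     * 8-byte operations, so accept it only when 8-byte pieces already
     * satisfy the atomicity requirement.
     */
    return required_atom <= MO_64;
}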
([91.209.212.61]) by smtp.gmail.com with ESMTPSA id d8-20020ac25448000000b004ec55ac6cd1sm2175662lfn.136.2023.04.25.12.43.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Apr 2023 12:43:42 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Cc: qemu-arm@nongnu.org, qemu-s390x@nongnu.org, qemu-riscv@nongnu.org, qemu-ppc@nongnu.org, git@xen0n.name, jiaxun.yang@flygoat.com, philmd@linaro.org Subject: [PATCH v3 57/57] tcg/s390x: Support 128-bit load/store Date: Tue, 25 Apr 2023 20:31:46 +0100 Message-Id: <20230425193146.2106111-58-richard.henderson@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230425193146.2106111-1-richard.henderson@linaro.org> References: <20230425193146.2106111-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::135; envelope-from=richard.henderson@linaro.org; helo=mail-lf1-x135.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Use LPQ/STPQ when 16-byte atomicity is required. Note that these instructions require 16-byte alignment. Signed-off-by: Richard Henderson --- tcg/s390x/tcg-target-con-set.h | 2 + tcg/s390x/tcg-target.h | 2 +- tcg/s390x/tcg-target.c.inc | 100 ++++++++++++++++++++++++++++++++- 3 files changed, 102 insertions(+), 2 deletions(-) diff --git a/tcg/s390x/tcg-target-con-set.h b/tcg/s390x/tcg-target-con-set.h index ecc079bb6d..cbad91b2b5 100644 --- a/tcg/s390x/tcg-target-con-set.h +++ b/tcg/s390x/tcg-target-con-set.h @@ -14,6 +14,7 @@ C_O0_I2(r, r) C_O0_I2(r, ri) C_O0_I2(r, rA) C_O0_I2(v, r) +C_O0_I3(o, m, r) C_O1_I1(r, r) C_O1_I1(v, r) C_O1_I1(v, v) @@ -36,6 +37,7 @@ C_O1_I2(v, v, v) C_O1_I3(v, v, v, v) C_O1_I4(r, r, ri, rI, r) C_O1_I4(r, r, rA, rI, r) +C_O2_I1(o, m, r) C_O2_I2(o, m, 0, r) C_O2_I2(o, m, r, r) C_O2_I3(o, m, 0, 1, r) diff --git a/tcg/s390x/tcg-target.h b/tcg/s390x/tcg-target.h index 170007bea5..ec96952172 100644 --- a/tcg/s390x/tcg-target.h +++ b/tcg/s390x/tcg-target.h @@ -140,7 +140,7 @@ extern uint64_t s390_facilities[3]; #define TCG_TARGET_HAS_muluh_i64 0 #define TCG_TARGET_HAS_mulsh_i64 0 -#define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_qemu_ldst_i128 1 #define TCG_TARGET_HAS_v64 HAVE_FACILITY(VECTOR) #define TCG_TARGET_HAS_v128 HAVE_FACILITY(VECTOR) diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index ddd9860a6a..91fecfc51b 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -243,6 +243,7 @@ typedef enum S390Opcode { RXY_LLGF = 0xe316, RXY_LLGH = 0xe391, RXY_LMG = 0xeb04, + RXY_LPQ = 0xe38f, RXY_LRV = 0xe31e, RXY_LRVG = 0xe30f, RXY_LRVH = 0xe31f, @@ -253,6 +254,7 @@ typedef enum S390Opcode { RXY_STG = 0xe324, RXY_STHY = 0xe370, RXY_STMG = 0xeb24, + RXY_STPQ = 0xe38e, RXY_STRV = 0xe33e, RXY_STRVG = 0xe32f, RXY_STRVH = 0xe33f, @@ -1578,7 +1580,19 @@ typedef struct { bool tcg_target_has_memory_bswap(MemOp memop) { - return true; + MemOp atom_a, atom_u; + + if ((memop & MO_SIZE) <= MO_64) { + return true; + } + + /* + * Reject 16-byte 
memop with 16-byte atomicity, + * but do allow a pair of 64-bit operations. + */ + (void)atom_and_align_for_opc(tcg_ctx, &atom_a, &atom_u, memop, + MO_ATOM_IFALIGN, true); + return atom_a <= MO_64; } static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data, @@ -1868,6 +1882,80 @@ static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg, } } +static void tcg_out_qemu_ldst_i128(TCGContext *s, TCGReg datalo, TCGReg datahi, + TCGReg addr_reg, MemOpIdx oi, bool is_ld) +{ + TCGLabel *l1 = NULL, *l2 = NULL; + TCGLabelQemuLdst *ldst; + HostAddress h; + bool need_bswap; + bool use_pair; + S390Opcode insn; + + ldst = prepare_host_addr(s, &h, addr_reg, oi, is_ld); + + use_pair = h.atom < MO_128; + need_bswap = get_memop(oi) & MO_BSWAP; + + if (!use_pair) { + /* + * Atomicity requires we use LPQ. If we've already checked for + * 16-byte alignment, that's all we need. If we arrive with + * lesser alignment, we have determined that less than 16-byte + * alignment can be satisfied with two 8-byte loads. + */ + if (h.align < MO_128) { + use_pair = true; + l1 = gen_new_label(); + l2 = gen_new_label(); + + tcg_out_insn(s, RI, TMLL, addr_reg, 15); + tgen_branch(s, 7, l1); /* CC in {1,2,3} */ + } + + tcg_debug_assert(!need_bswap); + tcg_debug_assert(datalo & 1); + tcg_debug_assert(datahi == datalo - 1); + insn = is_ld ? RXY_LPQ : RXY_STPQ; + tcg_out_insn_RXY(s, insn, datahi, h.base, h.index, h.disp); + + if (use_pair) { + tgen_branch(s, S390_CC_ALWAYS, l2); + tcg_out_label(s, l1); + } + } + if (use_pair) { + TCGReg d1, d2; + + if (need_bswap) { + d1 = datalo, d2 = datahi; + insn = is_ld ? RXY_LRVG : RXY_STRVG; + } else { + d1 = datahi, d2 = datalo; + insn = is_ld ? RXY_LG : RXY_STG; + } + + if (h.base == d1 || h.index == d1) { + tcg_out_insn(s, RXY, LAY, TCG_TMP0, h.base, h.index, h.disp); + h.base = TCG_TMP0; + h.index = TCG_REG_NONE; + h.disp = 0; + } + tcg_out_insn_RXY(s, insn, d1, h.base, h.index, h.disp); + tcg_out_insn_RXY(s, insn, d2, h.base, h.index, h.disp + 8); + } + if (l2) { + tcg_out_label(s, l2); + } + + if (ldst) { + ldst->type = TCG_TYPE_I128; + ldst->datalo_reg = datalo; + ldst->datahi_reg = datahi; + ldst->raddr = tcg_splitwx_to_rx(s->code_ptr); + } +} + static void tcg_out_exit_tb(TCGContext *s, uintptr_t a0) { /* Reuse the zeroing that exists for goto_ptr. */ @@ -2225,6 +2313,12 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, args[0], args[1], args[2], TCG_TYPE_I64); break; + case INDEX_op_qemu_ld_i128: + tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], true); + break; + case INDEX_op_qemu_st_i128: + tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], false); + break; case INDEX_op_ld16s_i64: tcg_out_mem(s, 0, RXY_LGH, args[0], args[1], TCG_REG_NONE, args[2]); @@ -3102,6 +3196,10 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_qemu_st_i64: case INDEX_op_qemu_st_i32: return C_O0_I2(r, r); + case INDEX_op_qemu_ld_i128: + return C_O2_I1(o, m, r); + case INDEX_op_qemu_st_i128: + return C_O0_I3(o, m, r); case INDEX_op_deposit_i32: case INDEX_op_deposit_i64: