From patchwork Fri Jun 4 15:51:34 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454054
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Alex Bennée, Andrew Jones, qemu-arm@nongnu.org, Philippe Mathieu-Daudé, Thomas Huth
Subject: [PATCH v16 01/99] MAINTAINERS: Add qtest/arm-cpu-features.c to ARM TCG CPUs section
Date: Fri, 4 Jun 2021 16:51:34 +0100
Message-Id: <20210604155312.15902-2-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Philippe Mathieu-Daudé

We want the ARM maintainers and the qemu-arm@ list to be notified when
this file is modified. Add an entry to the 'ARM TCG CPUs' section in
the MAINTAINERS file.

Acked-by: Andrew Jones
Reviewed-by: Thomas Huth
Reviewed-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
Message-Id: <20210505125806.1263441-2-philmd@redhat.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

-- 
2.20.1

Reviewed-by: Richard Henderson

diff --git a/MAINTAINERS b/MAINTAINERS
index 96a4eeb5a5..1ff68116b0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -155,6 +155,7 @@ S: Maintained
 F: target/arm/
 F: tests/tcg/arm/
 F: tests/tcg/aarch64/
+F: tests/qtest/arm-cpu-features.c
 F: hw/arm/
 F: hw/cpu/a*mpcore.c
 F: include/hw/cpu/a*mpcore.h
From patchwork Fri Jun 4 15:51:35 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454050
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Eduardo Habkost, Richard Henderson, Markus Armbruster, Philippe Mathieu-Daudé, qemu-arm@nongnu.org, Paolo Bonzini, Alex Bennée
Subject: [PATCH v16 02/99] accel: Introduce 'query-accels' QMP command
Date: Fri, 4 Jun 2021 16:51:35 +0100
Message-Id: <20210604155312.15902-3-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Philippe Mathieu-Daudé

Introduce the 'query-accels' QMP command which returns a list of
built-in accelerator names.

- Accelerator is a QAPI enum of all existing accelerators,
- AcceleratorInfo is a QAPI structure providing accelerator specific
  information. Currently the common structure base provides the name
  of the accelerator, while the specific part is empty, but each
  accelerator can expand it.
- 'query-accels' QMP command returns a list of @AcceleratorInfo

For example on a KVM-only build we get:

    { "execute": "query-accels" }
    { "return": [ { "name": "qtest" }, { "name": "kvm" } ] }

Note that we can't make the enum values or union branches conditional
because of target-specific poisoning of accelerator definitions.

Reviewed-by: Eric Blake
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
Message-Id: <20210505125806.1263441-3-philmd@redhat.com>
---
 qapi/machine.json | 47 +++++++++++++++++++++++++++++++++++++++++++++
 accel/accel-qmp.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++
 accel/meson.build |  2 +-
 3 files changed, 97 insertions(+), 1 deletion(-)
 create mode 100644 accel/accel-qmp.c

-- 
2.20.1

Reviewed-by: Thomas Huth

diff --git a/qapi/machine.json b/qapi/machine.json
index 58a9c86b36..79a0891793 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -1274,3 +1274,50 @@
 ##
 { 'event': 'MEM_UNPLUG_ERROR',
   'data': { 'device': 'str', 'msg': 'str' } }
+
+##
+# @Accelerator:
+#
+# An enumeration of accelerator names.
+#
+# Since: 6.1
+##
+{ 'enum': 'Accelerator',
+  'data': [ 'hax', 'hvf', 'kvm', 'qtest', 'tcg', 'whpx', 'xen' ] }
+
+##
+# @AcceleratorInfo:
+#
+# Accelerator information.
+#
+# @name: The accelerator name.
+#
+# Since: 6.1
+##
+{ 'struct': 'AcceleratorInfo',
+  'data': { 'name': 'Accelerator' } }
+
+##
+# @query-accels:
+#
+# Get a list of AcceleratorInfo for all built-in accelerators.
+#
+# Returns: a list of @AcceleratorInfo describing each accelerator.
+#
+# Since: 6.1
+#
+# Example:
+#
+# -> { "execute": "query-accels" }
+# <- { "return": [
+#        {
+#            "name": "qtest"
+#        },
+#        {
+#            "name": "kvm"
+#        }
+#    ] }
+#
+##
+{ 'command': 'query-accels',
+  'returns': ['AcceleratorInfo'] }

diff --git a/accel/accel-qmp.c b/accel/accel-qmp.c
new file mode 100644
index 0000000000..426737b3f9
--- /dev/null
+++ b/accel/accel-qmp.c
@@ -0,0 +1,49 @@
+/*
+ * QEMU accelerators, QMP commands
+ *
+ * Copyright (c) 2021 Red Hat Inc.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/qapi-commands-machine.h"
+
+static const bool accel_builtin_list[ACCELERATOR__MAX] = {
+    [ACCELERATOR_QTEST] = true,
+#ifdef CONFIG_TCG
+    [ACCELERATOR_TCG] = true,
+#endif
+#ifdef CONFIG_KVM
+    [ACCELERATOR_KVM] = true,
+#endif
+#ifdef CONFIG_HAX
+    [ACCELERATOR_HAX] = true,
+#endif
+#ifdef CONFIG_HVF
+    [ACCELERATOR_HVF] = true,
+#endif
+#ifdef CONFIG_WHPX
+    [ACCELERATOR_WHPX] = true,
+#endif
+#ifdef CONFIG_XEN_BACKEND
+    [ACCELERATOR_XEN] = true,
+#endif
+};
+
+AcceleratorInfoList *qmp_query_accels(Error **errp)
+{
+    AcceleratorInfoList *list = NULL, **tail = &list;
+
+    for (Accelerator accel = 0; accel < ACCELERATOR__MAX; accel++) {
+        if (accel_builtin_list[accel]) {
+            AcceleratorInfo *info = g_new0(AcceleratorInfo, 1);
+
+            info->name = accel;
+
+            QAPI_LIST_APPEND(tail, info);
+        }
+    }
+
+    return list;
+}

diff --git a/accel/meson.build b/accel/meson.build
index b44ba30c86..7a48f6d568 100644
--- a/accel/meson.build
+++ b/accel/meson.build
@@ -1,4 +1,4 @@
-specific_ss.add(files('accel-common.c'))
+specific_ss.add(files('accel-common.c', 'accel-qmp.c'))
 softmmu_ss.add(files('accel-softmmu.c'))
 user_ss.add(files('accel-user.c'))
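[As an illustration of how the new command handler could be consumed inside the binary, here is a minimal sketch, not part of the patch: the helper name and the printf logging are invented, and it assumes the usual QAPI-generated helpers (Accelerator_str(), qapi_free_AcceleratorInfoList()) for the schema above.]

    /* Illustrative only: dump the accelerators built into this binary. */
    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qapi/qapi-commands-machine.h"

    static void dump_builtin_accels(void)
    {
        /* error_abort is acceptable here: the command cannot fail. */
        AcceleratorInfoList *head = qmp_query_accels(&error_abort);

        for (AcceleratorInfoList *e = head; e; e = e->next) {
            printf("built-in accelerator: %s\n",
                   Accelerator_str(e->value->name));
        }
        qapi_free_AcceleratorInfoList(head);
    }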
From patchwork Fri Jun 4 15:51:36 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454051
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Thomas Huth, Philippe Mathieu-Daudé, qemu-arm@nongnu.org, Paolo Bonzini, Alex Bennée
Subject: [PATCH v16 03/99] qtest: Add qtest_has_accel() method
Date: Fri, 4 Jun 2021 16:51:36 +0100
Message-Id: <20210604155312.15902-4-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Philippe Mathieu-Daudé

Introduce the qtest_has_accel() method which allows a runtime query
on whether a QEMU instance has an accelerator built-in.

Reviewed-by: Eric Blake
Reviewed-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
Message-Id: <20210505125806.1263441-4-philmd@redhat.com>
---
 tests/qtest/libqos/libqtest.h |  8 ++++++++
 tests/qtest/libqtest.c        | 29 +++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

-- 
2.20.1

diff --git a/tests/qtest/libqos/libqtest.h b/tests/qtest/libqos/libqtest.h
index a68dcd79d4..d80c618c18 100644
--- a/tests/qtest/libqos/libqtest.h
+++ b/tests/qtest/libqos/libqtest.h
@@ -763,6 +763,14 @@ void qmp_expect_error_and_unref(QDict *rsp, const char *class);
  */
 bool qtest_probe_child(QTestState *s);
 
+/**
+ * qtest_has_accel:
+ * @accel_name: Accelerator name to check for.
+ *
+ * Returns: true if the accelerator is built in.
+ */
+bool qtest_has_accel(const char *accel_name);
+
 /**
  * qtest_set_expected_status:
  * @s: QTestState instance to operate on.

diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
index 825b13a44c..6bda6e1f33 100644
--- a/tests/qtest/libqtest.c
+++ b/tests/qtest/libqtest.c
@@ -393,6 +393,35 @@ QTestState *qtest_init_with_serial(const char *extra_args, int *sock_fd)
     return qts;
 }
 
+bool qtest_has_accel(const char *accel_name)
+{
+    bool has_accel = false;
+    QDict *response;
+    QList *accels;
+    QListEntry *accel;
+    QTestState *qts;
+
+    qts = qtest_initf("-accel qtest -machine none");
+    response = qtest_qmp(qts, "{'execute': 'query-accels'}");
+    accels = qdict_get_qlist(response, "return");
+
+    QLIST_FOREACH_ENTRY(accels, accel) {
+        QDict *accel_dict = qobject_to(QDict, qlist_entry_obj(accel));
+        const char *name = qdict_get_str(accel_dict, "name");
+
+        if (g_str_equal(name, accel_name)) {
+            has_accel = true;
+            break;
+        }
+    }
+    qobject_unref(response);
+
+    qtest_quit(qts);
+
+    return has_accel;
+}
+
+
 void qtest_quit(QTestState *s)
 {
     qtest_remove_abrt_handler(s);
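[As a usage sketch, not part of this patch, a test that needs a particular accelerator can now skip itself cleanly at runtime; the test body and skip message below are invented for illustration.]

    #include "qemu/osdep.h"
    #include "libqtest.h"

    /* Hypothetical example: only exercise KVM behaviour when it is built in. */
    static void test_kvm_only_feature(void)
    {
        QTestState *qts;

        if (!qtest_has_accel("kvm")) {
            g_test_skip("KVM accelerator is not built in");
            return;
        }

        qts = qtest_init("-machine virt -accel kvm");
        /* ... KVM-specific assertions would go here ... */
        qtest_quit(qts);
    }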
From patchwork Fri Jun 4 15:51:37 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454055
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Peter Maydell, Andrew Jones, Philippe Mathieu-Daudé, Thomas Huth, qemu-arm@nongnu.org, Claudio Fontana, Paolo Bonzini, Alex Bennée
Subject: [PATCH v16 04/99] qtest/arm-cpu-features: Use generic qtest_has_accel() to check for KVM
Date: Fri, 4 Jun 2021 16:51:37 +0100
Message-Id: <20210604155312.15902-5-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Philippe Mathieu-Daudé

Use the recently added generic qtest_has_accel() method to check if
KVM is available.

Suggested-by: Claudio Fontana
Reviewed-by: Andrew Jones
Reviewed-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
Message-Id: <20210505125806.1263441-5-philmd@redhat.com>
---
 tests/qtest/arm-cpu-features.c | 25 +------------------------
 1 file changed, 1 insertion(+), 24 deletions(-)

-- 
2.20.1

Reviewed-by: Richard Henderson

diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index 8252b85bb8..7f4b252127 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -26,21 +26,6 @@
 " 'arguments': { 'type': 'full', "
 #define QUERY_TAIL "}}"
 
-static bool kvm_enabled(QTestState *qts)
-{
-    QDict *resp, *qdict;
-    bool enabled;
-
-    resp = qtest_qmp(qts, "{ 'execute': 'query-kvm' }");
-    g_assert(qdict_haskey(resp, "return"));
-    qdict = qdict_get_qdict(resp, "return");
-    g_assert(qdict_haskey(qdict, "enabled"));
-    enabled = qdict_get_bool(qdict, "enabled");
-    qobject_unref(resp);
-
-    return enabled;
-}
-
 static QDict *do_query_no_props(QTestState *qts, const char *cpu_type)
 {
     return qtest_qmp(qts, QUERY_HEAD "'model': { 'name': %s }"
@@ -493,14 +478,6 @@ static void test_query_cpu_model_expansion_kvm(const void *data)
 
     qts = qtest_init(MACHINE_KVM "-cpu max");
 
-    /*
-     * These tests target the 'host' CPU type, so KVM must be enabled.
-     */
-    if (!kvm_enabled(qts)) {
-        qtest_quit(qts);
-        return;
-    }
-
     /* Enabling and disabling kvm-no-adjvtime should always work. */
     assert_has_feature_disabled(qts, "host", "kvm-no-adjvtime");
     assert_set_feature(qts, "host", "kvm-no-adjvtime", true);
@@ -624,7 +601,7 @@ int main(int argc, char **argv)
      * order avoid attempting to run an AArch32 QEMU with KVM on
      * AArch64 hosts. That won't work and isn't easy to detect.
      */
-    if (g_str_equal(qtest_get_arch(), "aarch64")) {
+    if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
         qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
                             NULL, test_query_cpu_model_expansion_kvm);
     }
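[The same probe can also gate test registration in main(), so accelerator-specific tests never even appear in the test plan. A minimal, self-contained sketch follows the pattern of the hunk above; the test path and function are hypothetical.]

    #include "qemu/osdep.h"
    #include "libqtest.h"

    static void test_kvm_only(const void *data)
    {
        QTestState *qts = qtest_init("-machine virt,gic-version=max -accel kvm -cpu host");
        /* ... assertions against the 'host' CPU model would go here ... */
        qtest_quit(qts);
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);

        /* Register the KVM-only test only when it can actually run. */
        if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
            qtest_add_data_func("/hypothetical/kvm-only", NULL, test_kvm_only);
        }

        return g_test_run();
    }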
From patchwork Fri Jun 4 15:51:38 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454064
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Peter Maydell, Andrew Jones, Philippe Mathieu-Daudé, Thomas Huth, qemu-arm@nongnu.org, Paolo Bonzini, Alex Bennée
Subject: [PATCH v16 05/99] qtest/arm-cpu-features: Restrict sve_tests_sve_off_kvm test to KVM
Date: Fri, 4 Jun 2021 16:51:38 +0100
Message-Id: <20210604155312.15902-6-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Philippe Mathieu-Daudé

The sve_tests_sve_off_kvm() test is KVM specific. Only run it if KVM
is available.

Suggested-by: Andrew Jones
Reviewed-by: Andrew Jones
Reviewed-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
Message-Id: <20210505125806.1263441-6-philmd@redhat.com>
---
 tests/qtest/arm-cpu-features.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.20.1

Reviewed-by: Richard Henderson

diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index 7f4b252127..66300c3bc2 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -604,6 +604,8 @@ int main(int argc, char **argv)
     if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
         qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
                             NULL, test_query_cpu_model_expansion_kvm);
+        qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
+                            NULL, sve_tests_sve_off_kvm);
     }
 
     if (g_str_equal(qtest_get_arch(), "aarch64")) {
@@ -611,8 +613,6 @@ int main(int argc, char **argv)
                             NULL, sve_tests_sve_max_vq_8);
         qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
                             NULL, sve_tests_sve_off);
-        qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
-                            NULL, sve_tests_sve_off_kvm);
     }
 
     return g_test_run();
From patchwork Fri Jun 4 15:51:39 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454052
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Peter Maydell, Andrew Jones, Philippe Mathieu-Daudé, Thomas Huth, qemu-arm@nongnu.org, Paolo Bonzini, Alex Bennée
Subject: [PATCH v16 06/99] qtest/arm-cpu-features: Remove TCG fallback to KVM specific tests
Date: Fri, 4 Jun 2021 16:51:39 +0100
Message-Id: <20210604155312.15902-7-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Philippe Mathieu-Daudé

The sve_tests_sve_off_kvm() and test_query_cpu_model_expansion_kvm()
tests are now only run if KVM is available. Drop the TCG fallback.

Suggested-by: Andrew Jones
Reviewed-by: Andrew Jones
Reviewed-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
Message-Id: <20210505125806.1263441-7-philmd@redhat.com>
---
 tests/qtest/arm-cpu-features.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.20.1

Reviewed-by: Richard Henderson

diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index 66300c3bc2..b1d406542f 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -21,7 +21,7 @@
 #define SVE_MAX_VQ 16
 
 #define MACHINE "-machine virt,gic-version=max -accel tcg "
-#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm -accel tcg "
+#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm "
 #define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
                    " 'arguments': { 'type': 'full', "
 #define QUERY_TAIL "}}"
From patchwork Fri Jun 4 15:51:40 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454059
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Peter Maydell, Andrew Jones, Philippe Mathieu-Daudé, Thomas Huth, qemu-arm@nongnu.org, Paolo Bonzini, Alex Bennée
Subject: [PATCH v16 07/99] qtest/arm-cpu-features: Use generic qtest_has_accel() to check for TCG
Date: Fri, 4 Jun 2021 16:51:40 +0100
Message-Id: <20210604155312.15902-8-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Philippe Mathieu-Daudé

Now that we can probe if the TCG accelerator is available at runtime
with a QMP command, only run these tests if TCG is built into the
QEMU binary.

Suggested-by: Andrew Jones
Reviewed-by: Andrew Jones
Reviewed-by: Alex Bennée
Signed-off-by: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
Message-Id: <20210505125806.1263441-8-philmd@redhat.com>
---
 tests/qtest/arm-cpu-features.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

-- 
2.20.1

Reviewed-by: Richard Henderson
Reviewed-by: Thomas Huth

diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index b1d406542f..0d9145dd16 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -20,7 +20,7 @@
  */
 #define SVE_MAX_VQ 16
 
-#define MACHINE "-machine virt,gic-version=max -accel tcg "
+#define MACHINE_TCG "-machine virt,gic-version=max -accel tcg "
 #define MACHINE_KVM "-machine virt,gic-version=max -accel kvm "
 #define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
                    " 'arguments': { 'type': 'full', "
@@ -337,7 +337,7 @@ static void sve_tests_sve_max_vq_8(const void *data)
 {
     QTestState *qts;
 
-    qts = qtest_init(MACHINE "-cpu max,sve-max-vq=8");
+    qts = qtest_init(MACHINE_TCG "-cpu max,sve-max-vq=8");
 
     assert_sve_vls(qts, "max", BIT_ULL(8) - 1, NULL);
 
@@ -372,7 +372,7 @@ static void sve_tests_sve_off(const void *data)
 {
     QTestState *qts;
 
-    qts = qtest_init(MACHINE "-cpu max,sve=off");
+    qts = qtest_init(MACHINE_TCG "-cpu max,sve=off");
 
     /* SVE is off, so the map should be empty. */
     assert_sve_vls(qts, "max", 0, NULL);
@@ -428,7 +428,7 @@ static void test_query_cpu_model_expansion(const void *data)
 {
     QTestState *qts;
 
-    qts = qtest_init(MACHINE "-cpu max");
+    qts = qtest_init(MACHINE_TCG "-cpu max");
 
     /* Test common query-cpu-model-expansion input validation */
     assert_type_full(qts);
@@ -593,8 +593,10 @@ int main(int argc, char **argv)
 {
     g_test_init(&argc, &argv, NULL);
 
-    qtest_add_data_func("/arm/query-cpu-model-expansion",
-                        NULL, test_query_cpu_model_expansion);
+    if (qtest_has_accel("tcg")) {
+        qtest_add_data_func("/arm/query-cpu-model-expansion",
+                            NULL, test_query_cpu_model_expansion);
+    }
 
     /*
      * For now we only run KVM specific tests with AArch64 QEMU in
@@ -608,7 +610,7 @@ int main(int argc, char **argv)
                             NULL, sve_tests_sve_off_kvm);
     }
 
-    if (g_str_equal(qtest_get_arch(), "aarch64")) {
+    if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("tcg")) {
         qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
                             NULL, sve_tests_sve_max_vq_8);
         qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
[209.51.188.17]) by mx.google.com with ESMTPS id v16si6892164ljh.187.2021.06.04.09.02.08 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:02:08 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=kMSlM8i+; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:43662 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCGt-0004xE-Ld for patch@linaro.org; Fri, 04 Jun 2021 12:02:07 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:44356) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpC8T-0002TJ-Sr for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:25 -0400 Received: from mail-wr1-x431.google.com ([2a00:1450:4864:20::431]:37597) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpC8R-0008Rv-9p for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:25 -0400 Received: by mail-wr1-x431.google.com with SMTP id i94so4753997wri.4 for ; Fri, 04 Jun 2021 08:53:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZW7hi0DfYBrqOjPk2iwpg84qtf2Zw2XIPeU/d/sNq6A=; b=kMSlM8i+xPJ54AVpBAAaZTwcB0JEwBVVmLNzpVQl6rNdMmHeKcvvIH548K9xi/zJ5J Q+aPyx+yYPnKjAulG6recWcT52MwqoPVt3t/DC+M934wqBsNhIYiziyxcezyaVpdu6PY ZGJedqsC6Plzhh0CdFgUQi9QgM2HBkVrltwPEaNpgnVHvQmcN7uO6wzL5ATpEMeYwEQe ZZiSe4MAul5npTnLgwOTsGe67H5/P2rsyKVY36WncbAwXJQhMesMZ6hhPdvWEGKyw7dT jq3oxc27BsoUfg2GgX/pWg1gGNiJ9ysRsC7F9aX/nwzf+82RN9LrcKChyrRHQDh1+Mjh h79g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZW7hi0DfYBrqOjPk2iwpg84qtf2Zw2XIPeU/d/sNq6A=; b=HYFfBpEWPDlrlhZv6npwbSX8qBwrOkPEWEb2hfUG8QQ9d/XxjaZTsOmaKRdXHyj9Mc IToHqQmSrJxCBTztbsFquIIZkV7YY9E4D2vWNHwbvEEW9EWofH18ieZFkWh+2GP/gVTc MfGqsbJUQztS0yZDwybEh4wXrASivoa/FpXgSU7bj2u0EvFuo3dFM+O6RTZgEO2+ltQl MyEsvqTkzbekSmxAgUHubdrp1MhEBVz6QpBsFoGHA0pXbVXKyyCK2QuQACZ3+Z9Mk2tN D3uUJ7CDOktQx5NHbKVlBZo6uhwBW+JFvJkrOH+RoK8HsxQSIiy0z5qYoKEer91SePUI Rtqw== X-Gm-Message-State: AOAM533PVcIMpciTihHbYdlOg456RShD61VXFFEiQhYp0DfRlVAp6hl7 F2CbKvRDfrEKIpi5Dm9sCFCI1g== X-Received: by 2002:adf:dcc3:: with SMTP id x3mr4441419wrm.177.1622822001785; Fri, 04 Jun 2021 08:53:21 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id n8sm5892201wmi.16.2021.06.04.08.53.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:17 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 1C6171FF96; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 08/99] qtest/migration-test: Skip tests if KVM not builtin on s390x/ppc64 Date: Fri, 4 Jun 2021 16:51:41 +0100 Message-Id: <20210604155312.15902-9-alex.bennee@linaro.org> X-Mailer: 
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Laurent Vivier , Thomas Huth , Juan Quintela , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?b?w6k=?= , Cornelia Huck , Greg Kurz , "Dr. David Alan Gilbert" , qemu-arm@nongnu.org, Paolo Bonzini , =?utf-8?q?Alex_Benn=C3=A9e?= , David Gibson Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Philippe Mathieu-Daudé We might have a s390x/ppc64 QEMU binary built without the KVM accelerator (configured with --disable-kvm). Checking for /dev/kvm accessibility isn't enough, also check for the accelerator in the binary. Reviewed-by: David Gibson Reviewed-by: Greg Kurz Reviewed-by: Cornelia Huck Reviewed-by: Alex Bennée Signed-off-by: Philippe Mathieu-Daudé Signed-off-by: Alex Bennée Message-Id: <20210505125806.1263441-9-philmd@redhat.com> --- tests/qtest/migration-test.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) -- 2.20.1 Reviewed-by: Thomas Huth diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c index 2b028df687..102bc36b91 100644 --- a/tests/qtest/migration-test.c +++ b/tests/qtest/migration-test.c @@ -1387,7 +1387,7 @@ int main(int argc, char **argv) */ if (g_str_equal(qtest_get_arch(), "ppc64") && (access("/sys/module/kvm_hv", F_OK) || - access("/dev/kvm", R_OK | W_OK))) { + access("/dev/kvm", R_OK | W_OK) || !qtest_has_accel("kvm"))) { g_test_message("Skipping test: kvm_hv not available"); return g_test_run(); } @@ -1398,7 +1398,7 @@ int main(int argc, char **argv) */ if (g_str_equal(qtest_get_arch(), "s390x")) { #if defined(HOST_S390X) - if (access("/dev/kvm", R_OK | W_OK)) { + if (access("/dev/kvm", R_OK | W_OK) || !qtest_has_accel("kvm")) { g_test_message("Skipping test: kvm not available"); return g_test_run(); } From patchwork Fri Jun 4 15:51:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454057 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp538184jae; Fri, 4 Jun 2021 08:57:31 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwRI6QCCPZFqfk6O3XDz5Bf9+hmpBJIzME7CHbtF7Sokf8hj9uOR10FQ1dyCDi4gTIgs3EE X-Received: by 2002:a05:6602:158a:: with SMTP id e10mr4345164iow.137.1622822251171; Fri, 04 Jun 2021 08:57:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622822251; cv=none; d=google.com; s=arc-20160816; b=R4L413b+IoH4lG3uQdKAKZ9j2nSsCHQRfiVKNG55dsNd0x+VMZW7DKgXcR/JS6a/+b +47QX4QTo2tZjp9cw1JYWWGNaoBUm0JlWJISJd5hcUY5GogSEDAvXD8S72FYAEZaaGkp 0fRPk/mCfAeNgsAerkH0gg7MuzoxhvNGKIWUaOyXZo4K1Ks3LF//4EFgS9TdAY4lNzYJ AVdKKHQtYkyIAUZt2ZMuwWbEJS4iCHaVr1U3kl9RAR8PWbA82184ETvZ7nOnKyHDRDjg hsZLBCzMKc+zDVxCED6mRG3r61U4d800Dv7F8+xhK6zhAEoZ9NQg+CreIyEeM0ceZBWW scaw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
H14xjWHqpr4m1Y7TnbBS7vLyTnLjuQ0iiUfGjPe7k7OWaZMKSOq4xm/LNgJmjYgbu13x uHpHhc5NSMp24xCCbvdx0o6RuSgPgcSYm2I/68klZqLg13Tvg7RhG6UMpvAq9hh1UstD aqY8cjCs43ijZ0FCoYWtenW5Rd2Gil0F81xdqtDJ/SRuS32OhSojUTz9hT2fEL4I+LSV i9Jw== X-Gm-Message-State: AOAM532E16uDMPi/15R3GLfxvCB99JZmNZPVwUFGstb5vPcruN9o5LsB BiiN4AVWxljqhXvl0OWy2tYkkQ== X-Received: by 2002:a5d:6e0d:: with SMTP id h13mr4720422wrz.118.1622822007261; Fri, 04 Jun 2021 08:53:27 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id s1sm7478013wre.67.2021.06.04.08.53.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:24 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 337C71FF98; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 09/99] qtest/bios-tables-test: Rename tests not TCG specific Date: Fri, 4 Jun 2021 16:51:42 +0100 Message-Id: <20210604155312.15902-10-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::430; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x430.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Igor Mammedov , qemu-arm@nongnu.org, =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , "Michael S. Tsirkin" Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Philippe Mathieu-Daudé Various tests don't require TCG, but have '_tcg' in their name. As this is misleading, remove 'tcg' from their name. 
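For reference, every helper touched by this rename follows the same small pattern in tests/qtest/bios-tables-test.c, visible in the hunks below: fill in a test_data descriptor, hand it to test_acpi_one() (which boots the guest, dumps the ACPI tables and compares them against the reference blobs under tests/data/acpi), then free it. The following is only an illustrative sketch reconstructed from the fragments quoted in this patch, not the verbatim source; the required_struct_types assignments are assumed from the surrounding file:

/* Illustrative sketch, reconstructed from the hunks in this patch. */
static void test_acpi_piix4(void)
{
    test_data data;

    memset(&data, 0, sizeof(data));
    data.machine = MACHINE_PC;
    data.required_struct_types = base_required_struct_types;      /* assumed */
    data.required_struct_types_len = ARRAY_SIZE(base_required_struct_types);

    /*
     * Nothing here is tied to the TCG accelerator, which is why the
     * old "_tcg" suffix was misleading.
     */
    test_acpi_one(NULL, &data);
    free_test_data(&data);
}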
Reported-by: Igor Mammedov Reviewed-by: Igor Mammedov Signed-off-by: Philippe Mathieu-Daudé Signed-off-by: Alex Bennée Message-Id: <20210505125806.1263441-10-philmd@redhat.com> --- tests/qtest/bios-tables-test.c | 142 ++++++++++++++++----------------- 1 file changed, 71 insertions(+), 71 deletions(-) -- 2.20.1 diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c index 156d4174aa..ce498b3ff4 100644 --- a/tests/qtest/bios-tables-test.c +++ b/tests/qtest/bios-tables-test.c @@ -753,7 +753,7 @@ static uint8_t base_required_struct_types[] = { 0, 1, 3, 4, 16, 17, 19, 32, 127 }; -static void test_acpi_piix4_tcg(void) +static void test_acpi_piix4(void) { test_data data; @@ -768,7 +768,7 @@ static void test_acpi_piix4_tcg(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_bridge(void) +static void test_acpi_piix4_bridge(void) { test_data data; @@ -824,7 +824,7 @@ static void test_acpi_piix4_no_acpi_pci_hotplug(void) free_test_data(&data); } -static void test_acpi_q35_tcg(void) +static void test_acpi_q35(void) { test_data data; @@ -841,7 +841,7 @@ static void test_acpi_q35_tcg(void) free_test_data(&data); } -static void test_acpi_q35_tcg_bridge(void) +static void test_acpi_q35_bridge(void) { test_data data; @@ -855,7 +855,7 @@ static void test_acpi_q35_tcg_bridge(void) free_test_data(&data); } -static void test_acpi_q35_tcg_mmio64(void) +static void test_acpi_q35_mmio64(void) { test_data data = { .machine = MACHINE_Q35, @@ -872,7 +872,7 @@ static void test_acpi_q35_tcg_mmio64(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_cphp(void) +static void test_acpi_piix4_cphp(void) { test_data data; @@ -888,7 +888,7 @@ static void test_acpi_piix4_tcg_cphp(void) free_test_data(&data); } -static void test_acpi_q35_tcg_cphp(void) +static void test_acpi_q35_cphp(void) { test_data data; @@ -908,7 +908,7 @@ static uint8_t ipmi_required_struct_types[] = { 0, 1, 3, 4, 16, 17, 19, 32, 38, 127 }; -static void test_acpi_q35_tcg_ipmi(void) +static void test_acpi_q35_ipmi(void) { test_data data; @@ -923,7 +923,7 @@ static void test_acpi_q35_tcg_ipmi(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_ipmi(void) +static void test_acpi_piix4_ipmi(void) { test_data data; @@ -941,7 +941,7 @@ static void test_acpi_piix4_tcg_ipmi(void) free_test_data(&data); } -static void test_acpi_q35_tcg_memhp(void) +static void test_acpi_q35_memhp(void) { test_data data; @@ -957,7 +957,7 @@ static void test_acpi_q35_tcg_memhp(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_memhp(void) +static void test_acpi_piix4_memhp(void) { test_data data; @@ -973,7 +973,7 @@ static void test_acpi_piix4_tcg_memhp(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_nosmm(void) +static void test_acpi_piix4_nosmm(void) { test_data data; @@ -984,7 +984,7 @@ static void test_acpi_piix4_tcg_nosmm(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_smm_compat(void) +static void test_acpi_piix4_smm_compat(void) { test_data data; @@ -995,7 +995,7 @@ static void test_acpi_piix4_tcg_smm_compat(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_smm_compat_nosmm(void) +static void test_acpi_piix4_smm_compat_nosmm(void) { test_data data; @@ -1006,7 +1006,7 @@ static void test_acpi_piix4_tcg_smm_compat_nosmm(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_nohpet(void) +static void test_acpi_piix4_nohpet(void) { test_data data; @@ -1017,7 +1017,7 @@ static void test_acpi_piix4_tcg_nohpet(void) free_test_data(&data); } -static void 
test_acpi_q35_tcg_numamem(void) +static void test_acpi_q35_numamem(void) { test_data data; @@ -1029,7 +1029,7 @@ static void test_acpi_q35_tcg_numamem(void) free_test_data(&data); } -static void test_acpi_q35_tcg_nosmm(void) +static void test_acpi_q35_nosmm(void) { test_data data; @@ -1040,7 +1040,7 @@ static void test_acpi_q35_tcg_nosmm(void) free_test_data(&data); } -static void test_acpi_q35_tcg_smm_compat(void) +static void test_acpi_q35_smm_compat(void) { test_data data; @@ -1051,7 +1051,7 @@ static void test_acpi_q35_tcg_smm_compat(void) free_test_data(&data); } -static void test_acpi_q35_tcg_smm_compat_nosmm(void) +static void test_acpi_q35_smm_compat_nosmm(void) { test_data data; @@ -1062,7 +1062,7 @@ static void test_acpi_q35_tcg_smm_compat_nosmm(void) free_test_data(&data); } -static void test_acpi_q35_tcg_nohpet(void) +static void test_acpi_q35_nohpet(void) { test_data data; @@ -1073,7 +1073,7 @@ static void test_acpi_q35_tcg_nohpet(void) free_test_data(&data); } -static void test_acpi_piix4_tcg_numamem(void) +static void test_acpi_piix4_numamem(void) { test_data data; @@ -1087,11 +1087,11 @@ static void test_acpi_piix4_tcg_numamem(void) uint64_t tpm_tis_base_addr; -static void test_acpi_tcg_tpm(const char *machine, const char *tpm_if, +static void test_acpi_tpm(const char *machine, const char *tpm_if, uint64_t base) { #ifdef CONFIG_TPM - gchar *tmp_dir_name = g_strdup_printf("qemu-test_acpi_%s_tcg_%s.XXXXXX", + gchar *tmp_dir_name = g_strdup_printf("qemu-test_acpi_%s_%s.XXXXXX", machine, tpm_if); char *tmp_path = g_dir_make_tmp(tmp_dir_name, NULL); TestState test; @@ -1139,12 +1139,12 @@ static void test_acpi_tcg_tpm(const char *machine, const char *tpm_if, #endif } -static void test_acpi_q35_tcg_tpm_tis(void) +static void test_acpi_q35_tpm_tis(void) { - test_acpi_tcg_tpm("q35", "tis", 0xFED40000); + test_acpi_tpm("q35", "tis", 0xFED40000); } -static void test_acpi_tcg_dimm_pxm(const char *machine) +static void test_acpi_dimm_pxm(const char *machine) { test_data data; @@ -1174,14 +1174,14 @@ static void test_acpi_tcg_dimm_pxm(const char *machine) free_test_data(&data); } -static void test_acpi_q35_tcg_dimm_pxm(void) +static void test_acpi_q35_dimm_pxm(void) { - test_acpi_tcg_dimm_pxm(MACHINE_Q35); + test_acpi_dimm_pxm(MACHINE_Q35); } -static void test_acpi_piix4_tcg_dimm_pxm(void) +static void test_acpi_piix4_dimm_pxm(void) { - test_acpi_tcg_dimm_pxm(MACHINE_PC); + test_acpi_dimm_pxm(MACHINE_PC); } static void test_acpi_virt_tcg_memhp(void) @@ -1223,7 +1223,7 @@ static void test_acpi_microvm_prepare(test_data *data) data->blkdev = "virtio-blk-device"; } -static void test_acpi_microvm_tcg(void) +static void test_acpi_microvm(void) { test_data data; @@ -1233,7 +1233,7 @@ static void test_acpi_microvm_tcg(void) free_test_data(&data); } -static void test_acpi_microvm_usb_tcg(void) +static void test_acpi_microvm_usb(void) { test_data data; @@ -1244,7 +1244,7 @@ static void test_acpi_microvm_usb_tcg(void) free_test_data(&data); } -static void test_acpi_microvm_rtc_tcg(void) +static void test_acpi_microvm_rtc(void) { test_data data; @@ -1255,7 +1255,7 @@ static void test_acpi_microvm_rtc_tcg(void) free_test_data(&data); } -static void test_acpi_microvm_pcie_tcg(void) +static void test_acpi_microvm_pcie(void) { test_data data; @@ -1267,7 +1267,7 @@ static void test_acpi_microvm_pcie_tcg(void) free_test_data(&data); } -static void test_acpi_microvm_ioapic2_tcg(void) +static void test_acpi_microvm_ioapic2(void) { test_data data; @@ -1332,7 +1332,7 @@ static void 
test_acpi_virt_tcg_pxb(void) free_test_data(&data); } -static void test_acpi_tcg_acpi_hmat(const char *machine) +static void test_acpi_acpi_hmat(const char *machine) { test_data data; @@ -1364,14 +1364,14 @@ static void test_acpi_tcg_acpi_hmat(const char *machine) free_test_data(&data); } -static void test_acpi_q35_tcg_acpi_hmat(void) +static void test_acpi_q35_acpi_hmat(void) { - test_acpi_tcg_acpi_hmat(MACHINE_Q35); + test_acpi_acpi_hmat(MACHINE_Q35); } -static void test_acpi_piix4_tcg_acpi_hmat(void) +static void test_acpi_piix4_acpi_hmat(void) { - test_acpi_tcg_acpi_hmat(MACHINE_PC); + test_acpi_acpi_hmat(MACHINE_PC); } static void test_acpi_virt_tcg(void) @@ -1512,50 +1512,50 @@ int main(int argc, char *argv[]) return ret; } qtest_add_func("acpi/q35/oem-fields", test_acpi_oem_fields_q35); - qtest_add_func("acpi/q35/tpm-tis", test_acpi_q35_tcg_tpm_tis); - qtest_add_func("acpi/piix4", test_acpi_piix4_tcg); + qtest_add_func("acpi/q35/tpm-tis", test_acpi_q35_tpm_tis); + qtest_add_func("acpi/piix4", test_acpi_piix4); qtest_add_func("acpi/oem-fields", test_acpi_oem_fields_pc); - qtest_add_func("acpi/piix4/bridge", test_acpi_piix4_tcg_bridge); + qtest_add_func("acpi/piix4/bridge", test_acpi_piix4_bridge); qtest_add_func("acpi/piix4/pci-hotplug/no_root_hotplug", test_acpi_piix4_no_root_hotplug); qtest_add_func("acpi/piix4/pci-hotplug/no_bridge_hotplug", test_acpi_piix4_no_bridge_hotplug); qtest_add_func("acpi/piix4/pci-hotplug/off", test_acpi_piix4_no_acpi_pci_hotplug); - qtest_add_func("acpi/q35", test_acpi_q35_tcg); - qtest_add_func("acpi/q35/bridge", test_acpi_q35_tcg_bridge); - qtest_add_func("acpi/q35/mmio64", test_acpi_q35_tcg_mmio64); - qtest_add_func("acpi/piix4/ipmi", test_acpi_piix4_tcg_ipmi); - qtest_add_func("acpi/q35/ipmi", test_acpi_q35_tcg_ipmi); - qtest_add_func("acpi/piix4/cpuhp", test_acpi_piix4_tcg_cphp); - qtest_add_func("acpi/q35/cpuhp", test_acpi_q35_tcg_cphp); - qtest_add_func("acpi/piix4/memhp", test_acpi_piix4_tcg_memhp); - qtest_add_func("acpi/q35/memhp", test_acpi_q35_tcg_memhp); - qtest_add_func("acpi/piix4/numamem", test_acpi_piix4_tcg_numamem); - qtest_add_func("acpi/q35/numamem", test_acpi_q35_tcg_numamem); - qtest_add_func("acpi/piix4/nosmm", test_acpi_piix4_tcg_nosmm); + qtest_add_func("acpi/q35", test_acpi_q35); + qtest_add_func("acpi/q35/bridge", test_acpi_q35_bridge); + qtest_add_func("acpi/q35/mmio64", test_acpi_q35_mmio64); + qtest_add_func("acpi/piix4/ipmi", test_acpi_piix4_ipmi); + qtest_add_func("acpi/q35/ipmi", test_acpi_q35_ipmi); + qtest_add_func("acpi/piix4/cpuhp", test_acpi_piix4_cphp); + qtest_add_func("acpi/q35/cpuhp", test_acpi_q35_cphp); + qtest_add_func("acpi/piix4/memhp", test_acpi_piix4_memhp); + qtest_add_func("acpi/q35/memhp", test_acpi_q35_memhp); + qtest_add_func("acpi/piix4/numamem", test_acpi_piix4_numamem); + qtest_add_func("acpi/q35/numamem", test_acpi_q35_numamem); + qtest_add_func("acpi/piix4/nosmm", test_acpi_piix4_nosmm); qtest_add_func("acpi/piix4/smm-compat", - test_acpi_piix4_tcg_smm_compat); + test_acpi_piix4_smm_compat); qtest_add_func("acpi/piix4/smm-compat-nosmm", - test_acpi_piix4_tcg_smm_compat_nosmm); - qtest_add_func("acpi/piix4/nohpet", test_acpi_piix4_tcg_nohpet); - qtest_add_func("acpi/q35/nosmm", test_acpi_q35_tcg_nosmm); + test_acpi_piix4_smm_compat_nosmm); + qtest_add_func("acpi/piix4/nohpet", test_acpi_piix4_nohpet); + qtest_add_func("acpi/q35/nosmm", test_acpi_q35_nosmm); qtest_add_func("acpi/q35/smm-compat", - test_acpi_q35_tcg_smm_compat); + test_acpi_q35_smm_compat); 
qtest_add_func("acpi/q35/smm-compat-nosmm", - test_acpi_q35_tcg_smm_compat_nosmm); - qtest_add_func("acpi/q35/nohpet", test_acpi_q35_tcg_nohpet); - qtest_add_func("acpi/piix4/dimmpxm", test_acpi_piix4_tcg_dimm_pxm); - qtest_add_func("acpi/q35/dimmpxm", test_acpi_q35_tcg_dimm_pxm); - qtest_add_func("acpi/piix4/acpihmat", test_acpi_piix4_tcg_acpi_hmat); - qtest_add_func("acpi/q35/acpihmat", test_acpi_q35_tcg_acpi_hmat); - qtest_add_func("acpi/microvm", test_acpi_microvm_tcg); - qtest_add_func("acpi/microvm/usb", test_acpi_microvm_usb_tcg); - qtest_add_func("acpi/microvm/rtc", test_acpi_microvm_rtc_tcg); - qtest_add_func("acpi/microvm/ioapic2", test_acpi_microvm_ioapic2_tcg); + test_acpi_q35_smm_compat_nosmm); + qtest_add_func("acpi/q35/nohpet", test_acpi_q35_nohpet); + qtest_add_func("acpi/piix4/dimmpxm", test_acpi_piix4_dimm_pxm); + qtest_add_func("acpi/q35/dimmpxm", test_acpi_q35_dimm_pxm); + qtest_add_func("acpi/piix4/acpihmat", test_acpi_piix4_acpi_hmat); + qtest_add_func("acpi/q35/acpihmat", test_acpi_q35_acpi_hmat); + qtest_add_func("acpi/microvm", test_acpi_microvm); + qtest_add_func("acpi/microvm/usb", test_acpi_microvm_usb); + qtest_add_func("acpi/microvm/rtc", test_acpi_microvm_rtc); + qtest_add_func("acpi/microvm/ioapic2", test_acpi_microvm_ioapic2); qtest_add_func("acpi/microvm/oem-fields", test_acpi_oem_fields_microvm); if (strcmp(arch, "x86_64") == 0) { - qtest_add_func("acpi/microvm/pcie", test_acpi_microvm_pcie_tcg); + qtest_add_func("acpi/microvm/pcie", test_acpi_microvm_pcie); } } else if (strcmp(arch, "aarch64") == 0) { qtest_add_func("acpi/virt", test_acpi_virt_tcg); From patchwork Fri Jun 4 15:51:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454053 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp536553jae; Fri, 4 Jun 2021 08:55:15 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyGdO4sNPi8BL+OmoHiKwGVu7k3Kl8VjG0uBho+5iBefHtHCdC7OhlNqXsdyBwx8wxsmNY6 X-Received: by 2002:a02:3304:: with SMTP id c4mr4748302jae.68.1622822115383; Fri, 04 Jun 2021 08:55:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622822115; cv=none; d=google.com; s=arc-20160816; b=UV0O8XQYM7kGNGX0otuqH9xHejPIBRZ0F0PmVLdTmbigjTMMAbNAukHgT4xhuES0mV C7dTArmmH7i/oqgLGdDFzbGJN+WH3pVV71EFM8+YpFwbpsE6ha2pSi/Db4a1+rw3EZFW O9Cwtf7iAzZfiy0E4uHp8hyBl8aNZcUxSfyCMyljX0g9inhoZ2M+rOKaY/jcNa8Bfdfu 6LsCKWCm5lKguoodeDJDE7wLF4by6oAluvbWP8DA7rZS4mtIPrGELucFtt87pk31v0iS Q2PMu62/8mgZQXq4WQSyT4PPteARh0XYXNiab/2s4n59yh/aDI9WYj1Eag8j5mCjJiLJ QA3g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=YzeACc1wsH/oKOcJhjQoY6+4BE5bGAGTHjMBp9qY+rQ=; b=DatzIMMNpGQ7cobIdbFMuEVFaorO6y3/lOxpHtPgC35giIev3dOcCfepaGqITOI/z3 sy8wcjxJLL+YHm4OodyO6NdqEAkg+n6+BPJqcoDkRSExL1lT2f9U7PAaT2nztM0EcrNh NhWhf1TEYzPU8gOx5Ms5rTYTTEz8wzkbvgiik/Xrsm+f14+bLkD2LCYRepS9ykQxDaTr tbc/GSdxGVFl0rSGx4vHE8kizHI40SLgNeAVzdftn9TMTbHe52eCkjI9/nG5LcYJog23 4mipY+k669/q4RDfJN+/ZQfqSgxbd1u46vXVhTEWNkljC+V/ss4Gs5bvGPARwsucMzj/ vc/g== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b="l/E2T43M"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) 
smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id v17si6567425iln.123.2021.06.04.08.55.15 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 08:55:15 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b="l/E2T43M"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:47826 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCAE-0005RJ-Ro for patch@linaro.org; Fri, 04 Jun 2021 11:55:14 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:44680) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpC8f-0003D9-UI for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:37 -0400 Received: from mail-wm1-x333.google.com ([2a00:1450:4864:20::333]:40958) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpC8b-00008M-FB for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:37 -0400 Received: by mail-wm1-x333.google.com with SMTP id b145-20020a1c80970000b029019c8c824054so8206279wmd.5 for ; Fri, 04 Jun 2021 08:53:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YzeACc1wsH/oKOcJhjQoY6+4BE5bGAGTHjMBp9qY+rQ=; b=l/E2T43Mtl+Bfg/wVWB0xLuC2QJ8kJrYS9/fn2dhcIsMPJhwwOxgQEYqr86pE3WMYo igXoesKIHiU8siquKI39ttrW6RdREx30d951yWEw4TubQg/5KTLwQDymcv4uGNn5fUk8 Ejx64XeujPRpqIAh8YyYnEuNRKB19BRKq7UbJ1saxAc+jXqoVPZajWC3uYFC996w2k96 oZXcbVHkX8VY65rCCIYkuog73QLZxk1NcEah6o1bGv6rHuEoC/QCJsYsqpKrJdZWUBfk mnWc1RMZ2AfOHeaBMcmyNGpImHiDO7dRrrJMNGMyjp1CYm+FZSSS29WwVojMSV6HndhX fOhA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YzeACc1wsH/oKOcJhjQoY6+4BE5bGAGTHjMBp9qY+rQ=; b=sKbpcODBkqiSem4XYGA4zD43xlDWoEFqnDZfYgh6lo1e5EYT9F4CRVQUZ/R2RlcHVN AoVNmTAFr5zYWI/7FlHvOJXlThN4y6OmqgA/1RyA5DUKVeMdM7drGU+CaW9hiph7E+b7 Z3//II9oxoC+Q6e+5KUsuCrK2ft+g5oZMAQovWp1lFaEzz7frXNELe/WitZXzBK3r/fM 2YJg2IE4cgbvmIM0PHPBJdDJoXE/b6gPseUA49vwFPc5bxCZ1LgEEJJWduZbmXF2Xtwg egEEBlHU7MkaksIcXIuzfau2QobGxGzeGeeyNpRent0p6CDoH4MxI70gG1GExLSDGjMg 8z6A== X-Gm-Message-State: AOAM533qQPNmuczOAzsPG1XC5KU9CYEgt4kCJZ+n2aKtWFgS5FcBS52B MB8z38eVzZPWl5snwFNRrJ073w== X-Received: by 2002:a05:600c:2188:: with SMTP id e8mr4462876wme.129.1622822011826; Fri, 04 Jun 2021 08:53:31 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id 32sm7976328wrs.5.2021.06.04.08.53.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:24 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 492161FF99; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: 
qemu-devel@nongnu.org Subject: [PATCH v16 10/99] qtest/bios-tables-test: Rename TCG specific tests Date: Fri, 4 Jun 2021 16:51:43 +0100 Message-Id: <20210604155312.15902-11-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::333; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x333.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , Igor Mammedov , qemu-arm@nongnu.org, =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , "Michael S. Tsirkin" Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Philippe Mathieu-Daudé Some tests require TCG, but don't have '_tcg' in their name, while others do. Unify the test names by adding 'tcg' to the TCG specific tests. Reported-by: Igor Mammedov Reviewed-by: Igor Mammedov Signed-off-by: Philippe Mathieu-Daudé Signed-off-by: Alex Bennée Message-Id: <20210505125806.1263441-11-philmd@redhat.com> --- tests/qtest/bios-tables-test.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) -- 2.20.1 diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c index ce498b3ff4..ad877baeb1 100644 --- a/tests/qtest/bios-tables-test.c +++ b/tests/qtest/bios-tables-test.c @@ -1255,7 +1255,7 @@ static void test_acpi_microvm_rtc(void) free_test_data(&data); } -static void test_acpi_microvm_pcie(void) +static void test_acpi_microvm_pcie_tcg(void) { test_data data; @@ -1475,7 +1475,7 @@ static void test_acpi_oem_fields_microvm(void) g_free(args); } -static void test_acpi_oem_fields_virt(void) +static void test_acpi_oem_fields_virt_tcg(void) { test_data data = { .machine = "virt", @@ -1555,14 +1555,14 @@ int main(int argc, char *argv[]) qtest_add_func("acpi/microvm/ioapic2", test_acpi_microvm_ioapic2); qtest_add_func("acpi/microvm/oem-fields", test_acpi_oem_fields_microvm); if (strcmp(arch, "x86_64") == 0) { - qtest_add_func("acpi/microvm/pcie", test_acpi_microvm_pcie); + qtest_add_func("acpi/microvm/pcie", test_acpi_microvm_pcie_tcg); } } else if (strcmp(arch, "aarch64") == 0) { qtest_add_func("acpi/virt", test_acpi_virt_tcg); qtest_add_func("acpi/virt/numamem", test_acpi_virt_tcg_numamem); qtest_add_func("acpi/virt/memhp", test_acpi_virt_tcg_memhp); qtest_add_func("acpi/virt/pxb", test_acpi_virt_tcg_pxb); - qtest_add_func("acpi/virt/oem-fields", test_acpi_oem_fields_virt); + qtest_add_func("acpi/virt/oem-fields", test_acpi_oem_fields_virt_tcg); } ret = g_test_run(); boot_sector_cleanup(disk); From patchwork Fri Jun 4 15:51:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454068 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp551705jae; Fri, 4 Jun 2021 09:12:40 -0700 (PDT) X-Google-Smtp-Source: ABdhPJy7M5cvrDgFgtoH2GWEqhHfW9+rU8McCvp9QWiYuo0UKnz+yRKbRipoz84uFK8NO491TG6R X-Received: by 2002:aa7:d558:: with SMTP id 
u24mr5437108edr.331.1622823160828; Fri, 04 Jun 2021 09:12:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622823160; cv=none; d=google.com; s=arc-20160816; b=FUrkzeGT6Gb5sjeq/TgmeNM0RAnHMQFwgTeafiaNEV4VeVv4sRasWO4UzErD2aVbb+ kg0UuWvfTlvXZY13k5XGi8Z02H86Q+pO4jx1eDMO1nFhl+EHQIRANJ/+z2UJllD/34m6 kjQWnIvYYxqBki72+JlcRWIonCiU6MTR6Cp9rCfSaEMaXGryV8tKPVc1MB4F7OnsqwzF Eicb9jf1MCkrtrCd1iEWuwJMngSFawzSotbmTgeIG2XIZrIUxCJOCunlYgJ4BKkrzPxC 66b5T6aJVAza8b34kEIqo4thmdckmfLP0+WHKAnqHJ4Iqa6k+aNp2crQLenNiRngtqR2 wJgA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=S4sdlJss3fc5mJwXHCh2ySSBFoRnKokoMtehS+ckwhM=; b=DyeGexivodVcboZ0FbZQ8/Mvu0O3kPKA+AWOJSCSOMVwRXcnFVXtlPmW9MJMv/7MSo jW9Z4T9gjHerJxDRIamXSoZnqE45pa1ahK9Hvwy5H0yZDJYQ02pcJqdoHDCNvrNxUVoq MKCODUnOM8AomUw2ihT70Hl3ejYIxrTuYydpJiHiKgDbUc9g6CPIilxQD+bJCmlhYqSc DSZL9WtVdmWyuTsgS6gHgEU6ar0W36+CqUjepwfYLa7IDoZhVTIbCfEmcfILAEJ/3qhG i+rpxx6BDmvflHp1E9vZX4bFwdiWz9essDQvbStXS/UNiFOIIHPb9vryVsVJaY6VBqmF T+vg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=QTkPKJc1; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id f9si4913907edv.112.2021.06.04.09.12.40 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:12:40 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=QTkPKJc1; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:42734 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCR5-0007pG-Uu for patch@linaro.org; Fri, 04 Jun 2021 12:12:39 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:44542) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpC8b-0002uS-4G for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:33 -0400 Received: from mail-wr1-x42d.google.com ([2a00:1450:4864:20::42d]:33531) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpC8Y-00005q-NJ for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:32 -0400 Received: by mail-wr1-x42d.google.com with SMTP id a20so9852847wrc.0 for ; Fri, 04 Jun 2021 08:53:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=S4sdlJss3fc5mJwXHCh2ySSBFoRnKokoMtehS+ckwhM=; b=QTkPKJc14r1NA8Dkr7oI7muo1JR5+lKUzKCMV3FSzcnGm6bOxfrLlgcR+1EVYXfxdb IsM61dWNcNeerTgF6tIgk3I/1vl8AEJHkoTqaVIOVcvaR1D9VqLnb2A9Ex6Ttt/I+ITC 
Gp6NImQJVwKPRCguWTnsEMpkPWkMmolpHruLc2k+A7MSxLLoK547HTBjszHi4yFEMCwR NdXvV1Czo3jvLjkCE50t6ip6kMsU42tNifRWkxa5OwsftoZ/YEToc0qTu5DPife7iYrE r1aPyWJSmAe1gVn1ZDAaoeG9PBqBPqcT/gbXHMSzCUoy5KVaKi5FN8UzUImlmlRtZVQF pfiA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=S4sdlJss3fc5mJwXHCh2ySSBFoRnKokoMtehS+ckwhM=; b=tgFJj4VNhUWjRhn0jXs+ucfbWL+p5OvNVQp2Xcn2vk171YZ05/tI6nRBgOQlG5850E MrJF0CMe6roOQZrktI5AWtDYQpUPwDCYPDDdUj/gAiTI1xDkNAaX/Hvc+8f0MhDf7H3p mGQCm8U+2Km/TA2YatIHcZJgrdd35MbSC8HEGGVq0vSqTjMiMWkOjeGbvfp2pFhgq47p Ca3QbZdTfW5yDJK7bLUdwi/sE6o5mGa1qU5eQt5xQEtXRVYhORAzaXYGlghLOwie8jRG cOVr1Z+in0mXSSfsIJf1Y52EGwuay/xHIOt2wAfDDh9xl2uwadruv/OZKoFB5mKV7dKL jZjw== X-Gm-Message-State: AOAM532exfJZ5hJXa0gGUf086J0X2s+6+aJJ5RQBCr+zkUhNZXpZ5qoA A5/hIYIYEts895unzRsaNpXI7w== X-Received: by 2002:adf:a108:: with SMTP id o8mr4732699wro.290.1622822009072; Fri, 04 Jun 2021 08:53:29 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id s62sm9212366wms.13.2021.06.04.08.53.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:24 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 5E8C71FF9A; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 11/99] qtest/bios-tables-test: Make test build-independent from accelerator Date: Fri, 4 Jun 2021 16:51:44 +0100 Message-Id: <20210604155312.15902-12-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42d; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Michael S. Tsirkin" , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?b?w6k=?= , qemu-arm@nongnu.org, Igor Mammedov , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Philippe Mathieu-Daudé Now that we can probe if the TCG accelerator is available at runtime with a QMP command, do it once at the beginning and only register the tests we can run. We can then replace the #ifdef'ry by an assertion. 
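In other words, the accelerator check moves from compile time (#ifdef CONFIG_TCG) to a single runtime probe. A minimal sketch of the resulting pattern follows; it assumes the libqtest helpers used elsewhere in this series (qtest_has_accel(), qtest_get_arch()) and GLib's g_test framework, and it trims the real test bodies down to the control flow that matters here, so it is not a drop-in replacement for bios-tables-test.c:

/* Minimal sketch of the probe-once pattern, under the assumptions above. */
#include "qemu/osdep.h"
#include "libqos/libqtest.h"

static bool tcg_accel_available;

static void test_acpi_virt_tcg(void)
{
    /*
     * TCG-only bodies can assert instead of using #ifdef CONFIG_TCG:
     * the registration in main() guarantees we only get here when the
     * binary actually contains the TCG accelerator.
     */
    g_assert(tcg_accel_available);
}

int main(int argc, char *argv[])
{
    g_test_init(&argc, &argv, NULL);

    /* Probe the QEMU binary once, at startup. */
    tcg_accel_available = qtest_has_accel("tcg");

    /* Register only the tests this binary can actually run. */
    if (g_str_equal(qtest_get_arch(), "aarch64") && tcg_accel_available) {
        qtest_add_func("acpi/virt", test_acpi_virt_tcg);
    }

    return g_test_run();
}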
Reviewed-by: Eric Blake Reviewed-by: Igor Mammedov Signed-off-by: Philippe Mathieu-Daudé Signed-off-by: Alex Bennée Message-Id: <20210505125806.1263441-12-philmd@redhat.com> --- tests/qtest/bios-tables-test.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson Reviewed-by: Thomas Huth diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c index ad877baeb1..762d154b34 100644 --- a/tests/qtest/bios-tables-test.c +++ b/tests/qtest/bios-tables-test.c @@ -97,6 +97,7 @@ typedef struct { QTestState *qts; } test_data; +static bool tcg_accel_available; static char disk[] = "tests/acpi-test-disk-XXXXXX"; static const char *data_dir = "tests/data/acpi"; #ifdef CONFIG_IASL @@ -718,12 +719,7 @@ static void test_acpi_one(const char *params, test_data *data) char *args; bool use_uefi = data->uefi_fl1 && data->uefi_fl2; -#ifndef CONFIG_TCG - if (data->tcg_only) { - g_test_skip("TCG disabled, skipping ACPI tcg_only test"); - return; - } -#endif /* CONFIG_TCG */ + assert(!data->tcg_only || tcg_accel_available); args = test_acpi_create_args(data, params, use_uefi); data->qts = qtest_init(args); @@ -1506,6 +1502,8 @@ int main(int argc, char *argv[]) g_test_init(&argc, &argv, NULL); + tcg_accel_available = qtest_has_accel("tcg"); + if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) { ret = boot_sector_init(disk); if (ret) { @@ -1554,10 +1552,10 @@ int main(int argc, char *argv[]) qtest_add_func("acpi/microvm/rtc", test_acpi_microvm_rtc); qtest_add_func("acpi/microvm/ioapic2", test_acpi_microvm_ioapic2); qtest_add_func("acpi/microvm/oem-fields", test_acpi_oem_fields_microvm); - if (strcmp(arch, "x86_64") == 0) { + if (strcmp(arch, "x86_64") == 0 && tcg_accel_available) { qtest_add_func("acpi/microvm/pcie", test_acpi_microvm_pcie_tcg); } - } else if (strcmp(arch, "aarch64") == 0) { + } else if (strcmp(arch, "aarch64") == 0 && tcg_accel_available) { qtest_add_func("acpi/virt", test_acpi_virt_tcg); qtest_add_func("acpi/virt/numamem", test_acpi_virt_tcg_numamem); qtest_add_func("acpi/virt/memhp", test_acpi_virt_tcg_memhp); From patchwork Fri Jun 4 15:51:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454067 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp548283jae; Fri, 4 Jun 2021 09:08:46 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzli2pzdg5llqjf+4g4ff/XO5tPIEm1G0ph3RNO/NbnYy955yaROACE673/KQrLxu90s2hi X-Received: by 2002:aa7:c40f:: with SMTP id j15mr4661651edq.169.1622822926598; Fri, 04 Jun 2021 09:08:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622822926; cv=none; d=google.com; s=arc-20160816; b=qswqc/gV7n6TZof6WZGHyQLq6qUTcuz9mls3eVyzp2+7DTI/XaFqIk2su7w+M+q6Pz ZIhj1wkwe+6dw/Sj0fGyeo39EDNMDbB25qw246V6ovvNK1v0OuXfl1ovhPtH6uxi4CE8 EPz8d5ZtZlc7t7K3egWmKB564FLdXaQ7cCzqJFFlQyVscRLGjryy0VCCofbyjpc1c0K+ YoEb/E35qQJDN4S3YFNN8us5Bg28wV/bu/HdjhPNinUxyW2QqON0FiT4JPgTq+mHYmGo 7at07DkrYU45W7vWX8YsS8RBjnRbMKd0RhnE5fTK2fRblVFvdoNEkIgwSfyrvQkQa+pX Xo9A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=TUAH7JH6sleif6Yi/np3XOEDj2fTn8clJjdpTJ+Ga34=; b=K09SSUg0iCJQQB4k0alYQag52hIk6Q6xOpBhdB3z9PiyyBvMtNQg475iaC5jqg5vDD 
hqWNvwxzplcXBldBIob4MCqFgprErNohC6ZZ2IFlqNIHde71gtmfUiyP3llvu+RmaE30 otEmv6WfEwAooLBfsHCtd8giS0QZpC5v6G8opvkXV3veu+i1UHCKQA9otSvk1IwCWPZS 1HSySXAA5UQvTe/TmKfYYriZWI9KTqmYBVIviD6+RCS/F005a27Y3ORBs3R64IwgCwnt oc7scMIkzIkUpKGJlJuFTjlGbOGolQtLML6br8W/xkb3EiJYVtw1QpKaHmHB88kq5mJa /FRg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=ujGNIWqb; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id s7si5247550edx.5.2021.06.04.09.08.46 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:08:46 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=ujGNIWqb; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:33992 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCNJ-0001OH-LK for patch@linaro.org; Fri, 04 Jun 2021 12:08:45 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:44464) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpC8Y-0002mU-FY for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:30 -0400 Received: from mail-wm1-x334.google.com ([2a00:1450:4864:20::334]:38652) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpC8V-0008VI-S0 for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:30 -0400 Received: by mail-wm1-x334.google.com with SMTP id t4-20020a1c77040000b029019d22d84ebdso8196130wmi.3 for ; Fri, 04 Jun 2021 08:53:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TUAH7JH6sleif6Yi/np3XOEDj2fTn8clJjdpTJ+Ga34=; b=ujGNIWqbA5UJVSACD3kh2IHSs8XLv2gFrHwVH4KjpF01wDk4mOEyK1yPukApQGZAFA ftZwampw5bHyS6XePvbsZaVIK7L5TiEMsXcBrZdhRNmMsGaRjROThLBzOqICw7P1YknS C9cGsmtqOn49/QZr6i6ICP/llP5VSDRRlz4YOKcDTzVWLlmZSEOg1VHHC1MquiNmDnOd 0lX095Kz+6WOpzrRWJO3GKOr5PCaA4uTxId8jJUQKAbHijHF3qTO0ejXGPuk/7ujKUNu KJ0o7e9VaGqKu1mb0MV0IsIWOxBNTGLETuRWKm+yDwURhyKlEUU54onyVdKXFDmZnKkY VPsg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TUAH7JH6sleif6Yi/np3XOEDj2fTn8clJjdpTJ+Ga34=; b=YqAwUfCGB/k/1N4Avr45texPxYzcLspZm4DjVqfvbpqiXmsM3WEkBW+5g6OuJm4Txd If1jIKD2aNRyUt/piU0X6f66RxCsKfjQhPRf9LL9Z3Sl+aY49nwdfOfXeLW3aAi0LfMc zBRmM5g8KzyGKr2RTt2PzmWHaKyf8dJxkZn/Fyo5WDhqJxjAkAZ9jgZNnnhFDdr9Y5AU w9UfP93k92Y0T97rNCJzya/KNEoY/LK8PDfkVlQ0GXeTBbJNIGUvOnPc6fzOuMNadCXQ hzJGr+rLtrGUgQMRM1606wcdrgsZKNDK3l0wInSRoMxSCxDAznYAgjXwrK4i7kZtBBQH YckQ== X-Gm-Message-State: AOAM531aY+fxUPTsaPhrNgOTe6Dgk16hVvK+oNz6FHt+2DmKYBAvQlrg 2s9wODD0eNumAJMU17g2/G/MP99Eod4DTA== 
X-Received: by 2002:a7b:c157:: with SMTP id z23mr4471975wmi.148.1622822006340; Fri, 04 Jun 2021 08:53:26 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id b22sm2782729wmj.22.2021.06.04.08.53.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:24 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 7374F1FF9B; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 12/99] qtest: Do not restrict bios-tables-test to Aarch64 hosts anymore Date: Fri, 4 Jun 2021 16:51:45 +0100 Message-Id: <20210604155312.15902-13-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::334; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x334.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Laurent Vivier , Thomas Huth , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , qemu-arm@nongnu.org, Paolo Bonzini , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Philippe Mathieu-Daudé Since commit 82bf7ae84ce ("target/arm: Remove KVM support for 32-bit Arm hosts") we can remove the comment / check added in commit ab6b6a77774 and directly run the bios-tables-test. Reviewed-by: Eric Blake Reviewed-by: Alex Bennée Tested-by: Alex Bennée Signed-off-by: Philippe Mathieu-Daudé Signed-off-by: Alex Bennée Message-Id: <20210505125806.1263441-13-philmd@redhat.com> --- tests/qtest/meson.build | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson Acked-by: Thomas Huth diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build index c3a223a83d..2c7415d616 100644 --- a/tests/qtest/meson.build +++ b/tests/qtest/meson.build @@ -176,14 +176,13 @@ qtests_arm = \ 'boot-serial-test', 'hexloader-test'] -# TODO: once aarch64 TCG is fixed on ARM 32 bit host, make bios-tables-test unconditional qtests_aarch64 = \ - (cpu != 'arm' ? ['bios-tables-test'] : []) + \ (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \ (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? 
['tpm-tis-device-swtpm-test'] : []) + \ ['arm-cpu-features', 'numa-test', 'boot-serial-test', + 'bios-tables-test', 'xlnx-can-test', 'migration-test'] From patchwork Fri Jun 4 15:51:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454062 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp543041jae; Fri, 4 Jun 2021 09:03:21 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwLcbEJefEu3Yo3lkLyi1bbEXA8wgjS0pnqcrbXLOCyOobSr0LykCjmLb1bygYv8K+5KBxL X-Received: by 2002:a05:6402:754:: with SMTP id p20mr5232597edy.311.1622822601499; Fri, 04 Jun 2021 09:03:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622822601; cv=none; d=google.com; s=arc-20160816; b=BKzPnj3RlW8cjI1YSu3DSrmen9kC+v5jxtlcbx5o4jwgUdJX4DDQnV96rE0jtBBxEA xZ+aglSHlHa9JoyAt/9zKt9LlADbh6RF2XH5Xg1N56gCgfTQQ5yq9jzH8WS6PBuMXkvX eOB0ytL+wedwxGb5sEiJko44UmoMX9XpiBSmGxi1xDBUIvEL26+lKex03x6Rlw6q6r2Q aNUn3sdhkkrM0s9wdJWpglDua6vdZ6EGVvH9xHTMpX98qDHTM1z8o8aWl9sx+pzD+vkz RPVirI5P7Zzygj7AYvKOvLmgXsIAIRzvocHuchsn6asKvxAkbKkJaRXtjCdwgQq6pcOk HVdg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=bGZvEWP0B7XQQEsHoRPrpycuhOjRaiqhP5MCalZexp8=; b=PrIS4KrZW7Ak8TgV7vxAbOw1tWw1uc3yPRpM5PJ93a22XQaZ97KhpXxQYyCQGZjG6b uxXFw/G9hFyjI9gbyWu12FPpl4zvNNmK90duWTqHRkPJQioriU//vw/zSlPm/qfKrOxk AeeKurpqoq2a81tcY2gMDTXIcd3UMtR2GftdbRzbgehxvC8vEzC8eNpfL/cdenYFYOo0 rFiPuZoOkNNnJ59MpQSMn59Tcv7/TdE4gJQbVmHOaf6VcqO7LppMRLfl7G/94hh0P6Ax PzIbm6pCujo7rsY2M3sTXnbpYNYlTNsc9PePY1/n0BRRXuWipxHjssR0tpjruNT1nmFI fcew== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=WXrKnXgh; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id k20si5063293ejd.293.2021.06.04.09.03.21 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:03:21 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=WXrKnXgh; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:46836 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCI4-0007C9-AG for patch@linaro.org; Fri, 04 Jun 2021 12:03:20 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:44610) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpC8d-000331-9c for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:35 -0400 Received: from mail-wr1-x434.google.com ([2a00:1450:4864:20::434]:36842) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpC8Z-00006V-LB for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:34 -0400 Received: by mail-wr1-x434.google.com with SMTP id n4so9814581wrw.3 for ; Fri, 04 Jun 2021 08:53:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=bGZvEWP0B7XQQEsHoRPrpycuhOjRaiqhP5MCalZexp8=; b=WXrKnXghbHEvw8Vgiq01+e7x8S36P2nbON4QAdKLwgJKvKlXFLZL2t966Npejniae1 kmgtm8uPNGRTwLthfl+0SYlhYwqJ+PI8bI2QsGTUQodygLZHtFTJwsm5wMAXUEx4zKQV k39dJMQ7hLKZ+DAbpot18+mGaLE2QViRkqFgWIUXDqFNMw7861ZL4OtZNqGR6khhc0EA iR5wqCSgX/Hcefcczp7oVSQBjHODyHzSf/Cnfz+oIODZB9yB0GbGKd+H+A5Us597HxLm nlBc6yAyrRjGezhaxh6rRXmL/aCBRz1Slqtq35LlPbThTgqAv/oyZkM080QC9TnX7lnW 3/Uw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=bGZvEWP0B7XQQEsHoRPrpycuhOjRaiqhP5MCalZexp8=; b=QGbQPD698dcUNSlcXEGoPoyEeKrWIy13ZxJQlui0K67WFIG0tLIr0rJqYvxaWLw7e2 St8WZuvUhhWKwBWEAhVYTp28AV5tkehMniCC30zQ7F6D/wMxgH9SnVxxhEvhXCvhqy7y 5ub7rJToAwagCFtP8Jpe2BLUxEZ4xKm897we2DlSfziK9G0PVMOg92lMyyfJYcCaKdqr CJIOonva2zzleXGy7Z1rlPbSifni17s6WHEnZ0dhKHBEehUtteWashhhnNwTM7erK0Ra FCoU1Diu9YOilf3ko5s/qyOJmCkIuumVPuCuiWDzumpqATHSMwju3BjHnn28A4MrRCgg 3Idw== X-Gm-Message-State: AOAM533fTeLY4H+SQkD8lnbaAyNU0mKBSuNEYEgJptJKs7/CQJAZkKB/ XZyXgqccOMKa79iwbwlTruihiw== X-Received: by 2002:a5d:54c8:: with SMTP id x8mr4706349wrv.109.1622822010020; Fri, 04 Jun 2021 08:53:30 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id g11sm6873579wri.59.2021.06.04.08.53.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:24 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id A88F91FF9C; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 13/99] meson: add target_user_arch Date: Fri, 4 Jun 2021 16:51:46 +0100 Message-Id: <20210604155312.15902-14-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: 
<20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::434; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x434.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Cornelia Huck , David Hildenbrand , Bin Meng , Mark Cave-Ayland , Max Filippov , Taylor Simpson , Alistair Francis , "Edgar E. Iglesias" , Marek Vasut , Yoshinori Sato , Claudio Fontana , "open list:PowerPC TCG CPUs" , Artyom Tarasenko , Thomas Huth , Richard Henderson , Greg Kurz , "open list:S390 general arch..." , qemu-arm@nongnu.org, Stafford Horne , =?utf-8?q?Alex_Benn=C3=A9e?= , David Gibson , "open list:RISC-V TCG CPUs" , Bastian Koppelmann , Chris Wulff , Laurent Vivier , Palmer Dabbelt Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana the lack of target_user_arch makes it hard to fully leverage the build system in order to separate user code from sysemu code. Provide it, so that we can avoid the proliferation of #ifdef in target code. Signed-off-by: Claudio Fontana Reviewed-by: Alex Bennée [claudio: added changes for new target hexagon] Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- v15 - remove duplicate ss.source_set for mips --- target/alpha/meson.build | 3 +++ target/arm/meson.build | 2 ++ target/cris/meson.build | 3 +++ target/hexagon/meson.build | 3 +++ target/hppa/meson.build | 3 +++ target/m68k/meson.build | 3 +++ target/microblaze/meson.build | 3 +++ target/nios2/meson.build | 3 +++ target/openrisc/meson.build | 3 +++ target/ppc/meson.build | 3 +++ target/riscv/meson.build | 3 +++ target/s390x/meson.build | 3 +++ target/sh4/meson.build | 3 +++ target/sparc/meson.build | 3 +++ target/tricore/meson.build | 3 +++ target/xtensa/meson.build | 3 +++ 16 files changed, 47 insertions(+) -- 2.20.1 diff --git a/target/alpha/meson.build b/target/alpha/meson.build index 1aec55abb4..1b0555d3ee 100644 --- a/target/alpha/meson.build +++ b/target/alpha/meson.build @@ -14,5 +14,8 @@ alpha_ss.add(files( alpha_softmmu_ss = ss.source_set() alpha_softmmu_ss.add(files('machine.c')) +alpha_user_ss = ss.source_set() + target_arch += {'alpha': alpha_ss} target_softmmu_arch += {'alpha': alpha_softmmu_ss} +target_user_arch += {'alpha': alpha_user_ss} diff --git a/target/arm/meson.build b/target/arm/meson.build index 5bfaf43b50..6106d24665 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -56,6 +56,8 @@ arm_softmmu_ss.add(files( 'monitor.c', 'psci.c', )) +arm_user_ss = ss.source_set() target_arch += {'arm': arm_ss} target_softmmu_arch += {'arm': arm_softmmu_ss} +target_user_arch += {'arm': arm_user_ss} diff --git a/target/cris/meson.build b/target/cris/meson.build index 67c3793c85..7fd81e0348 100644 --- a/target/cris/meson.build +++ b/target/cris/meson.build @@ -10,5 +10,8 @@ cris_ss.add(files( cris_softmmu_ss = ss.source_set() cris_softmmu_ss.add(files('mmu.c', 'machine.c')) +cris_user_ss = ss.source_set() + target_arch += {'cris': cris_ss} target_softmmu_arch += {'cris': cris_softmmu_ss} 
+target_user_arch += {'cris': cris_user_ss} diff --git a/target/hexagon/meson.build b/target/hexagon/meson.build index 6fd9360b74..fe232810ab 100644 --- a/target/hexagon/meson.build +++ b/target/hexagon/meson.build @@ -175,4 +175,7 @@ hexagon_ss.add(files( 'fma_emu.c', )) +hexagon_user_ss = ss.source_set() + target_arch += {'hexagon': hexagon_ss} +target_user_arch += {'hexagon': hexagon_user_ss} diff --git a/target/hppa/meson.build b/target/hppa/meson.build index 8a7ff82efc..85ad314671 100644 --- a/target/hppa/meson.build +++ b/target/hppa/meson.build @@ -15,5 +15,8 @@ hppa_ss.add(files( hppa_softmmu_ss = ss.source_set() hppa_softmmu_ss.add(files('machine.c')) +hppa_user_ss = ss.source_set() + target_arch += {'hppa': hppa_ss} target_softmmu_arch += {'hppa': hppa_softmmu_ss} +target_user_arch += {'hppa': hppa_user_ss} diff --git a/target/m68k/meson.build b/target/m68k/meson.build index 05cd9fbd1e..b507682684 100644 --- a/target/m68k/meson.build +++ b/target/m68k/meson.build @@ -13,5 +13,8 @@ m68k_ss.add(files( m68k_softmmu_ss = ss.source_set() m68k_softmmu_ss.add(files('monitor.c')) +m68k_user_ss = ss.source_set() + target_arch += {'m68k': m68k_ss} target_softmmu_arch += {'m68k': m68k_softmmu_ss} +target_user_arch += {'m68k': m68k_user_ss} diff --git a/target/microblaze/meson.build b/target/microblaze/meson.build index 05ee0ec163..52d8fcb0a3 100644 --- a/target/microblaze/meson.build +++ b/target/microblaze/meson.build @@ -16,5 +16,8 @@ microblaze_softmmu_ss.add(files( 'machine.c', )) +microblaze_user_ss = ss.source_set() + target_arch += {'microblaze': microblaze_ss} target_softmmu_arch += {'microblaze': microblaze_softmmu_ss} +target_user_arch += {'microblaze': microblaze_user_ss} diff --git a/target/nios2/meson.build b/target/nios2/meson.build index e643917db1..00367056fa 100644 --- a/target/nios2/meson.build +++ b/target/nios2/meson.build @@ -11,5 +11,8 @@ nios2_ss.add(files( nios2_softmmu_ss = ss.source_set() nios2_softmmu_ss.add(files('monitor.c')) +nios2_user_ss = ss.source_set() + target_arch += {'nios2': nios2_ss} target_softmmu_arch += {'nios2': nios2_softmmu_ss} +target_user_arch += {'nios2': nios2_user_ss} diff --git a/target/openrisc/meson.build b/target/openrisc/meson.build index 9774a58306..794a9e8161 100644 --- a/target/openrisc/meson.build +++ b/target/openrisc/meson.build @@ -19,5 +19,8 @@ openrisc_ss.add(files( openrisc_softmmu_ss = ss.source_set() openrisc_softmmu_ss.add(files('machine.c')) +openrisc_user_ss = ss.source_set() + target_arch += {'openrisc': openrisc_ss} target_softmmu_arch += {'openrisc': openrisc_softmmu_ss} +target_user_arch += {'openrisc': openrisc_user_ss} diff --git a/target/ppc/meson.build b/target/ppc/meson.build index a4f18ff414..0afaea25dd 100644 --- a/target/ppc/meson.build +++ b/target/ppc/meson.build @@ -51,5 +51,8 @@ ppc_softmmu_ss.add(when: 'TARGET_PPC64', if_true: files( 'mmu-radix64.c', )) +ppc_user_ss = ss.source_set() + target_arch += {'ppc': ppc_ss} target_softmmu_arch += {'ppc': ppc_softmmu_ss} +target_user_arch += {'ppc': ppc_user_ss} diff --git a/target/riscv/meson.build b/target/riscv/meson.build index af6c3416b7..673b35b175 100644 --- a/target/riscv/meson.build +++ b/target/riscv/meson.build @@ -27,5 +27,8 @@ riscv_softmmu_ss.add(files( 'machine.c' )) +riscv_user_ss = ss.source_set() + target_arch += {'riscv': riscv_ss} target_softmmu_arch += {'riscv': riscv_softmmu_ss} +target_user_arch += {'riscv': riscv_user_ss} diff --git a/target/s390x/meson.build b/target/s390x/meson.build index c42eadb7d2..1219f64112 100644 --- 
a/target/s390x/meson.build +++ b/target/s390x/meson.build @@ -58,5 +58,8 @@ if host_machine.cpu_family() == 's390x' and cc.has_link_argument('-Wl,--s390-pgs if_true: declare_dependency(link_args: ['-Wl,--s390-pgste'])) endif +s390x_user_ss = ss.source_set() + target_arch += {'s390x': s390x_ss} target_softmmu_arch += {'s390x': s390x_softmmu_ss} +target_user_arch += {'s390x': s390x_user_ss} diff --git a/target/sh4/meson.build b/target/sh4/meson.build index 56a57576da..5a05729bc1 100644 --- a/target/sh4/meson.build +++ b/target/sh4/meson.build @@ -10,5 +10,8 @@ sh4_ss.add(files( sh4_softmmu_ss = ss.source_set() sh4_softmmu_ss.add(files('monitor.c')) +sh4_user_ss = ss.source_set() + target_arch += {'sh4': sh4_ss} target_softmmu_arch += {'sh4': sh4_softmmu_ss} +target_user_arch += {'sh4': sh4_user_ss} diff --git a/target/sparc/meson.build b/target/sparc/meson.build index a3638b9503..cc77a77064 100644 --- a/target/sparc/meson.build +++ b/target/sparc/meson.build @@ -19,5 +19,8 @@ sparc_softmmu_ss.add(files( 'monitor.c', )) +sparc_user_ss = ss.source_set() + target_arch += {'sparc': sparc_ss} target_softmmu_arch += {'sparc': sparc_softmmu_ss} +target_user_arch += {'sparc': sparc_user_ss} diff --git a/target/tricore/meson.build b/target/tricore/meson.build index 0ccc829517..7086ae1a22 100644 --- a/target/tricore/meson.build +++ b/target/tricore/meson.build @@ -11,5 +11,8 @@ tricore_ss.add(zlib) tricore_softmmu_ss = ss.source_set() +tricore_user_ss = ss.source_set() + target_arch += {'tricore': tricore_ss} target_softmmu_arch += {'tricore': tricore_softmmu_ss} +target_user_arch += {'tricore': tricore_user_ss} diff --git a/target/xtensa/meson.build b/target/xtensa/meson.build index 7c4efa6c62..ade555ae36 100644 --- a/target/xtensa/meson.build +++ b/target/xtensa/meson.build @@ -23,5 +23,8 @@ xtensa_softmmu_ss.add(files( 'xtensa-semi.c', )) +xtensa_user_ss = ss.source_set() + target_arch += {'xtensa': xtensa_ss} target_softmmu_arch += {'xtensa': xtensa_softmmu_ss} +target_user_arch += {'xtensa': xtensa_user_ss} From patchwork Fri Jun 4 15:51:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454126 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp605920jae; Fri, 4 Jun 2021 10:18:37 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwnwSdTREo5g/iyo5DMn4n1OHZDL+5J0/NjQLzghUShXSqOZti/OHlUbkOAm6p0UH458HQo X-Received: by 2002:ab0:211a:: with SMTP id d26mr4771819ual.41.1622827117717; Fri, 04 Jun 2021 10:18:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622827117; cv=none; d=google.com; s=arc-20160816; b=ThZqY1Bo0JMlUDcZ+xIkL6KejQon+OgSMQO4ijy1Ac6citFoOyMuJLkaUSGEBSVE9/ NIXZHuq+1Y436KL8VFno0RVT1rCeW0r7f2BqOykZUHnR7ryZN9ENVKCHJOHKKkqO6Tgj /2Cngs9bVAvPmXMa011l+8MGaj5Xf13CsvlaaSmqj6ZsXbIRbuFZ+yzMc9OuK9TLj61D +X8W61fmcFU3eJaQ63aBHe3AwcQC6gNQXjtZnXjpuRVzkGOCOIGNLLPwcAiPOrELyh5T q/2t6VLqA3q3trPuPc1lE2FD+oYhiSTnTlS5mR3Nwhhxn14Zxyvgeme/DokG11gZbhtF bcrQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=0qK3PtK1ZJ0cptr2WVdMdlRn0V00pOzTguQ/XpJv0Nw=; b=apYfCLZugGbdlZ+MHYZpgM7vaV77vBnmYT+GB4Lo9wOlfBjruKqBo/85viMJhQaU87 hmSJqnhoVEsEa5AjlGEip4sLzb2jVKk2jwlcvmV0meGB3o40KwfksDmDqMGzun55dqFq 
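The target_user_arch dictionary introduced in [PATCH v16 13/99] "meson: add target_user_arch" starts out holding only empty source sets; the point is to give user-mode-only code a home analogous to target_softmmu_arch, so the build system rather than #ifdef picks the files. As a rough sketch (not part of the patch; the 'foo' target, the user_helper.c file and the surrounding variables are assumptions), a target with genuine user-mode-only sources could populate and consume it like this:

    # target/foo/meson.build -- hypothetical target with user-only code
    foo_user_ss = ss.source_set()
    foo_user_ss.add(files('user_helper.c'))        # built only for foo-linux-user
    target_user_arch += {'foo': foo_user_ss}

    # top-level consumption, mirroring how target_softmmu_arch is applied
    t = target_user_arch[arch].apply(config_target, strict: false)
    arch_srcs += t.sources()
    arch_deps += t.dependencies()

Keeping the selection in meson means the C sources themselves stay free of CONFIG_USER_ONLY conditionals.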
(PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id q3sm7211868wrr.43.2021.06.04.10.12.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 10:12:38 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id C96491FF9D; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 14/99] accel: add cpu_reset Date: Fri, 4 Jun 2021 16:51:47 +0100 Message-Id: <20210604155312.15902-15-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42e; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "open list:X86 KVM CPUs" , Marcelo Tosatti , Richard Henderson , qemu-arm@nongnu.org, Claudio Fontana , Paolo Bonzini , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana in cpu_reset(), implemented in the common cpu.c, add a call to a new accel_cpu_reset(), which ensures that the CPU accel interface is also reset when the CPU is reset. Use this first for x86/kvm, simply moving the kvm_arch_reset_vcpu() call. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- include/hw/core/accel-cpu.h | 2 ++ include/qemu/accel.h | 6 ++++++ accel/accel-common.c | 9 +++++++++ hw/core/cpu-common.c | 3 ++- target/i386/cpu.c | 4 ---- target/i386/kvm/kvm-cpu.c | 6 ++++++ 6 files changed, 25 insertions(+), 5 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/include/hw/core/accel-cpu.h b/include/hw/core/accel-cpu.h index 5dbfd79955..700a5bd266 100644 --- a/include/hw/core/accel-cpu.h +++ b/include/hw/core/accel-cpu.h @@ -33,6 +33,8 @@ typedef struct AccelCPUClass { void (*cpu_class_init)(CPUClass *cc); void (*cpu_instance_init)(CPUState *cpu); bool (*cpu_realizefn)(CPUState *cpu, Error **errp); + void (*cpu_reset)(CPUState *cpu); + } AccelCPUClass; #endif /* ACCEL_CPU_H */ diff --git a/include/qemu/accel.h b/include/qemu/accel.h index 4f4c283f6f..8d3a15b916 100644 --- a/include/qemu/accel.h +++ b/include/qemu/accel.h @@ -91,4 +91,10 @@ void accel_cpu_instance_init(CPUState *cpu); */ bool accel_cpu_realizefn(CPUState *cpu, Error **errp); +/** + * accel_cpu_reset: + * @cpu: The CPU that needs to call accel-specific reset. 
+ */ +void accel_cpu_reset(CPUState *cpu); + #endif /* QEMU_ACCEL_H */ diff --git a/accel/accel-common.c b/accel/accel-common.c index cf07f78421..3331a9dcfd 100644 --- a/accel/accel-common.c +++ b/accel/accel-common.c @@ -121,6 +121,15 @@ bool accel_cpu_realizefn(CPUState *cpu, Error **errp) return true; } +void accel_cpu_reset(CPUState *cpu) +{ + CPUClass *cc = CPU_GET_CLASS(cpu); + + if (cc->accel_cpu && cc->accel_cpu->cpu_reset) { + cc->accel_cpu->cpu_reset(cpu); + } +} + static const TypeInfo accel_cpu_type = { .name = TYPE_ACCEL_CPU, .parent = TYPE_OBJECT, diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c index e2f5a64604..ab258ad4f2 100644 --- a/hw/core/cpu-common.c +++ b/hw/core/cpu-common.c @@ -34,6 +34,7 @@ #include "hw/qdev-properties.h" #include "trace/trace-root.h" #include "qemu/plugin.h" +#include "qemu/accel.h" CPUState *cpu_by_arch_id(int64_t id) { @@ -112,7 +113,7 @@ void cpu_dump_state(CPUState *cpu, FILE *f, int flags) void cpu_reset(CPUState *cpu) { device_cold_reset(DEVICE(cpu)); - + accel_cpu_reset(cpu); trace_guest_cpu_reset(cpu); } diff --git a/target/i386/cpu.c b/target/i386/cpu.c index e0ba36cc23..0c22324daf 100644 --- a/target/i386/cpu.c +++ b/target/i386/cpu.c @@ -5749,10 +5749,6 @@ static void x86_cpu_reset(DeviceState *dev) apic_designate_bsp(cpu->apic_state, s->cpu_index == 0); s->halted = !cpu_is_bsp(cpu); - - if (kvm_enabled()) { - kvm_arch_reset_vcpu(cpu); - } #endif } diff --git a/target/i386/kvm/kvm-cpu.c b/target/i386/kvm/kvm-cpu.c index 5235bce8dc..63410d3f18 100644 --- a/target/i386/kvm/kvm-cpu.c +++ b/target/i386/kvm/kvm-cpu.c @@ -135,12 +135,18 @@ static void kvm_cpu_instance_init(CPUState *cs) } } +static void kvm_cpu_reset(CPUState *cpu) +{ + kvm_arch_reset_vcpu(X86_CPU(cpu)); +} + static void kvm_cpu_accel_class_init(ObjectClass *oc, void *data) { AccelCPUClass *acc = ACCEL_CPU_CLASS(oc); acc->cpu_realizefn = kvm_cpu_realizefn; acc->cpu_instance_init = kvm_cpu_instance_init; + acc->cpu_reset = kvm_cpu_reset; } static const TypeInfo kvm_cpu_accel_type_info = { .name = ACCEL_CPU_NAME("kvm"), From patchwork Fri Jun 4 15:51:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454138 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp619260jae; Fri, 4 Jun 2021 10:37:13 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwd5Ij4w5T4+Trsj7QkhVtML1lGru0lkq4xid4DJnxSzOnRi1VTGA1c3tJvOEYioYb3OyF4 X-Received: by 2002:a05:6102:c0f:: with SMTP id x15mr4225214vss.32.1622828232987; Fri, 04 Jun 2021 10:37:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622828232; cv=none; d=google.com; s=arc-20160816; b=FulAm8zJbWwji0VZ88akLStzsn9BbtUJVQvfsMXMyq2q0VqFxD5ApR4acr9PZItFAW fbSS6w9Ne8URe0oNGt1t64mKCmaSx/BEzwqClzdg92/WwN8Xm9ZuxRXjqTt24CabEfQM sgULU7eJ4uzc3M0YetAkPoHT4TyS4IzzxeKAEksKMVTqfxgWKuTN2eP8m7LRnUu16JuQ hrzNx/e5hMRFtZxoS429bmSIUCB0ATrVxQ06CxXyn/domX/hzXQmR5peKTwNriYVqg1Z nwVJTcmJiSfkN54fgfaNwf8CR3eHRUgoql9kTFvC0zHBRvLPFkC0o+zN99y8T1C2gFQA ZqWg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=2vvNAoxjt1VSZ2ICaP0bgwI19xolrw5FxWN3lqD0Qh4=; b=Fmovanp41zsrcxPmXhmf9DOpxJI/U/BUzA7EQkz9/KQHhAFAeOG+rJP3jdXgWQmLcE 
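The accel_cpu_reset() hook added in [PATCH v16 14/99] "accel: add cpu_reset" gives every accelerator a per-vCPU reset point that runs after device_cold_reset(). Following the kvm example in that patch, another accelerator would wire itself up roughly as below; the "foo" names are invented for illustration and are not part of the series:

    /* sketch: a hypothetical "foo" accelerator using the new hook */
    #include "qemu/osdep.h"
    #include "hw/core/cpu.h"
    #include "hw/core/accel-cpu.h"

    static void foo_cpu_reset(CPUState *cpu)
    {
        /* accelerator-specific per-vCPU reset work goes here */
    }

    static void foo_cpu_accel_class_init(ObjectClass *oc, void *data)
    {
        AccelCPUClass *acc = ACCEL_CPU_CLASS(oc);

        /* called from cpu_reset() via accel_cpu_reset() */
        acc->cpu_reset = foo_cpu_reset;
    }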
r10mr4913956wrp.316.1622826769858; Fri, 04 Jun 2021 10:12:49 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id k16sm6059847wmr.42.2021.06.04.10.12.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 10:12:46 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id E0AC71FF9E; Fri, 4 Jun 2021 16:53:13 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 15/99] target/arm: move translate modules to tcg/ Date: Fri, 4 Jun 2021 16:51:48 +0100 Message-Id: <20210604155312.15902-16-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/{ => tcg}/translate-a64.h | 0 target/arm/{ => tcg}/translate.h | 0 target/arm/{ => tcg}/a32-uncond.decode | 0 target/arm/{ => tcg}/a32.decode | 0 target/arm/{ => tcg}/m-nocp.decode | 0 target/arm/{ => tcg}/neon-dp.decode | 0 target/arm/{ => tcg}/neon-ls.decode | 0 target/arm/{ => tcg}/neon-shared.decode | 0 target/arm/{ => tcg}/sve.decode | 0 target/arm/{ => tcg}/t16.decode | 0 target/arm/{ => tcg}/t32.decode | 0 target/arm/{ => tcg}/vfp-uncond.decode | 0 target/arm/{ => tcg}/vfp.decode | 0 target/arm/{ => tcg}/translate-a64.c | 0 target/arm/{ => tcg}/translate-m-nocp.c | 0 target/arm/{ => tcg}/translate-neon.c | 0 target/arm/{ => tcg}/translate-sve.c | 0 target/arm/{ => tcg}/translate-vfp.c | 0 target/arm/{ => tcg}/translate.c | 0 target/arm/meson.build | 23 ++------------------- target/arm/tcg/meson.build | 27 +++++++++++++++++++++++++ 21 files changed, 29 insertions(+), 21 deletions(-) rename target/arm/{ => tcg}/translate-a64.h (100%) rename target/arm/{ => tcg}/translate.h (100%) rename target/arm/{ => tcg}/a32-uncond.decode (100%) rename target/arm/{ => tcg}/a32.decode (100%) rename target/arm/{ => tcg}/m-nocp.decode (100%) rename target/arm/{ => tcg}/neon-dp.decode (100%) rename target/arm/{ => tcg}/neon-ls.decode (100%) rename target/arm/{ => tcg}/neon-shared.decode (100%) rename target/arm/{ => tcg}/sve.decode (100%) rename target/arm/{ => tcg}/t16.decode (100%) rename target/arm/{ => tcg}/t32.decode (100%) rename target/arm/{ => tcg}/vfp-uncond.decode (100%) rename target/arm/{ => tcg}/vfp.decode (100%) rename target/arm/{ => tcg}/translate-a64.c (100%) rename target/arm/{ => tcg}/translate-m-nocp.c (100%) rename target/arm/{ => tcg}/translate-neon.c (100%) rename target/arm/{ => tcg}/translate-sve.c (100%) rename target/arm/{ => tcg}/translate-vfp.c 
(100%) rename target/arm/{ => tcg}/translate.c (100%) create mode 100644 target/arm/tcg/meson.build -- 2.20.1 diff --git a/target/arm/translate-a64.h b/target/arm/tcg/translate-a64.h similarity index 100% rename from target/arm/translate-a64.h rename to target/arm/tcg/translate-a64.h diff --git a/target/arm/translate.h b/target/arm/tcg/translate.h similarity index 100% rename from target/arm/translate.h rename to target/arm/tcg/translate.h diff --git a/target/arm/a32-uncond.decode b/target/arm/tcg/a32-uncond.decode similarity index 100% rename from target/arm/a32-uncond.decode rename to target/arm/tcg/a32-uncond.decode diff --git a/target/arm/a32.decode b/target/arm/tcg/a32.decode similarity index 100% rename from target/arm/a32.decode rename to target/arm/tcg/a32.decode diff --git a/target/arm/m-nocp.decode b/target/arm/tcg/m-nocp.decode similarity index 100% rename from target/arm/m-nocp.decode rename to target/arm/tcg/m-nocp.decode diff --git a/target/arm/neon-dp.decode b/target/arm/tcg/neon-dp.decode similarity index 100% rename from target/arm/neon-dp.decode rename to target/arm/tcg/neon-dp.decode diff --git a/target/arm/neon-ls.decode b/target/arm/tcg/neon-ls.decode similarity index 100% rename from target/arm/neon-ls.decode rename to target/arm/tcg/neon-ls.decode diff --git a/target/arm/neon-shared.decode b/target/arm/tcg/neon-shared.decode similarity index 100% rename from target/arm/neon-shared.decode rename to target/arm/tcg/neon-shared.decode diff --git a/target/arm/sve.decode b/target/arm/tcg/sve.decode similarity index 100% rename from target/arm/sve.decode rename to target/arm/tcg/sve.decode diff --git a/target/arm/t16.decode b/target/arm/tcg/t16.decode similarity index 100% rename from target/arm/t16.decode rename to target/arm/tcg/t16.decode diff --git a/target/arm/t32.decode b/target/arm/tcg/t32.decode similarity index 100% rename from target/arm/t32.decode rename to target/arm/tcg/t32.decode diff --git a/target/arm/vfp-uncond.decode b/target/arm/tcg/vfp-uncond.decode similarity index 100% rename from target/arm/vfp-uncond.decode rename to target/arm/tcg/vfp-uncond.decode diff --git a/target/arm/vfp.decode b/target/arm/tcg/vfp.decode similarity index 100% rename from target/arm/vfp.decode rename to target/arm/tcg/vfp.decode diff --git a/target/arm/translate-a64.c b/target/arm/tcg/translate-a64.c similarity index 100% rename from target/arm/translate-a64.c rename to target/arm/tcg/translate-a64.c diff --git a/target/arm/translate-m-nocp.c b/target/arm/tcg/translate-m-nocp.c similarity index 100% rename from target/arm/translate-m-nocp.c rename to target/arm/tcg/translate-m-nocp.c diff --git a/target/arm/translate-neon.c b/target/arm/tcg/translate-neon.c similarity index 100% rename from target/arm/translate-neon.c rename to target/arm/tcg/translate-neon.c diff --git a/target/arm/translate-sve.c b/target/arm/tcg/translate-sve.c similarity index 100% rename from target/arm/translate-sve.c rename to target/arm/tcg/translate-sve.c diff --git a/target/arm/translate-vfp.c b/target/arm/tcg/translate-vfp.c similarity index 100% rename from target/arm/translate-vfp.c rename to target/arm/tcg/translate-vfp.c diff --git a/target/arm/translate.c b/target/arm/tcg/translate.c similarity index 100% rename from target/arm/translate.c rename to target/arm/tcg/translate.c diff --git a/target/arm/meson.build b/target/arm/meson.build index 6106d24665..229ec7fa11 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -1,19 +1,4 @@ -gen = [ - decodetree.process('sve.decode', 
extra_args: '--decode=disas_sve'), - decodetree.process('neon-shared.decode', extra_args: '--decode=disas_neon_shared'), - decodetree.process('neon-dp.decode', extra_args: '--decode=disas_neon_dp'), - decodetree.process('neon-ls.decode', extra_args: '--decode=disas_neon_ls'), - decodetree.process('vfp.decode', extra_args: '--decode=disas_vfp'), - decodetree.process('vfp-uncond.decode', extra_args: '--decode=disas_vfp_uncond'), - decodetree.process('m-nocp.decode', extra_args: '--decode=disas_m_nocp'), - decodetree.process('a32.decode', extra_args: '--static-decode=disas_a32'), - decodetree.process('a32-uncond.decode', extra_args: '--static-decode=disas_a32_uncond'), - decodetree.process('t32.decode', extra_args: '--static-decode=disas_t32'), - decodetree.process('t16.decode', extra_args: ['-w', '16', '--static-decode=disas_t16']), -] - arm_ss = ss.source_set() -arm_ss.add(gen) arm_ss.add(files( 'cpu.c', 'crypto_helper.c', @@ -25,10 +10,6 @@ arm_ss.add(files( 'neon_helper.c', 'op_helper.c', 'tlb_helper.c', - 'translate.c', - 'translate-m-nocp.c', - 'translate-neon.c', - 'translate-vfp.c', 'vec_helper.c', 'vfp_helper.c', 'cpu_tcg.c', @@ -44,8 +25,6 @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files( 'mte_helper.c', 'pauth_helper.c', 'sve_helper.c', - 'translate-a64.c', - 'translate-sve.c', )) arm_softmmu_ss = ss.source_set() @@ -58,6 +37,8 @@ arm_softmmu_ss.add(files( )) arm_user_ss = ss.source_set() +subdir('tcg') + target_arch += {'arm': arm_ss} target_softmmu_arch += {'arm': arm_softmmu_ss} target_user_arch += {'arm': arm_user_ss} diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build new file mode 100644 index 0000000000..53a17ae6e6 --- /dev/null +++ b/target/arm/tcg/meson.build @@ -0,0 +1,27 @@ +gen = [ + decodetree.process('sve.decode', extra_args: '--decode=disas_sve'), + decodetree.process('neon-shared.decode', extra_args: '--decode=disas_neon_shared'), + decodetree.process('neon-dp.decode', extra_args: '--decode=disas_neon_dp'), + decodetree.process('neon-ls.decode', extra_args: '--decode=disas_neon_ls'), + decodetree.process('vfp.decode', extra_args: '--decode=disas_vfp'), + decodetree.process('vfp-uncond.decode', extra_args: '--decode=disas_vfp_uncond'), + decodetree.process('m-nocp.decode', extra_args: '--decode=disas_m_nocp'), + decodetree.process('a32.decode', extra_args: '--static-decode=disas_a32'), + decodetree.process('a32-uncond.decode', extra_args: '--static-decode=disas_a32_uncond'), + decodetree.process('t32.decode', extra_args: '--static-decode=disas_t32'), + decodetree.process('t16.decode', extra_args: ['-w', '16', '--static-decode=disas_t16']), +] + +arm_ss.add(gen) + +arm_ss.add(files( + 'translate.c', + 'translate-m-nocp.c', + 'translate-neon.c', + 'translate-vfp.c', +)) + +arm_ss.add(when: 'TARGET_AARCH64', if_true: files( + 'translate-a64.c', + 'translate-sve.c', +)) From patchwork Fri Jun 4 15:51:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454086 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp569508jae; Fri, 4 Jun 2021 09:33:48 -0700 (PDT) X-Google-Smtp-Source: ABdhPJx114DbRh9D7bRCOjj5AVHZrLungGlrXbZsCxEbjpuhGqzVtgn2YoaEzXZX8mvva9uVUwsB X-Received: by 2002:ab0:176:: with SMTP id 109mr4333264uak.68.1622824428731; Fri, 04 Jun 2021 09:33:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824428; cv=none; d=google.com; s=arc-20160816; 
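The move in [PATCH v16 15/99] "target/arm: move translate modules to tcg/" relies on a simple parent/child source_set pattern: target/arm/meson.build keeps owning arm_ss and merely calls subdir('tcg'), while the new tcg/meson.build appends the generated decoders and the translators to it. A hedged sketch for a hypothetical target 'foo' (the file and decoder names are assumptions):

    # target/foo/meson.build
    foo_ss = ss.source_set()
    foo_ss.add(files('cpu.c', 'gdbstub.c'))
    subdir('tcg')                      # tcg/meson.build appends to foo_ss
    target_arch += {'foo': foo_ss}

    # target/foo/tcg/meson.build
    gen = decodetree.process('insn.decode', extra_args: '--decode=disas_foo')
    foo_ss.add(gen)
    foo_ss.add(files('translate.c'))

Because subdir() runs the child script in the same interpreter scope, no new build target is created; the TCG sources simply land in the existing per-architecture set.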
dJYXNQ8OGwPA2Vd1DyNrFguJgsRwmN5RgCW4rnnkgh3QbQj4OnLgOr3Ih/fi0BtXjhC8 1xiQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=A8D3uaI26zG4k8R6VTxbVafLxnun9Bh7qIkAIQc8YNo=; b=U10KQ6lhn7i+QxZTP/D+rS1VGmprNNSQ7rNscKnfXJflnGI2sVWO2RQtYTcD8rokYH WemtnfWMJzJIWRRKC+w9RPmnxPzUvlJIkaZKIbMpxboUkvvVvJsOlG8Y054HXVYZm1/l S1XkscAn37By06ZyWPIbIBB9gxo8LMEBRUuPy5PW4CgYH7M+MiN/rir50ej+PgbmO1/R Q+zOUxF6X0qJqVNzZnldY1k1H0lKtf6dcDpaflQByXnGlOUalnPmySt2+nD+oTGILC1w FK9pHhrz7yWpEUMFgMbF/sq6ZTA5CvvPRJh2VoBg6eQGQvzNJqHIvQAqqSu0gbHPJxI4 xbxQ== X-Gm-Message-State: AOAM532qfmu5NyTqsS4U0tqA8eUPx392ER5e1CcvHYDfwjHaHCmwQ/NY u3d4srTCcUrC7mddXDc1X3YzRA== X-Received: by 2002:adf:f68c:: with SMTP id v12mr4795061wrp.122.1622823170649; Fri, 04 Jun 2021 09:12:50 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id q20sm8970286wrf.45.2021.06.04.09.12.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:12:43 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 0B6251FF9F; Fri, 4 Jun 2021 16:53:14 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 16/99] target/arm: move helpers to tcg/ Date: Fri, 4 Jun 2021 16:51:49 +0100 Message-Id: <20210604155312.15902-17-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- meson.build | 1 + target/arm/{ => tcg}/op_addsub.h | 0 target/arm/tcg/trace.h | 1 + target/arm/{ => tcg}/vec_internal.h | 0 target/arm/{ => tcg}/crypto_helper.c | 0 target/arm/{ => tcg}/debug_helper.c | 0 target/arm/{ => tcg}/helper-a64.c | 0 target/arm/{ => tcg}/helper.c | 0 target/arm/{ => tcg}/iwmmxt_helper.c | 0 target/arm/{ => tcg}/m_helper.c | 0 target/arm/{ => tcg}/mte_helper.c | 0 target/arm/{ => tcg}/neon_helper.c | 0 target/arm/{ => tcg}/op_helper.c | 0 target/arm/{ => tcg}/pauth_helper.c | 0 target/arm/{ => tcg}/sve_helper.c | 0 target/arm/{ => tcg}/tlb_helper.c | 0 target/arm/{ => tcg}/vec_helper.c | 0 target/arm/{ => tcg}/vfp_helper.c | 0 target/arm/meson.build | 14 -------------- target/arm/tcg/meson.build | 14 ++++++++++++++ target/arm/tcg/trace-events | 10 ++++++++++ target/arm/trace-events | 9 --------- 22 files changed, 26 insertions(+), 23 deletions(-) rename target/arm/{ => tcg}/op_addsub.h (100%) create mode 100644 target/arm/tcg/trace.h 
rename target/arm/{ => tcg}/vec_internal.h (100%) rename target/arm/{ => tcg}/crypto_helper.c (100%) rename target/arm/{ => tcg}/debug_helper.c (100%) rename target/arm/{ => tcg}/helper-a64.c (100%) rename target/arm/{ => tcg}/helper.c (100%) rename target/arm/{ => tcg}/iwmmxt_helper.c (100%) rename target/arm/{ => tcg}/m_helper.c (100%) rename target/arm/{ => tcg}/mte_helper.c (100%) rename target/arm/{ => tcg}/neon_helper.c (100%) rename target/arm/{ => tcg}/op_helper.c (100%) rename target/arm/{ => tcg}/pauth_helper.c (100%) rename target/arm/{ => tcg}/sve_helper.c (100%) rename target/arm/{ => tcg}/tlb_helper.c (100%) rename target/arm/{ => tcg}/vec_helper.c (100%) rename target/arm/{ => tcg}/vfp_helper.c (100%) create mode 100644 target/arm/tcg/trace-events -- 2.20.1 diff --git a/meson.build b/meson.build index a876155969..eb22030571 100644 --- a/meson.build +++ b/meson.build @@ -1860,6 +1860,7 @@ if have_system or have_user 'accel/tcg', 'hw/core', 'target/arm', + 'target/arm/tcg', 'target/hppa', 'target/i386', 'target/i386/kvm', diff --git a/target/arm/op_addsub.h b/target/arm/tcg/op_addsub.h similarity index 100% rename from target/arm/op_addsub.h rename to target/arm/tcg/op_addsub.h diff --git a/target/arm/tcg/trace.h b/target/arm/tcg/trace.h new file mode 100644 index 0000000000..c6e89d018b --- /dev/null +++ b/target/arm/tcg/trace.h @@ -0,0 +1 @@ +#include "trace/trace-target_arm_tcg.h" diff --git a/target/arm/vec_internal.h b/target/arm/tcg/vec_internal.h similarity index 100% rename from target/arm/vec_internal.h rename to target/arm/tcg/vec_internal.h diff --git a/target/arm/crypto_helper.c b/target/arm/tcg/crypto_helper.c similarity index 100% rename from target/arm/crypto_helper.c rename to target/arm/tcg/crypto_helper.c diff --git a/target/arm/debug_helper.c b/target/arm/tcg/debug_helper.c similarity index 100% rename from target/arm/debug_helper.c rename to target/arm/tcg/debug_helper.c diff --git a/target/arm/helper-a64.c b/target/arm/tcg/helper-a64.c similarity index 100% rename from target/arm/helper-a64.c rename to target/arm/tcg/helper-a64.c diff --git a/target/arm/helper.c b/target/arm/tcg/helper.c similarity index 100% rename from target/arm/helper.c rename to target/arm/tcg/helper.c diff --git a/target/arm/iwmmxt_helper.c b/target/arm/tcg/iwmmxt_helper.c similarity index 100% rename from target/arm/iwmmxt_helper.c rename to target/arm/tcg/iwmmxt_helper.c diff --git a/target/arm/m_helper.c b/target/arm/tcg/m_helper.c similarity index 100% rename from target/arm/m_helper.c rename to target/arm/tcg/m_helper.c diff --git a/target/arm/mte_helper.c b/target/arm/tcg/mte_helper.c similarity index 100% rename from target/arm/mte_helper.c rename to target/arm/tcg/mte_helper.c diff --git a/target/arm/neon_helper.c b/target/arm/tcg/neon_helper.c similarity index 100% rename from target/arm/neon_helper.c rename to target/arm/tcg/neon_helper.c diff --git a/target/arm/op_helper.c b/target/arm/tcg/op_helper.c similarity index 100% rename from target/arm/op_helper.c rename to target/arm/tcg/op_helper.c diff --git a/target/arm/pauth_helper.c b/target/arm/tcg/pauth_helper.c similarity index 100% rename from target/arm/pauth_helper.c rename to target/arm/tcg/pauth_helper.c diff --git a/target/arm/sve_helper.c b/target/arm/tcg/sve_helper.c similarity index 100% rename from target/arm/sve_helper.c rename to target/arm/tcg/sve_helper.c diff --git a/target/arm/tlb_helper.c b/target/arm/tcg/tlb_helper.c similarity index 100% rename from target/arm/tlb_helper.c rename to 
target/arm/tcg/tlb_helper.c diff --git a/target/arm/vec_helper.c b/target/arm/tcg/vec_helper.c similarity index 100% rename from target/arm/vec_helper.c rename to target/arm/tcg/vec_helper.c diff --git a/target/arm/vfp_helper.c b/target/arm/tcg/vfp_helper.c similarity index 100% rename from target/arm/vfp_helper.c rename to target/arm/tcg/vfp_helper.c diff --git a/target/arm/meson.build b/target/arm/meson.build index 229ec7fa11..0172937b40 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -1,17 +1,7 @@ arm_ss = ss.source_set() arm_ss.add(files( 'cpu.c', - 'crypto_helper.c', - 'debug_helper.c', 'gdbstub.c', - 'helper.c', - 'iwmmxt_helper.c', - 'm_helper.c', - 'neon_helper.c', - 'op_helper.c', - 'tlb_helper.c', - 'vec_helper.c', - 'vfp_helper.c', 'cpu_tcg.c', )) arm_ss.add(zlib) @@ -21,10 +11,6 @@ arm_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c', 'kvm64.c'), if_false: fil arm_ss.add(when: 'TARGET_AARCH64', if_true: files( 'cpu64.c', 'gdbstub64.c', - 'helper-a64.c', - 'mte_helper.c', - 'pauth_helper.c', - 'sve_helper.c', )) arm_softmmu_ss = ss.source_set() diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index 53a17ae6e6..b3c9d808f5 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -19,9 +19,23 @@ arm_ss.add(files( 'translate-m-nocp.c', 'translate-neon.c', 'translate-vfp.c', + 'helper.c', + 'iwmmxt_helper.c', + 'm_helper.c', + 'neon_helper.c', + 'op_helper.c', + 'tlb_helper.c', + 'vec_helper.c', + 'vfp_helper.c', + 'crypto_helper.c', + 'debug_helper.c', )) arm_ss.add(when: 'TARGET_AARCH64', if_true: files( 'translate-a64.c', 'translate-sve.c', + 'helper-a64.c', + 'mte_helper.c', + 'pauth_helper.c', + 'sve_helper.c', )) diff --git a/target/arm/tcg/trace-events b/target/arm/tcg/trace-events new file mode 100644 index 0000000000..755373a5b1 --- /dev/null +++ b/target/arm/tcg/trace-events @@ -0,0 +1,10 @@ +# See docs/devel/tracing.txt for syntax documentation. + +# helper.c +arm_gt_recalc(int timer, int irqstate, uint64_t nexttick) "gt recalc: timer %d irqstate %d next tick 0x%" PRIx64 +arm_gt_recalc_disabled(int timer) "gt recalc: timer %d irqstate 0 timer disabled" +arm_gt_cval_write(int timer, uint64_t value) "gt_cval_write: timer %d value 0x%" PRIx64 +arm_gt_tval_write(int timer, uint64_t value) "gt_tval_write: timer %d value 0x%" PRIx64 +arm_gt_ctl_write(int timer, uint64_t value) "gt_ctl_write: timer %d value 0x%" PRIx64 +arm_gt_imask_toggle(int timer, int irqstate) "gt_ctl_write: timer %d IMASK toggle, new irqstate %d" +arm_gt_cntvoff_write(uint64_t value) "gt_cntvoff_write: value 0x%" PRIx64 diff --git a/target/arm/trace-events b/target/arm/trace-events index 2a0ba7bffc..23af2d710e 100644 --- a/target/arm/trace-events +++ b/target/arm/trace-events @@ -1,13 +1,4 @@ # See docs/devel/tracing.rst for syntax documentation. 
-# helper.c
-arm_gt_recalc(int timer, int irqstate, uint64_t nexttick) "gt recalc: timer %d irqstate %d next tick 0x%" PRIx64
-arm_gt_recalc_disabled(int timer) "gt recalc: timer %d irqstate 0 timer disabled"
-arm_gt_cval_write(int timer, uint64_t value) "gt_cval_write: timer %d value 0x%" PRIx64
-arm_gt_tval_write(int timer, uint64_t value) "gt_tval_write: timer %d value 0x%" PRIx64
-arm_gt_ctl_write(int timer, uint64_t value) "gt_ctl_write: timer %d value 0x%" PRIx64
-arm_gt_imask_toggle(int timer, int irqstate) "gt_ctl_write: timer %d IMASK toggle, new irqstate %d"
-arm_gt_cntvoff_write(uint64_t value) "gt_cntvoff_write: value 0x%" PRIx64
-
 # kvm.c
 kvm_arm_fixup_msi_route(uint64_t iova, uint64_t gpa) "MSI iova = 0x%"PRIx64" is translated into 0x%"PRIx64
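Moving the '# helper.c' trace points above into target/arm/tcg/trace-events only changes where the backend code is generated; call sites are untouched because the one-line trace.h added in this patch forwards to the directory's generated header. A sketch of what that looks like from C (the wrapper function is invented; the trace call matches the event declared above):

    /* in a file under target/arm/tcg/ */
    #include "qemu/osdep.h"
    #include "trace.h"   /* -> trace/trace-target_arm_tcg.h, generated from
                            target/arm/tcg/trace-events */

    static void example_recalc(int timer, int irqstate, uint64_t nexttick)
    {
        /* same trace point as before the move */
        trace_arm_gt_recalc(timer, irqstate, nexttick);
    }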
From patchwork Fri Jun 4 15:51:50 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454149
Delivered-To: patch@linaro.org
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 17/99] arm: tcg: only build under CONFIG_TCG
Date: Fri, 4 Jun 2021 16:51:50 +0100
Message-Id: <20210604155312.15902-18-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To:
<20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::432; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x432.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/tcg/meson.build | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index b3c9d808f5..04b94a3bfb 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -12,9 +12,9 @@ gen = [ decodetree.process('t16.decode', extra_args: ['-w', '16', '--static-decode=disas_t16']), ] -arm_ss.add(gen) +arm_ss.add(when: 'CONFIG_TCG', if_true: gen) -arm_ss.add(files( +arm_ss.add(when: 'CONFIG_TCG', if_true: files( 'translate.c', 'translate-m-nocp.c', 'translate-neon.c', @@ -31,7 +31,7 @@ arm_ss.add(files( 'debug_helper.c', )) -arm_ss.add(when: 'TARGET_AARCH64', if_true: files( +arm_ss.add(when: ['TARGET_AARCH64','CONFIG_TCG'], if_true: files( 'translate-a64.c', 'translate-sve.c', 'helper-a64.c', From patchwork Fri Jun 4 15:51:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454085 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp569439jae; Fri, 4 Jun 2021 09:33:43 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzLuXqiaN3Lm0cfAu+ddjZFSt1i6famT3eiPr8OIW8fRRbkFsdVSHQ1PlAi/LxP381ZTlyG X-Received: by 2002:a67:f9cc:: with SMTP id c12mr3547541vsq.27.1622824422032; Fri, 04 Jun 2021 09:33:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824422; cv=none; d=google.com; s=arc-20160816; b=OQz7sLHFYE08JdJI2T0M0iqqnff2S03gt42+/w/2sUi5aSMp3iad3L2qlDxKEsgXVa +gItVkaZ9CiXOv9MT22+kwJtDzRlsWAExhnBYT42G+GY9GMbvJ+N8SFlablbwX7zFcjr MJ5RSVi7X4bUUx2ukB9Axs1lAcwrDZwuvCT5JbuThEpRT8Rjv//3ldItoDpJpK1WofM0 qTvuGQIiymnm6C4p9e3KrdJK0i5DsqSpJFnaIvN48lBqWbdWJ1GI5v7QjhjdwjWP3yz2 SawIgn8NajMs3aZTfGk9PUjkagtQOFFPsauyBFn9re/z90D4Zh2fbns3QESn/PgkWjsM BKcw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=F0Zjuqp+eDe0I9dgRDfCqEr+Z2SzjMw09N7y7MQfjjk=; b=Fx9lrmYxsfGQiAwNpyqBQe56A82JUHtzb7ja3tKOIYn/5XsZpNymO/mdCeLfsLvghk LKYitBT/x0WEWxD8E/gwpSo6KgA4rj0uzNOP77D4RcbNTlSnx8R1axoFUnq9/A7J8K/9 wD39RYsCJBl9ewxn2WuygWGXdVcPhEPLdZMa4n7KMTzCoh+PUNTM455OZ0UJ07BCzEvI vUUb4iQWhwfDWHlfRDlUYS2WFokVsCCretRnoHHldanHeroUODORJDQbhpLmIf4B27ke 788MxV+JqA4aYml7dNxIDeZ7lQYKpnlYc/1GbhV4ZUHtyJpiGOf3nZ3vShSmyEO1mTCV 3V2Q== 
From: Alex Bennée
To: qemu-devel@nongnu.org
Cc: Alex Bennée, qemu-arm@nongnu.org, Richard Henderson, Claudio Fontana, Peter Maydell
Subject: [PATCH v16 18/99] target/arm: tcg: add sysemu and user subdirs
Date: Fri, 4 Jun 2021 16:51:51 +0100
Message-Id: <20210604155312.15902-19-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Claudio Fontana

Signed-off-by: Claudio Fontana
Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Alex Bennée
---
 target/arm/tcg/meson.build        | 3 +++
 target/arm/tcg/sysemu/meson.build | 2 ++
 target/arm/tcg/user/meson.build   | 2 ++
 3 files changed, 7 insertions(+)
 create mode 100644 target/arm/tcg/sysemu/meson.build
 create mode 100644 target/arm/tcg/user/meson.build

-- 
2.20.1

diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index 04b94a3bfb..3503ad96c8 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -39,3 +39,6 @@ arm_ss.add(when: ['TARGET_AARCH64','CONFIG_TCG'], if_true: files(
   'pauth_helper.c',
   'sve_helper.c',
 ))
+
+subdir('user')
+subdir('sysemu')
diff --git a/target/arm/tcg/sysemu/meson.build b/target/arm/tcg/sysemu/meson.build
new file mode 100644
index 0000000000..726387b0b3
--- /dev/null
+++ b/target/arm/tcg/sysemu/meson.build
@@ -0,0 +1,2 @@
+arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files(
+))
diff --git a/target/arm/tcg/user/meson.build b/target/arm/tcg/user/meson.build
new file mode 100644
index 0000000000..7af3311190
--- /dev/null
+++ b/target/arm/tcg/user/meson.build
@@ -0,0 +1,2 @@
+arm_user_ss.add(when: 'CONFIG_TCG', if_true: files(
+))

From patchwork Fri Jun 4 15:51:52 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454079
From: Alex Bennée
To: qemu-devel@nongnu.org
Cc: Alex Bennée, qemu-arm@nongnu.org, Richard Henderson, Claudio Fontana, Peter Maydell
Subject: [PATCH v16 19/99] target/arm: tcg: split mte_helper user-only and sysemu code
Date: Fri, 4 Jun 2021 16:51:52 +0100
Message-Id: <20210604155312.15902-20-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Claudio Fontana

allocation_tag_mem has a different implementation for user-only and sysemu,
so move the two implementations into the dedicated subdirs.

Signed-off-by: Claudio Fontana
Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Alex Bennée
---
 target/arm/tcg/mte_helper.h        |  53 ++++++++
 target/arm/tcg/mte_helper.c        | 191 +----------------------
 target/arm/tcg/sysemu/mte_helper.c | 159 ++++++++++++++++++++
 target/arm/tcg/user/mte_helper.c   |  57 +++++++
 target/arm/tcg/sysemu/meson.build  |   1 +
 target/arm/tcg/user/meson.build    |   1 +
 6 files changed, 272 insertions(+), 190 deletions(-)
 create mode 100644 target/arm/tcg/mte_helper.h
 create mode 100644 target/arm/tcg/sysemu/mte_helper.c
 create mode 100644 target/arm/tcg/user/mte_helper.c

-- 
2.20.1

diff --git a/target/arm/tcg/mte_helper.h b/target/arm/tcg/mte_helper.h
new file mode 100644
index 0000000000..29db1ad9fc
--- /dev/null
+++ b/target/arm/tcg/mte_helper.h
@@ -0,0 +1,53 @@
+/*
+ * ARM v8.5-MemTag Operations
+ *
+ * Copyright (c) 2020 Linaro, Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . + */ + +#ifndef MTE_HELPER_H +#define MTE_HELPER_H + +/** + * allocation_tag_mem: + * @env: the cpu environment + * @ptr_mmu_idx: the addressing regime to use for the virtual address + * @ptr: the virtual address for which to look up tag memory + * @ptr_access: the access to use for the virtual address + * @ptr_size: the number of bytes in the normal memory access + * @tag_access: the access to use for the tag memory + * @tag_size: the number of bytes in the tag memory access + * @ra: the return address for exception handling + * + * Our tag memory is formatted as a sequence of little-endian nibbles. + * That is, the byte at (addr >> (LOG2_TAG_GRANULE + 1)) contains two + * tags, with the tag at [3:0] for the lower addr and the tag at [7:4] + * for the higher addr. + * + * Here, resolve the physical address from the virtual address, and return + * a pointer to the corresponding tag byte. Exit with exception if the + * virtual address is not accessible for @ptr_access. + * + * The @ptr_size and @tag_size values may not have an obvious relation + * due to the alignment of @ptr, and the number of tag checks required. + * + * If there is no tag storage corresponding to @ptr, return NULL. + */ +uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, + uint64_t ptr, MMUAccessType ptr_access, + int ptr_size, MMUAccessType tag_access, + int tag_size, uintptr_t ra); + +#endif /* MTE_HELPER_H */ diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c index a6fccc6e69..7cc9b31fb0 100644 --- a/target/arm/tcg/mte_helper.c +++ b/target/arm/tcg/mte_helper.c @@ -26,7 +26,7 @@ #include "exec/helper-proto.h" #include "qapi/error.h" #include "qemu/guest-random.h" - +#include "tcg/mte_helper.h" static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude) { @@ -47,195 +47,6 @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude) return tag; } -/** - * allocation_tag_mem: - * @env: the cpu environment - * @ptr_mmu_idx: the addressing regime to use for the virtual address - * @ptr: the virtual address for which to look up tag memory - * @ptr_access: the access to use for the virtual address - * @ptr_size: the number of bytes in the normal memory access - * @tag_access: the access to use for the tag memory - * @tag_size: the number of bytes in the tag memory access - * @ra: the return address for exception handling - * - * Our tag memory is formatted as a sequence of little-endian nibbles. - * That is, the byte at (addr >> (LOG2_TAG_GRANULE + 1)) contains two - * tags, with the tag at [3:0] for the lower addr and the tag at [7:4] - * for the higher addr. - * - * Here, resolve the physical address from the virtual address, and return - * a pointer to the corresponding tag byte. Exit with exception if the - * virtual address is not accessible for @ptr_access. - * - * The @ptr_size and @tag_size values may not have an obvious relation - * due to the alignment of @ptr, and the number of tag checks required. - * - * If there is no tag storage corresponding to @ptr, return NULL. 
- */ -static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, - uint64_t ptr, MMUAccessType ptr_access, - int ptr_size, MMUAccessType tag_access, - int tag_size, uintptr_t ra) -{ -#ifdef CONFIG_USER_ONLY - uint64_t clean_ptr = useronly_clean_ptr(ptr); - int flags = page_get_flags(clean_ptr); - uint8_t *tags; - uintptr_t index; - - if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE_ORG : PAGE_READ))) { - /* SIGSEGV */ - arm_cpu_tlb_fill(env_cpu(env), ptr, ptr_size, ptr_access, - ptr_mmu_idx, false, ra); - g_assert_not_reached(); - } - - /* Require both MAP_ANON and PROT_MTE for the page. */ - if (!(flags & PAGE_ANON) || !(flags & PAGE_MTE)) { - return NULL; - } - - tags = page_get_target_data(clean_ptr); - if (tags == NULL) { - size_t alloc_size = TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1); - tags = page_alloc_target_data(clean_ptr, alloc_size); - assert(tags != NULL); - } - - index = extract32(ptr, LOG2_TAG_GRANULE + 1, - TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1); - return tags + index; -#else - uintptr_t index; - CPUIOTLBEntry *iotlbentry; - int in_page, flags; - ram_addr_t ptr_ra; - hwaddr ptr_paddr, tag_paddr, xlat; - MemoryRegion *mr; - ARMASIdx tag_asi; - AddressSpace *tag_as; - void *host; - - /* - * Probe the first byte of the virtual address. This raises an - * exception for inaccessible pages, and resolves the virtual address - * into the softmmu tlb. - * - * When RA == 0, this is for mte_probe. The page is expected to be - * valid. Indicate to probe_access_flags no-fault, then assert that - * we received a valid page. - */ - flags = probe_access_flags(env, ptr, ptr_access, ptr_mmu_idx, - ra == 0, &host, ra); - assert(!(flags & TLB_INVALID_MASK)); - - /* - * Find the iotlbentry for ptr. This *must* be present in the TLB - * because we just found the mapping. - * TODO: Perhaps there should be a cputlb helper that returns a - * matching tlb entry + iotlb entry. - */ - index = tlb_index(env, ptr_mmu_idx, ptr); -# ifdef CONFIG_DEBUG_TCG - { - CPUTLBEntry *entry = tlb_entry(env, ptr_mmu_idx, ptr); - target_ulong comparator = (ptr_access == MMU_DATA_LOAD - ? entry->addr_read - : tlb_addr_write(entry)); - g_assert(tlb_hit(comparator, ptr)); - } -# endif - iotlbentry = &env_tlb(env)->d[ptr_mmu_idx].iotlb[index]; - - /* If the virtual page MemAttr != Tagged, access unchecked. */ - if (!arm_tlb_mte_tagged(&iotlbentry->attrs)) { - return NULL; - } - - /* - * If not backed by host ram, there is no tag storage: access unchecked. - * This is probably a guest os bug though, so log it. - */ - if (unlikely(flags & TLB_MMIO)) { - qemu_log_mask(LOG_GUEST_ERROR, - "Page @ 0x%" PRIx64 " indicates Tagged Normal memory " - "but is not backed by host ram\n", ptr); - return NULL; - } - - /* - * The Normal memory access can extend to the next page. E.g. a single - * 8-byte access to the last byte of a page will check only the last - * tag on the first page. - * Any page access exception has priority over tag check exception. - */ - in_page = -(ptr | TARGET_PAGE_MASK); - if (unlikely(ptr_size > in_page)) { - void *ignore; - flags |= probe_access_flags(env, ptr + in_page, ptr_access, - ptr_mmu_idx, ra == 0, &ignore, ra); - assert(!(flags & TLB_INVALID_MASK)); - } - - /* Any debug exception has priority over a tag check exception. */ - if (unlikely(flags & TLB_WATCHPOINT)) { - int wp = ptr_access == MMU_DATA_LOAD ? 
BP_MEM_READ : BP_MEM_WRITE; - assert(ra != 0); - cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, - iotlbentry->attrs, wp, ra); - } - - /* - * Find the physical address within the normal mem space. - * The memory region lookup must succeed because TLB_MMIO was - * not set in the cputlb lookup above. - */ - mr = memory_region_from_host(host, &ptr_ra); - tcg_debug_assert(mr != NULL); - tcg_debug_assert(memory_region_is_ram(mr)); - ptr_paddr = ptr_ra; - do { - ptr_paddr += mr->addr; - mr = mr->container; - } while (mr); - - /* Convert to the physical address in tag space. */ - tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1); - - /* Look up the address in tag space. */ - tag_asi = iotlbentry->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS; - tag_as = cpu_get_address_space(env_cpu(env), tag_asi); - mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL, - tag_access == MMU_DATA_STORE, - iotlbentry->attrs); - - /* - * Note that @mr will never be NULL. If there is nothing in the address - * space at @tag_paddr, the translation will return the unallocated memory - * region. For our purposes, the result must be ram. - */ - if (unlikely(!memory_region_is_ram(mr))) { - /* ??? Failure is a board configuration error. */ - qemu_log_mask(LOG_UNIMP, - "Tag Memory @ 0x%" HWADDR_PRIx " not found for " - "Normal Memory @ 0x%" HWADDR_PRIx "\n", - tag_paddr, ptr_paddr); - return NULL; - } - - /* - * Ensure the tag memory is dirty on write, for migration. - * Tag memory can never contain code or display memory (vga). - */ - if (tag_access == MMU_DATA_STORE) { - ram_addr_t tag_ra = memory_region_get_ram_addr(mr) + xlat; - cpu_physical_memory_set_dirty_flag(tag_ra, DIRTY_MEMORY_MIGRATION); - } - - return memory_region_get_ram_ptr(mr) + xlat; -#endif -} - uint64_t HELPER(irg)(CPUARMState *env, uint64_t rn, uint64_t rm) { uint16_t exclude = extract32(rm | env->cp15.gcr_el1, 0, 16); diff --git a/target/arm/tcg/sysemu/mte_helper.c b/target/arm/tcg/sysemu/mte_helper.c new file mode 100644 index 0000000000..e333324437 --- /dev/null +++ b/target/arm/tcg/sysemu/mte_helper.c @@ -0,0 +1,159 @@ +/* + * ARM v8.5-MemTag Operations - System Emulation + * + * Copyright (c) 2020 Linaro, Ltd. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "internals.h" +#include "exec/exec-all.h" +#include "exec/ram_addr.h" +#include "tcg/mte_helper.h" + +uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, + uint64_t ptr, MMUAccessType ptr_access, + int ptr_size, MMUAccessType tag_access, + int tag_size, uintptr_t ra) +{ + uintptr_t index; + CPUIOTLBEntry *iotlbentry; + int in_page, flags; + ram_addr_t ptr_ra; + hwaddr ptr_paddr, tag_paddr, xlat; + MemoryRegion *mr; + ARMASIdx tag_asi; + AddressSpace *tag_as; + void *host; + + /* + * Probe the first byte of the virtual address. 
This raises an + * exception for inaccessible pages, and resolves the virtual address + * into the softmmu tlb. + * + * When RA == 0, this is for mte_probe. The page is expected to be + * valid. Indicate to probe_access_flags no-fault, then assert that + * we received a valid page. + */ + flags = probe_access_flags(env, ptr, ptr_access, ptr_mmu_idx, + ra == 0, &host, ra); + assert(!(flags & TLB_INVALID_MASK)); + + /* + * Find the iotlbentry for ptr. This *must* be present in the TLB + * because we just found the mapping. + * TODO: Perhaps there should be a cputlb helper that returns a + * matching tlb entry + iotlb entry. + */ + index = tlb_index(env, ptr_mmu_idx, ptr); +# ifdef CONFIG_DEBUG_TCG + { + CPUTLBEntry *entry = tlb_entry(env, ptr_mmu_idx, ptr); + target_ulong comparator = (ptr_access == MMU_DATA_LOAD + ? entry->addr_read + : tlb_addr_write(entry)); + g_assert(tlb_hit(comparator, ptr)); + } +# endif + iotlbentry = &env_tlb(env)->d[ptr_mmu_idx].iotlb[index]; + + /* If the virtual page MemAttr != Tagged, access unchecked. */ + if (!arm_tlb_mte_tagged(&iotlbentry->attrs)) { + return NULL; + } + + /* + * If not backed by host ram, there is no tag storage: access unchecked. + * This is probably a guest os bug though, so log it. + */ + if (unlikely(flags & TLB_MMIO)) { + qemu_log_mask(LOG_GUEST_ERROR, + "Page @ 0x%" PRIx64 " indicates Tagged Normal memory " + "but is not backed by host ram\n", ptr); + return NULL; + } + + /* + * The Normal memory access can extend to the next page. E.g. a single + * 8-byte access to the last byte of a page will check only the last + * tag on the first page. + * Any page access exception has priority over tag check exception. + */ + in_page = -(ptr | TARGET_PAGE_MASK); + if (unlikely(ptr_size > in_page)) { + void *ignore; + flags |= probe_access_flags(env, ptr + in_page, ptr_access, + ptr_mmu_idx, ra == 0, &ignore, ra); + assert(!(flags & TLB_INVALID_MASK)); + } + + /* Any debug exception has priority over a tag check exception. */ + if (unlikely(flags & TLB_WATCHPOINT)) { + int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE; + assert(ra != 0); + cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, + iotlbentry->attrs, wp, ra); + } + + /* + * Find the physical address within the normal mem space. + * The memory region lookup must succeed because TLB_MMIO was + * not set in the cputlb lookup above. + */ + mr = memory_region_from_host(host, &ptr_ra); + tcg_debug_assert(mr != NULL); + tcg_debug_assert(memory_region_is_ram(mr)); + ptr_paddr = ptr_ra; + do { + ptr_paddr += mr->addr; + mr = mr->container; + } while (mr); + + /* Convert to the physical address in tag space. */ + tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1); + + /* Look up the address in tag space. */ + tag_asi = iotlbentry->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS; + tag_as = cpu_get_address_space(env_cpu(env), tag_asi); + mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL, + tag_access == MMU_DATA_STORE, + iotlbentry->attrs); + + /* + * Note that @mr will never be NULL. If there is nothing in the address + * space at @tag_paddr, the translation will return the unallocated memory + * region. For our purposes, the result must be ram. + */ + if (unlikely(!memory_region_is_ram(mr))) { + /* ??? Failure is a board configuration error. */ + qemu_log_mask(LOG_UNIMP, + "Tag Memory @ 0x%" HWADDR_PRIx " not found for " + "Normal Memory @ 0x%" HWADDR_PRIx "\n", + tag_paddr, ptr_paddr); + return NULL; + } + + /* + * Ensure the tag memory is dirty on write, for migration. 
+ * Tag memory can never contain code or display memory (vga). + */ + if (tag_access == MMU_DATA_STORE) { + ram_addr_t tag_ra = memory_region_get_ram_addr(mr) + xlat; + cpu_physical_memory_set_dirty_flag(tag_ra, DIRTY_MEMORY_MIGRATION); + } + + return memory_region_get_ram_ptr(mr) + xlat; +} diff --git a/target/arm/tcg/user/mte_helper.c b/target/arm/tcg/user/mte_helper.c new file mode 100644 index 0000000000..610a85dc59 --- /dev/null +++ b/target/arm/tcg/user/mte_helper.c @@ -0,0 +1,57 @@ +/* + * ARM v8.5-MemTag Operations - User-mode + * + * Copyright (c) 2020 Linaro, Ltd. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "internals.h" +#include "tcg/mte_helper.h" + +uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx, + uint64_t ptr, MMUAccessType ptr_access, + int ptr_size, MMUAccessType tag_access, + int tag_size, uintptr_t ra) +{ + uint64_t clean_ptr = useronly_clean_ptr(ptr); + int flags = page_get_flags(clean_ptr); + uint8_t *tags; + uintptr_t index; + + if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE_ORG : PAGE_READ))) { + /* SIGSEGV */ + arm_cpu_tlb_fill(env_cpu(env), ptr, ptr_size, ptr_access, + ptr_mmu_idx, false, ra); + g_assert_not_reached(); + } + + /* Require both MAP_ANON and PROT_MTE for the page. 
 */
+    if (!(flags & PAGE_ANON) || !(flags & PAGE_MTE)) {
+        return NULL;
+    }
+
+    tags = page_get_target_data(clean_ptr);
+    if (tags == NULL) {
+        size_t alloc_size = TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1);
+        tags = page_alloc_target_data(clean_ptr, alloc_size);
+        assert(tags != NULL);
+    }
+
+    index = extract32(ptr, LOG2_TAG_GRANULE + 1,
+                      TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
+    return tags + index;
+}
diff --git a/target/arm/tcg/sysemu/meson.build b/target/arm/tcg/sysemu/meson.build
index 726387b0b3..6f014f77ec 100644
--- a/target/arm/tcg/sysemu/meson.build
+++ b/target/arm/tcg/sysemu/meson.build
@@ -1,2 +1,3 @@
 arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files(
+  'mte_helper.c',
 ))
diff --git a/target/arm/tcg/user/meson.build b/target/arm/tcg/user/meson.build
index 7af3311190..e681e5f5a1 100644
--- a/target/arm/tcg/user/meson.build
+++ b/target/arm/tcg/user/meson.build
@@ -1,2 +1,3 @@
 arm_user_ss.add(when: 'CONFIG_TCG', if_true: files(
+  'mte_helper.c',
 ))

From patchwork Fri Jun 4 15:51:53 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454147
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 20/99] target/arm: tcg: move sysemu-only parts of debug_helper
Date: Fri, 4 Jun 2021 16:51:53 +0100
Message-Id: <20210604155312.15902-21-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>
Cc: Alex Bennée, qemu-arm@nongnu.org, Richard Henderson, Claudio Fontana, Peter Maydell

From: Claudio Fontana

move sysemu-only parts of debug_helper to sysemu/

Signed-off-by: Claudio Fontana
Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Alex Bennée
---
 target/arm/tcg/debug_helper.c        | 27 -----------------------
 target/arm/tcg/sysemu/debug_helper.c | 33 ++++++++++++++++++++++++++++
 target/arm/tcg/sysemu/meson.build    |  1 +
 3 files changed, 34 insertions(+), 27 deletions(-)
 create mode 100644 target/arm/tcg/sysemu/debug_helper.c

-- 
2.20.1

diff --git a/target/arm/tcg/debug_helper.c b/target/arm/tcg/debug_helper.c
index 2ff72d47d1..66a0915393 100644
--- a/target/arm/tcg/debug_helper.c
+++ b/target/arm/tcg/debug_helper.c
@@ -308,30 +308,3 @@ void arm_debug_excp_handler(CPUState *cs)
                       arm_debug_target_el(env));
     }
 }
-
-#if !defined(CONFIG_USER_ONLY)
-
-vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
-{
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-
-    /*
-     * In BE32 system mode, target memory is stored byteswapped (on a
-     * little-endian host system), and by the time we reach here (via an
-     * opcode helper) the addresses of subword accesses have been adjusted
-     * to account for that, which means that watchpoints will not match.
-     * Undo the adjustment here.
-     */
-    if (arm_sctlr_b(env)) {
-        if (len == 1) {
-            addr ^= 3;
-        } else if (len == 2) {
-            addr ^= 2;
-        }
-    }
-
-    return addr;
-}
-
-#endif
diff --git a/target/arm/tcg/sysemu/debug_helper.c b/target/arm/tcg/sysemu/debug_helper.c
new file mode 100644
index 0000000000..0bce00144f
--- /dev/null
+++ b/target/arm/tcg/sysemu/debug_helper.c
@@ -0,0 +1,33 @@
+/*
+ * ARM debug helpers.
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+
+vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
+{
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+
+    /*
+     * In BE32 system mode, target memory is stored byteswapped (on a
+     * little-endian host system), and by the time we reach here (via an
+     * opcode helper) the addresses of subword accesses have been adjusted
+     * to account for that, which means that watchpoints will not match.
+     * Undo the adjustment here.
+     */
+    if (arm_sctlr_b(env)) {
+        if (len == 1) {
+            addr ^= 3;
+        } else if (len == 2) {
+            addr ^= 2;
+        }
+    }
+
+    return addr;
+}
diff --git a/target/arm/tcg/sysemu/meson.build b/target/arm/tcg/sysemu/meson.build
index 6f014f77ec..1a4d7a0940 100644
--- a/target/arm/tcg/sysemu/meson.build
+++ b/target/arm/tcg/sysemu/meson.build
@@ -1,3 +1,4 @@
 arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files(
+  'debug_helper.c',
   'mte_helper.c',
 ))

From patchwork Fri Jun 4 15:51:54 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454069
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 21/99] target/arm: tcg: split tlb_helper user-only and sysemu-only parts
Date: Fri, 4 Jun 2021 16:51:54 +0100
Message-Id: <20210604155312.15902-22-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>
Cc: Alex Bennée, qemu-arm@nongnu.org, Richard Henderson, Claudio Fontana, Peter Maydell

From: Claudio Fontana

Signed-off-by: Claudio Fontana
Reviewed-by: Richard Henderson
Reviewed-by: Alex Bennée
Signed-off-by: Alex Bennée
---
 target/arm/tcg/tlb_helper.h        | 17 ++++
 target/arm/tcg/sysemu/tlb_helper.c | 83 +++++++++++++++++++++++++
 target/arm/tcg/tlb_helper.c        | 97 ++----------------------
 target/arm/tcg/user/tlb_helper.c   | 32 ++++++++++
 target/arm/tcg/sysemu/meson.build  |  1 +
 target/arm/tcg/user/meson.build    |  1 +
 6 files changed, 138 insertions(+), 93 deletions(-)
 create mode 100644 target/arm/tcg/tlb_helper.h
 create mode 100644 target/arm/tcg/sysemu/tlb_helper.c
 create mode 100644 target/arm/tcg/user/tlb_helper.c

-- 
2.20.1

diff --git a/target/arm/tcg/tlb_helper.h b/target/arm/tcg/tlb_helper.h
new file mode 100644
index 0000000000..6ce3d315cf
--- /dev/null
+++ b/target/arm/tcg/tlb_helper.h
@@ -0,0 +1,17 @@
+/*
+ * ARM TLB (Translation lookaside buffer) helpers.
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#ifndef TLB_HELPER_H
+#define TLB_HELPER_H
+
+#include "cpu.h"
+
+void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr,
+                                     MMUAccessType access_type,
+                                     int mmu_idx, ARMMMUFaultInfo *fi);
+
+#endif /* TLB_HELPER_H */
diff --git a/target/arm/tcg/sysemu/tlb_helper.c b/target/arm/tcg/sysemu/tlb_helper.c
new file mode 100644
index 0000000000..586f602989
--- /dev/null
+++ b/target/arm/tcg/sysemu/tlb_helper.c
@@ -0,0 +1,83 @@
+/*
+ * ARM TLB (Translation lookaside buffer) helpers.
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ * + * SPDX-License-Identifier: GPL-2.0-or-later + */ +#include "qemu/osdep.h" +#include "cpu.h" +#include "internals.h" +#include "exec/exec-all.h" +#include "tcg/tlb_helper.h" + +/* + * arm_cpu_do_transaction_failed: handle a memory system error response + * (eg "no device/memory present at address") by raising an external abort + * exception + */ +void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr, + vaddr addr, unsigned size, + MMUAccessType access_type, + int mmu_idx, MemTxAttrs attrs, + MemTxResult response, uintptr_t retaddr) +{ + ARMCPU *cpu = ARM_CPU(cs); + ARMMMUFaultInfo fi = {}; + + /* now we have a real cpu fault */ + cpu_restore_state(cs, retaddr, true); + + fi.ea = arm_extabort_type(response); + fi.type = ARMFault_SyncExternal; + arm_deliver_fault(cpu, addr, access_type, mmu_idx, &fi); +} + +bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size, + MMUAccessType access_type, int mmu_idx, + bool probe, uintptr_t retaddr) +{ + ARMCPU *cpu = ARM_CPU(cs); + ARMMMUFaultInfo fi = {}; + hwaddr phys_addr; + target_ulong page_size; + int prot, ret; + MemTxAttrs attrs = {}; + ARMCacheAttrs cacheattrs = {}; + + /* + * Walk the page table and (if the mapping exists) add the page + * to the TLB. On success, return true. Otherwise, if probing, + * return false. Otherwise populate fsr with ARM DFSR/IFSR fault + * register format, and signal the fault. + */ + ret = get_phys_addr(&cpu->env, address, access_type, + core_to_arm_mmu_idx(&cpu->env, mmu_idx), + &phys_addr, &attrs, &prot, &page_size, + &fi, &cacheattrs); + if (likely(!ret)) { + /* + * Map a single [sub]page. Regions smaller than our declared + * target page size are handled specially, so for those we + * pass in the exact addresses. + */ + if (page_size >= TARGET_PAGE_SIZE) { + phys_addr &= TARGET_PAGE_MASK; + address &= TARGET_PAGE_MASK; + } + /* Notice and record tagged memory. 
*/ + if (cpu_isar_feature(aa64_mte, cpu) && cacheattrs.attrs == 0xf0) { + arm_tlb_mte_tagged(&attrs) = true; + } + + tlb_set_page_with_attrs(cs, address, phys_addr, attrs, + prot, mmu_idx, page_size); + return true; + } else if (probe) { + return false; + } else { + /* now we have a real cpu fault */ + cpu_restore_state(cs, retaddr, true); + arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi); + } +} diff --git a/target/arm/tcg/tlb_helper.c b/target/arm/tcg/tlb_helper.c index 3107f9823e..77aefc274d 100644 --- a/target/arm/tcg/tlb_helper.c +++ b/target/arm/tcg/tlb_helper.c @@ -9,6 +9,7 @@ #include "cpu.h" #include "internals.h" #include "exec/exec-all.h" +#include "tcg/tlb_helper.h" static inline uint32_t merge_syn_data_abort(uint32_t template_syn, unsigned int target_el, @@ -49,9 +50,9 @@ static inline uint32_t merge_syn_data_abort(uint32_t template_syn, return syn; } -static void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr, - MMUAccessType access_type, - int mmu_idx, ARMMMUFaultInfo *fi) +void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr, + MMUAccessType access_type, + int mmu_idx, ARMMMUFaultInfo *fi) { CPUARMState *env = &cpu->env; int target_el; @@ -122,93 +123,3 @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr, fi.type = ARMFault_Alignment; arm_deliver_fault(cpu, vaddr, access_type, mmu_idx, &fi); } - -#if !defined(CONFIG_USER_ONLY) - -/* - * arm_cpu_do_transaction_failed: handle a memory system error response - * (eg "no device/memory present at address") by raising an external abort - * exception - */ -void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr, - vaddr addr, unsigned size, - MMUAccessType access_type, - int mmu_idx, MemTxAttrs attrs, - MemTxResult response, uintptr_t retaddr) -{ - ARMCPU *cpu = ARM_CPU(cs); - ARMMMUFaultInfo fi = {}; - - /* now we have a real cpu fault */ - cpu_restore_state(cs, retaddr, true); - - fi.ea = arm_extabort_type(response); - fi.type = ARMFault_SyncExternal; - arm_deliver_fault(cpu, addr, access_type, mmu_idx, &fi); -} - -#endif /* !defined(CONFIG_USER_ONLY) */ - -bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size, - MMUAccessType access_type, int mmu_idx, - bool probe, uintptr_t retaddr) -{ - ARMCPU *cpu = ARM_CPU(cs); - ARMMMUFaultInfo fi = {}; - -#ifdef CONFIG_USER_ONLY - int flags = page_get_flags(useronly_clean_ptr(address)); - if (flags & PAGE_VALID) { - fi.type = ARMFault_Permission; - } else { - fi.type = ARMFault_Translation; - } - fi.level = 3; - - /* now we have a real cpu fault */ - cpu_restore_state(cs, retaddr, true); - arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi); -#else - hwaddr phys_addr; - target_ulong page_size; - int prot, ret; - MemTxAttrs attrs = {}; - ARMCacheAttrs cacheattrs = {}; - - /* - * Walk the page table and (if the mapping exists) add the page - * to the TLB. On success, return true. Otherwise, if probing, - * return false. Otherwise populate fsr with ARM DFSR/IFSR fault - * register format, and signal the fault. - */ - ret = get_phys_addr(&cpu->env, address, access_type, - core_to_arm_mmu_idx(&cpu->env, mmu_idx), - &phys_addr, &attrs, &prot, &page_size, - &fi, &cacheattrs); - if (likely(!ret)) { - /* - * Map a single [sub]page. Regions smaller than our declared - * target page size are handled specially, so for those we - * pass in the exact addresses. - */ - if (page_size >= TARGET_PAGE_SIZE) { - phys_addr &= TARGET_PAGE_MASK; - address &= TARGET_PAGE_MASK; - } - /* Notice and record tagged memory. 
*/ - if (cpu_isar_feature(aa64_mte, cpu) && cacheattrs.attrs == 0xf0) { - arm_tlb_mte_tagged(&attrs) = true; - } - - tlb_set_page_with_attrs(cs, address, phys_addr, attrs, - prot, mmu_idx, page_size); - return true; - } else if (probe) { - return false; - } else { - /* now we have a real cpu fault */ - cpu_restore_state(cs, retaddr, true); - arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi); - } -#endif -} diff --git a/target/arm/tcg/user/tlb_helper.c b/target/arm/tcg/user/tlb_helper.c new file mode 100644 index 0000000000..9f24c96ba0 --- /dev/null +++ b/target/arm/tcg/user/tlb_helper.c @@ -0,0 +1,32 @@ +/* + * ARM TLB (Translation lookaside buffer) helpers. + * + * This code is licensed under the GNU GPL v2 or later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ +#include "qemu/osdep.h" +#include "cpu.h" +#include "internals.h" +#include "exec/exec-all.h" +#include "tcg/tlb_helper.h" + +bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size, + MMUAccessType access_type, int mmu_idx, + bool probe, uintptr_t retaddr) +{ + ARMCPU *cpu = ARM_CPU(cs); + ARMMMUFaultInfo fi = {}; + + int flags = page_get_flags(useronly_clean_ptr(address)); + if (flags & PAGE_VALID) { + fi.type = ARMFault_Permission; + } else { + fi.type = ARMFault_Translation; + } + fi.level = 3; + + /* now we have a real cpu fault */ + cpu_restore_state(cs, retaddr, true); + arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi); +} diff --git a/target/arm/tcg/sysemu/meson.build b/target/arm/tcg/sysemu/meson.build index 1a4d7a0940..8f5e955cbd 100644 --- a/target/arm/tcg/sysemu/meson.build +++ b/target/arm/tcg/sysemu/meson.build @@ -1,4 +1,5 @@ arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files( 'debug_helper.c', 'mte_helper.c', + 'tlb_helper.c', )) diff --git a/target/arm/tcg/user/meson.build b/target/arm/tcg/user/meson.build index e681e5f5a1..cdca5d970c 100644 --- a/target/arm/tcg/user/meson.build +++ b/target/arm/tcg/user/meson.build @@ -1,3 +1,4 @@ arm_user_ss.add(when: 'CONFIG_TCG', if_true: files( 'mte_helper.c', + 'tlb_helper.c', )) From patchwork Fri Jun 4 15:51:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454133 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp610768jae; Fri, 4 Jun 2021 10:25:37 -0700 (PDT) X-Google-Smtp-Source: ABdhPJy29cmZT+AFWkLjStCQ5agD/HU/AgsD8QG2SpDY1JM0D81ImqijvM7mGwCosP7wa6CY/Rpf X-Received: by 2002:a1f:bf12:: with SMTP id p18mr3378356vkf.5.1622827537141; Fri, 04 Jun 2021 10:25:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622827537; cv=none; d=google.com; s=arc-20160816; b=etoLmoZKY8RUjhs0SV9Ov+sljbN4VBB/FuEsTxnB3hPAZWo9YJEAK3qipU/I4ptK8F vknZss7UityumJQNq4c5STO7Npj3t4CVO29SK812qvb9qyqhg98JA1QNODRHDUo1oyIG 1b6NyEL/5vvGqE66hU77LQKWp6K/hyPJuWFoyxIVsCDXnDMZlW7JX3AqYlR+071Hord4 sA+GzxIKLYW9XapE7XCc7CyU444Dnr/KVg1rCpJsCxHlh7PMShkWk1AvFqKaYwVX4KYE K4ctRfprcrzDYDd1A52YCVjYxOqXiMLyR5DvtF2hgNiu9LBabhbzYxtnHonTPEFOclfR 45Gg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=82B8itWPwsFy+XyKLmtn3Pgiv8Zj982nxS6sGKaSrzw=; b=0YEmIFh7x09u03anZirGvVc4+pd6gnM2Cte28RYVoRFbIRldrxgdjE67XaioKks79r IQa0aEzK76X9Zo1tIji5CGw4/JRhJ3+RC5m8FHPjYTV65kTHMToJZI13xhKhJH8fBufk 
Fri, 04 Jun 2021 09:33:00 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id b10sm6726379wrt.24.2021.06.04.09.32.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:32:57 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id CD27B1FF87; Fri, 4 Jun 2021 16:53:14 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 22/99] target/arm: tcg: split m_helper user-only and sysemu-only parts Date: Fri, 4 Jun 2021 16:51:55 +0100 Message-Id: <20210604155312.15902-23-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32b; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana in the process remove a few CONFIG_TCG that are superfluous now. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/tcg/m_helper.h | 21 + target/arm/tcg/m_helper.c | 2767 +---------------------------- target/arm/tcg/sysemu/m_helper.c | 2655 +++++++++++++++++++++++++++ target/arm/tcg/user/m_helper.c | 97 + target/arm/tcg/sysemu/meson.build | 1 + target/arm/tcg/user/meson.build | 1 + 6 files changed, 2780 insertions(+), 2762 deletions(-) create mode 100644 target/arm/tcg/m_helper.h create mode 100644 target/arm/tcg/sysemu/m_helper.c create mode 100644 target/arm/tcg/user/m_helper.c -- 2.20.1 diff --git a/target/arm/tcg/m_helper.h b/target/arm/tcg/m_helper.h new file mode 100644 index 0000000000..9da106aa65 --- /dev/null +++ b/target/arm/tcg/m_helper.h @@ -0,0 +1,21 @@ +/* + * ARM v7m generic helpers. + * + * This code is licensed under the GNU GPL v2 or later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#ifndef M_HELPER_H +#define M_HELPER_H + +#include "cpu.h" + +void v7m_msr_xpsr(CPUARMState *env, uint32_t mask, + uint32_t reg, uint32_t val); + +uint32_t v7m_mrs_xpsr(CPUARMState *env, uint32_t reg, unsigned el); + +uint32_t v7m_mrs_control(CPUARMState *env, uint32_t secure); + +#endif /* M_HELPER_H */ diff --git a/target/arm/tcg/m_helper.c b/target/arm/tcg/m_helper.c index eda74e5545..8f3763155f 100644 --- a/target/arm/tcg/m_helper.c +++ b/target/arm/tcg/m_helper.c @@ -1,5 +1,5 @@ /* - * ARM generic helpers. + * ARM v7m generic helpers. * * This code is licensed under the GNU GPL v2 or later. 
* @@ -7,35 +7,11 @@ */ #include "qemu/osdep.h" -#include "qemu/units.h" -#include "target/arm/idau.h" -#include "trace.h" #include "cpu.h" #include "internals.h" -#include "exec/gdbstub.h" -#include "exec/helper-proto.h" -#include "qemu/host-utils.h" -#include "qemu/main-loop.h" -#include "qemu/bitops.h" -#include "qemu/crc32c.h" -#include "qemu/qemu-print.h" -#include "exec/exec-all.h" -#include /* For crc32 */ -#include "semihosting/semihost.h" -#include "sysemu/cpus.h" -#include "sysemu/kvm.h" -#include "qemu/range.h" -#include "qapi/qapi-commands-machine-target.h" -#include "qapi/error.h" -#include "qemu/guest-random.h" -#ifdef CONFIG_TCG -#include "arm_ldst.h" -#include "exec/cpu_ldst.h" -#include "semihosting/common-semi.h" -#endif +#include "tcg/m_helper.h" -static void v7m_msr_xpsr(CPUARMState *env, uint32_t mask, - uint32_t reg, uint32_t val) +void v7m_msr_xpsr(CPUARMState *env, uint32_t mask, uint32_t reg, uint32_t val) { /* Only APSR is actually writable */ if (!(reg & 4)) { @@ -51,7 +27,7 @@ static void v7m_msr_xpsr(CPUARMState *env, uint32_t mask, } } -static uint32_t v7m_mrs_xpsr(CPUARMState *env, uint32_t reg, unsigned el) +uint32_t v7m_mrs_xpsr(CPUARMState *env, uint32_t reg, unsigned el) { uint32_t mask = 0; @@ -68,7 +44,7 @@ static uint32_t v7m_mrs_xpsr(CPUARMState *env, uint32_t reg, unsigned el) return xpsr_read(env) & mask; } -static uint32_t v7m_mrs_control(CPUARMState *env, uint32_t secure) +uint32_t v7m_mrs_control(CPUARMState *env, uint32_t secure) { uint32_t value = env->v7m.control[secure]; @@ -79,2739 +55,6 @@ static uint32_t v7m_mrs_control(CPUARMState *env, uint32_t secure) return value; } -#ifdef CONFIG_USER_ONLY - -void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val) -{ - uint32_t mask = extract32(maskreg, 8, 4); - uint32_t reg = extract32(maskreg, 0, 8); - - switch (reg) { - case 0 ... 7: /* xPSR sub-fields */ - v7m_msr_xpsr(env, mask, reg, val); - break; - case 20: /* CONTROL */ - /* There are no sub-fields that are actually writable from EL0. */ - break; - default: - /* Unprivileged writes to other registers are ignored */ - break; - } -} - -uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg) -{ - switch (reg) { - case 0 ... 7: /* xPSR sub-fields */ - return v7m_mrs_xpsr(env, reg, 0); - case 20: /* CONTROL */ - return v7m_mrs_control(env, 0); - default: - /* Unprivileged reads others as zero. */ - return 0; - } -} - -void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) -{ - /* translate.c should never generate calls here in user-only mode */ - g_assert_not_reached(); -} - -void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) -{ - /* translate.c should never generate calls here in user-only mode */ - g_assert_not_reached(); -} - -void HELPER(v7m_preserve_fp_state)(CPUARMState *env) -{ - /* translate.c should never generate calls here in user-only mode */ - g_assert_not_reached(); -} - -void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr) -{ - /* translate.c should never generate calls here in user-only mode */ - g_assert_not_reached(); -} - -void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr) -{ - /* translate.c should never generate calls here in user-only mode */ - g_assert_not_reached(); -} - -uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) -{ - /* - * The TT instructions can be used by unprivileged code, but in - * user-only emulation we don't have the MPU. 
- * Luckily since we know we are NonSecure unprivileged (and that in - * turn means that the A flag wasn't specified), all the bits in the - * register must be zero: - * IREGION: 0 because IRVALID is 0 - * IRVALID: 0 because NS - * S: 0 because NS - * NSRW: 0 because NS - * NSR: 0 because NS - * RW: 0 because unpriv and A flag not set - * R: 0 because unpriv and A flag not set - * SRVALID: 0 because NS - * MRVALID: 0 because unpriv and A flag not set - * SREGION: 0 becaus SRVALID is 0 - * MREGION: 0 because MRVALID is 0 - */ - return 0; -} - -#else - -/* - * What kind of stack write are we doing? This affects how exceptions - * generated during the stacking are treated. - */ -typedef enum StackingMode { - STACK_NORMAL, - STACK_IGNFAULTS, - STACK_LAZYFP, -} StackingMode; - -static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value, - ARMMMUIdx mmu_idx, StackingMode mode) -{ - CPUState *cs = CPU(cpu); - CPUARMState *env = &cpu->env; - MemTxAttrs attrs = {}; - MemTxResult txres; - target_ulong page_size; - hwaddr physaddr; - int prot; - ARMMMUFaultInfo fi = {}; - ARMCacheAttrs cacheattrs = {}; - bool secure = mmu_idx & ARM_MMU_IDX_M_S; - int exc; - bool exc_secure; - - if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr, - &attrs, &prot, &page_size, &fi, &cacheattrs)) { - /* MPU/SAU lookup failed */ - if (fi.type == ARMFault_QEMU_SFault) { - if (mode == STACK_LAZYFP) { - qemu_log_mask(CPU_LOG_INT, - "...SecureFault with SFSR.LSPERR " - "during lazy stacking\n"); - env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK; - } else { - qemu_log_mask(CPU_LOG_INT, - "...SecureFault with SFSR.AUVIOL " - "during stacking\n"); - env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK; - } - env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK; - env->v7m.sfar = addr; - exc = ARMV7M_EXCP_SECURE; - exc_secure = false; - } else { - if (mode == STACK_LAZYFP) { - qemu_log_mask(CPU_LOG_INT, - "...MemManageFault with CFSR.MLSPERR\n"); - env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK; - } else { - qemu_log_mask(CPU_LOG_INT, - "...MemManageFault with CFSR.MSTKERR\n"); - env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK; - } - exc = ARMV7M_EXCP_MEM; - exc_secure = secure; - } - goto pend_fault; - } - address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value, - attrs, &txres); - if (txres != MEMTX_OK) { - /* BusFault trying to write the data */ - if (mode == STACK_LAZYFP) { - qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n"); - env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK; - } else { - qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n"); - env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK; - } - exc = ARMV7M_EXCP_BUS; - exc_secure = false; - goto pend_fault; - } - return true; - -pend_fault: - /* - * By pending the exception at this point we are making - * the IMPDEF choice "overridden exceptions pended" (see the - * MergeExcInfo() pseudocode). The other choice would be to not - * pend them now and then make a choice about which to throw away - * later if we have two derived exceptions. - * The only case when we must not pend the exception but instead - * throw it away is if we are doing the push of the callee registers - * and we've already generated a derived exception (this is indicated - * by the caller passing STACK_IGNFAULTS). Even in this case we will - * still update the fault status registers. 
- */ - switch (mode) { - case STACK_NORMAL: - armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure); - break; - case STACK_LAZYFP: - armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure); - break; - case STACK_IGNFAULTS: - break; - } - return false; -} - -static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr, - ARMMMUIdx mmu_idx) -{ - CPUState *cs = CPU(cpu); - CPUARMState *env = &cpu->env; - MemTxAttrs attrs = {}; - MemTxResult txres; - target_ulong page_size; - hwaddr physaddr; - int prot; - ARMMMUFaultInfo fi = {}; - ARMCacheAttrs cacheattrs = {}; - bool secure = mmu_idx & ARM_MMU_IDX_M_S; - int exc; - bool exc_secure; - uint32_t value; - - if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr, - &attrs, &prot, &page_size, &fi, &cacheattrs)) { - /* MPU/SAU lookup failed */ - if (fi.type == ARMFault_QEMU_SFault) { - qemu_log_mask(CPU_LOG_INT, - "...SecureFault with SFSR.AUVIOL during unstack\n"); - env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK; - env->v7m.sfar = addr; - exc = ARMV7M_EXCP_SECURE; - exc_secure = false; - } else { - qemu_log_mask(CPU_LOG_INT, - "...MemManageFault with CFSR.MUNSTKERR\n"); - env->v7m.cfsr[secure] |= R_V7M_CFSR_MUNSTKERR_MASK; - exc = ARMV7M_EXCP_MEM; - exc_secure = secure; - } - goto pend_fault; - } - - value = address_space_ldl(arm_addressspace(cs, attrs), physaddr, - attrs, &txres); - if (txres != MEMTX_OK) { - /* BusFault trying to read the data */ - qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n"); - env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_UNSTKERR_MASK; - exc = ARMV7M_EXCP_BUS; - exc_secure = false; - goto pend_fault; - } - - *dest = value; - return true; - -pend_fault: - /* - * By pending the exception at this point we are making - * the IMPDEF choice "overridden exceptions pended" (see the - * MergeExcInfo() pseudocode). The other choice would be to not - * pend them now and then make a choice about which to throw away - * later if we have two derived exceptions. - */ - armv7m_nvic_set_pending(env->nvic, exc, exc_secure); - return false; -} - -void HELPER(v7m_preserve_fp_state)(CPUARMState *env) -{ - /* - * Preserve FP state (because LSPACT was set and we are about - * to execute an FP instruction). This corresponds to the - * PreserveFPState() pseudocode. - * We may throw an exception if the stacking fails. 
- */ - ARMCPU *cpu = env_archcpu(env); - bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK; - bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK); - bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK); - bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK; - uint32_t fpcar = env->v7m.fpcar[is_secure]; - bool stacked_ok = true; - bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK); - bool take_exception; - - /* Take the iothread lock as we are going to touch the NVIC */ - qemu_mutex_lock_iothread(); - - /* Check the background context had access to the FPU */ - if (!v7m_cpacr_pass(env, is_secure, is_priv)) { - armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure); - env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK; - stacked_ok = false; - } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) { - armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S); - env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK; - stacked_ok = false; - } - - if (!splimviol && stacked_ok) { - /* We only stack if the stack limit wasn't violated */ - int i; - ARMMMUIdx mmu_idx; - - mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri); - for (i = 0; i < (ts ? 32 : 16); i += 2) { - uint64_t dn = *aa32_vfp_dreg(env, i / 2); - uint32_t faddr = fpcar + 4 * i; - uint32_t slo = extract64(dn, 0, 32); - uint32_t shi = extract64(dn, 32, 32); - - if (i >= 16) { - faddr += 8; /* skip the slot for the FPSCR */ - } - stacked_ok = stacked_ok && - v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) && - v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP); - } - - stacked_ok = stacked_ok && - v7m_stack_write(cpu, fpcar + 0x40, - vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP); - } - - /* - * We definitely pended an exception, but it's possible that it - * might not be able to be taken now. If its priority permits us - * to take it now, then we must not update the LSPACT or FP regs, - * but instead jump out to take the exception immediately. - * If it's just pending and won't be taken until the current - * handler exits, then we do update LSPACT and the FP regs. - */ - take_exception = !stacked_ok && - armv7m_nvic_can_take_pending_exception(env->nvic); - - qemu_mutex_unlock_iothread(); - - if (take_exception) { - raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC()); - } - - env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK; - - if (ts) { - /* Clear s0 to s31 and the FPSCR */ - int i; - - for (i = 0; i < 32; i += 2) { - *aa32_vfp_dreg(env, i / 2) = 0; - } - vfp_set_fpscr(env, 0); - } - /* - * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them - * unchanged. - */ -} - -/* - * Write to v7M CONTROL.SPSEL bit for the specified security bank. - * This may change the current stack pointer between Main and Process - * stack pointers if it is done for the CONTROL register for the current - * security state. - */ -static void write_v7m_control_spsel_for_secstate(CPUARMState *env, - bool new_spsel, - bool secstate) -{ - bool old_is_psp = v7m_using_psp(env); - - env->v7m.control[secstate] = - deposit32(env->v7m.control[secstate], - R_V7M_CONTROL_SPSEL_SHIFT, - R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); - - if (secstate == env->v7m.secure) { - bool new_is_psp = v7m_using_psp(env); - uint32_t tmp; - - if (old_is_psp != new_is_psp) { - tmp = env->v7m.other_sp; - env->v7m.other_sp = env->regs[13]; - env->regs[13] = tmp; - } - } -} - -/* - * Write to v7M CONTROL.SPSEL bit. 
This may change the current - * stack pointer between Main and Process stack pointers. - */ -static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel) -{ - write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure); -} - -void write_v7m_exception(CPUARMState *env, uint32_t new_exc) -{ - /* - * Write a new value to v7m.exception, thus transitioning into or out - * of Handler mode; this may result in a change of active stack pointer. - */ - bool new_is_psp, old_is_psp = v7m_using_psp(env); - uint32_t tmp; - - env->v7m.exception = new_exc; - - new_is_psp = v7m_using_psp(env); - - if (old_is_psp != new_is_psp) { - tmp = env->v7m.other_sp; - env->v7m.other_sp = env->regs[13]; - env->regs[13] = tmp; - } -} - -/* Switch M profile security state between NS and S */ -static void switch_v7m_security_state(CPUARMState *env, bool new_secstate) -{ - uint32_t new_ss_msp, new_ss_psp; - - if (env->v7m.secure == new_secstate) { - return; - } - - /* - * All the banked state is accessed by looking at env->v7m.secure - * except for the stack pointer; rearrange the SP appropriately. - */ - new_ss_msp = env->v7m.other_ss_msp; - new_ss_psp = env->v7m.other_ss_psp; - - if (v7m_using_psp(env)) { - env->v7m.other_ss_psp = env->regs[13]; - env->v7m.other_ss_msp = env->v7m.other_sp; - } else { - env->v7m.other_ss_msp = env->regs[13]; - env->v7m.other_ss_psp = env->v7m.other_sp; - } - - env->v7m.secure = new_secstate; - - if (v7m_using_psp(env)) { - env->regs[13] = new_ss_psp; - env->v7m.other_sp = new_ss_msp; - } else { - env->regs[13] = new_ss_msp; - env->v7m.other_sp = new_ss_psp; - } -} - -void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) -{ - /* - * Handle v7M BXNS: - * - if the return value is a magic value, do exception return (like BX) - * - otherwise bit 0 of the return value is the target security state - */ - uint32_t min_magic; - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - /* Covers FNC_RETURN and EXC_RETURN magic */ - min_magic = FNC_RETURN_MIN_MAGIC; - } else { - /* EXC_RETURN magic only */ - min_magic = EXC_RETURN_MIN_MAGIC; - } - - if (dest >= min_magic) { - /* - * This is an exception return magic value; put it where - * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT. - * Note that if we ever add gen_ss_advance() singlestep support to - * M profile this should count as an "instruction execution complete" - * event (compare gen_bx_excret_final_code()). - */ - env->regs[15] = dest & ~1; - env->thumb = dest & 1; - HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT); - /* notreached */ - } - - /* translate.c should have made BXNS UNDEF unless we're secure */ - assert(env->v7m.secure); - - if (!(dest & 1)) { - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; - } - switch_v7m_security_state(env, dest & 1); - env->thumb = 1; - env->regs[15] = dest & ~1; - arm_rebuild_hflags(env); -} - -void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) -{ - /* - * Handle v7M BLXNS: - * - bit 0 of the destination address is the target security state - */ - - /* At this point regs[15] is the address just after the BLXNS */ - uint32_t nextinst = env->regs[15] | 1; - uint32_t sp = env->regs[13] - 8; - uint32_t saved_psr; - - /* translate.c will have made BLXNS UNDEF unless we're secure */ - assert(env->v7m.secure); - - if (dest & 1) { - /* - * Target is Secure, so this is just a normal BLX, - * except that the low bit doesn't indicate Thumb/not. 
- */ - env->regs[14] = nextinst; - env->thumb = 1; - env->regs[15] = dest & ~1; - return; - } - - /* Target is non-secure: first push a stack frame */ - if (!QEMU_IS_ALIGNED(sp, 8)) { - qemu_log_mask(LOG_GUEST_ERROR, - "BLXNS with misaligned SP is UNPREDICTABLE\n"); - } - - if (sp < v7m_sp_limit(env)) { - raise_exception(env, EXCP_STKOF, 0, 1); - } - - saved_psr = env->v7m.exception; - if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) { - saved_psr |= XPSR_SFPA; - } - - /* Note that these stores can throw exceptions on MPU faults */ - cpu_stl_data_ra(env, sp, nextinst, GETPC()); - cpu_stl_data_ra(env, sp + 4, saved_psr, GETPC()); - - env->regs[13] = sp; - env->regs[14] = 0xfeffffff; - if (arm_v7m_is_handler_mode(env)) { - /* - * Write a dummy value to IPSR, to avoid leaking the current secure - * exception number to non-secure code. This is guaranteed not - * to cause write_v7m_exception() to actually change stacks. - */ - write_v7m_exception(env, 1); - } - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; - switch_v7m_security_state(env, 0); - env->thumb = 1; - env->regs[15] = dest; - arm_rebuild_hflags(env); -} - -static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, - bool spsel) -{ - /* - * Return a pointer to the location where we currently store the - * stack pointer for the requested security state and thread mode. - * This pointer will become invalid if the CPU state is updated - * such that the stack pointers are switched around (eg changing - * the SPSEL control bit). - * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode(). - * Unlike that pseudocode, we require the caller to pass us in the - * SPSEL control bit value; this is because we also use this - * function in handling of pushing of the callee-saves registers - * part of the v8M stack frame (pseudocode PushCalleeStack()), - * and in the tailchain codepath the SPSEL bit comes from the exception - * return magic LR value from the previous exception. The pseudocode - * opencodes the stack-selection in PushCalleeStack(), but we prefer - * to make this utility function generic enough to do the job. - */ - bool want_psp = threadmode && spsel; - - if (secure == env->v7m.secure) { - if (want_psp == v7m_using_psp(env)) { - return &env->regs[13]; - } else { - return &env->v7m.other_sp; - } - } else { - if (want_psp) { - return &env->v7m.other_ss_psp; - } else { - return &env->v7m.other_ss_msp; - } - } -} - -static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure, - uint32_t *pvec) -{ - CPUState *cs = CPU(cpu); - CPUARMState *env = &cpu->env; - MemTxResult result; - uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4; - uint32_t vector_entry; - MemTxAttrs attrs = {}; - ARMMMUIdx mmu_idx; - bool exc_secure; - - mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true); - - /* - * We don't do a get_phys_addr() here because the rules for vector - * loads are special: they always use the default memory map, and - * the default memory map permits reads from all addresses. - * Since there's no easy way to pass through to pmsav8_mpu_lookup() - * that we want this special case which would always say "yes", - * we just do the SAU lookup here followed by a direct physical load. 
- */ - attrs.secure = targets_secure; - attrs.user = false; - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - V8M_SAttributes sattrs = {}; - - v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs); - if (sattrs.ns) { - attrs.secure = false; - } else if (!targets_secure) { - /* - * NS access to S memory: the underlying exception which we escalate - * to HardFault is SecureFault, which always targets Secure. - */ - exc_secure = true; - goto load_fail; - } - } - - vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr, - attrs, &result); - if (result != MEMTX_OK) { - /* - * Underlying exception is BusFault: its target security state - * depends on BFHFNMINS. - */ - exc_secure = !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK); - goto load_fail; - } - *pvec = vector_entry; - return true; - -load_fail: - /* - * All vector table fetch fails are reported as HardFault, with - * HFSR.VECTTBL and .FORCED set. (FORCED is set because - * technically the underlying exception is a SecureFault or BusFault - * that is escalated to HardFault.) This is a terminal exception, - * so we will either take the HardFault immediately or else enter - * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()). - * The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are - * secure); otherwise it targets the same security state as the - * underlying exception. - * In v8.1M HardFaults from vector table fetch fails don't set FORCED. - */ - if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) { - exc_secure = true; - } - env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK; - if (!arm_feature(env, ARM_FEATURE_V8_1M)) { - env->v7m.hfsr |= R_V7M_HFSR_FORCED_MASK; - } - armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure); - return false; -} - -static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr) -{ - /* - * Return the integrity signature value for the callee-saves - * stack frame section. @lr is the exception return payload/LR value - * whose FType bit forms bit 0 of the signature if FP is present. - */ - uint32_t sig = 0xfefa125a; - - if (!cpu_isar_feature(aa32_vfp_simd, env_archcpu(env)) - || (lr & R_V7M_EXCRET_FTYPE_MASK)) { - sig |= 1; - } - return sig; -} - -static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain, - bool ignore_faults) -{ - /* - * For v8M, push the callee-saves register part of the stack frame. - * Compare the v8M pseudocode PushCalleeStack(). - * In the tailchaining case this may not be the current stack. - */ - CPUARMState *env = &cpu->env; - uint32_t *frame_sp_p; - uint32_t frameptr; - ARMMMUIdx mmu_idx; - bool stacked_ok; - uint32_t limit; - bool want_psp; - uint32_t sig; - StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL; - - if (dotailchain) { - bool mode = lr & R_V7M_EXCRET_MODE_MASK; - bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) || - !mode; - - mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv); - frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode, - lr & R_V7M_EXCRET_SPSEL_MASK); - want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK); - if (want_psp) { - limit = env->v7m.psplim[M_REG_S]; - } else { - limit = env->v7m.msplim[M_REG_S]; - } - } else { - mmu_idx = arm_mmu_idx(env); - frame_sp_p = &env->regs[13]; - limit = v7m_sp_limit(env); - } - - frameptr = *frame_sp_p - 0x28; - if (frameptr < limit) { - /* - * Stack limit failure: set SP to the limit value, and generate - * STKOF UsageFault. 
Stack pushes below the limit must not be - * performed. It is IMPDEF whether pushes above the limit are - * performed; we choose not to. - */ - qemu_log_mask(CPU_LOG_INT, - "...STKOF during callee-saves register stacking\n"); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - *frame_sp_p = limit; - return true; - } - - /* - * Write as much of the stack frame as we can. A write failure may - * cause us to pend a derived exception. - */ - sig = v7m_integrity_sig(env, lr); - stacked_ok = - v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) && - v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode); - - /* Update SP regardless of whether any of the stack accesses failed. */ - *frame_sp_p = frameptr; - - return !stacked_ok; -} - -static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain, - bool ignore_stackfaults) -{ - /* - * Do the "take the exception" parts of exception entry, - * but not the pushing of state to the stack. This is - * similar to the pseudocode ExceptionTaken() function. - */ - CPUARMState *env = &cpu->env; - uint32_t addr; - bool targets_secure; - int exc; - bool push_failed = false; - - armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure); - qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n", - targets_secure ? "secure" : "nonsecure", exc); - - if (dotailchain) { - /* Sanitize LR FType and PREFIX bits */ - if (!cpu_isar_feature(aa32_vfp_simd, cpu)) { - lr |= R_V7M_EXCRET_FTYPE_MASK; - } - lr = deposit32(lr, 24, 8, 0xff); - } - - if (arm_feature(env, ARM_FEATURE_V8)) { - if (arm_feature(env, ARM_FEATURE_M_SECURITY) && - (lr & R_V7M_EXCRET_S_MASK)) { - /* - * The background code (the owner of the registers in the - * exception frame) is Secure. This means it may either already - * have or now needs to push callee-saves registers. - */ - if (targets_secure) { - if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) { - /* - * We took an exception from Secure to NonSecure - * (which means the callee-saved registers got stacked) - * and are now tailchaining to a Secure exception. - * Clear DCRS so eventual return from this Secure - * exception unstacks the callee-saved registers. - */ - lr &= ~R_V7M_EXCRET_DCRS_MASK; - } - } else { - /* - * We're going to a non-secure exception; push the - * callee-saves registers to the stack now, if they're - * not already saved. 
- */ - if (lr & R_V7M_EXCRET_DCRS_MASK && - !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) { - push_failed = v7m_push_callee_stack(cpu, lr, dotailchain, - ignore_stackfaults); - } - lr |= R_V7M_EXCRET_DCRS_MASK; - } - } - - lr &= ~R_V7M_EXCRET_ES_MASK; - if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) { - lr |= R_V7M_EXCRET_ES_MASK; - } - lr &= ~R_V7M_EXCRET_SPSEL_MASK; - if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) { - lr |= R_V7M_EXCRET_SPSEL_MASK; - } - - /* - * Clear registers if necessary to prevent non-secure exception - * code being able to see register values from secure code. - * Where register values become architecturally UNKNOWN we leave - * them with their previous values. v8.1M is tighter than v8.0M - * here and always zeroes the caller-saved registers regardless - * of the security state the exception is targeting. - */ - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) { - /* - * Always clear the caller-saved registers (they have been - * pushed to the stack earlier in v7m_push_stack()). - * Clear callee-saved registers if the background code is - * Secure (in which case these regs were saved in - * v7m_push_callee_stack()). - */ - int i; - /* - * r4..r11 are callee-saves, zero only if background - * state was Secure (EXCRET.S == 1) and exception - * targets Non-secure state - */ - bool zero_callee_saves = !targets_secure && - (lr & R_V7M_EXCRET_S_MASK); - - for (i = 0; i < 13; i++) { - if (i < 4 || i > 11 || zero_callee_saves) { - env->regs[i] = 0; - } - } - /* Clear EAPSR */ - xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT); - } - } - } - - if (push_failed && !ignore_stackfaults) { - /* - * Derived exception on callee-saves register stacking: - * we might now want to take a different exception which - * targets a different security state, so try again from the top. - */ - qemu_log_mask(CPU_LOG_INT, - "...derived exception on callee-saves register stacking"); - v7m_exception_taken(cpu, lr, true, true); - return; - } - - if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) { - /* Vector load failed: derived exception */ - qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load"); - v7m_exception_taken(cpu, lr, true, true); - return; - } - - /* - * Now we've done everything that might cause a derived exception - * we can go ahead and activate whichever exception we're going to - * take (which might now be the derived exception). - */ - armv7m_nvic_acknowledge_irq(env->nvic); - - /* Switch to target security state -- must do this before writing SPSEL */ - switch_v7m_security_state(env, targets_secure); - write_v7m_control_spsel(env, 0); - arm_clear_exclusive(env); - /* Clear SFPA and FPCA (has no effect if no FPU) */ - env->v7m.control[M_REG_S] &= - ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK); - /* Clear IT bits */ - env->condexec_bits = 0; - env->regs[14] = lr; - env->regs[15] = addr & 0xfffffffe; - env->thumb = addr & 1; - arm_rebuild_hflags(env); -} - -static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr, - bool apply_splim) -{ - /* - * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR - * that we will need later in order to do lazy FP reg stacking. 
- */ - bool is_secure = env->v7m.secure; - void *nvic = env->nvic; - /* - * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits - * are banked and we want to update the bit in the bank for the - * current security state; and in one case we want to specifically - * update the NS banked version of a bit even if we are secure. - */ - uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S]; - uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS]; - uint32_t *fpccr = &env->v7m.fpccr[is_secure]; - bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy; - - env->v7m.fpcar[is_secure] = frameptr & ~0x7; - - if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) { - bool splimviol; - uint32_t splim = v7m_sp_limit(env); - bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) && - (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK); - - splimviol = !ign && frameptr < splim; - *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol); - } - - *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1); - - *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure); - - *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0); - - *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD, - !arm_v7m_is_handler_mode(env)); - - hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false); - *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy); - - bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false); - *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy); - - mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure); - *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy); - - ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false); - *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy); - - monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false); - *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy); - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true); - *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy); - - sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false); - *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy); - } -} - -void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr) -{ - /* fptr is the value of Rn, the frame pointer we store the FP regs to */ - bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK; - bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK; - uintptr_t ra = GETPC(); - - assert(env->v7m.secure); - - if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) { - return; - } - - /* Check access to the coprocessor is permitted */ - if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) { - raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC()); - } - - if (lspact) { - /* LSPACT should not be active when there is active FP state */ - raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC()); - } - - if (fptr & 7) { - raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC()); - } - - /* - * Note that we do not use v7m_stack_write() here, because the - * accesses should not set the FSR bits for stacking errors if they - * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK - * or AccType_LAZYFP). Faults in cpu_stl_data_ra() will throw exceptions - * and longjmp out. - */ - if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) { - bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK; - int i; - - for (i = 0; i < (ts ? 
32 : 16); i += 2) { - uint64_t dn = *aa32_vfp_dreg(env, i / 2); - uint32_t faddr = fptr + 4 * i; - uint32_t slo = extract64(dn, 0, 32); - uint32_t shi = extract64(dn, 32, 32); - - if (i >= 16) { - faddr += 8; /* skip the slot for the FPSCR */ - } - cpu_stl_data_ra(env, faddr, slo, ra); - cpu_stl_data_ra(env, faddr + 4, shi, ra); - } - cpu_stl_data_ra(env, fptr + 0x40, vfp_get_fpscr(env), ra); - - /* - * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to - * leave them unchanged, matching our choice in v7m_preserve_fp_state. - */ - if (ts) { - for (i = 0; i < 32; i += 2) { - *aa32_vfp_dreg(env, i / 2) = 0; - } - vfp_set_fpscr(env, 0); - } - } else { - v7m_update_fpccr(env, fptr, false); - } - - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK; -} - -void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr) -{ - uintptr_t ra = GETPC(); - - /* fptr is the value of Rn, the frame pointer we load the FP regs from */ - assert(env->v7m.secure); - - if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) { - return; - } - - /* Check access to the coprocessor is permitted */ - if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) { - raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC()); - } - - if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) { - /* State in FP is still valid */ - env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK; - } else { - bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK; - int i; - uint32_t fpscr; - - if (fptr & 7) { - raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC()); - } - - for (i = 0; i < (ts ? 32 : 16); i += 2) { - uint32_t slo, shi; - uint64_t dn; - uint32_t faddr = fptr + 4 * i; - - if (i >= 16) { - faddr += 8; /* skip the slot for the FPSCR */ - } - - slo = cpu_ldl_data_ra(env, faddr, ra); - shi = cpu_ldl_data_ra(env, faddr + 4, ra); - - dn = (uint64_t) shi << 32 | slo; - *aa32_vfp_dreg(env, i / 2) = dn; - } - fpscr = cpu_ldl_data_ra(env, fptr + 0x40, ra); - vfp_set_fpscr(env, fpscr); - } - - env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK; -} - -static bool v7m_push_stack(ARMCPU *cpu) -{ - /* - * Do the "set up stack frame" part of exception entry, - * similar to pseudocode PushStack(). - * Return true if we generate a derived exception (and so - * should ignore further stack faults trying to process - * that derived exception.) - */ - bool stacked_ok = true, limitviol = false; - CPUARMState *env = &cpu->env; - uint32_t xpsr = xpsr_read(env); - uint32_t frameptr = env->regs[13]; - ARMMMUIdx mmu_idx = arm_mmu_idx(env); - uint32_t framesize; - bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1); - - if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) && - (env->v7m.secure || nsacr_cp10)) { - if (env->v7m.secure && - env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) { - framesize = 0xa8; - } else { - framesize = 0x68; - } - } else { - framesize = 0x20; - } - - /* Align stack pointer if the guest wants that */ - if ((frameptr & 4) && - (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) { - frameptr -= 4; - xpsr |= XPSR_SPREALIGN; - } - - xpsr &= ~XPSR_SFPA; - if (env->v7m.secure && - (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) { - xpsr |= XPSR_SFPA; - } - - frameptr -= framesize; - - if (arm_feature(env, ARM_FEATURE_V8)) { - uint32_t limit = v7m_sp_limit(env); - - if (frameptr < limit) { - /* - * Stack limit failure: set SP to the limit value, and generate - * STKOF UsageFault. Stack pushes below the limit must not be - * performed. 
It is IMPDEF whether pushes above the limit are - * performed; we choose not to. - */ - qemu_log_mask(CPU_LOG_INT, - "...STKOF during stacking\n"); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - env->regs[13] = limit; - /* - * We won't try to perform any further memory accesses but - * we must continue through the following code to check for - * permission faults during FPU state preservation, and we - * must update FPCCR if lazy stacking is enabled. - */ - limitviol = true; - stacked_ok = false; - } - } - - /* - * Write as much of the stack frame as we can. If we fail a stack - * write this will result in a derived exception being pended - * (which may be taken in preference to the one we started with - * if it has higher priority). - */ - stacked_ok = stacked_ok && - v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, frameptr + 4, env->regs[1], - mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, frameptr + 8, env->regs[2], - mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, frameptr + 12, env->regs[3], - mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, frameptr + 16, env->regs[12], - mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, frameptr + 20, env->regs[14], - mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, frameptr + 24, env->regs[15], - mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL); - - if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) { - /* FPU is active, try to save its registers */ - bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK; - bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK; - - if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) { - qemu_log_mask(CPU_LOG_INT, - "...SecureFault because LSPACT and FPCA both set\n"); - env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - } else if (!env->v7m.secure && !nsacr_cp10) { - qemu_log_mask(CPU_LOG_INT, - "...Secure UsageFault with CFSR.NOCP because " - "NSACR.CP10 prevents stacking FP regs\n"); - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S); - env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK; - } else { - if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) { - /* Lazy stacking disabled, save registers now */ - int i; - bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure, - arm_current_el(env) != 0); - - if (stacked_ok && !cpacr_pass) { - /* - * Take UsageFault if CPACR forbids access. The pseudocode - * here does a full CheckCPEnabled() but we know the NSACR - * check can never fail as we have already handled that. - */ - qemu_log_mask(CPU_LOG_INT, - "...UsageFault with CFSR.NOCP because " - "CPACR.CP10 prevents stacking FP regs\n"); - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK; - stacked_ok = false; - } - - for (i = 0; i < ((framesize == 0xa8) ? 
32 : 16); i += 2) { - uint64_t dn = *aa32_vfp_dreg(env, i / 2); - uint32_t faddr = frameptr + 0x20 + 4 * i; - uint32_t slo = extract64(dn, 0, 32); - uint32_t shi = extract64(dn, 32, 32); - - if (i >= 16) { - faddr += 8; /* skip the slot for the FPSCR */ - } - stacked_ok = stacked_ok && - v7m_stack_write(cpu, faddr, slo, - mmu_idx, STACK_NORMAL) && - v7m_stack_write(cpu, faddr + 4, shi, - mmu_idx, STACK_NORMAL); - } - stacked_ok = stacked_ok && - v7m_stack_write(cpu, frameptr + 0x60, - vfp_get_fpscr(env), mmu_idx, STACK_NORMAL); - if (cpacr_pass) { - for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) { - *aa32_vfp_dreg(env, i / 2) = 0; - } - vfp_set_fpscr(env, 0); - } - } else { - /* Lazy stacking enabled, save necessary info to stack later */ - v7m_update_fpccr(env, frameptr + 0x20, true); - } - } - } - - /* - * If we broke a stack limit then SP was already updated earlier; - * otherwise we update SP regardless of whether any of the stack - * accesses failed or we took some other kind of fault. - */ - if (!limitviol) { - env->regs[13] = frameptr; - } - - return !stacked_ok; -} - -static void do_v7m_exception_exit(ARMCPU *cpu) -{ - CPUARMState *env = &cpu->env; - uint32_t excret; - uint32_t xpsr, xpsr_mask; - bool ufault = false; - bool sfault = false; - bool return_to_sp_process; - bool return_to_handler; - bool rettobase = false; - bool exc_secure = false; - bool return_to_secure; - bool ftype; - bool restore_s16_s31 = false; - - /* - * If we're not in Handler mode then jumps to magic exception-exit - * addresses don't have magic behaviour. However for the v8M - * security extensions the magic secure-function-return has to - * work in thread mode too, so to avoid doing an extra check in - * the generated code we allow exception-exit magic to also cause the - * internal exception and bring us here in thread mode. Correct code - * will never try to do this (the following insn fetch will always - * fault) so we the overhead of having taken an unnecessary exception - * doesn't matter. - */ - if (!arm_v7m_is_handler_mode(env)) { - return; - } - - /* - * In the spec pseudocode ExceptionReturn() is called directly - * from BXWritePC() and gets the full target PC value including - * bit zero. In QEMU's implementation we treat it as a normal - * jump-to-register (which is then caught later on), and so split - * the target value up between env->regs[15] and env->thumb in - * gen_bx(). Reconstitute it. - */ - excret = env->regs[15]; - if (env->thumb) { - excret |= 1; - } - - qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32 - " previous exception %d\n", - excret, env->v7m.exception); - - if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) { - qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception " - "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n", - excret); - } - - ftype = excret & R_V7M_EXCRET_FTYPE_MASK; - - if (!ftype && !cpu_isar_feature(aa32_vfp_simd, cpu)) { - qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception " - "exit PC value 0x%" PRIx32 " is UNPREDICTABLE " - "if FPU not present\n", - excret); - ftype = true; - } - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - /* - * EXC_RETURN.ES validation check (R_SMFL). We must do this before - * we pick which FAULTMASK to clear. 
- */ - if (!env->v7m.secure && - ((excret & R_V7M_EXCRET_ES_MASK) || - !(excret & R_V7M_EXCRET_DCRS_MASK))) { - sfault = 1; - /* For all other purposes, treat ES as 0 (R_HXSR) */ - excret &= ~R_V7M_EXCRET_ES_MASK; - } - exc_secure = excret & R_V7M_EXCRET_ES_MASK; - } - - if (env->v7m.exception != ARMV7M_EXCP_NMI) { - /* - * Auto-clear FAULTMASK on return from other than NMI. - * If the security extension is implemented then this only - * happens if the raw execution priority is >= 0; the - * value of the ES bit in the exception return value indicates - * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.) - */ - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) { - env->v7m.faultmask[exc_secure] = 0; - } - } else { - env->v7m.faultmask[M_REG_NS] = 0; - } - } - - switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception, - exc_secure)) { - case -1: - /* attempt to exit an exception that isn't active */ - ufault = true; - break; - case 0: - /* still an irq active now */ - break; - case 1: - /* - * We returned to base exception level, no nesting. - * (In the pseudocode this is written using "NestedActivation != 1" - * where we have 'rettobase == false'.) - */ - rettobase = true; - break; - default: - g_assert_not_reached(); - } - - return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK); - return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK; - return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && - (excret & R_V7M_EXCRET_S_MASK); - - if (arm_feature(env, ARM_FEATURE_V8)) { - if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) { - /* - * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP); - * we choose to take the UsageFault. - */ - if ((excret & R_V7M_EXCRET_S_MASK) || - (excret & R_V7M_EXCRET_ES_MASK) || - !(excret & R_V7M_EXCRET_DCRS_MASK)) { - ufault = true; - } - } - if (excret & R_V7M_EXCRET_RES0_MASK) { - ufault = true; - } - } else { - /* For v7M we only recognize certain combinations of the low bits */ - switch (excret & 0xf) { - case 1: /* Return to Handler */ - break; - case 13: /* Return to Thread using Process stack */ - case 9: /* Return to Thread using Main stack */ - /* - * We only need to check NONBASETHRDENA for v7M, because in - * v8M this bit does not exist (it is RES1). - */ - if (!rettobase && - !(env->v7m.ccr[env->v7m.secure] & - R_V7M_CCR_NONBASETHRDENA_MASK)) { - ufault = true; - } - break; - default: - ufault = true; - } - } - - /* - * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in - * Handler mode (and will be until we write the new XPSR.Interrupt - * field) this does not switch around the current stack pointer. - * We must do this before we do any kind of tailchaining, including - * for the derived exceptions on integrity check failures, or we will - * give the guest an incorrect EXCRET.SPSEL value on exception entry. - */ - write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure); - - /* - * Clear scratch FP values left in caller saved registers; this - * must happen before any kind of tail chaining. 
- */ - if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) && - (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) { - if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) { - env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " - "stackframe: error during lazy state deactivation\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } else { - if (arm_feature(env, ARM_FEATURE_V8_1M)) { - /* v8.1M adds this NOCP check */ - bool nsacr_pass = exc_secure || - extract32(env->v7m.nsacr, 10, 1); - bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true); - if (!nsacr_pass) { - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true); - env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK; - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " - "stackframe: NSACR prevents clearing FPU registers\n"); - v7m_exception_taken(cpu, excret, true, false); - } else if (!cpacr_pass) { - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - exc_secure); - env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK; - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " - "stackframe: CPACR prevents clearing FPU registers\n"); - v7m_exception_taken(cpu, excret, true, false); - } - } - /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */ - int i; - - for (i = 0; i < 16; i += 2) { - *aa32_vfp_dreg(env, i / 2) = 0; - } - vfp_set_fpscr(env, 0); - } - } - - if (sfault) { - env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " - "stackframe: failed EXC_RETURN.ES validity check\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - if (ufault) { - /* - * Bad exception return: instead of popping the exception - * stack, directly take a usage fault on the current stack. - */ - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " - "stackframe: failed exception return integrity check\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - /* - * Tailchaining: if there is currently a pending exception that - * is high enough priority to preempt execution at the level we're - * about to return to, then just directly take that exception now, - * avoiding an unstack-and-then-stack. Note that now we have - * deactivated the previous exception by calling armv7m_nvic_complete_irq() - * our current execution priority is already the execution priority we are - * returning to -- none of the state we would unstack or set based on - * the EXCRET value affects it. - */ - if (armv7m_nvic_can_take_pending_exception(env->nvic)) { - qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - switch_v7m_security_state(env, return_to_secure); - - { - /* - * The stack pointer we should be reading the exception frame from - * depends on bits in the magic exception return type value (and - * for v8M isn't necessarily the stack pointer we will eventually - * end up resuming execution with). Get a pointer to the location - * in the CPU state struct where the SP we need is currently being - * stored; we will use and modify it in place. 
- * We use this limited C variable scope so we don't accidentally - * use 'frame_sp_p' after we do something that makes it invalid. - */ - bool spsel = env->v7m.control[return_to_secure] & R_V7M_CONTROL_SPSEL_MASK; - uint32_t *frame_sp_p = get_v7m_sp_ptr(env, - return_to_secure, - !return_to_handler, - spsel); - uint32_t frameptr = *frame_sp_p; - bool pop_ok = true; - ARMMMUIdx mmu_idx; - bool return_to_priv = return_to_handler || - !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK); - - mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure, - return_to_priv); - - if (!QEMU_IS_ALIGNED(frameptr, 8) && - arm_feature(env, ARM_FEATURE_V8)) { - qemu_log_mask(LOG_GUEST_ERROR, - "M profile exception return with non-8-aligned SP " - "for destination state is UNPREDICTABLE\n"); - } - - /* Do we need to pop callee-saved registers? */ - if (return_to_secure && - ((excret & R_V7M_EXCRET_ES_MASK) == 0 || - (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) { - uint32_t actual_sig; - - pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx); - - if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) { - /* Take a SecureFault on the current stack */ - env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " - "stackframe: failed exception return integrity " - "signature check\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - pop_ok = pop_ok && - v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) && - v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) && - v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) && - v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) && - v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) && - v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) && - v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) && - v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx); - - frameptr += 0x28; - } - - /* Pop registers */ - pop_ok = pop_ok && - v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) && - v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) && - v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) && - v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) && - v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) && - v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) && - v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) && - v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx); - - if (!pop_ok) { - /* - * v7m_stack_read() pended a fault, so take it (as a tail - * chained exception on the same stack frame) - */ - qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - /* - * Returning from an exception with a PC with bit 0 set is defined - * behaviour on v8M (bit 0 is ignored), but for v7M it was specified - * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore - * the lsbit, and there are several RTOSes out there which incorrectly - * assume the r15 in the stack frame should be a Thumb-style "lsbit - * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but - * complain about the badly behaved guest. 
- */ - if (env->regs[15] & 1) { - env->regs[15] &= ~1U; - if (!arm_feature(env, ARM_FEATURE_V8)) { - qemu_log_mask(LOG_GUEST_ERROR, - "M profile return from interrupt with misaligned " - "PC is UNPREDICTABLE on v7M\n"); - } - } - - if (arm_feature(env, ARM_FEATURE_V8)) { - /* - * For v8M we have to check whether the xPSR exception field - * matches the EXCRET value for return to handler/thread - * before we commit to changing the SP and xPSR. - */ - bool will_be_handler = (xpsr & XPSR_EXCP) != 0; - if (return_to_handler != will_be_handler) { - /* - * Take an INVPC UsageFault on the current stack. - * By this point we will have switched to the security state - * for the background state, so this UsageFault will target - * that state. - */ - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " - "stackframe: failed exception return integrity " - "check\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - } - - if (!ftype) { - /* FP present and we need to handle it */ - if (!return_to_secure && - (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) { - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; - qemu_log_mask(CPU_LOG_INT, - "...taking SecureFault on existing stackframe: " - "Secure LSPACT set but exception return is " - "not to secure state\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - restore_s16_s31 = return_to_secure && - (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK); - - if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) { - /* State in FPU is still valid, just clear LSPACT */ - env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK; - } else { - int i; - uint32_t fpscr; - bool cpacr_pass, nsacr_pass; - - cpacr_pass = v7m_cpacr_pass(env, return_to_secure, - return_to_priv); - nsacr_pass = return_to_secure || - extract32(env->v7m.nsacr, 10, 1); - - if (!cpacr_pass) { - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - return_to_secure); - env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK; - qemu_log_mask(CPU_LOG_INT, - "...taking UsageFault on existing " - "stackframe: CPACR.CP10 prevents unstacking " - "FP regs\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } else if (!nsacr_pass) { - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true); - env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK; - qemu_log_mask(CPU_LOG_INT, - "...taking Secure UsageFault on existing " - "stackframe: NSACR.CP10 prevents unstacking " - "FP regs\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) { - uint32_t slo, shi; - uint64_t dn; - uint32_t faddr = frameptr + 0x20 + 4 * i; - - if (i >= 16) { - faddr += 8; /* Skip the slot for the FPSCR */ - } - - pop_ok = pop_ok && - v7m_stack_read(cpu, &slo, faddr, mmu_idx) && - v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx); - - if (!pop_ok) { - break; - } - - dn = (uint64_t)shi << 32 | slo; - *aa32_vfp_dreg(env, i / 2) = dn; - } - pop_ok = pop_ok && - v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx); - if (pop_ok) { - vfp_set_fpscr(env, fpscr); - } - if (!pop_ok) { - /* - * These regs are 0 if security extension present; - * otherwise merely UNKNOWN. We zero always. - */ - for (i = 0; i < (restore_s16_s31 ? 
32 : 16); i += 2) { - *aa32_vfp_dreg(env, i / 2) = 0; - } - vfp_set_fpscr(env, 0); - } - } - } - env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S], - V7M_CONTROL, FPCA, !ftype); - - /* Commit to consuming the stack frame */ - frameptr += 0x20; - if (!ftype) { - frameptr += 0x48; - if (restore_s16_s31) { - frameptr += 0x40; - } - } - /* - * Undo stack alignment (the SPREALIGN bit indicates that the original - * pre-exception SP was not 8-aligned and we added a padding word to - * align it, so we undo this by ORing in the bit that increases it - * from the current 8-aligned value to the 8-unaligned value. (Adding 4 - * would work too but a logical OR is how the pseudocode specifies it.) - */ - if (xpsr & XPSR_SPREALIGN) { - frameptr |= 4; - } - *frame_sp_p = frameptr; - } - - xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA); - if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) { - xpsr_mask &= ~XPSR_GE; - } - /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */ - xpsr_write(env, xpsr, xpsr_mask); - - if (env->v7m.secure) { - bool sfpa = xpsr & XPSR_SFPA; - - env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S], - V7M_CONTROL, SFPA, sfpa); - } - - /* - * The restored xPSR exception field will be zero if we're - * resuming in Thread mode. If that doesn't match what the - * exception return excret specified then this is a UsageFault. - * v7M requires we make this check here; v8M did it earlier. - */ - if (return_to_handler != arm_v7m_is_handler_mode(env)) { - /* - * Take an INVPC UsageFault by pushing the stack again; - * we know we're v7M so this is never a Secure UsageFault. - */ - bool ignore_stackfaults; - - assert(!arm_feature(env, ARM_FEATURE_V8)); - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; - ignore_stackfaults = v7m_push_stack(cpu); - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: " - "failed exception return integrity check\n"); - v7m_exception_taken(cpu, excret, false, ignore_stackfaults); - return; - } - - /* Otherwise, we have a successful exception exit. */ - arm_clear_exclusive(env); - arm_rebuild_hflags(env); - qemu_log_mask(CPU_LOG_INT, "...successful exception return\n"); -} - -static bool do_v7m_function_return(ARMCPU *cpu) -{ - /* - * v8M security extensions magic function return. - * We may either: - * (1) throw an exception (longjump) - * (2) return true if we successfully handled the function return - * (3) return false if we failed a consistency check and have - * pended a UsageFault that needs to be taken now - * - * At this point the magic return value is split between env->regs[15] - * and env->thumb. We don't bother to reconstitute it because we don't - * need it (all values are handled the same way). - */ - CPUARMState *env = &cpu->env; - uint32_t newpc, newpsr, newpsr_exc; - - qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n"); - - { - bool threadmode, spsel; - TCGMemOpIdx oi; - ARMMMUIdx mmu_idx; - uint32_t *frame_sp_p; - uint32_t frameptr; - - /* Pull the return address and IPSR from the Secure stack */ - threadmode = !arm_v7m_is_handler_mode(env); - spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK; - - frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel); - frameptr = *frame_sp_p; - - /* - * These loads may throw an exception (for MPU faults). We want to - * do them as secure, so work out what MMU index that is. 
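A standalone restatement of the "commit to consuming the stack frame" arithmetic above: 0x20 bytes for the basic frame, a further 0x48 when an FP frame (s0-s15, FPSCR and a reserved word) was stacked, 0x40 more when s16-s31 were stacked too, and finally the SPREALIGN padding word undone by ORing in bit 2. SPREALIGN is taken here to be bit 9 of the stacked xPSR, matching the architectural flag:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define XPSR_SPREALIGN (1u << 9)   /* alignment flag saved on exception entry */

    /* Advance SP past the frame that was just unstacked, as in the hunk above. */
    static uint32_t consume_frame(uint32_t frameptr, uint32_t xpsr,
                                  bool fp_frame, bool fp_s16_s31)
    {
        frameptr += 0x20;                   /* basic r0-r3, r12, lr, pc, xPSR */
        if (fp_frame) {
            frameptr += 0x48;               /* s0-s15, FPSCR, reserved word */
            if (fp_s16_s31) {
                frameptr += 0x40;           /* s16-s31 */
            }
        }
        if (xpsr & XPSR_SPREALIGN) {
            frameptr |= 4;                  /* undo the alignment padding word */
        }
        return frameptr;
    }

    int main(void)
    {
        printf("new SP = 0x%08x\n",
               (unsigned)consume_frame(0x20001000, XPSR_SPREALIGN, true, false));
        return 0;
    }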
- */ - mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); - oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx)); - newpc = helper_le_ldul_mmu(env, frameptr, oi, 0); - newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0); - - /* Consistency checks on new IPSR */ - newpsr_exc = newpsr & XPSR_EXCP; - if (!((env->v7m.exception == 0 && newpsr_exc == 0) || - (env->v7m.exception == 1 && newpsr_exc != 0))) { - /* Pend the fault and tell our caller to take it */ - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - qemu_log_mask(CPU_LOG_INT, - "...taking INVPC UsageFault: " - "IPSR consistency check failed\n"); - return false; - } - - *frame_sp_p = frameptr + 8; - } - - /* This invalidates frame_sp_p */ - switch_v7m_security_state(env, true); - env->v7m.exception = newpsr_exc; - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; - if (newpsr & XPSR_SFPA) { - env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK; - } - xpsr_write(env, 0, XPSR_IT); - env->thumb = newpc & 1; - env->regs[15] = newpc & ~1; - arm_rebuild_hflags(env); - - qemu_log_mask(CPU_LOG_INT, "...function return successful\n"); - return true; -} - -static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, - uint32_t addr, uint16_t *insn) -{ - /* - * Load a 16-bit portion of a v7M instruction, returning true on success, - * or false on failure (in which case we will have pended the appropriate - * exception). - * We need to do the instruction fetch's MPU and SAU checks - * like this because there is no MMU index that would allow - * doing the load with a single function call. Instead we must - * first check that the security attributes permit the load - * and that they don't mismatch on the two halves of the instruction, - * and then we do the load as a secure load (ie using the security - * attributes of the address, not the CPU, as architecturally required). - */ - CPUState *cs = CPU(cpu); - CPUARMState *env = &cpu->env; - V8M_SAttributes sattrs = {}; - MemTxAttrs attrs = {}; - ARMMMUFaultInfo fi = {}; - ARMCacheAttrs cacheattrs = {}; - MemTxResult txres; - target_ulong page_size; - hwaddr physaddr; - int prot; - - v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs); - if (!sattrs.nsc || sattrs.ns) { - /* - * This must be the second half of the insn, and it straddles a - * region boundary with the second half not being S&NSC. 
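The consistency check above accepts only two combinations of the live exception number and the IPSR popped from the Secure stack; anything else pends an INVPC UsageFault rather than completing the function return. Restated as a standalone predicate (XPSR_EXCP is the usual 9-bit exception-number field):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define XPSR_EXCP 0x1ffu    /* exception number field of the xPSR */

    /* Mirrors the check in do_v7m_function_return() above. */
    static bool fnc_return_ipsr_ok(uint32_t cur_exception, uint32_t popped_psr)
    {
        uint32_t popped_exc = popped_psr & XPSR_EXCP;

        return (cur_exception == 0 && popped_exc == 0) ||
               (cur_exception == 1 && popped_exc != 0);
    }

    int main(void)
    {
        printf("thread -> thread: %d\n", fnc_return_ipsr_ok(0, 0));   /* ok */
        printf("exception 3, popped 0: %d\n", fnc_return_ipsr_ok(3, 0)); /* bad */
        return 0;
    }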
- */ - env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, - "...really SecureFault with SFSR.INVEP\n"); - return false; - } - if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx, &physaddr, - &attrs, &prot, &page_size, &fi, &cacheattrs)) { - /* the MPU lookup failed */ - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure); - qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n"); - return false; - } - *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr, - attrs, &txres); - if (txres != MEMTX_OK) { - env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); - qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n"); - return false; - } - return true; -} - -static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx, - uint32_t addr, uint32_t *spdata) -{ - /* - * Read a word of data from the stack for the SG instruction, - * writing the value into *spdata. If the load succeeds, return - * true; otherwise pend an appropriate exception and return false. - * (We can't use data load helpers here that throw an exception - * because of the context we're called in, which is halfway through - * arm_v7m_cpu_do_interrupt().) - */ - CPUState *cs = CPU(cpu); - CPUARMState *env = &cpu->env; - MemTxAttrs attrs = {}; - MemTxResult txres; - target_ulong page_size; - hwaddr physaddr; - int prot; - ARMMMUFaultInfo fi = {}; - ARMCacheAttrs cacheattrs = {}; - uint32_t value; - - if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr, - &attrs, &prot, &page_size, &fi, &cacheattrs)) { - /* MPU/SAU lookup failed */ - if (fi.type == ARMFault_QEMU_SFault) { - qemu_log_mask(CPU_LOG_INT, - "...SecureFault during stack word read\n"); - env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK; - env->v7m.sfar = addr; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - } else { - qemu_log_mask(CPU_LOG_INT, - "...MemManageFault during stack word read\n"); - env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK | - R_V7M_CFSR_MMARVALID_MASK; - env->v7m.mmfar[M_REG_S] = addr; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false); - } - return false; - } - value = address_space_ldl(arm_addressspace(cs, attrs), physaddr, - attrs, &txres); - if (txres != MEMTX_OK) { - /* BusFault trying to read the data */ - qemu_log_mask(CPU_LOG_INT, - "...BusFault during stack word read\n"); - env->v7m.cfsr[M_REG_NS] |= - (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK); - env->v7m.bfar = addr; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); - return false; - } - - *spdata = value; - return true; -} - -static bool v7m_handle_execute_nsc(ARMCPU *cpu) -{ - /* - * Check whether this attempt to execute code in a Secure & NS-Callable - * memory region is for an SG instruction; if so, then emulate the - * effect of the SG instruction and return true. Otherwise pend - * the correct kind of exception and return false. - */ - CPUARMState *env = &cpu->env; - ARMMMUIdx mmu_idx; - uint16_t insn; - - /* - * We should never get here unless get_phys_addr_pmsav8() caused - * an exception for NS executing in S&NSC memory. 
- */ - assert(!env->v7m.secure); - assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); - - /* We want to do the MPU lookup as secure; work out what mmu_idx that is */ - mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); - - if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) { - return false; - } - - if (!env->thumb) { - goto gen_invep; - } - - if (insn != 0xe97f) { - /* - * Not an SG instruction first half (we choose the IMPDEF - * early-SG-check option). - */ - goto gen_invep; - } - - if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) { - return false; - } - - if (insn != 0xe97f) { - /* - * Not an SG instruction second half (yes, both halves of the SG - * insn have the same hex value) - */ - goto gen_invep; - } - - /* - * OK, we have confirmed that we really have an SG instruction. - * We know we're NS in S memory so don't need to repeat those checks. - */ - qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32 - ", executing it\n", env->regs[15]); - - if (cpu_isar_feature(aa32_m_sec_state, cpu) && - !arm_v7m_is_handler_mode(env)) { - /* - * v8.1M exception stack frame integrity check. Note that we - * must perform the memory access even if CCR_S.TRD is zero - * and we aren't going to check what the data loaded is. - */ - uint32_t spdata, sp; - - /* - * We know we are currently NS, so the S stack pointers must be - * in other_ss_{psp,msp}, not in regs[13]/other_sp. - */ - sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp; - if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) { - /* Stack access failed and an exception has been pended */ - return false; - } - - if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) { - if (((spdata & ~1) == 0xfefa125a) || - !(env->v7m.control[M_REG_S] & 1)) { - goto gen_invep; - } - } - } - - env->regs[14] &= ~1; - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; - switch_v7m_security_state(env, true); - xpsr_write(env, 0, XPSR_IT); - env->regs[15] += 4; - arm_rebuild_hflags(env); - return true; - -gen_invep: - env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, - "...really SecureFault with SFSR.INVEP\n"); - return false; -} - -void arm_v7m_cpu_do_interrupt(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - uint32_t lr; - bool ignore_stackfaults; - - arm_log_exception(cs->exception_index); - - /* - * For exceptions we just mark as pending on the NVIC, and let that - * handle it. - */ - switch (cs->exception_index) { - case EXCP_UDEF: - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK; - break; - case EXCP_NOCP: - { - /* - * NOCP might be directed to something other than the current - * security state if this fault is because of NSACR; we indicate - * the target security state using exception.target_el. 
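The SG recognition above relies on both halfwords of the instruction encoding to the same value, 0xe97f, after the fetch has been confirmed to come from Secure & NS-Callable memory. A trivial standalone check (the halfword value is the real T32 encoding; everything else is illustrative):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Both halfwords of the T32 SG instruction encode as 0xe97f. */
    static bool is_sg_insn(uint16_t first_half, uint16_t second_half)
    {
        return first_half == 0xe97f && second_half == 0xe97f;
    }

    int main(void)
    {
        printf("SG? %d\n", is_sg_insn(0xe97f, 0xe97f));   /* 1 */
        printf("SG? %d\n", is_sg_insn(0xe97f, 0xbf00));   /* 0: only first half */
        return 0;
    }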
- */ - int target_secstate; - - if (env->exception.target_el == 3) { - target_secstate = M_REG_S; - } else { - target_secstate = env->v7m.secure; - } - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate); - env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK; - break; - } - case EXCP_INVSTATE: - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK; - break; - case EXCP_STKOF: - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK; - break; - case EXCP_LSERR: - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; - break; - case EXCP_UNALIGNED: - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK; - break; - case EXCP_SWI: - /* The PC already points to the next instruction. */ - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure); - break; - case EXCP_PREFETCH_ABORT: - case EXCP_DATA_ABORT: - /* - * Note that for M profile we don't have a guest facing FSR, but - * the env->exception.fsr will be populated by the code that - * raises the fault, in the A profile short-descriptor format. - */ - switch (env->exception.fsr & 0xf) { - case M_FAKE_FSR_NSC_EXEC: - /* - * Exception generated when we try to execute code at an address - * which is marked as Secure & Non-Secure Callable and the CPU - * is in the Non-Secure state. The only instruction which can - * be executed like this is SG (and that only if both halves of - * the SG instruction have the same security attributes.) - * Everything else must generate an INVEP SecureFault, so we - * emulate the SG instruction here. - */ - if (v7m_handle_execute_nsc(cpu)) { - return; - } - break; - case M_FAKE_FSR_SFAULT: - /* - * Various flavours of SecureFault for attempts to execute or - * access data in the wrong security state. - */ - switch (cs->exception_index) { - case EXCP_PREFETCH_ABORT: - if (env->v7m.secure) { - env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK; - qemu_log_mask(CPU_LOG_INT, - "...really SecureFault with SFSR.INVTRAN\n"); - } else { - env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; - qemu_log_mask(CPU_LOG_INT, - "...really SecureFault with SFSR.INVEP\n"); - } - break; - case EXCP_DATA_ABORT: - /* This must be an NS access to S memory */ - env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK; - qemu_log_mask(CPU_LOG_INT, - "...really SecureFault with SFSR.AUVIOL\n"); - break; - } - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - break; - case 0x8: /* External Abort */ - switch (cs->exception_index) { - case EXCP_PREFETCH_ABORT: - env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; - qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n"); - break; - case EXCP_DATA_ABORT: - env->v7m.cfsr[M_REG_NS] |= - (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK); - env->v7m.bfar = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, - "...with CFSR.PRECISERR and BFAR 0x%x\n", - env->v7m.bfar); - break; - } - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); - break; - default: - /* - * All other FSR values are either MPU faults or "can't happen - * for M profile" cases. 
- */ - switch (cs->exception_index) { - case EXCP_PREFETCH_ABORT: - env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK; - qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n"); - break; - case EXCP_DATA_ABORT: - env->v7m.cfsr[env->v7m.secure] |= - (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK); - env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, - "...with CFSR.DACCVIOL and MMFAR 0x%x\n", - env->v7m.mmfar[env->v7m.secure]); - break; - } - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, - env->v7m.secure); - break; - } - break; - case EXCP_SEMIHOST: - qemu_log_mask(CPU_LOG_INT, - "...handling as semihosting call 0x%x\n", - env->regs[0]); -#ifdef CONFIG_TCG - env->regs[0] = do_common_semihosting(cs); -#else - g_assert_not_reached(); -#endif - env->regs[15] += env->thumb ? 2 : 4; - return; - case EXCP_BKPT: - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false); - break; - case EXCP_IRQ: - break; - case EXCP_EXCEPTION_EXIT: - if (env->regs[15] < EXC_RETURN_MIN_MAGIC) { - /* Must be v8M security extension function return */ - assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC); - assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); - if (do_v7m_function_return(cpu)) { - return; - } - } else { - do_v7m_exception_exit(cpu); - return; - } - break; - case EXCP_LAZYFP: - /* - * We already pended the specific exception in the NVIC in the - * v7m_preserve_fp_state() helper function. - */ - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - return; /* Never happens. Keep compiler happy. */ - } - - if (arm_feature(env, ARM_FEATURE_V8)) { - lr = R_V7M_EXCRET_RES1_MASK | - R_V7M_EXCRET_DCRS_MASK; - /* - * The S bit indicates whether we should return to Secure - * or NonSecure (ie our current state). - * The ES bit indicates whether we're taking this exception - * to Secure or NonSecure (ie our target state). We set it - * later, in v7m_exception_taken(). - * The SPSEL bit is also set in v7m_exception_taken() for v8M. - * This corresponds to the ARM ARM pseudocode for v8M setting - * some LR bits in PushStack() and some in ExceptionTaken(); - * the distinction matters for the tailchain cases where we - * can take an exception without pushing the stack. - */ - if (env->v7m.secure) { - lr |= R_V7M_EXCRET_S_MASK; - } - } else { - lr = R_V7M_EXCRET_RES1_MASK | - R_V7M_EXCRET_S_MASK | - R_V7M_EXCRET_DCRS_MASK | - R_V7M_EXCRET_ES_MASK; - if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) { - lr |= R_V7M_EXCRET_SPSEL_MASK; - } - } - if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) { - lr |= R_V7M_EXCRET_FTYPE_MASK; - } - if (!arm_v7m_is_handler_mode(env)) { - lr |= R_V7M_EXCRET_MODE_MASK; - } - - ignore_stackfaults = v7m_push_stack(cpu); - v7m_exception_taken(cpu, lr, false, ignore_stackfaults); -} - -uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg) -{ - unsigned el = arm_current_el(env); - - /* First handle registers which unprivileged can read */ - switch (reg) { - case 0 ... 7: /* xPSR sub-fields */ - return v7m_mrs_xpsr(env, reg, el); - case 20: /* CONTROL */ - return v7m_mrs_control(env, env->v7m.secure); - case 0x94: /* CONTROL_NS */ - /* - * We have to handle this here because unprivileged Secure code - * can read the NS CONTROL register. 
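A sketch of how the v8M EXC_RETURN value is assembled on exception entry in the hunk above: S reflects the current security state, FTYPE is set when no FP context is active, MODE when the exception is taken from Thread mode, and ES/SPSEL are filled in later in v7m_exception_taken(). The bit positions used below follow the architectural v8M EXC_RETURN layout (ES=0, SPSEL=2, MODE=3, FType=4, DCRS=5, S=6) and are stated here as an assumption rather than quoted from the patch:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EXCRET_ES     (1u << 0)
    #define EXCRET_SPSEL  (1u << 2)
    #define EXCRET_MODE   (1u << 3)
    #define EXCRET_FTYPE  (1u << 4)
    #define EXCRET_DCRS   (1u << 5)
    #define EXCRET_S      (1u << 6)
    #define EXCRET_RES1   0xffffff80u   /* all remaining bits read as ones */

    static uint32_t make_excret_v8m(bool secure, bool fp_active, bool handler_mode)
    {
        uint32_t lr = EXCRET_RES1 | EXCRET_DCRS;

        if (secure) {
            lr |= EXCRET_S;         /* return to the current security state */
        }
        if (!fp_active) {
            lr |= EXCRET_FTYPE;     /* no FP context to stack/unstack */
        }
        if (!handler_mode) {
            lr |= EXCRET_MODE;      /* exception taken from Thread mode */
        }
        return lr;
    }

    int main(void)
    {
        /* Thread mode, Non-secure, no FP context: prints 0xffffffb8;
         * ES and SPSEL are ORed in later, in v7m_exception_taken(). */
        printf("LR = 0x%08x\n", (unsigned)make_excret_v8m(false, false, false));
        return 0;
    }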
- */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.control[M_REG_NS] | - (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK); - } - - if (el == 0) { - return 0; /* unprivileged reads others as zero */ - } - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - switch (reg) { - case 0x88: /* MSP_NS */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.other_ss_msp; - case 0x89: /* PSP_NS */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.other_ss_psp; - case 0x8a: /* MSPLIM_NS */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.msplim[M_REG_NS]; - case 0x8b: /* PSPLIM_NS */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.psplim[M_REG_NS]; - case 0x90: /* PRIMASK_NS */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.primask[M_REG_NS]; - case 0x91: /* BASEPRI_NS */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.basepri[M_REG_NS]; - case 0x93: /* FAULTMASK_NS */ - if (!env->v7m.secure) { - return 0; - } - return env->v7m.faultmask[M_REG_NS]; - case 0x98: /* SP_NS */ - { - /* - * This gives the non-secure SP selected based on whether we're - * currently in handler mode or not, using the NS CONTROL.SPSEL. - */ - bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK; - - if (!env->v7m.secure) { - return 0; - } - if (!arm_v7m_is_handler_mode(env) && spsel) { - return env->v7m.other_ss_psp; - } else { - return env->v7m.other_ss_msp; - } - } - default: - break; - } - } - - switch (reg) { - case 8: /* MSP */ - return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13]; - case 9: /* PSP */ - return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp; - case 10: /* MSPLIM */ - if (!arm_feature(env, ARM_FEATURE_V8)) { - goto bad_reg; - } - return env->v7m.msplim[env->v7m.secure]; - case 11: /* PSPLIM */ - if (!arm_feature(env, ARM_FEATURE_V8)) { - goto bad_reg; - } - return env->v7m.psplim[env->v7m.secure]; - case 16: /* PRIMASK */ - return env->v7m.primask[env->v7m.secure]; - case 17: /* BASEPRI */ - case 18: /* BASEPRI_MAX */ - return env->v7m.basepri[env->v7m.secure]; - case 19: /* FAULTMASK */ - return env->v7m.faultmask[env->v7m.secure]; - default: - bad_reg: - qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special" - " register %d\n", reg); - return 0; - } -} - -void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val) -{ - /* - * We're passed bits [11..0] of the instruction; extract - * SYSm and the mask bits. - * Invalid combinations of SYSm and mask are UNPREDICTABLE; - * we choose to treat them as if the mask bits were valid. - * NB that the pseudocode 'mask' variable is bits [11..10], - * whereas ours is [11..8]. 
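The SP_NS read above selects between the banked Non-secure stack pointers using the current mode and the Non-secure CONTROL.SPSEL bit. The same selection as a standalone function, with plain parameters standing in for QEMU's other_ss_msp/other_ss_psp fields:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t read_sp_ns(bool handler_mode, bool ns_spsel,
                               uint32_t ns_msp, uint32_t ns_psp)
    {
        if (!handler_mode && ns_spsel) {
            return ns_psp;      /* Thread mode with SPSEL set: process stack */
        }
        return ns_msp;          /* otherwise: main stack */
    }

    int main(void)
    {
        printf("SP_NS = 0x%08x\n",
               (unsigned)read_sp_ns(false, true, 0x20002000, 0x20003000));
        return 0;
    }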
- */ - uint32_t mask = extract32(maskreg, 8, 4); - uint32_t reg = extract32(maskreg, 0, 8); - int cur_el = arm_current_el(env); - - if (cur_el == 0 && reg > 7 && reg != 20) { - /* - * only xPSR sub-fields and CONTROL.SFPA may be written by - * unprivileged code - */ - return; - } - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - switch (reg) { - case 0x88: /* MSP_NS */ - if (!env->v7m.secure) { - return; - } - env->v7m.other_ss_msp = val; - return; - case 0x89: /* PSP_NS */ - if (!env->v7m.secure) { - return; - } - env->v7m.other_ss_psp = val; - return; - case 0x8a: /* MSPLIM_NS */ - if (!env->v7m.secure) { - return; - } - env->v7m.msplim[M_REG_NS] = val & ~7; - return; - case 0x8b: /* PSPLIM_NS */ - if (!env->v7m.secure) { - return; - } - env->v7m.psplim[M_REG_NS] = val & ~7; - return; - case 0x90: /* PRIMASK_NS */ - if (!env->v7m.secure) { - return; - } - env->v7m.primask[M_REG_NS] = val & 1; - return; - case 0x91: /* BASEPRI_NS */ - if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) { - return; - } - env->v7m.basepri[M_REG_NS] = val & 0xff; - return; - case 0x93: /* FAULTMASK_NS */ - if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) { - return; - } - env->v7m.faultmask[M_REG_NS] = val & 1; - return; - case 0x94: /* CONTROL_NS */ - if (!env->v7m.secure) { - return; - } - write_v7m_control_spsel_for_secstate(env, - val & R_V7M_CONTROL_SPSEL_MASK, - M_REG_NS); - if (arm_feature(env, ARM_FEATURE_M_MAIN)) { - env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK; - env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK; - } - /* - * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0, - * RES0 if the FPU is not present, and is stored in the S bank - */ - if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env)) && - extract32(env->v7m.nsacr, 10, 1)) { - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK; - env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK; - } - return; - case 0x98: /* SP_NS */ - { - /* - * This gives the non-secure SP selected based on whether we're - * currently in handler mode or not, using the NS CONTROL.SPSEL. - */ - bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK; - bool is_psp = !arm_v7m_is_handler_mode(env) && spsel; - uint32_t limit; - - if (!env->v7m.secure) { - return; - } - - limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false]; - - if (val < limit) { - CPUState *cs = env_cpu(env); - - cpu_restore_state(cs, GETPC(), true); - raise_exception(env, EXCP_STKOF, 0, 1); - } - - if (is_psp) { - env->v7m.other_ss_psp = val; - } else { - env->v7m.other_ss_msp = val; - } - return; - } - default: - break; - } - } - - switch (reg) { - case 0 ... 
7: /* xPSR sub-fields */ - v7m_msr_xpsr(env, mask, reg, val); - break; - case 8: /* MSP */ - if (v7m_using_psp(env)) { - env->v7m.other_sp = val; - } else { - env->regs[13] = val; - } - break; - case 9: /* PSP */ - if (v7m_using_psp(env)) { - env->regs[13] = val; - } else { - env->v7m.other_sp = val; - } - break; - case 10: /* MSPLIM */ - if (!arm_feature(env, ARM_FEATURE_V8)) { - goto bad_reg; - } - env->v7m.msplim[env->v7m.secure] = val & ~7; - break; - case 11: /* PSPLIM */ - if (!arm_feature(env, ARM_FEATURE_V8)) { - goto bad_reg; - } - env->v7m.psplim[env->v7m.secure] = val & ~7; - break; - case 16: /* PRIMASK */ - env->v7m.primask[env->v7m.secure] = val & 1; - break; - case 17: /* BASEPRI */ - if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { - goto bad_reg; - } - env->v7m.basepri[env->v7m.secure] = val & 0xff; - break; - case 18: /* BASEPRI_MAX */ - if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { - goto bad_reg; - } - val &= 0xff; - if (val != 0 && (val < env->v7m.basepri[env->v7m.secure] - || env->v7m.basepri[env->v7m.secure] == 0)) { - env->v7m.basepri[env->v7m.secure] = val; - } - break; - case 19: /* FAULTMASK */ - if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { - goto bad_reg; - } - env->v7m.faultmask[env->v7m.secure] = val & 1; - break; - case 20: /* CONTROL */ - /* - * Writing to the SPSEL bit only has an effect if we are in - * thread mode; other bits can be updated by any privileged code. - * write_v7m_control_spsel() deals with updating the SPSEL bit in - * env->v7m.control, so we only need update the others. - * For v7M, we must just ignore explicit writes to SPSEL in handler - * mode; for v8M the write is permitted but will have no effect. - * All these bits are writes-ignored from non-privileged code, - * except for SFPA. - */ - if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) || - !arm_v7m_is_handler_mode(env))) { - write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0); - } - if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) { - env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK; - env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK; - } - if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) { - /* - * SFPA is RAZ/WI from NS or if no FPU. - * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present. - * Both are stored in the S bank. - */ - if (env->v7m.secure) { - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; - env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK; - } - if (cur_el > 0 && - (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) || - extract32(env->v7m.nsacr, 10, 1))) { - env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK; - env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK; - } - } - break; - default: - bad_reg: - qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special" - " register %d\n", reg); - return; - } -} - -uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) -{ - /* Implement the TT instruction. op is bits [7:6] of the insn. */ - bool forceunpriv = op & 1; - bool alt = op & 2; - V8M_SAttributes sattrs = {}; - uint32_t tt_resp; - bool r, rw, nsr, nsrw, mrvalid; - int prot; - ARMMMUFaultInfo fi = {}; - MemTxAttrs attrs = {}; - hwaddr phys_addr; - ARMMMUIdx mmu_idx; - uint32_t mregion; - bool targetpriv; - bool targetsec = env->v7m.secure; - bool is_subpage; - - /* - * Work out what the security state and privilege level we're - * interested in is... 
- */ - if (alt) { - targetsec = !targetsec; - } - - if (forceunpriv) { - targetpriv = false; - } else { - targetpriv = arm_v7m_is_handler_mode(env) || - !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK); - } - - /* ...and then figure out which MMU index this is */ - mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv); - - /* - * We know that the MPU and SAU don't care about the access type - * for our purposes beyond that we don't want to claim to be - * an insn fetch, so we arbitrarily call this a read. - */ - - /* - * MPU region info only available for privileged or if - * inspecting the other MPU state. - */ - if (arm_current_el(env) != 0 || alt) { - /* We can ignore the return value as prot is always set */ - pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, - &phys_addr, &attrs, &prot, &is_subpage, - &fi, &mregion); - if (mregion == -1) { - mrvalid = false; - mregion = 0; - } else { - mrvalid = true; - } - r = prot & PAGE_READ; - rw = prot & PAGE_WRITE; - } else { - r = false; - rw = false; - mrvalid = false; - mregion = 0; - } - - if (env->v7m.secure) { - v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs); - nsr = sattrs.ns && r; - nsrw = sattrs.ns && rw; - } else { - sattrs.ns = true; - nsr = false; - nsrw = false; - } - - tt_resp = (sattrs.iregion << 24) | - (sattrs.irvalid << 23) | - ((!sattrs.ns) << 22) | - (nsrw << 21) | - (nsr << 20) | - (rw << 19) | - (r << 18) | - (sattrs.srvalid << 17) | - (mrvalid << 16) | - (sattrs.sregion << 8) | - mregion; - - return tt_resp; -} - -#endif /* !CONFIG_USER_ONLY */ - ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env, bool secstate, bool priv, bool negpri) { diff --git a/target/arm/tcg/sysemu/m_helper.c b/target/arm/tcg/sysemu/m_helper.c new file mode 100644 index 0000000000..77c9fd0b6e --- /dev/null +++ b/target/arm/tcg/sysemu/m_helper.c @@ -0,0 +1,2655 @@ +/* + * ARM v7m generic helpers. + * + * This code is licensed under the GNU GPL v2 or later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "internals.h" +#include "exec/helper-proto.h" +#include "qemu/main-loop.h" +#include "exec/exec-all.h" +#include "semihosting/common-semi.h" + +#include "tcg/m_helper.h" + +/* + * What kind of stack write are we doing? This affects how exceptions + * generated during the stacking are treated. 
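The TT helper above finishes by packing its results into a single response word. The field positions below are copied from the hunk; the struct is just a convenient carrier for this sketch:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct tt_fields {
        uint8_t iregion, sregion, mregion;
        bool irvalid, srvalid, mrvalid;
        bool ns, nsrw, nsr, rw, r;
    };

    /* Same bit packing as the tt_resp assignment above. */
    static uint32_t pack_tt_resp(const struct tt_fields *f)
    {
        return ((uint32_t)f->iregion << 24) |
               ((uint32_t)f->irvalid << 23) |
               ((uint32_t)!f->ns     << 22) |
               ((uint32_t)f->nsrw    << 21) |
               ((uint32_t)f->nsr     << 20) |
               ((uint32_t)f->rw      << 19) |
               ((uint32_t)f->r       << 18) |
               ((uint32_t)f->srvalid << 17) |
               ((uint32_t)f->mrvalid << 16) |
               ((uint32_t)f->sregion << 8)  |
               f->mregion;
    }

    int main(void)
    {
        struct tt_fields f = { .mregion = 3, .mrvalid = true,
                               .r = true, .rw = true, .ns = true };

        printf("TT response = 0x%08x\n", (unsigned)pack_tt_resp(&f));
        return 0;
    }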
+ */ +typedef enum StackingMode { + STACK_NORMAL, + STACK_IGNFAULTS, + STACK_LAZYFP, +} StackingMode; + +static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value, + ARMMMUIdx mmu_idx, StackingMode mode) +{ + CPUState *cs = CPU(cpu); + CPUARMState *env = &cpu->env; + MemTxAttrs attrs = {}; + MemTxResult txres; + target_ulong page_size; + hwaddr physaddr; + int prot; + ARMMMUFaultInfo fi = {}; + ARMCacheAttrs cacheattrs = {}; + bool secure = mmu_idx & ARM_MMU_IDX_M_S; + int exc; + bool exc_secure; + + if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr, + &attrs, &prot, &page_size, &fi, &cacheattrs)) { + /* MPU/SAU lookup failed */ + if (fi.type == ARMFault_QEMU_SFault) { + if (mode == STACK_LAZYFP) { + qemu_log_mask(CPU_LOG_INT, + "...SecureFault with SFSR.LSPERR " + "during lazy stacking\n"); + env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK; + } else { + qemu_log_mask(CPU_LOG_INT, + "...SecureFault with SFSR.AUVIOL " + "during stacking\n"); + env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK; + } + env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK; + env->v7m.sfar = addr; + exc = ARMV7M_EXCP_SECURE; + exc_secure = false; + } else { + if (mode == STACK_LAZYFP) { + qemu_log_mask(CPU_LOG_INT, + "...MemManageFault with CFSR.MLSPERR\n"); + env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK; + } else { + qemu_log_mask(CPU_LOG_INT, + "...MemManageFault with CFSR.MSTKERR\n"); + env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK; + } + exc = ARMV7M_EXCP_MEM; + exc_secure = secure; + } + goto pend_fault; + } + address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value, + attrs, &txres); + if (txres != MEMTX_OK) { + /* BusFault trying to write the data */ + if (mode == STACK_LAZYFP) { + qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n"); + env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK; + } else { + qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n"); + env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK; + } + exc = ARMV7M_EXCP_BUS; + exc_secure = false; + goto pend_fault; + } + return true; + +pend_fault: + /* + * By pending the exception at this point we are making + * the IMPDEF choice "overridden exceptions pended" (see the + * MergeExcInfo() pseudocode). The other choice would be to not + * pend them now and then make a choice about which to throw away + * later if we have two derived exceptions. + * The only case when we must not pend the exception but instead + * throw it away is if we are doing the push of the callee registers + * and we've already generated a derived exception (this is indicated + * by the caller passing STACK_IGNFAULTS). Even in this case we will + * still update the fault status registers. 
+ */ + switch (mode) { + case STACK_NORMAL: + armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure); + break; + case STACK_LAZYFP: + armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure); + break; + case STACK_IGNFAULTS: + break; + } + return false; +} + +static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr, + ARMMMUIdx mmu_idx) +{ + CPUState *cs = CPU(cpu); + CPUARMState *env = &cpu->env; + MemTxAttrs attrs = {}; + MemTxResult txres; + target_ulong page_size; + hwaddr physaddr; + int prot; + ARMMMUFaultInfo fi = {}; + ARMCacheAttrs cacheattrs = {}; + bool secure = mmu_idx & ARM_MMU_IDX_M_S; + int exc; + bool exc_secure; + uint32_t value; + + if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr, + &attrs, &prot, &page_size, &fi, &cacheattrs)) { + /* MPU/SAU lookup failed */ + if (fi.type == ARMFault_QEMU_SFault) { + qemu_log_mask(CPU_LOG_INT, + "...SecureFault with SFSR.AUVIOL during unstack\n"); + env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK; + env->v7m.sfar = addr; + exc = ARMV7M_EXCP_SECURE; + exc_secure = false; + } else { + qemu_log_mask(CPU_LOG_INT, + "...MemManageFault with CFSR.MUNSTKERR\n"); + env->v7m.cfsr[secure] |= R_V7M_CFSR_MUNSTKERR_MASK; + exc = ARMV7M_EXCP_MEM; + exc_secure = secure; + } + goto pend_fault; + } + + value = address_space_ldl(arm_addressspace(cs, attrs), physaddr, + attrs, &txres); + if (txres != MEMTX_OK) { + /* BusFault trying to read the data */ + qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n"); + env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_UNSTKERR_MASK; + exc = ARMV7M_EXCP_BUS; + exc_secure = false; + goto pend_fault; + } + + *dest = value; + return true; + +pend_fault: + /* + * By pending the exception at this point we are making + * the IMPDEF choice "overridden exceptions pended" (see the + * MergeExcInfo() pseudocode). The other choice would be to not + * pend them now and then make a choice about which to throw away + * later if we have two derived exceptions. + */ + armv7m_nvic_set_pending(env->nvic, exc, exc_secure); + return false; +} + +void HELPER(v7m_preserve_fp_state)(CPUARMState *env) +{ + /* + * Preserve FP state (because LSPACT was set and we are about + * to execute an FP instruction). This corresponds to the + * PreserveFPState() pseudocode. + * We may throw an exception if the stacking fails. 
+ */ + ARMCPU *cpu = env_archcpu(env); + bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK; + bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK); + bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK); + bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK; + uint32_t fpcar = env->v7m.fpcar[is_secure]; + bool stacked_ok = true; + bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK); + bool take_exception; + + /* Take the iothread lock as we are going to touch the NVIC */ + qemu_mutex_lock_iothread(); + + /* Check the background context had access to the FPU */ + if (!v7m_cpacr_pass(env, is_secure, is_priv)) { + armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure); + env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK; + stacked_ok = false; + } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) { + armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S); + env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK; + stacked_ok = false; + } + + if (!splimviol && stacked_ok) { + /* We only stack if the stack limit wasn't violated */ + int i; + ARMMMUIdx mmu_idx; + + mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri); + for (i = 0; i < (ts ? 32 : 16); i += 2) { + uint64_t dn = *aa32_vfp_dreg(env, i / 2); + uint32_t faddr = fpcar + 4 * i; + uint32_t slo = extract64(dn, 0, 32); + uint32_t shi = extract64(dn, 32, 32); + + if (i >= 16) { + faddr += 8; /* skip the slot for the FPSCR */ + } + stacked_ok = stacked_ok && + v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) && + v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP); + } + + stacked_ok = stacked_ok && + v7m_stack_write(cpu, fpcar + 0x40, + vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP); + } + + /* + * We definitely pended an exception, but it's possible that it + * might not be able to be taken now. If its priority permits us + * to take it now, then we must not update the LSPACT or FP regs, + * but instead jump out to take the exception immediately. + * If it's just pending and won't be taken until the current + * handler exits, then we do update LSPACT and the FP regs. + */ + take_exception = !stacked_ok && + armv7m_nvic_can_take_pending_exception(env->nvic); + + qemu_mutex_unlock_iothread(); + + if (take_exception) { + raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC()); + } + + env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK; + + if (ts) { + /* Clear s0 to s31 and the FPSCR */ + int i; + + for (i = 0; i < 32; i += 2) { + *aa32_vfp_dreg(env, i / 2) = 0; + } + vfp_set_fpscr(env, 0); + } + /* + * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them + * unchanged. + */ +} + +/* + * Write to v7M CONTROL.SPSEL bit for the specified security bank. + * This may change the current stack pointer between Main and Process + * stack pointers if it is done for the CONTROL register for the current + * security state. + */ +static void write_v7m_control_spsel_for_secstate(CPUARMState *env, + bool new_spsel, + bool secstate) +{ + bool old_is_psp = v7m_using_psp(env); + + env->v7m.control[secstate] = + deposit32(env->v7m.control[secstate], + R_V7M_CONTROL_SPSEL_SHIFT, + R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); + + if (secstate == env->v7m.secure) { + bool new_is_psp = v7m_using_psp(env); + uint32_t tmp; + + if (old_is_psp != new_is_psp) { + tmp = env->v7m.other_sp; + env->v7m.other_sp = env->regs[13]; + env->regs[13] = tmp; + } + } +} + +/* + * Write to v7M CONTROL.SPSEL bit. 
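The lazy-FP stacking loop above stores register pair (S[i], S[i+1]) at FPCAR + 4*i, skipping 8 bytes for pairs from s16 upwards so that the slot at FPCAR + 0x40 stays reserved for the FPSCR. The address arithmetic in isolation:

    #include <stdint.h>
    #include <stdio.h>

    /* Stacking address for the register pair starting at S[i], i = 0, 2, ..., 30. */
    static uint32_t fp_pair_addr(uint32_t fpcar, int i)
    {
        uint32_t faddr = fpcar + 4 * i;

        if (i >= 16) {
            faddr += 8;     /* skip the slot reserved for the FPSCR */
        }
        return faddr;
    }

    int main(void)
    {
        uint32_t fpcar = 0x20000f00;

        printf("s0/s1   at 0x%08x\n", (unsigned)fp_pair_addr(fpcar, 0));
        printf("s16/s17 at 0x%08x\n", (unsigned)fp_pair_addr(fpcar, 16));
        printf("FPSCR   at 0x%08x\n", (unsigned)(fpcar + 0x40));
        return 0;
    }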
This may change the current + * stack pointer between Main and Process stack pointers. + */ +static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel) +{ + write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure); +} + +void write_v7m_exception(CPUARMState *env, uint32_t new_exc) +{ + /* + * Write a new value to v7m.exception, thus transitioning into or out + * of Handler mode; this may result in a change of active stack pointer. + */ + bool new_is_psp, old_is_psp = v7m_using_psp(env); + uint32_t tmp; + + env->v7m.exception = new_exc; + + new_is_psp = v7m_using_psp(env); + + if (old_is_psp != new_is_psp) { + tmp = env->v7m.other_sp; + env->v7m.other_sp = env->regs[13]; + env->regs[13] = tmp; + } +} + +/* Switch M profile security state between NS and S */ +static void switch_v7m_security_state(CPUARMState *env, bool new_secstate) +{ + uint32_t new_ss_msp, new_ss_psp; + + if (env->v7m.secure == new_secstate) { + return; + } + + /* + * All the banked state is accessed by looking at env->v7m.secure + * except for the stack pointer; rearrange the SP appropriately. + */ + new_ss_msp = env->v7m.other_ss_msp; + new_ss_psp = env->v7m.other_ss_psp; + + if (v7m_using_psp(env)) { + env->v7m.other_ss_psp = env->regs[13]; + env->v7m.other_ss_msp = env->v7m.other_sp; + } else { + env->v7m.other_ss_msp = env->regs[13]; + env->v7m.other_ss_psp = env->v7m.other_sp; + } + + env->v7m.secure = new_secstate; + + if (v7m_using_psp(env)) { + env->regs[13] = new_ss_psp; + env->v7m.other_sp = new_ss_msp; + } else { + env->regs[13] = new_ss_msp; + env->v7m.other_sp = new_ss_psp; + } +} + +void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) +{ + /* + * Handle v7M BXNS: + * - if the return value is a magic value, do exception return (like BX) + * - otherwise bit 0 of the return value is the target security state + */ + uint32_t min_magic; + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + /* Covers FNC_RETURN and EXC_RETURN magic */ + min_magic = FNC_RETURN_MIN_MAGIC; + } else { + /* EXC_RETURN magic only */ + min_magic = EXC_RETURN_MIN_MAGIC; + } + + if (dest >= min_magic) { + /* + * This is an exception return magic value; put it where + * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT. + * Note that if we ever add gen_ss_advance() singlestep support to + * M profile this should count as an "instruction execution complete" + * event (compare gen_bx_excret_final_code()). + */ + env->regs[15] = dest & ~1; + env->thumb = dest & 1; + HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT); + /* notreached */ + } + + /* translate.c should have made BXNS UNDEF unless we're secure */ + assert(env->v7m.secure); + + if (!(dest & 1)) { + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; + } + switch_v7m_security_state(env, dest & 1); + env->thumb = 1; + env->regs[15] = dest & ~1; + arm_rebuild_hflags(env); +} + +void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) +{ + /* + * Handle v7M BLXNS: + * - bit 0 of the destination address is the target security state + */ + + /* At this point regs[15] is the address just after the BLXNS */ + uint32_t nextinst = env->regs[15] | 1; + uint32_t sp = env->regs[13] - 8; + uint32_t saved_psr; + + /* translate.c will have made BLXNS UNDEF unless we're secure */ + assert(env->v7m.secure); + + if (dest & 1) { + /* + * Target is Secure, so this is just a normal BLX, + * except that the low bit doesn't indicate Thumb/not. 
+ */ + env->regs[14] = nextinst; + env->thumb = 1; + env->regs[15] = dest & ~1; + return; + } + + /* Target is non-secure: first push a stack frame */ + if (!QEMU_IS_ALIGNED(sp, 8)) { + qemu_log_mask(LOG_GUEST_ERROR, + "BLXNS with misaligned SP is UNPREDICTABLE\n"); + } + + if (sp < v7m_sp_limit(env)) { + raise_exception(env, EXCP_STKOF, 0, 1); + } + + saved_psr = env->v7m.exception; + if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) { + saved_psr |= XPSR_SFPA; + } + + /* Note that these stores can throw exceptions on MPU faults */ + cpu_stl_data_ra(env, sp, nextinst, GETPC()); + cpu_stl_data_ra(env, sp + 4, saved_psr, GETPC()); + + env->regs[13] = sp; + env->regs[14] = 0xfeffffff; + if (arm_v7m_is_handler_mode(env)) { + /* + * Write a dummy value to IPSR, to avoid leaking the current secure + * exception number to non-secure code. This is guaranteed not + * to cause write_v7m_exception() to actually change stacks. + */ + write_v7m_exception(env, 1); + } + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; + switch_v7m_security_state(env, 0); + env->thumb = 1; + env->regs[15] = dest; + arm_rebuild_hflags(env); +} + +static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, + bool spsel) +{ + /* + * Return a pointer to the location where we currently store the + * stack pointer for the requested security state and thread mode. + * This pointer will become invalid if the CPU state is updated + * such that the stack pointers are switched around (eg changing + * the SPSEL control bit). + * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode(). + * Unlike that pseudocode, we require the caller to pass us in the + * SPSEL control bit value; this is because we also use this + * function in handling of pushing of the callee-saves registers + * part of the v8M stack frame (pseudocode PushCalleeStack()), + * and in the tailchain codepath the SPSEL bit comes from the exception + * return magic LR value from the previous exception. The pseudocode + * opencodes the stack-selection in PushCalleeStack(), but we prefer + * to make this utility function generic enough to do the job. + */ + bool want_psp = threadmode && spsel; + + if (secure == env->v7m.secure) { + if (want_psp == v7m_using_psp(env)) { + return &env->regs[13]; + } else { + return &env->v7m.other_sp; + } + } else { + if (want_psp) { + return &env->v7m.other_ss_psp; + } else { + return &env->v7m.other_ss_msp; + } + } +} + +static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure, + uint32_t *pvec) +{ + CPUState *cs = CPU(cpu); + CPUARMState *env = &cpu->env; + MemTxResult result; + uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4; + uint32_t vector_entry; + MemTxAttrs attrs = {}; + ARMMMUIdx mmu_idx; + bool exc_secure; + + mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true); + + /* + * We don't do a get_phys_addr() here because the rules for vector + * loads are special: they always use the default memory map, and + * the default memory map permits reads from all addresses. + * Since there's no easy way to pass through to pmsav8_mpu_lookup() + * that we want this special case which would always say "yes", + * we just do the SAU lookup here followed by a direct physical load. 
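get_v7m_sp_ptr() above picks one of four stack pointers based on the requested security state, thread/handler mode and SPSEL. A simplified standalone model of that selection; the sp_bank struct merely stands in for QEMU's regs[13], other_sp and other_ss_{msp,psp} fields:

    #include <stdbool.h>
    #include <stdint.h>

    struct sp_bank {
        uint32_t active_sp;        /* the live SP (env->regs[13]) */
        uint32_t other_sp;         /* inactive SP of the current security state */
        uint32_t other_ss_msp;     /* MSP of the opposite security state */
        uint32_t other_ss_psp;     /* PSP of the opposite security state */
    };

    static uint32_t *select_sp(struct sp_bank *b, bool want_secure,
                               bool threadmode, bool spsel,
                               bool cur_secure, bool cur_using_psp)
    {
        bool want_psp = threadmode && spsel;

        if (want_secure == cur_secure) {
            return (want_psp == cur_using_psp) ? &b->active_sp : &b->other_sp;
        }
        return want_psp ? &b->other_ss_psp : &b->other_ss_msp;
    }

    int main(void)
    {
        struct sp_bank b = { 0x20001000, 0x20002000, 0x20003000, 0x20004000 };

        /* Secure handler-mode MSP while already Secure on MSP: the live SP. */
        return *select_sp(&b, true, false, false, true, false) == 0x20001000 ? 0 : 1;
    }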
+ */ + attrs.secure = targets_secure; + attrs.user = false; + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + V8M_SAttributes sattrs = {}; + + v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs); + if (sattrs.ns) { + attrs.secure = false; + } else if (!targets_secure) { + /* + * NS access to S memory: the underlying exception which we escalate + * to HardFault is SecureFault, which always targets Secure. + */ + exc_secure = true; + goto load_fail; + } + } + + vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr, + attrs, &result); + if (result != MEMTX_OK) { + /* + * Underlying exception is BusFault: its target security state + * depends on BFHFNMINS. + */ + exc_secure = !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK); + goto load_fail; + } + *pvec = vector_entry; + return true; + +load_fail: + /* + * All vector table fetch fails are reported as HardFault, with + * HFSR.VECTTBL and .FORCED set. (FORCED is set because + * technically the underlying exception is a SecureFault or BusFault + * that is escalated to HardFault.) This is a terminal exception, + * so we will either take the HardFault immediately or else enter + * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()). + * The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are + * secure); otherwise it targets the same security state as the + * underlying exception. + * In v8.1M HardFaults from vector table fetch fails don't set FORCED. + */ + if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) { + exc_secure = true; + } + env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK; + if (!arm_feature(env, ARM_FEATURE_V8_1M)) { + env->v7m.hfsr |= R_V7M_HFSR_FORCED_MASK; + } + armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure); + return false; +} + +static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr) +{ + /* + * Return the integrity signature value for the callee-saves + * stack frame section. @lr is the exception return payload/LR value + * whose FType bit forms bit 0 of the signature if FP is present. + */ + uint32_t sig = 0xfefa125a; + + if (!cpu_isar_feature(aa32_vfp_simd, env_archcpu(env)) + || (lr & R_V7M_EXCRET_FTYPE_MASK)) { + sig |= 1; + } + return sig; +} + +static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain, + bool ignore_faults) +{ + /* + * For v8M, push the callee-saves register part of the stack frame. + * Compare the v8M pseudocode PushCalleeStack(). + * In the tailchaining case this may not be the current stack. + */ + CPUARMState *env = &cpu->env; + uint32_t *frame_sp_p; + uint32_t frameptr; + ARMMMUIdx mmu_idx; + bool stacked_ok; + uint32_t limit; + bool want_psp; + uint32_t sig; + StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL; + + if (dotailchain) { + bool mode = lr & R_V7M_EXCRET_MODE_MASK; + bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) || + !mode; + + mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv); + frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode, + lr & R_V7M_EXCRET_SPSEL_MASK); + want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK); + if (want_psp) { + limit = env->v7m.psplim[M_REG_S]; + } else { + limit = env->v7m.msplim[M_REG_S]; + } + } else { + mmu_idx = arm_mmu_idx(env); + frame_sp_p = &env->regs[13]; + limit = v7m_sp_limit(env); + } + + frameptr = *frame_sp_p - 0x28; + if (frameptr < limit) { + /* + * Stack limit failure: set SP to the limit value, and generate + * STKOF UsageFault. 
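A standalone version of v7m_integrity_sig() above: the value written at the bottom of the callee-saves stack area is 0xfefa125a, with bit 0 set when there is no FPU or the EXC_RETURN FType bit indicates no active FP context:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t integrity_sig(bool have_fp, bool excret_ftype)
    {
        uint32_t sig = 0xfefa125a;

        if (!have_fp || excret_ftype) {
            sig |= 1;       /* FType forms bit 0 of the signature */
        }
        return sig;
    }

    int main(void)
    {
        printf("no-FP signature:     0x%08x\n", (unsigned)integrity_sig(false, true));
        printf("FP-active signature: 0x%08x\n", (unsigned)integrity_sig(true, false));
        return 0;
    }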
Stack pushes below the limit must not be + * performed. It is IMPDEF whether pushes above the limit are + * performed; we choose not to. + */ + qemu_log_mask(CPU_LOG_INT, + "...STKOF during callee-saves register stacking\n"); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + *frame_sp_p = limit; + return true; + } + + /* + * Write as much of the stack frame as we can. A write failure may + * cause us to pend a derived exception. + */ + sig = v7m_integrity_sig(env, lr); + stacked_ok = + v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) && + v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode); + + /* Update SP regardless of whether any of the stack accesses failed. */ + *frame_sp_p = frameptr; + + return !stacked_ok; +} + +static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain, + bool ignore_stackfaults) +{ + /* + * Do the "take the exception" parts of exception entry, + * but not the pushing of state to the stack. This is + * similar to the pseudocode ExceptionTaken() function. + */ + CPUARMState *env = &cpu->env; + uint32_t addr; + bool targets_secure; + int exc; + bool push_failed = false; + + armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure); + qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n", + targets_secure ? "secure" : "nonsecure", exc); + + if (dotailchain) { + /* Sanitize LR FType and PREFIX bits */ + if (!cpu_isar_feature(aa32_vfp_simd, cpu)) { + lr |= R_V7M_EXCRET_FTYPE_MASK; + } + lr = deposit32(lr, 24, 8, 0xff); + } + + if (arm_feature(env, ARM_FEATURE_V8)) { + if (arm_feature(env, ARM_FEATURE_M_SECURITY) && + (lr & R_V7M_EXCRET_S_MASK)) { + /* + * The background code (the owner of the registers in the + * exception frame) is Secure. This means it may either already + * have or now needs to push callee-saves registers. + */ + if (targets_secure) { + if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) { + /* + * We took an exception from Secure to NonSecure + * (which means the callee-saved registers got stacked) + * and are now tailchaining to a Secure exception. + * Clear DCRS so eventual return from this Secure + * exception unstacks the callee-saved registers. + */ + lr &= ~R_V7M_EXCRET_DCRS_MASK; + } + } else { + /* + * We're going to a non-secure exception; push the + * callee-saves registers to the stack now, if they're + * not already saved. 
+ */ + if (lr & R_V7M_EXCRET_DCRS_MASK && + !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) { + push_failed = v7m_push_callee_stack(cpu, lr, dotailchain, + ignore_stackfaults); + } + lr |= R_V7M_EXCRET_DCRS_MASK; + } + } + + lr &= ~R_V7M_EXCRET_ES_MASK; + if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) { + lr |= R_V7M_EXCRET_ES_MASK; + } + lr &= ~R_V7M_EXCRET_SPSEL_MASK; + if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) { + lr |= R_V7M_EXCRET_SPSEL_MASK; + } + + /* + * Clear registers if necessary to prevent non-secure exception + * code being able to see register values from secure code. + * Where register values become architecturally UNKNOWN we leave + * them with their previous values. v8.1M is tighter than v8.0M + * here and always zeroes the caller-saved registers regardless + * of the security state the exception is targeting. + */ + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) { + /* + * Always clear the caller-saved registers (they have been + * pushed to the stack earlier in v7m_push_stack()). + * Clear callee-saved registers if the background code is + * Secure (in which case these regs were saved in + * v7m_push_callee_stack()). + */ + int i; + /* + * r4..r11 are callee-saves, zero only if background + * state was Secure (EXCRET.S == 1) and exception + * targets Non-secure state + */ + bool zero_callee_saves = !targets_secure && + (lr & R_V7M_EXCRET_S_MASK); + + for (i = 0; i < 13; i++) { + if (i < 4 || i > 11 || zero_callee_saves) { + env->regs[i] = 0; + } + } + /* Clear EAPSR */ + xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT); + } + } + } + + if (push_failed && !ignore_stackfaults) { + /* + * Derived exception on callee-saves register stacking: + * we might now want to take a different exception which + * targets a different security state, so try again from the top. + */ + qemu_log_mask(CPU_LOG_INT, + "...derived exception on callee-saves register stacking"); + v7m_exception_taken(cpu, lr, true, true); + return; + } + + if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) { + /* Vector load failed: derived exception */ + qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load"); + v7m_exception_taken(cpu, lr, true, true); + return; + } + + /* + * Now we've done everything that might cause a derived exception + * we can go ahead and activate whichever exception we're going to + * take (which might now be the derived exception). + */ + armv7m_nvic_acknowledge_irq(env->nvic); + + /* Switch to target security state -- must do this before writing SPSEL */ + switch_v7m_security_state(env, targets_secure); + write_v7m_control_spsel(env, 0); + arm_clear_exclusive(env); + /* Clear SFPA and FPCA (has no effect if no FPU) */ + env->v7m.control[M_REG_S] &= + ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK); + /* Clear IT bits */ + env->condexec_bits = 0; + env->regs[14] = lr; + env->regs[15] = addr & 0xfffffffe; + env->thumb = addr & 1; + arm_rebuild_hflags(env); +} + +static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr, + bool apply_splim) +{ + /* + * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR + * that we will need later in order to do lazy FP reg stacking. 
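The register-clearing rule in v7m_exception_taken() above always clears the caller-saved r0-r3 and r12 (their values have already been pushed to the stack), and clears r4-r11 only when the background code was Secure (EXCRET.S) and the exception targets Non-secure state, so Secure values cannot leak to the Non-secure handler. The per-register decision on its own (the surrounding v8.1M/targets-secure gating is omitted here):

    #include <stdbool.h>
    #include <stdint.h>

    static void clear_gprs_for_entry(uint32_t regs[13],
                                     bool targets_secure, bool background_secure)
    {
        bool zero_callee_saves = !targets_secure && background_secure;

        for (int i = 0; i < 13; i++) {
            if (i < 4 || i > 11 || zero_callee_saves) {
                regs[i] = 0;    /* r0-r3 and r12 always; r4-r11 conditionally */
            }
        }
    }

    int main(void)
    {
        uint32_t regs[13];

        for (int i = 0; i < 13; i++) {
            regs[i] = 0xdeadbeef;
        }
        clear_gprs_for_entry(regs, false, true);   /* Secure -> Non-secure */
        return regs[4] == 0 ? 0 : 1;               /* r4 was cleared too */
    }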
+ */ + bool is_secure = env->v7m.secure; + void *nvic = env->nvic; + /* + * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits + * are banked and we want to update the bit in the bank for the + * current security state; and in one case we want to specifically + * update the NS banked version of a bit even if we are secure. + */ + uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S]; + uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS]; + uint32_t *fpccr = &env->v7m.fpccr[is_secure]; + bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy; + + env->v7m.fpcar[is_secure] = frameptr & ~0x7; + + if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) { + bool splimviol; + uint32_t splim = v7m_sp_limit(env); + bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) && + (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK); + + splimviol = !ign && frameptr < splim; + *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol); + } + + *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1); + + *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure); + + *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0); + + *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD, + !arm_v7m_is_handler_mode(env)); + + hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false); + *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy); + + bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false); + *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy); + + mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure); + *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy); + + ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false); + *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy); + + monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false); + *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy); + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true); + *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy); + + sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false); + *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy); + } +} + +void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr) +{ + /* fptr is the value of Rn, the frame pointer we store the FP regs to */ + bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK; + bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK; + uintptr_t ra = GETPC(); + + assert(env->v7m.secure); + + if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) { + return; + } + + /* Check access to the coprocessor is permitted */ + if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) { + raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC()); + } + + if (lspact) { + /* LSPACT should not be active when there is active FP state */ + raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC()); + } + + if (fptr & 7) { + raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC()); + } + + /* + * Note that we do not use v7m_stack_write() here, because the + * accesses should not set the FSR bits for stacking errors if they + * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK + * or AccType_LAZYFP). Faults in cpu_stl_data_ra() will throw exceptions + * and longjmp out. + */ + if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) { + bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK; + int i; + + for (i = 0; i < (ts ? 
32 : 16); i += 2) { + uint64_t dn = *aa32_vfp_dreg(env, i / 2); + uint32_t faddr = fptr + 4 * i; + uint32_t slo = extract64(dn, 0, 32); + uint32_t shi = extract64(dn, 32, 32); + + if (i >= 16) { + faddr += 8; /* skip the slot for the FPSCR */ + } + cpu_stl_data_ra(env, faddr, slo, ra); + cpu_stl_data_ra(env, faddr + 4, shi, ra); + } + cpu_stl_data_ra(env, fptr + 0x40, vfp_get_fpscr(env), ra); + + /* + * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to + * leave them unchanged, matching our choice in v7m_preserve_fp_state. + */ + if (ts) { + for (i = 0; i < 32; i += 2) { + *aa32_vfp_dreg(env, i / 2) = 0; + } + vfp_set_fpscr(env, 0); + } + } else { + v7m_update_fpccr(env, fptr, false); + } + + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK; +} + +void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr) +{ + uintptr_t ra = GETPC(); + + /* fptr is the value of Rn, the frame pointer we load the FP regs from */ + assert(env->v7m.secure); + + if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) { + return; + } + + /* Check access to the coprocessor is permitted */ + if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) { + raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC()); + } + + if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) { + /* State in FP is still valid */ + env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK; + } else { + bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK; + int i; + uint32_t fpscr; + + if (fptr & 7) { + raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC()); + } + + for (i = 0; i < (ts ? 32 : 16); i += 2) { + uint32_t slo, shi; + uint64_t dn; + uint32_t faddr = fptr + 4 * i; + + if (i >= 16) { + faddr += 8; /* skip the slot for the FPSCR */ + } + + slo = cpu_ldl_data_ra(env, faddr, ra); + shi = cpu_ldl_data_ra(env, faddr + 4, ra); + + dn = (uint64_t) shi << 32 | slo; + *aa32_vfp_dreg(env, i / 2) = dn; + } + fpscr = cpu_ldl_data_ra(env, fptr + 0x40, ra); + vfp_set_fpscr(env, fpscr); + } + + env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK; +} + +static bool v7m_push_stack(ARMCPU *cpu) +{ + /* + * Do the "set up stack frame" part of exception entry, + * similar to pseudocode PushStack(). + * Return true if we generate a derived exception (and so + * should ignore further stack faults trying to process + * that derived exception.) + */ + bool stacked_ok = true, limitviol = false; + CPUARMState *env = &cpu->env; + uint32_t xpsr = xpsr_read(env); + uint32_t frameptr = env->regs[13]; + ARMMMUIdx mmu_idx = arm_mmu_idx(env); + uint32_t framesize; + bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1); + + if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) && + (env->v7m.secure || nsacr_cp10)) { + if (env->v7m.secure && + env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) { + framesize = 0xa8; + } else { + framesize = 0x68; + } + } else { + framesize = 0x20; + } + + /* Align stack pointer if the guest wants that */ + if ((frameptr & 4) && + (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) { + frameptr -= 4; + xpsr |= XPSR_SPREALIGN; + } + + xpsr &= ~XPSR_SFPA; + if (env->v7m.secure && + (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) { + xpsr |= XPSR_SFPA; + } + + frameptr -= framesize; + + if (arm_feature(env, ARM_FEATURE_V8)) { + uint32_t limit = v7m_sp_limit(env); + + if (frameptr < limit) { + /* + * Stack limit failure: set SP to the limit value, and generate + * STKOF UsageFault. Stack pushes below the limit must not be + * performed. 
It is IMPDEF whether pushes above the limit are + * performed; we choose not to. + */ + qemu_log_mask(CPU_LOG_INT, + "...STKOF during stacking\n"); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + env->regs[13] = limit; + /* + * We won't try to perform any further memory accesses but + * we must continue through the following code to check for + * permission faults during FPU state preservation, and we + * must update FPCCR if lazy stacking is enabled. + */ + limitviol = true; + stacked_ok = false; + } + } + + /* + * Write as much of the stack frame as we can. If we fail a stack + * write this will result in a derived exception being pended + * (which may be taken in preference to the one we started with + * if it has higher priority). + */ + stacked_ok = stacked_ok && + v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, frameptr + 4, env->regs[1], + mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, frameptr + 8, env->regs[2], + mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, frameptr + 12, env->regs[3], + mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, frameptr + 16, env->regs[12], + mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, frameptr + 20, env->regs[14], + mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, frameptr + 24, env->regs[15], + mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL); + + if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) { + /* FPU is active, try to save its registers */ + bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK; + bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK; + + if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) { + qemu_log_mask(CPU_LOG_INT, + "...SecureFault because LSPACT and FPCA both set\n"); + env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + } else if (!env->v7m.secure && !nsacr_cp10) { + qemu_log_mask(CPU_LOG_INT, + "...Secure UsageFault with CFSR.NOCP because " + "NSACR.CP10 prevents stacking FP regs\n"); + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S); + env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK; + } else { + if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) { + /* Lazy stacking disabled, save registers now */ + int i; + bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure, + arm_current_el(env) != 0); + + if (stacked_ok && !cpacr_pass) { + /* + * Take UsageFault if CPACR forbids access. The pseudocode + * here does a full CheckCPEnabled() but we know the NSACR + * check can never fail as we have already handled that. + */ + qemu_log_mask(CPU_LOG_INT, + "...UsageFault with CFSR.NOCP because " + "CPACR.CP10 prevents stacking FP regs\n"); + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK; + stacked_ok = false; + } + + for (i = 0; i < ((framesize == 0xa8) ? 
32 : 16); i += 2) { + uint64_t dn = *aa32_vfp_dreg(env, i / 2); + uint32_t faddr = frameptr + 0x20 + 4 * i; + uint32_t slo = extract64(dn, 0, 32); + uint32_t shi = extract64(dn, 32, 32); + + if (i >= 16) { + faddr += 8; /* skip the slot for the FPSCR */ + } + stacked_ok = stacked_ok && + v7m_stack_write(cpu, faddr, slo, + mmu_idx, STACK_NORMAL) && + v7m_stack_write(cpu, faddr + 4, shi, + mmu_idx, STACK_NORMAL); + } + stacked_ok = stacked_ok && + v7m_stack_write(cpu, frameptr + 0x60, + vfp_get_fpscr(env), mmu_idx, STACK_NORMAL); + if (cpacr_pass) { + for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) { + *aa32_vfp_dreg(env, i / 2) = 0; + } + vfp_set_fpscr(env, 0); + } + } else { + /* Lazy stacking enabled, save necessary info to stack later */ + v7m_update_fpccr(env, frameptr + 0x20, true); + } + } + } + + /* + * If we broke a stack limit then SP was already updated earlier; + * otherwise we update SP regardless of whether any of the stack + * accesses failed or we took some other kind of fault. + */ + if (!limitviol) { + env->regs[13] = frameptr; + } + + return !stacked_ok; +} + +static void do_v7m_exception_exit(ARMCPU *cpu) +{ + CPUARMState *env = &cpu->env; + uint32_t excret; + uint32_t xpsr, xpsr_mask; + bool ufault = false; + bool sfault = false; + bool return_to_sp_process; + bool return_to_handler; + bool rettobase = false; + bool exc_secure = false; + bool return_to_secure; + bool ftype; + bool restore_s16_s31 = false; + + /* + * If we're not in Handler mode then jumps to magic exception-exit + * addresses don't have magic behaviour. However for the v8M + * security extensions the magic secure-function-return has to + * work in thread mode too, so to avoid doing an extra check in + * the generated code we allow exception-exit magic to also cause the + * internal exception and bring us here in thread mode. Correct code + * will never try to do this (the following insn fetch will always + * fault) so we the overhead of having taken an unnecessary exception + * doesn't matter. + */ + if (!arm_v7m_is_handler_mode(env)) { + return; + } + + /* + * In the spec pseudocode ExceptionReturn() is called directly + * from BXWritePC() and gets the full target PC value including + * bit zero. In QEMU's implementation we treat it as a normal + * jump-to-register (which is then caught later on), and so split + * the target value up between env->regs[15] and env->thumb in + * gen_bx(). Reconstitute it. + */ + excret = env->regs[15]; + if (env->thumb) { + excret |= 1; + } + + qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32 + " previous exception %d\n", + excret, env->v7m.exception); + + if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) { + qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception " + "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n", + excret); + } + + ftype = excret & R_V7M_EXCRET_FTYPE_MASK; + + if (!ftype && !cpu_isar_feature(aa32_vfp_simd, cpu)) { + qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception " + "exit PC value 0x%" PRIx32 " is UNPREDICTABLE " + "if FPU not present\n", + excret); + ftype = true; + } + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + /* + * EXC_RETURN.ES validation check (R_SMFL). We must do this before + * we pick which FAULTMASK to clear. 
+ */ + if (!env->v7m.secure && + ((excret & R_V7M_EXCRET_ES_MASK) || + !(excret & R_V7M_EXCRET_DCRS_MASK))) { + sfault = 1; + /* For all other purposes, treat ES as 0 (R_HXSR) */ + excret &= ~R_V7M_EXCRET_ES_MASK; + } + exc_secure = excret & R_V7M_EXCRET_ES_MASK; + } + + if (env->v7m.exception != ARMV7M_EXCP_NMI) { + /* + * Auto-clear FAULTMASK on return from other than NMI. + * If the security extension is implemented then this only + * happens if the raw execution priority is >= 0; the + * value of the ES bit in the exception return value indicates + * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.) + */ + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) { + env->v7m.faultmask[exc_secure] = 0; + } + } else { + env->v7m.faultmask[M_REG_NS] = 0; + } + } + + switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception, + exc_secure)) { + case -1: + /* attempt to exit an exception that isn't active */ + ufault = true; + break; + case 0: + /* still an irq active now */ + break; + case 1: + /* + * We returned to base exception level, no nesting. + * (In the pseudocode this is written using "NestedActivation != 1" + * where we have 'rettobase == false'.) + */ + rettobase = true; + break; + default: + g_assert_not_reached(); + } + + return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK); + return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK; + return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && + (excret & R_V7M_EXCRET_S_MASK); + + if (arm_feature(env, ARM_FEATURE_V8)) { + if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) { + /* + * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP); + * we choose to take the UsageFault. + */ + if ((excret & R_V7M_EXCRET_S_MASK) || + (excret & R_V7M_EXCRET_ES_MASK) || + !(excret & R_V7M_EXCRET_DCRS_MASK)) { + ufault = true; + } + } + if (excret & R_V7M_EXCRET_RES0_MASK) { + ufault = true; + } + } else { + /* For v7M we only recognize certain combinations of the low bits */ + switch (excret & 0xf) { + case 1: /* Return to Handler */ + break; + case 13: /* Return to Thread using Process stack */ + case 9: /* Return to Thread using Main stack */ + /* + * We only need to check NONBASETHRDENA for v7M, because in + * v8M this bit does not exist (it is RES1). + */ + if (!rettobase && + !(env->v7m.ccr[env->v7m.secure] & + R_V7M_CCR_NONBASETHRDENA_MASK)) { + ufault = true; + } + break; + default: + ufault = true; + } + } + + /* + * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in + * Handler mode (and will be until we write the new XPSR.Interrupt + * field) this does not switch around the current stack pointer. + * We must do this before we do any kind of tailchaining, including + * for the derived exceptions on integrity check failures, or we will + * give the guest an incorrect EXCRET.SPSEL value on exception entry. + */ + write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure); + + /* + * Clear scratch FP values left in caller saved registers; this + * must happen before any kind of tail chaining. 
+ */ + if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) && + (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) { + if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) { + env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " + "stackframe: error during lazy state deactivation\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } else { + if (arm_feature(env, ARM_FEATURE_V8_1M)) { + /* v8.1M adds this NOCP check */ + bool nsacr_pass = exc_secure || + extract32(env->v7m.nsacr, 10, 1); + bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true); + if (!nsacr_pass) { + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true); + env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK; + qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " + "stackframe: NSACR prevents clearing FPU registers\n"); + v7m_exception_taken(cpu, excret, true, false); + } else if (!cpacr_pass) { + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + exc_secure); + env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK; + qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " + "stackframe: CPACR prevents clearing FPU registers\n"); + v7m_exception_taken(cpu, excret, true, false); + } + } + /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */ + int i; + + for (i = 0; i < 16; i += 2) { + *aa32_vfp_dreg(env, i / 2) = 0; + } + vfp_set_fpscr(env, 0); + } + } + + if (sfault) { + env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " + "stackframe: failed EXC_RETURN.ES validity check\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + + if (ufault) { + /* + * Bad exception return: instead of popping the exception + * stack, directly take a usage fault on the current stack. + */ + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " + "stackframe: failed exception return integrity check\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + + /* + * Tailchaining: if there is currently a pending exception that + * is high enough priority to preempt execution at the level we're + * about to return to, then just directly take that exception now, + * avoiding an unstack-and-then-stack. Note that now we have + * deactivated the previous exception by calling armv7m_nvic_complete_irq() + * our current execution priority is already the execution priority we are + * returning to -- none of the state we would unstack or set based on + * the EXCRET value affects it. + */ + if (armv7m_nvic_can_take_pending_exception(env->nvic)) { + qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + + switch_v7m_security_state(env, return_to_secure); + + { + /* + * The stack pointer we should be reading the exception frame from + * depends on bits in the magic exception return type value (and + * for v8M isn't necessarily the stack pointer we will eventually + * end up resuming execution with). Get a pointer to the location + * in the CPU state struct where the SP we need is currently being + * stored; we will use and modify it in place. 
+ * We use this limited C variable scope so we don't accidentally + * use 'frame_sp_p' after we do something that makes it invalid. + */ + uint32_t *frame_sp_p = get_v7m_sp_ptr(env, + return_to_secure, + !return_to_handler, + return_to_sp_process); + uint32_t frameptr = *frame_sp_p; + bool pop_ok = true; + ARMMMUIdx mmu_idx; + bool return_to_priv = return_to_handler || + !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK); + + mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure, + return_to_priv); + + if (!QEMU_IS_ALIGNED(frameptr, 8) && + arm_feature(env, ARM_FEATURE_V8)) { + qemu_log_mask(LOG_GUEST_ERROR, + "M profile exception return with non-8-aligned SP " + "for destination state is UNPREDICTABLE\n"); + } + + /* Do we need to pop callee-saved registers? */ + if (return_to_secure && + ((excret & R_V7M_EXCRET_ES_MASK) == 0 || + (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) { + uint32_t actual_sig; + + pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx); + + if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) { + /* Take a SecureFault on the current stack */ + env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " + "stackframe: failed exception return integrity " + "signature check\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + + pop_ok = pop_ok && + v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) && + v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) && + v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) && + v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) && + v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) && + v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) && + v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) && + v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx); + + frameptr += 0x28; + } + + /* Pop registers */ + pop_ok = pop_ok && + v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) && + v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) && + v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) && + v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) && + v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) && + v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) && + v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) && + v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx); + + if (!pop_ok) { + /* + * v7m_stack_read() pended a fault, so take it (as a tail + * chained exception on the same stack frame) + */ + qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + + /* + * Returning from an exception with a PC with bit 0 set is defined + * behaviour on v8M (bit 0 is ignored), but for v7M it was specified + * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore + * the lsbit, and there are several RTOSes out there which incorrectly + * assume the r15 in the stack frame should be a Thumb-style "lsbit + * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but + * complain about the badly behaved guest. 
+ */ + if (env->regs[15] & 1) { + env->regs[15] &= ~1U; + if (!arm_feature(env, ARM_FEATURE_V8)) { + qemu_log_mask(LOG_GUEST_ERROR, + "M profile return from interrupt with misaligned " + "PC is UNPREDICTABLE on v7M\n"); + } + } + + if (arm_feature(env, ARM_FEATURE_V8)) { + /* + * For v8M we have to check whether the xPSR exception field + * matches the EXCRET value for return to handler/thread + * before we commit to changing the SP and xPSR. + */ + bool will_be_handler = (xpsr & XPSR_EXCP) != 0; + if (return_to_handler != will_be_handler) { + /* + * Take an INVPC UsageFault on the current stack. + * By this point we will have switched to the security state + * for the background state, so this UsageFault will target + * that state. + */ + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; + qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " + "stackframe: failed exception return integrity " + "check\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + } + + if (!ftype) { + /* FP present and we need to handle it */ + if (!return_to_secure && + (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) { + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; + qemu_log_mask(CPU_LOG_INT, + "...taking SecureFault on existing stackframe: " + "Secure LSPACT set but exception return is " + "not to secure state\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + + restore_s16_s31 = return_to_secure && + (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK); + + if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) { + /* State in FPU is still valid, just clear LSPACT */ + env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK; + } else { + int i; + uint32_t fpscr; + bool cpacr_pass, nsacr_pass; + + cpacr_pass = v7m_cpacr_pass(env, return_to_secure, + return_to_priv); + nsacr_pass = return_to_secure || + extract32(env->v7m.nsacr, 10, 1); + + if (!cpacr_pass) { + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + return_to_secure); + env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK; + qemu_log_mask(CPU_LOG_INT, + "...taking UsageFault on existing " + "stackframe: CPACR.CP10 prevents unstacking " + "FP regs\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } else if (!nsacr_pass) { + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true); + env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK; + qemu_log_mask(CPU_LOG_INT, + "...taking Secure UsageFault on existing " + "stackframe: NSACR.CP10 prevents unstacking " + "FP regs\n"); + v7m_exception_taken(cpu, excret, true, false); + return; + } + + for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) { + uint32_t slo, shi; + uint64_t dn; + uint32_t faddr = frameptr + 0x20 + 4 * i; + + if (i >= 16) { + faddr += 8; /* Skip the slot for the FPSCR */ + } + + pop_ok = pop_ok && + v7m_stack_read(cpu, &slo, faddr, mmu_idx) && + v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx); + + if (!pop_ok) { + break; + } + + dn = (uint64_t)shi << 32 | slo; + *aa32_vfp_dreg(env, i / 2) = dn; + } + pop_ok = pop_ok && + v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx); + if (pop_ok) { + vfp_set_fpscr(env, fpscr); + } + if (!pop_ok) { + /* + * These regs are 0 if security extension present; + * otherwise merely UNKNOWN. We zero always. + */ + for (i = 0; i < (restore_s16_s31 ? 
32 : 16); i += 2) { + *aa32_vfp_dreg(env, i / 2) = 0; + } + vfp_set_fpscr(env, 0); + } + } + } + env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S], + V7M_CONTROL, FPCA, !ftype); + + /* Commit to consuming the stack frame */ + frameptr += 0x20; + if (!ftype) { + frameptr += 0x48; + if (restore_s16_s31) { + frameptr += 0x40; + } + } + /* + * Undo stack alignment (the SPREALIGN bit indicates that the original + * pre-exception SP was not 8-aligned and we added a padding word to + * align it, so we undo this by ORing in the bit that increases it + * from the current 8-aligned value to the 8-unaligned value. (Adding 4 + * would work too but a logical OR is how the pseudocode specifies it.) + */ + if (xpsr & XPSR_SPREALIGN) { + frameptr |= 4; + } + *frame_sp_p = frameptr; + } + + xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA); + if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) { + xpsr_mask &= ~XPSR_GE; + } + /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */ + xpsr_write(env, xpsr, xpsr_mask); + + if (env->v7m.secure) { + bool sfpa = xpsr & XPSR_SFPA; + + env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S], + V7M_CONTROL, SFPA, sfpa); + } + + /* + * The restored xPSR exception field will be zero if we're + * resuming in Thread mode. If that doesn't match what the + * exception return excret specified then this is a UsageFault. + * v7M requires we make this check here; v8M did it earlier. + */ + if (return_to_handler != arm_v7m_is_handler_mode(env)) { + /* + * Take an INVPC UsageFault by pushing the stack again; + * we know we're v7M so this is never a Secure UsageFault. + */ + bool ignore_stackfaults; + + assert(!arm_feature(env, ARM_FEATURE_V8)); + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; + ignore_stackfaults = v7m_push_stack(cpu); + qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: " + "failed exception return integrity check\n"); + v7m_exception_taken(cpu, excret, false, ignore_stackfaults); + return; + } + + /* Otherwise, we have a successful exception exit. */ + arm_clear_exclusive(env); + arm_rebuild_hflags(env); + qemu_log_mask(CPU_LOG_INT, "...successful exception return\n"); +} + +static bool do_v7m_function_return(ARMCPU *cpu) +{ + /* + * v8M security extensions magic function return. + * We may either: + * (1) throw an exception (longjump) + * (2) return true if we successfully handled the function return + * (3) return false if we failed a consistency check and have + * pended a UsageFault that needs to be taken now + * + * At this point the magic return value is split between env->regs[15] + * and env->thumb. We don't bother to reconstitute it because we don't + * need it (all values are handled the same way). + */ + CPUARMState *env = &cpu->env; + uint32_t newpc, newpsr, newpsr_exc; + + qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n"); + + { + bool threadmode, spsel; + TCGMemOpIdx oi; + ARMMMUIdx mmu_idx; + uint32_t *frame_sp_p; + uint32_t frameptr; + + /* Pull the return address and IPSR from the Secure stack */ + threadmode = !arm_v7m_is_handler_mode(env); + spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK; + + frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel); + frameptr = *frame_sp_p; + + /* + * These loads may throw an exception (for MPU faults). We want to + * do them as secure, so work out what MMU index that is. 
+ */ + mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); + oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx)); + newpc = helper_le_ldul_mmu(env, frameptr, oi, 0); + newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0); + + /* Consistency checks on new IPSR */ + newpsr_exc = newpsr & XPSR_EXCP; + if (!((env->v7m.exception == 0 && newpsr_exc == 0) || + (env->v7m.exception == 1 && newpsr_exc != 0))) { + /* Pend the fault and tell our caller to take it */ + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, + "...taking INVPC UsageFault: " + "IPSR consistency check failed\n"); + return false; + } + + *frame_sp_p = frameptr + 8; + } + + /* This invalidates frame_sp_p */ + switch_v7m_security_state(env, true); + env->v7m.exception = newpsr_exc; + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; + if (newpsr & XPSR_SFPA) { + env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK; + } + xpsr_write(env, 0, XPSR_IT); + env->thumb = newpc & 1; + env->regs[15] = newpc & ~1; + arm_rebuild_hflags(env); + + qemu_log_mask(CPU_LOG_INT, "...function return successful\n"); + return true; +} + +static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, + uint32_t addr, uint16_t *insn) +{ + /* + * Load a 16-bit portion of a v7M instruction, returning true on success, + * or false on failure (in which case we will have pended the appropriate + * exception). + * We need to do the instruction fetch's MPU and SAU checks + * like this because there is no MMU index that would allow + * doing the load with a single function call. Instead we must + * first check that the security attributes permit the load + * and that they don't mismatch on the two halves of the instruction, + * and then we do the load as a secure load (ie using the security + * attributes of the address, not the CPU, as architecturally required). + */ + CPUState *cs = CPU(cpu); + CPUARMState *env = &cpu->env; + V8M_SAttributes sattrs = {}; + MemTxAttrs attrs = {}; + ARMMMUFaultInfo fi = {}; + ARMCacheAttrs cacheattrs = {}; + MemTxResult txres; + target_ulong page_size; + hwaddr physaddr; + int prot; + + v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs); + if (!sattrs.nsc || sattrs.ns) { + /* + * This must be the second half of the insn, and it straddles a + * region boundary with the second half not being S&NSC. 
+ */ + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; + } + if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx, &physaddr, + &attrs, &prot, &page_size, &fi, &cacheattrs)) { + /* the MPU lookup failed */ + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n"); + return false; + } + *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr, + attrs, &txres); + if (txres != MEMTX_OK) { + env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); + qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n"); + return false; + } + return true; +} + +static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx, + uint32_t addr, uint32_t *spdata) +{ + /* + * Read a word of data from the stack for the SG instruction, + * writing the value into *spdata. If the load succeeds, return + * true; otherwise pend an appropriate exception and return false. + * (We can't use data load helpers here that throw an exception + * because of the context we're called in, which is halfway through + * arm_v7m_cpu_do_interrupt().) + */ + CPUState *cs = CPU(cpu); + CPUARMState *env = &cpu->env; + MemTxAttrs attrs = {}; + MemTxResult txres; + target_ulong page_size; + hwaddr physaddr; + int prot; + ARMMMUFaultInfo fi = {}; + ARMCacheAttrs cacheattrs = {}; + uint32_t value; + + if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr, + &attrs, &prot, &page_size, &fi, &cacheattrs)) { + /* MPU/SAU lookup failed */ + if (fi.type == ARMFault_QEMU_SFault) { + qemu_log_mask(CPU_LOG_INT, + "...SecureFault during stack word read\n"); + env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK; + env->v7m.sfar = addr; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + } else { + qemu_log_mask(CPU_LOG_INT, + "...MemManageFault during stack word read\n"); + env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK | + R_V7M_CFSR_MMARVALID_MASK; + env->v7m.mmfar[M_REG_S] = addr; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false); + } + return false; + } + value = address_space_ldl(arm_addressspace(cs, attrs), physaddr, + attrs, &txres); + if (txres != MEMTX_OK) { + /* BusFault trying to read the data */ + qemu_log_mask(CPU_LOG_INT, + "...BusFault during stack word read\n"); + env->v7m.cfsr[M_REG_NS] |= + (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK); + env->v7m.bfar = addr; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); + return false; + } + + *spdata = value; + return true; +} + +static bool v7m_handle_execute_nsc(ARMCPU *cpu) +{ + /* + * Check whether this attempt to execute code in a Secure & NS-Callable + * memory region is for an SG instruction; if so, then emulate the + * effect of the SG instruction and return true. Otherwise pend + * the correct kind of exception and return false. + */ + CPUARMState *env = &cpu->env; + ARMMMUIdx mmu_idx; + uint16_t insn; + + /* + * We should never get here unless get_phys_addr_pmsav8() caused + * an exception for NS executing in S&NSC memory. 
+ */ + assert(!env->v7m.secure); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + + /* We want to do the MPU lookup as secure; work out what mmu_idx that is */ + mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); + + if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) { + return false; + } + + if (!env->thumb) { + goto gen_invep; + } + + if (insn != 0xe97f) { + /* + * Not an SG instruction first half (we choose the IMPDEF + * early-SG-check option). + */ + goto gen_invep; + } + + if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) { + return false; + } + + if (insn != 0xe97f) { + /* + * Not an SG instruction second half (yes, both halves of the SG + * insn have the same hex value) + */ + goto gen_invep; + } + + /* + * OK, we have confirmed that we really have an SG instruction. + * We know we're NS in S memory so don't need to repeat those checks. + */ + qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32 + ", executing it\n", env->regs[15]); + + if (cpu_isar_feature(aa32_m_sec_state, cpu) && + !arm_v7m_is_handler_mode(env)) { + /* + * v8.1M exception stack frame integrity check. Note that we + * must perform the memory access even if CCR_S.TRD is zero + * and we aren't going to check what the data loaded is. + */ + uint32_t spdata, sp; + + /* + * We know we are currently NS, so the S stack pointers must be + * in other_ss_{psp,msp}, not in regs[13]/other_sp. + */ + sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp; + if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) { + /* Stack access failed and an exception has been pended */ + return false; + } + + if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) { + if (((spdata & ~1) == 0xfefa125a) || + !(env->v7m.control[M_REG_S] & 1)) { + goto gen_invep; + } + } + } + + env->regs[14] &= ~1; + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; + switch_v7m_security_state(env, true); + xpsr_write(env, 0, XPSR_IT); + env->regs[15] += 4; + arm_rebuild_hflags(env); + return true; + +gen_invep: + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; +} + +void arm_v7m_cpu_do_interrupt(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + uint32_t lr; + bool ignore_stackfaults; + + arm_log_exception(cs->exception_index); + + /* + * For exceptions we just mark as pending on the NVIC, and let that + * handle it. + */ + switch (cs->exception_index) { + case EXCP_UDEF: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK; + break; + case EXCP_NOCP: + { + /* + * NOCP might be directed to something other than the current + * security state if this fault is because of NSACR; we indicate + * the target security state using exception.target_el. 
+ */ + int target_secstate; + + if (env->exception.target_el == 3) { + target_secstate = M_REG_S; + } else { + target_secstate = env->v7m.secure; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate); + env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK; + break; + } + case EXCP_INVSTATE: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK; + break; + case EXCP_STKOF: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK; + break; + case EXCP_LSERR: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK; + break; + case EXCP_UNALIGNED: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK; + break; + case EXCP_SWI: + /* The PC already points to the next instruction. */ + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure); + break; + case EXCP_PREFETCH_ABORT: + case EXCP_DATA_ABORT: + /* + * Note that for M profile we don't have a guest facing FSR, but + * the env->exception.fsr will be populated by the code that + * raises the fault, in the A profile short-descriptor format. + */ + switch (env->exception.fsr & 0xf) { + case M_FAKE_FSR_NSC_EXEC: + /* + * Exception generated when we try to execute code at an address + * which is marked as Secure & Non-Secure Callable and the CPU + * is in the Non-Secure state. The only instruction which can + * be executed like this is SG (and that only if both halves of + * the SG instruction have the same security attributes.) + * Everything else must generate an INVEP SecureFault, so we + * emulate the SG instruction here. + */ + if (v7m_handle_execute_nsc(cpu)) { + return; + } + break; + case M_FAKE_FSR_SFAULT: + /* + * Various flavours of SecureFault for attempts to execute or + * access data in the wrong security state. + */ + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + if (env->v7m.secure) { + env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVTRAN\n"); + } else { + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + } + break; + case EXCP_DATA_ABORT: + /* This must be an NS access to S memory */ + env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.AUVIOL\n"); + break; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + break; + case 0x8: /* External Abort */ + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; + qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n"); + break; + case EXCP_DATA_ABORT: + env->v7m.cfsr[M_REG_NS] |= + (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK); + env->v7m.bfar = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, + "...with CFSR.PRECISERR and BFAR 0x%x\n", + env->v7m.bfar); + break; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); + break; + default: + /* + * All other FSR values are either MPU faults or "can't happen + * for M profile" cases. 
+ */ + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK; + qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n"); + break; + case EXCP_DATA_ABORT: + env->v7m.cfsr[env->v7m.secure] |= + (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK); + env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, + "...with CFSR.DACCVIOL and MMFAR 0x%x\n", + env->v7m.mmfar[env->v7m.secure]); + break; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, + env->v7m.secure); + break; + } + break; + case EXCP_SEMIHOST: + qemu_log_mask(CPU_LOG_INT, + "...handling as semihosting call 0x%x\n", + env->regs[0]); + env->regs[0] = do_common_semihosting(cs); + env->regs[15] += env->thumb ? 2 : 4; + return; + case EXCP_BKPT: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false); + break; + case EXCP_IRQ: + break; + case EXCP_EXCEPTION_EXIT: + if (env->regs[15] < EXC_RETURN_MIN_MAGIC) { + /* Must be v8M security extension function return */ + assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + if (do_v7m_function_return(cpu)) { + return; + } + } else { + do_v7m_exception_exit(cpu); + return; + } + break; + case EXCP_LAZYFP: + /* + * We already pended the specific exception in the NVIC in the + * v7m_preserve_fp_state() helper function. + */ + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + return; /* Never happens. Keep compiler happy. */ + } + + if (arm_feature(env, ARM_FEATURE_V8)) { + lr = R_V7M_EXCRET_RES1_MASK | + R_V7M_EXCRET_DCRS_MASK; + /* + * The S bit indicates whether we should return to Secure + * or NonSecure (ie our current state). + * The ES bit indicates whether we're taking this exception + * to Secure or NonSecure (ie our target state). We set it + * later, in v7m_exception_taken(). + * The SPSEL bit is also set in v7m_exception_taken() for v8M. + * This corresponds to the ARM ARM pseudocode for v8M setting + * some LR bits in PushStack() and some in ExceptionTaken(); + * the distinction matters for the tailchain cases where we + * can take an exception without pushing the stack. + */ + if (env->v7m.secure) { + lr |= R_V7M_EXCRET_S_MASK; + } + } else { + lr = R_V7M_EXCRET_RES1_MASK | + R_V7M_EXCRET_S_MASK | + R_V7M_EXCRET_DCRS_MASK | + R_V7M_EXCRET_ES_MASK; + if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) { + lr |= R_V7M_EXCRET_SPSEL_MASK; + } + } + if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) { + lr |= R_V7M_EXCRET_FTYPE_MASK; + } + if (!arm_v7m_is_handler_mode(env)) { + lr |= R_V7M_EXCRET_MODE_MASK; + } + + ignore_stackfaults = v7m_push_stack(cpu); + v7m_exception_taken(cpu, lr, false, ignore_stackfaults); +} + +uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg) +{ + unsigned el = arm_current_el(env); + + /* First handle registers which unprivileged can read */ + switch (reg) { + case 0 ... 7: /* xPSR sub-fields */ + return v7m_mrs_xpsr(env, reg, el); + case 20: /* CONTROL */ + return v7m_mrs_control(env, env->v7m.secure); + case 0x94: /* CONTROL_NS */ + /* + * We have to handle this here because unprivileged Secure code + * can read the NS CONTROL register. 
+ */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.control[M_REG_NS] | + (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK); + } + + if (el == 0) { + return 0; /* unprivileged reads others as zero */ + } + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + switch (reg) { + case 0x88: /* MSP_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.other_ss_msp; + case 0x89: /* PSP_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.other_ss_psp; + case 0x8a: /* MSPLIM_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.msplim[M_REG_NS]; + case 0x8b: /* PSPLIM_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.psplim[M_REG_NS]; + case 0x90: /* PRIMASK_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.primask[M_REG_NS]; + case 0x91: /* BASEPRI_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.basepri[M_REG_NS]; + case 0x93: /* FAULTMASK_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.faultmask[M_REG_NS]; + case 0x98: /* SP_NS */ + { + /* + * This gives the non-secure SP selected based on whether we're + * currently in handler mode or not, using the NS CONTROL.SPSEL. + */ + bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK; + + if (!env->v7m.secure) { + return 0; + } + if (!arm_v7m_is_handler_mode(env) && spsel) { + return env->v7m.other_ss_psp; + } else { + return env->v7m.other_ss_msp; + } + } + default: + break; + } + } + + switch (reg) { + case 8: /* MSP */ + return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13]; + case 9: /* PSP */ + return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp; + case 10: /* MSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + return env->v7m.msplim[env->v7m.secure]; + case 11: /* PSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + return env->v7m.psplim[env->v7m.secure]; + case 16: /* PRIMASK */ + return env->v7m.primask[env->v7m.secure]; + case 17: /* BASEPRI */ + case 18: /* BASEPRI_MAX */ + return env->v7m.basepri[env->v7m.secure]; + case 19: /* FAULTMASK */ + return env->v7m.faultmask[env->v7m.secure]; + default: + bad_reg: + qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special" + " register %d\n", reg); + return 0; + } +} + +void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val) +{ + /* + * We're passed bits [11..0] of the instruction; extract + * SYSm and the mask bits. + * Invalid combinations of SYSm and mask are UNPREDICTABLE; + * we choose to treat them as if the mask bits were valid. + * NB that the pseudocode 'mask' variable is bits [11..10], + * whereas ours is [11..8]. 
+ */ + uint32_t mask = extract32(maskreg, 8, 4); + uint32_t reg = extract32(maskreg, 0, 8); + int cur_el = arm_current_el(env); + + if (cur_el == 0 && reg > 7 && reg != 20) { + /* + * only xPSR sub-fields and CONTROL.SFPA may be written by + * unprivileged code + */ + return; + } + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + switch (reg) { + case 0x88: /* MSP_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.other_ss_msp = val; + return; + case 0x89: /* PSP_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.other_ss_psp = val; + return; + case 0x8a: /* MSPLIM_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.msplim[M_REG_NS] = val & ~7; + return; + case 0x8b: /* PSPLIM_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.psplim[M_REG_NS] = val & ~7; + return; + case 0x90: /* PRIMASK_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.primask[M_REG_NS] = val & 1; + return; + case 0x91: /* BASEPRI_NS */ + if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) { + return; + } + env->v7m.basepri[M_REG_NS] = val & 0xff; + return; + case 0x93: /* FAULTMASK_NS */ + if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) { + return; + } + env->v7m.faultmask[M_REG_NS] = val & 1; + return; + case 0x94: /* CONTROL_NS */ + if (!env->v7m.secure) { + return; + } + write_v7m_control_spsel_for_secstate(env, + val & R_V7M_CONTROL_SPSEL_MASK, + M_REG_NS); + if (arm_feature(env, ARM_FEATURE_M_MAIN)) { + env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK; + env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK; + } + /* + * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0, + * RES0 if the FPU is not present, and is stored in the S bank + */ + if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env)) && + extract32(env->v7m.nsacr, 10, 1)) { + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK; + env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK; + } + return; + case 0x98: /* SP_NS */ + { + /* + * This gives the non-secure SP selected based on whether we're + * currently in handler mode or not, using the NS CONTROL.SPSEL. + */ + bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK; + bool is_psp = !arm_v7m_is_handler_mode(env) && spsel; + uint32_t limit; + + if (!env->v7m.secure) { + return; + } + + limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false]; + + if (val < limit) { + CPUState *cs = env_cpu(env); + + cpu_restore_state(cs, GETPC(), true); + raise_exception(env, EXCP_STKOF, 0, 1); + } + + if (is_psp) { + env->v7m.other_ss_psp = val; + } else { + env->v7m.other_ss_msp = val; + } + return; + } + default: + break; + } + } + + switch (reg) { + case 0 ... 
7: /* xPSR sub-fields */ + v7m_msr_xpsr(env, mask, reg, val); + break; + case 8: /* MSP */ + if (v7m_using_psp(env)) { + env->v7m.other_sp = val; + } else { + env->regs[13] = val; + } + break; + case 9: /* PSP */ + if (v7m_using_psp(env)) { + env->regs[13] = val; + } else { + env->v7m.other_sp = val; + } + break; + case 10: /* MSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + env->v7m.msplim[env->v7m.secure] = val & ~7; + break; + case 11: /* PSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + env->v7m.psplim[env->v7m.secure] = val & ~7; + break; + case 16: /* PRIMASK */ + env->v7m.primask[env->v7m.secure] = val & 1; + break; + case 17: /* BASEPRI */ + if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { + goto bad_reg; + } + env->v7m.basepri[env->v7m.secure] = val & 0xff; + break; + case 18: /* BASEPRI_MAX */ + if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { + goto bad_reg; + } + val &= 0xff; + if (val != 0 && (val < env->v7m.basepri[env->v7m.secure] + || env->v7m.basepri[env->v7m.secure] == 0)) { + env->v7m.basepri[env->v7m.secure] = val; + } + break; + case 19: /* FAULTMASK */ + if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { + goto bad_reg; + } + env->v7m.faultmask[env->v7m.secure] = val & 1; + break; + case 20: /* CONTROL */ + /* + * Writing to the SPSEL bit only has an effect if we are in + * thread mode; other bits can be updated by any privileged code. + * write_v7m_control_spsel() deals with updating the SPSEL bit in + * env->v7m.control, so we only need update the others. + * For v7M, we must just ignore explicit writes to SPSEL in handler + * mode; for v8M the write is permitted but will have no effect. + * All these bits are writes-ignored from non-privileged code, + * except for SFPA. + */ + if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) || + !arm_v7m_is_handler_mode(env))) { + write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0); + } + if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) { + env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK; + env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK; + } + if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) { + /* + * SFPA is RAZ/WI from NS or if no FPU. + * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present. + * Both are stored in the S bank. + */ + if (env->v7m.secure) { + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; + env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK; + } + if (cur_el > 0 && + (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) || + extract32(env->v7m.nsacr, 10, 1))) { + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK; + env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK; + } + } + break; + default: + bad_reg: + qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special" + " register %d\n", reg); + return; + } +} + +uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) +{ + /* Implement the TT instruction. op is bits [7:6] of the insn. */ + bool forceunpriv = op & 1; + bool alt = op & 2; + V8M_SAttributes sattrs = {}; + uint32_t tt_resp; + bool r, rw, nsr, nsrw, mrvalid; + int prot; + ARMMMUFaultInfo fi = {}; + MemTxAttrs attrs = {}; + hwaddr phys_addr; + ARMMMUIdx mmu_idx; + uint32_t mregion; + bool targetpriv; + bool targetsec = env->v7m.secure; + bool is_subpage; + + /* + * Work out what the security state and privilege level we're + * interested in is... 
+ */ + if (alt) { + targetsec = !targetsec; + } + + if (forceunpriv) { + targetpriv = false; + } else { + targetpriv = arm_v7m_is_handler_mode(env) || + !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK); + } + + /* ...and then figure out which MMU index this is */ + mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv); + + /* + * We know that the MPU and SAU don't care about the access type + * for our purposes beyond that we don't want to claim to be + * an insn fetch, so we arbitrarily call this a read. + */ + + /* + * MPU region info only available for privileged or if + * inspecting the other MPU state. + */ + if (arm_current_el(env) != 0 || alt) { + /* We can ignore the return value as prot is always set */ + pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, + &phys_addr, &attrs, &prot, &is_subpage, + &fi, &mregion); + if (mregion == -1) { + mrvalid = false; + mregion = 0; + } else { + mrvalid = true; + } + r = prot & PAGE_READ; + rw = prot & PAGE_WRITE; + } else { + r = false; + rw = false; + mrvalid = false; + mregion = 0; + } + + if (env->v7m.secure) { + v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs); + nsr = sattrs.ns && r; + nsrw = sattrs.ns && rw; + } else { + sattrs.ns = true; + nsr = false; + nsrw = false; + } + + tt_resp = (sattrs.iregion << 24) | + (sattrs.irvalid << 23) | + ((!sattrs.ns) << 22) | + (nsrw << 21) | + (nsr << 20) | + (rw << 19) | + (r << 18) | + (sattrs.srvalid << 17) | + (mrvalid << 16) | + (sattrs.sregion << 8) | + mregion; + + return tt_resp; +} diff --git a/target/arm/tcg/user/m_helper.c b/target/arm/tcg/user/m_helper.c new file mode 100644 index 0000000000..65f0ff9976 --- /dev/null +++ b/target/arm/tcg/user/m_helper.c @@ -0,0 +1,97 @@ +/* + * ARM v7m generic helpers. + * + * This code is licensed under the GNU GPL v2 or later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "exec/helper-proto.h" + +#include "tcg/m_helper.h" + +void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val) +{ + uint32_t mask = extract32(maskreg, 8, 4); + uint32_t reg = extract32(maskreg, 0, 8); + + switch (reg) { + case 0 ... 7: /* xPSR sub-fields */ + v7m_msr_xpsr(env, mask, reg, val); + break; + case 20: /* CONTROL */ + /* There are no sub-fields that are actually writable from EL0. */ + break; + default: + /* Unprivileged writes to other registers are ignored */ + break; + } +} + +uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg) +{ + switch (reg) { + case 0 ... 7: /* xPSR sub-fields */ + return v7m_mrs_xpsr(env, reg, 0); + case 20: /* CONTROL */ + return v7m_mrs_control(env, 0); + default: + /* Unprivileged reads others as zero. 
*/ + return 0; + } +} + +void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) +{ + /* translate.c should never generate calls here in user-only mode */ + g_assert_not_reached(); +} + +void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) +{ + /* translate.c should never generate calls here in user-only mode */ + g_assert_not_reached(); +} + +void HELPER(v7m_preserve_fp_state)(CPUARMState *env) +{ + /* translate.c should never generate calls here in user-only mode */ + g_assert_not_reached(); +} + +void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr) +{ + /* translate.c should never generate calls here in user-only mode */ + g_assert_not_reached(); +} + +void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr) +{ + /* translate.c should never generate calls here in user-only mode */ + g_assert_not_reached(); +} + +uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) +{ + /* + * The TT instructions can be used by unprivileged code, but in + * user-only emulation we don't have the MPU. + * Luckily since we know we are NonSecure unprivileged (and that in + * turn means that the A flag wasn't specified), all the bits in the + * register must be zero: + * IREGION: 0 because IRVALID is 0 + * IRVALID: 0 because NS + * S: 0 because NS + * NSRW: 0 because NS + * NSR: 0 because NS + * RW: 0 because unpriv and A flag not set + * R: 0 because unpriv and A flag not set + * SRVALID: 0 because NS + * MRVALID: 0 because unpriv and A flag not set + * SREGION: 0 because SRVALID is 0 + * MREGION: 0 because MRVALID is 0 + */ + return 0; +} diff --git a/target/arm/tcg/sysemu/meson.build b/target/arm/tcg/sysemu/meson.build index 8f5e955cbd..26014851bd 100644 --- a/target/arm/tcg/sysemu/meson.build +++ b/target/arm/tcg/sysemu/meson.build @@ -1,5 +1,6 @@ arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files( 'debug_helper.c', + 'm_helper.c', 'mte_helper.c', 'tlb_helper.c', )) diff --git a/target/arm/tcg/user/meson.build b/target/arm/tcg/user/meson.build index cdca5d970c..4a652406e8 100644 --- a/target/arm/tcg/user/meson.build +++ b/target/arm/tcg/user/meson.build @@ -1,4 +1,5 @@ arm_user_ss.add(when: 'CONFIG_TCG', if_true: files( + 'm_helper.c', 'mte_helper.c', 'tlb_helper.c', )) From patchwork Fri Jun 4 15:51:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454111
From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 23/99] target/arm: only build psci for TCG Date: Fri, 4 Jun 2021 16:51:56 +0100 Message-Id: <20210604155312.15902-24-alex.bennee@linaro.org> In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> Cc: Peter Maydell , Alexander Graf , Richard Henderson , qemu-arm@nongnu.org, Claudio Fontana , =?utf-8?q?Alex_Benn=C3=A9e?= From: Claudio Fontana We do not move psci.c to tcg/ because we expect other hypervisors to use it (waiting for HVF enablement).
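A minimal sketch of the resulting build rule, for readers scanning the series (the exact hunk is in the diff below; psci.c simply moves into a CONFIG_TCG-conditional source set):

  arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files(
    'psci.c',
  ))

With that in place a KVM-only (or, later, HVF-only) build omits the TCG PSCI emulation, while psci.c itself stays outside tcg/ so other accelerators can still pick it up.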
Signed-off-by: Claudio Fontana Cc: Alexander Graf Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/meson.build | 4 ++++ 1 file changed, 4 insertions(+) -- 2.20.1 diff --git a/target/arm/meson.build b/target/arm/meson.build index 0172937b40..a9fdada0cc 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -19,8 +19,12 @@ arm_softmmu_ss.add(files( 'arm-powerctl.c', 'machine.c', 'monitor.c', +)) + +arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files( 'psci.c', )) + arm_user_ss = ss.source_set() subdir('tcg') From patchwork Fri Jun 4 15:51:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454080 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp565054jae; Fri, 4 Jun 2021 09:28:28 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyxhX39lv2Vj60NVmxd90Hk1AayMS2Zr9k5IjnFLPiWHkv92t0eXayKuK5KeZ7PFT9rT/Eo X-Received: by 2002:a1f:eac6:: with SMTP id i189mr3151805vkh.3.1622824108306; Fri, 04 Jun 2021 09:28:28 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824108; cv=none; d=google.com; s=arc-20160816; b=KL8VVzkmZgtgGR4MygfoUbHBCjqCqqLcWQc+ufCGRBYM6PW9biNGqk6gq1IF993NWJ thEr0IHkKypAyWlcY8hc5kF70JyXhjS+n0+t8+ZB5R5CCInVRcKOymN6hUkhXjZK954t fHGoBNH+05VWio5txyvX4QA9hXez1cS/mwnAPPFojhkvt80CAm2J1td8I1Q8Eg5DkS9I iqEBwrrCTdV7NDt3P+vV7hU0v/1x370q7LNcB149cDaXXPqA19JyuO1c9LeABojYSKjb EfnvcCdQgw3fJHtSxn+LviP68VEu4cNITDTeewZBMhqUuz5JmH7Zn4uS0AuMEsNDDXRN 1QWw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=GCSulBcboNnhY3ZaOpOOMESbDYWsZSCOIUDwjeQhgs4=; b=fojJlKQcY+6esmtBPegjrhgd9KAIhvjXP4ScjwZdy6Z6iZUXQQnvU4VWvjkcZJngDU hDQUw/Mgb8qtwJHXe0D10294CaH2T+JOf69+yOxFAJzMG8+0KdfQ8liRd7Dw9xjCKwZC V5fpXPHsAnlAUF5WaSGTxQDQKODaZzEAA/kSjhszNilLe9xR90n5m09HkRBfO5/7NjlR t+ZXSkUBHziHN472OtvOxJOmPEwOUoNrAv5dc2u4SqWwiE6hmWUgICFTYwqMhILWxqOP scm8CTKGK3w/gkqKrdqxBF5DBBCGBqzIT0DTK01Z3CYHstTd8/mcUcrpE/zGuW31ly/R tsmA== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=K+VlibPV; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 24/99] target/arm: split off cpu-sysemu.c Date: Fri, 4 Jun 2021 16:51:57 +0100 Message-Id: <20210604155312.15902-25-alex.bennee@linaro.org> In-Reply-To:
<20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::436; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x436.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Richard Henderson , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana move work is needed later on to split things into tcg-specific portions and kvm-specific portions of this Signed-off-by: Claudio Fontana Reviewed-by: Alex Bennée Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/internals.h | 8 ++- target/arm/cpu-sysemu.c | 105 ++++++++++++++++++++++++++++++++++++++++ target/arm/cpu.c | 83 ------------------------------- target/arm/meson.build | 1 + 4 files changed, 113 insertions(+), 84 deletions(-) create mode 100644 target/arm/cpu-sysemu.c -- 2.20.1 diff --git a/target/arm/internals.h b/target/arm/internals.h index 886db56b58..8809334228 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -1202,4 +1202,10 @@ static inline uint64_t useronly_maybe_clean_ptr(uint32_t desc, uint64_t ptr) return ptr; } -#endif +#ifndef CONFIG_USER_ONLY +void arm_cpu_set_irq(void *opaque, int irq, int level); +void arm_cpu_kvm_set_irq(void *opaque, int irq, int level); +bool arm_cpu_virtio_is_big_endian(CPUState *cs); +#endif /* !CONFIG_USER_ONLY */ + +#endif /* TARGET_ARM_INTERNALS_H */ diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c new file mode 100644 index 0000000000..db1c8cb245 --- /dev/null +++ b/target/arm/cpu-sysemu.c @@ -0,0 +1,105 @@ +/* + * QEMU ARM CPU + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "internals.h" +#include "sysemu/hw_accel.h" +#include "kvm_arm.h" + +void arm_cpu_set_irq(void *opaque, int irq, int level) +{ + ARMCPU *cpu = opaque; + CPUARMState *env = &cpu->env; + CPUState *cs = CPU(cpu); + static const int mask[] = { + [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD, + [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ, + [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ, + [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ + }; + + if (level) { + env->irq_line_state |= mask[irq]; + } else { + env->irq_line_state &= ~mask[irq]; + } + + switch (irq) { + case ARM_CPU_VIRQ: + assert(arm_feature(env, ARM_FEATURE_EL2)); + arm_cpu_update_virq(cpu); + break; + case ARM_CPU_VFIQ: + assert(arm_feature(env, ARM_FEATURE_EL2)); + arm_cpu_update_vfiq(cpu); + break; + case ARM_CPU_IRQ: + case ARM_CPU_FIQ: + if (level) { + cpu_interrupt(cs, mask[irq]); + } else { + cpu_reset_interrupt(cs, mask[irq]); + } + break; + default: + g_assert_not_reached(); + } +} + +void arm_cpu_kvm_set_irq(void *opaque, int irq, int level) +{ +#ifdef CONFIG_KVM + ARMCPU *cpu = opaque; + CPUARMState *env = &cpu->env; + CPUState *cs = CPU(cpu); + uint32_t linestate_bit; + int irq_id; + + switch (irq) { + case ARM_CPU_IRQ: + irq_id = KVM_ARM_IRQ_CPU_IRQ; + linestate_bit = CPU_INTERRUPT_HARD; + break; + case ARM_CPU_FIQ: + irq_id = KVM_ARM_IRQ_CPU_FIQ; + linestate_bit = CPU_INTERRUPT_FIQ; + break; + default: + g_assert_not_reached(); + } + + if (level) { + env->irq_line_state |= linestate_bit; + } else { + env->irq_line_state &= ~linestate_bit; + } + kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level); +#endif +} + +bool arm_cpu_virtio_is_big_endian(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + cpu_synchronize_state(cs); + return arm_cpu_data_is_big_endian(env); +} diff --git a/target/arm/cpu.c b/target/arm/cpu.c index ad65b60b04..bd8413c161 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -649,89 +649,6 @@ void arm_cpu_update_vfiq(ARMCPU *cpu) } } -#ifndef CONFIG_USER_ONLY -static void arm_cpu_set_irq(void *opaque, int irq, int level) -{ - ARMCPU *cpu = opaque; - CPUARMState *env = &cpu->env; - CPUState *cs = CPU(cpu); - static const int mask[] = { - [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD, - [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ, - [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ, - [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ - }; - - if (level) { - env->irq_line_state |= mask[irq]; - } else { - env->irq_line_state &= ~mask[irq]; - } - - switch (irq) { - case ARM_CPU_VIRQ: - assert(arm_feature(env, ARM_FEATURE_EL2)); - arm_cpu_update_virq(cpu); - break; - case ARM_CPU_VFIQ: - assert(arm_feature(env, ARM_FEATURE_EL2)); - arm_cpu_update_vfiq(cpu); - break; - case ARM_CPU_IRQ: - case ARM_CPU_FIQ: - if (level) { - cpu_interrupt(cs, mask[irq]); - } else { - cpu_reset_interrupt(cs, mask[irq]); - } - break; - default: - g_assert_not_reached(); - } -} - -static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level) -{ -#ifdef CONFIG_KVM - ARMCPU *cpu = opaque; - CPUARMState *env = &cpu->env; - CPUState *cs = CPU(cpu); - uint32_t linestate_bit; - int irq_id; - - switch (irq) { - case ARM_CPU_IRQ: - irq_id = KVM_ARM_IRQ_CPU_IRQ; - linestate_bit = CPU_INTERRUPT_HARD; - break; - case ARM_CPU_FIQ: - irq_id = KVM_ARM_IRQ_CPU_FIQ; - linestate_bit = CPU_INTERRUPT_FIQ; - break; - default: - g_assert_not_reached(); - } - - if (level) { - env->irq_line_state |= 
linestate_bit; - } else { - env->irq_line_state &= ~linestate_bit; - } - kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level); -#endif -} - -static bool arm_cpu_virtio_is_big_endian(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - - cpu_synchronize_state(cs); - return arm_cpu_data_is_big_endian(env); -} - -#endif - static int print_insn_thumb1(bfd_vma pc, disassemble_info *info) { diff --git a/target/arm/meson.build b/target/arm/meson.build index a9fdada0cc..b75392e3e9 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -17,6 +17,7 @@ arm_softmmu_ss = ss.source_set() arm_softmmu_ss.add(files( 'arch_dump.c', 'arm-powerctl.c', + 'cpu-sysemu.c', 'machine.c', 'monitor.c', )) From patchwork Fri Jun 4 15:51:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454112 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp593321jae; Fri, 4 Jun 2021 10:03:27 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwwQq7NdAsM3WQwmMK6eBAzoozgZRW1R5J+hatbn2chfiTKd5saNYQfbY2Os4hVO/quDmD3 X-Received: by 2002:a17:906:2a1b:: with SMTP id j27mr5083255eje.370.1622826207122; Fri, 04 Jun 2021 10:03:27 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826207; cv=none; d=google.com; s=arc-20160816; b=fPednW4QUy+Hjy1uPaCONTvlSbY41bL3UfEZ0tu79+hHQYwGAQvnFSdPypYD/0mJxl 22L8lviDKNpBOwSKU/j9MLmZGrZleObolFrqZU/6AiOxA5H+UMiIq+hTrdPA3L/B0JL2 x7vmRzpqobn84BjoZZgi1dcmu+2dytxDdtdJGVvZ+7XLPCetAvgjO0zxfdndb9Kc7qv0 piYCxlM7pMzZ3xQAK35YPMaKRS/l+ujfpfrgZ5bOSTd6k5nnScizkLgoSAhHt7XnoP8o gVIZlT4l8YHYeHKq6tx+kgxfEXFWbz4VRRiFexzVwd9Y+tgWMkCfgGFsTXc3naB8mjdj y8DA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=S7aS1OcWQ66/PtC9FuFs6o6VaPhc4vwGyLuPu8rbtBw=; b=uuO9QYNj2Io2fKXYAW7GTQV/zludkKMPtAyr0eWoDDFEra1CsqOjnUK8pFmwDrF1+a DB6AqhuGOoVkkSBmabd5oMb9oNE2JHJSEI+x7ptpaYZ7UCfrMkFTGn91EwSArvbxEz6W C9GRt50iEx3n77zmPqKNVqGPwfqv0qNwjrLcOqDXPrRNtclFH+qAWGwiNraIEpTFRquV Ap2wVJWuqSAsq5I2Lkftf21k4Plbq648E7zD5/k7NNwhstFIIHdjBgFXSgDAALOj6w6I ePO1zqohas39JJswSWDOylxkj75fARXrS4FREGfx6fWwMiIg9otiMBhl2poDGmtFGiLH CoiQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=x+m82SoI; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 25/99] target/arm: tcg: fix comment style before move to cpu-mmu Date: Fri, 4 Jun 2021 16:51:58 +0100 Message-Id:
<20210604155312.15902-26-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32e; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana before exporting some functionality from helper.c into a new module, fix the comment style of those functions. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/tcg/helper.c | 152 ++++++++++++++++++++++++++-------------- 1 file changed, 101 insertions(+), 51 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index a66c1f0b9e..2a5022032c 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -10477,7 +10477,8 @@ static inline bool regime_translation_disabled(CPUARMState *env, return false; case 0: default: - /* HFNMIENA set and ENABLE clear is UNPREDICTABLE, but + /* + * HFNMIENA set and ENABLE clear is UNPREDICTABLE, but * we warned about that in armv7m_nvic.c when the guest set it. */ return true; @@ -10531,7 +10532,8 @@ static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, #endif /* !CONFIG_USER_ONLY */ -/* Convert a possible stage1+2 MMU index into the appropriate +/* + * Convert a possible stage1+2 MMU index into the appropriate * stage 1 MMU index */ static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx) @@ -10602,7 +10604,8 @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx) } } -/* Translate section/page access permissions to page +/* + * Translate section/page access permissions to page * R/W protection flags * * @env: CPUARMState @@ -10658,7 +10661,8 @@ static inline int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, } } -/* Translate section/page access permissions to page +/* + * Translate section/page access permissions to page * R/W protection flags. 
* * @ap: The 2-bit simple AP (AP[2:1]) @@ -10686,7 +10690,8 @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap) return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx)); } -/* Translate S2 section/page access permissions to protection flags +/* + * Translate S2 section/page access permissions to protection flags * * @env: CPUARMState * @s2ap: The 2-bit stage2 access permissions (S2AP) @@ -10734,7 +10739,8 @@ static int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0) return prot; } -/* Translate section/page access permissions to protection flags +/* + * Translate section/page access permissions to protection flags * * @env: CPUARMState * @mmu_idx: MMU index indicating required translation regime @@ -10771,7 +10777,8 @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64, return prot_rw; } - /* TODO have_wxn should be replaced with + /* + * TODO have_wxn should be replaced with * ARM_FEATURE_V8 || (ARM_FEATURE_V7 && ARM_FEATURE_EL2) * when ARM_FEATURE_EL2 starts getting set. For now we assume all LPAE * compatible processors have EL2, which is required for [U]WXN. @@ -11043,7 +11050,8 @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, phys_addr = (desc & 0xfffff000) | (address & 0xfff); *page_size = 0x1000; } else { - /* UNPREDICTABLE in ARMv5; we choose to take a + /* + * UNPREDICTABLE in ARMv5; we choose to take a * page translation fault. */ fi->type = ARMFault_Translation; @@ -11109,7 +11117,8 @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, } type = (desc & 3); if (type == 0 || (type == 3 && !cpu_isar_feature(aa32_pxn, cpu))) { - /* Section translation fault, or attempt to use the encoding + /* + * Section translation fault, or attempt to use the encoding * which is Reserved on implementations without PXN. */ fi->type = ARMFault_Translation; @@ -11214,7 +11223,8 @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, } } if (ns) { - /* The NS bit will (as required by the architecture) have no effect if + /* + * The NS bit will (as required by the architecture) have no effect if * the CPU doesn't support TZ or this is a non-secure translation * regime, because the attribute will already be non-secure. */ @@ -11296,7 +11306,8 @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level, return true; } -/* Translate from the 4-bit stage 2 representation of +/* + * Translate from the 4-bit stage 2 representation of * memory attributes (without cache-allocation hints) to * the 8-bit representation of the stage 1 MAIR registers * (which includes allocation hints). @@ -11585,7 +11596,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, stride = 9; } - /* Note that QEMU ignores shareability and cacheability attributes, + /* + * Note that QEMU ignores shareability and cacheability attributes, * so we don't need to do anything with the SH, ORGN, IRGN fields * in the TTBCR. 
Similarly, TTBCR:A1 selects whether we get the * ASID from TTBR0 or TTBR1, but QEMU's TLB doesn't currently @@ -11594,19 +11606,22 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, */ ttbr = regime_ttbr(env, mmu_idx, param.select); - /* Here we should have set up all the parameters for the translation: + /* + * Here we should have set up all the parameters for the translation: * inputsize, ttbr, epd, stride, tbi */ if (param.epd) { - /* Translation table walk disabled => Translation fault on TLB miss + /* + * Translation table walk disabled => Translation fault on TLB miss * Note: This is always 0 on 64-bit EL2 and EL3. */ goto do_fault; } if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { - /* The starting level depends on the virtual address size (which can + /* + * The starting level depends on the virtual address size (which can * be up to 48 bits) and the translation granule size. It indicates * the number of strides (stride bits at a time) needed to * consume the bits of the input address. In the pseudocode this is: @@ -11619,7 +11634,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, */ level = 4 - (inputsize - 4) / stride; } else { - /* For stage 2 translations the starting level is specified by the + /* + * For stage 2 translations the starting level is specified by the * VTCR_EL2.SL0 field (whose interpretation depends on the page size) */ uint32_t sl0 = extract32(tcr->raw_tcr, 6, 2); @@ -11659,7 +11675,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, */ descaddr &= ~indexmask; - /* The address field in the descriptor goes up to bit 39 for ARMv7 + /* + * The address field in the descriptor goes up to bit 39 for ARMv7 * but up to bit 47 for ARMv8, but we use the descaddrmask * up to bit 39 for AArch32, because we don't need other bits in that case * to construct next descriptor address (anyway they should be all zeroes). @@ -11667,7 +11684,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, descaddrmask = ((1ull << (aarch64 ? 48 : 40)) - 1) & ~indexmask_grainsize; - /* Secure accesses start with the page table in secure memory and + /* + * Secure accesses start with the page table in secure memory and * can be downgraded to non-secure at any step. Non-secure accesses * remain non-secure. We implement this by just ORing in the NSTable/NS * bits at each step. @@ -11693,7 +11711,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, descaddr = descriptor & descaddrmask; if ((descriptor & 2) && (level < 3)) { - /* Table entry. The top five bits are attributes which may + /* + * Table entry. The top five bits are attributes which may * propagate down through lower levels of the table (and * which are all arranged so that 0 means "no effect", so * we can gather them up by ORing in the bits at each level). @@ -11703,7 +11722,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, indexmask = indexmask_grainsize; continue; } - /* Block entry at level 1 or 2, or page entry at level 3. + /* + * Block entry at level 1 or 2, or page entry at level 3. * These are basically the same thing, although the number * of bits we pull in from the vaddr varies. 
*/ @@ -11725,15 +11745,17 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, break; } attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */ - /* The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1 + /* + * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1 * means "force PL1 access only", which means forcing AP[1] to 0. */ attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */ attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */ break; } - /* Here descaddr is the final physical address, and attributes - * are all in attrs. + /* + * Here descaddr is the final physical address, + * and attributes are all in attrs. */ fault_type = ARMFault_AccessFlag; if ((attrs & (1 << 8)) == 0) { @@ -11760,7 +11782,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, } if (ns) { - /* The NS bit will (as required by the architecture) have no effect if + /* + * The NS bit will (as required by the architecture) have no effect if * the CPU doesn't support TZ or this is a non-secure translation * regime, because the attribute will already be non-secure. */ @@ -11814,7 +11837,8 @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env, break; } } else { - /* Default system address map for M profile cores. + /* + * Default system address map for M profile cores. * The architecture specifies which regions are execute-never; * at the MPU level no other checks are defined. */ @@ -11840,7 +11864,8 @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env, static bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool is_user) { - /* Return true if we should use the default memory map as a + /* + * Return true if we should use the default memory map as a * "background" region if there are no hits against any MPU regions. */ CPUARMState *env = &cpu->env; @@ -11866,7 +11891,8 @@ static inline bool m_is_ppb_region(CPUARMState *env, uint32_t address) static inline bool m_is_system_region(CPUARMState *env, uint32_t address) { - /* True if address is in the M profile system region + /* + * True if address is in the M profile system region * 0xe0000000 - 0xffffffff */ return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7; @@ -11888,7 +11914,8 @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, if (regime_translation_disabled(env, mmu_idx) || m_is_ppb_region(env, address)) { - /* MPU disabled or M profile PPB access: use default memory map. + /* + * MPU disabled or M profile PPB access: use default memory map. * The other case which uses the default memory map in the * v7M ARM ARM pseudocode is exception vector reads from the vector * table. In QEMU those accesses are done in arm_v7m_load_vector(), @@ -11954,7 +11981,8 @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, srdis_mask = srdis ? 0x3 : 0x0; for (i = 2; i <= 8 && rsize < TARGET_PAGE_BITS; i *= 2) { - /* This will check in groups of 2, 4 and then 8, whether + /* + * This will check in groups of 2, 4 and then 8, whether * the subregion bits are consistent. rsize is incremented * back up to give the region size, considering consistent * adjacent subregions as one region. 
Stop testing if rsize @@ -12062,7 +12090,8 @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, static bool v8m_is_sau_exempt(CPUARMState *env, uint32_t address, MMUAccessType access_type) { - /* The architecture specifies that certain address ranges are + /* + * The architecture specifies that certain address ranges are * exempt from v8M SAU/IDAU checks. */ return @@ -12078,7 +12107,8 @@ void v8m_security_lookup(CPUARMState *env, uint32_t address, MMUAccessType access_type, ARMMMUIdx mmu_idx, V8M_SAttributes *sattrs) { - /* Look up the security attributes for this address. Compare the + /* + * Look up the security attributes for this address. Compare the * pseudocode SecurityCheck() function. * We assume the caller has zero-initialized *sattrs. */ @@ -12129,7 +12159,8 @@ void v8m_security_lookup(CPUARMState *env, uint32_t address, sattrs->subpage = true; } if (sattrs->srvalid) { - /* If we hit in more than one region then we must report + /* + * If we hit in more than one region then we must report * as Secure, not NS-Callable, with no valid region * number info. */ @@ -12187,7 +12218,8 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, int *prot, bool *is_subpage, ARMMMUFaultInfo *fi, uint32_t *mregion) { - /* Perform a PMSAv8 MPU lookup (without also doing the SAU check + /* + * Perform a PMSAv8 MPU lookup (without also doing the SAU check * that a full phys-to-virt translation does). * mregion is (if not NULL) set to the region number which matched, * or -1 if no region number is returned (MPU off, address did not @@ -12211,7 +12243,8 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, *mregion = -1; } - /* Unlike the ARM ARM pseudocode, we don't need to check whether this + /* + * Unlike the ARM ARM pseudocode, we don't need to check whether this * was an exception vector read from the vector table (which is always * done using the default system address map), because those accesses * are done in arm_v7m_load_vector(), which always does a direct @@ -12228,7 +12261,8 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) { /* region search */ - /* Note that the base address is bits [31:5] from the register + /* + * Note that the base address is bits [31:5] from the register * with bits [4:0] all zeroes, but the limit address is bits * [31:5] from the register with bits [4:0] all ones. */ @@ -12264,7 +12298,8 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, } if (matchregion != -1) { - /* Multiple regions match -- always a failure (unlike + /* + * Multiple regions match -- always a failure (unlike * PMSAv7 where highest-numbered-region wins) */ fi->type = ARMFault_Permission; @@ -12304,7 +12339,8 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, if (*prot && !xn && !(pxn && !is_user)) { *prot |= PAGE_EXEC; } - /* We don't need to look the attribute up in the MAIR0/MAIR1 + /* + * We don't need to look the attribute up in the MAIR0/MAIR1 * registers because that only tells us about cacheability. */ if (mregion) { @@ -12332,7 +12368,8 @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs); if (access_type == MMU_INST_FETCH) { - /* Instruction fetches always use the MMU bank and the + /* + * Instruction fetches always use the MMU bank and the * transaction attribute determined by the fetch address, * regardless of CPU state. 
This is painful for QEMU * to handle, because it would mean we need to encode @@ -12361,14 +12398,16 @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, return true; } } else { - /* For data accesses we always use the MMU bank indicated + /* + * For data accesses we always use the MMU bank indicated * by the current CPU state, but the security attributes * might downgrade a secure access to nonsecure. */ if (sattrs.ns) { txattrs->secure = false; } else if (!secure) { - /* NS access to S memory must fault. + /* + * NS access to S memory must fault. * Architecturally we should first check whether the * MPU information for this address indicates that we * are doing an unaligned access to Device memory, which @@ -12416,8 +12455,10 @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, continue; } mask = 1 << ((base >> 1) & 0x1f); - /* Keep this shift separate from the above to avoid an - (undefined) << 32. */ + /* + * Keep this shift separate from the above to avoid an + * (undefined) << 32 + */ mask = (mask << 1) - 1; if (((base ^ address) & ~mask) == 0) { break; @@ -12477,7 +12518,8 @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, return false; } -/* Combine either inner or outer cacheability attributes for normal +/* + * Combine either inner or outer cacheability attributes for normal * memory, according to table D4-42 and pseudocode procedure * CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM). * @@ -12493,7 +12535,8 @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2) /* stage 1 write-through takes precedence */ return s1; } else if (extract32(s2, 2, 2) == 2) { - /* stage 2 write-through takes precedence, but the allocation hint + /* + * stage 2 write-through takes precedence, but the allocation hint * is still taken from stage 1 */ return (2 << 2) | extract32(s1, 0, 2); @@ -12502,7 +12545,8 @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2) } } -/* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4 +/* + * Combine S1 and S2 cacheability/shareability attributes, per D4.5.4 * and CombineS1S2Desc() * * @s1: Attributes from stage 1 walk @@ -12552,7 +12596,8 @@ static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2) ret.attrs = 0xc; /* GRE */ } - /* Any location for which the resultant memory type is any + /* + * Any location for which the resultant memory type is any * type of Device memory is always treated as Outer Shareable. */ ret.shareability = 2; @@ -12562,7 +12607,8 @@ static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2) | combine_cacheattr_nibble(s1lo, s2lo); if (ret.attrs == 0x44) { - /* Any location for which the resultant memory type is Normal + /* + * Any location for which the resultant memory type is Normal * Inner Non-cacheable, Outer Non-cacheable is always treated * as Outer Shareable. 
*/ @@ -12579,7 +12625,8 @@ static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2) } -/* get_phys_addr - get the physical address for this virtual address +/* + * get_phys_addr - get the physical address for this virtual address * * Find the physical address corresponding to the given virtual address, * by doing a translation table walk on MMU based systems or using the @@ -12614,7 +12661,8 @@ bool get_phys_addr(CPUARMState *env, target_ulong address, ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx); if (mmu_idx != s1_mmu_idx) { - /* Call ourselves recursively to do the stage 1 and then stage 2 + /* + * Call ourselves recursively to do the stage 1 and then stage 2 * translations if mmu_idx is a two-stage regime. */ if (arm_feature(env, ARM_FEATURE_EL2)) { @@ -12686,14 +12734,16 @@ bool get_phys_addr(CPUARMState *env, target_ulong address, } } - /* The page table entries may downgrade secure to non-secure, but + /* + * The page table entries may downgrade secure to non-secure, but * cannot upgrade an non-secure translation regime's attributes * to secure. */ attrs->secure = regime_is_secure(env, mmu_idx); attrs->user = regime_is_user(env, mmu_idx); - /* Fast Context Switch Extension. This doesn't exist at all in v8. + /* + * Fast Context Switch Extension. This doesn't exist at all in v8. * In v7 and earlier it affects all stage 1 translations. */ if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2 From patchwork Fri Jun 4 15:51:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454074 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp559683jae; Fri, 4 Jun 2021 09:22:06 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyah6NeoH468FRlIiTx2d0cjFekgwXTiS72LBKqsRWV65y8Vp7C98OoRwjGZftCLcKu2PN5 X-Received: by 2002:a1f:3dd5:: with SMTP id k204mr243958vka.9.1622823725917; Fri, 04 Jun 2021 09:22:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622823725; cv=none; d=google.com; s=arc-20160816; b=qOqQtEZmiPZ5M8XZ7oZiw4mVfwwDND05wJIwk2I4TMNeeUIQoSJlCjowDuvMyUtym7 nfoe7spJ++14N88QWu2c70BdOLAC4JhmFvsi59L21707nBrGEruJe1f9KvkpDN4UcS8A gY365HwzDSkmMmPXPMdgTf1gAvyrtXgHi7CsScQ9u6F/xNvYNoiEUlXvFRjJvRCC8wa3 HpFcs4BX5wgBljvmhfAN2XUUEiNlJht8rUsOM7h48QuSttMF4/BC/onXT+O44itiTtyz Dyl65vqfytc5tXAMoPHsI7rspzqpSV9XABdYut4E7crM8lyH3E+406ftskQAOm40nFA5 UZFw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=Hzn7/GNrn6dmW6d6QUtsqKB5J40Zcw3XmUUuIt9agbs=; b=QCkffa0Ie2AyRT9+Um8IZRvNgIO9UBJJ697fY17i/1ShsGRgAYSQumCHYalJGOwVNd AlsuGDqpk1FegdkolsH/FtkIp5RwbiCRDyg8pHKKqSkk0SHkAAYsBQiGSdqjAam5w1op aMgfSmayRHlIgEKjsa6Fa58W58/qS89a2Qo7inRIcrwz1lL0qBN+U+3pxrFscoRFjaPD BfAlcHWmPsoieV7gC+gxZkeDzwQKSqb2XhrEjseN5s7nUmpA9JizQJOmbJp4KA/kajsk Q6pI0Q++mvHGvSN/nPIesWbWpoMYj/EQVUFESVaHBB5bmOA4wBNczvSgankTKcWWR6Z6 nKPQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=KmKS4zAg; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org 
From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 26/99] target/arm: move physical address translation to cpu-mmu Date: Fri, 4 Jun 2021 16:51:59 +0100 Message-Id:
<20210604155312.15902-27-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::335; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x335.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana get_phys_addr is needed for KVM too, and in turn it requires the aa64_va_parameter* family of functions. Create cpu-mmu and cpu-mmu-sysemu to store these and other mmu-related functions. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/cpu-mmu.h | 119 ++ target/arm/cpu.h | 3 - target/arm/internals.h | 34 - target/arm/cpu-mmu-sysemu.c | 2307 ++++++++++++++++++++++++++ target/arm/cpu-mmu.c | 124 ++ target/arm/cpu.c | 1 + target/arm/tcg/helper.c | 2442 +--------------------------- target/arm/tcg/pauth_helper.c | 2 +- target/arm/tcg/sysemu/m_helper.c | 2 +- target/arm/tcg/sysemu/tlb_helper.c | 1 + target/arm/meson.build | 2 + 11 files changed, 2557 insertions(+), 2480 deletions(-) create mode 100644 target/arm/cpu-mmu.h create mode 100644 target/arm/cpu-mmu-sysemu.c create mode 100644 target/arm/cpu-mmu.c -- 2.20.1 diff --git a/target/arm/cpu-mmu.h b/target/arm/cpu-mmu.h new file mode 100644 index 0000000000..01b060613a --- /dev/null +++ b/target/arm/cpu-mmu.h @@ -0,0 +1,119 @@ +/* + * QEMU ARM CPU address translation related code + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ +#ifndef ARM_CPU_MMU_H +#define ARM_CPU_MMU_H + +#include "cpu.h" +#include "internals.h" + +/* + * Parameters of a given virtual address, as extracted from the + * translation control register (TCR) for a given regime. 
+ */ +typedef struct ARMVAParameters { + unsigned tsz : 8; + unsigned select : 1; + bool tbi : 1; + bool epd : 1; + bool hpd : 1; + bool using16k : 1; + bool using64k : 1; +} ARMVAParameters; + +/* cpu-mmu.c */ + +int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx); +int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx); +int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx); +ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, + ARMMMUIdx mmu_idx, bool data); + +/* Return the SCTLR value which controls this address translation regime */ +static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx) +{ + return env->cp15.sctlr_el[regime_el(env, mmu_idx)]; +} + +/* + * Convert a possible stage1+2 MMU index into the appropriate + * stage 1 MMU index + */ +static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx) +{ + switch (mmu_idx) { + case ARMMMUIdx_SE10_0: + return ARMMMUIdx_Stage1_SE0; + case ARMMMUIdx_SE10_1: + return ARMMMUIdx_Stage1_SE1; + case ARMMMUIdx_SE10_1_PAN: + return ARMMMUIdx_Stage1_SE1_PAN; + case ARMMMUIdx_E10_0: + return ARMMMUIdx_Stage1_E0; + case ARMMMUIdx_E10_1: + return ARMMMUIdx_Stage1_E1; + case ARMMMUIdx_E10_1_PAN: + return ARMMMUIdx_Stage1_E1_PAN; + default: + return mmu_idx; + } +} + +/* Return true if the translation regime is using LPAE format page tables */ +static inline bool regime_using_lpae_format(CPUARMState *env, + ARMMMUIdx mmu_idx) +{ + int el = regime_el(env, mmu_idx); + if (el == 2 || arm_el_is_aa64(env, el)) { + return true; + } + if (arm_feature(env, ARM_FEATURE_LPAE) + && (regime_tcr(env, mmu_idx)->raw_tcr & TTBCR_EAE)) { + return true; + } + return false; +} + +#ifndef CONFIG_USER_ONLY + +/* cpu-mmu-sysemu.c */ + +void v8m_security_lookup(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + V8M_SAttributes *sattrs); + +bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, MemTxAttrs *txattrs, + int *prot, bool *is_subpage, + ARMMMUFaultInfo *fi, uint32_t *mregion); + +bool get_phys_addr(CPUARMState *env, target_ulong address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot, + target_ulong *page_size, + ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) + __attribute__((nonnull)); + +hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, + MemTxAttrs *attrs); + +#endif /* !CONFIG_USER_ONLY */ + +#endif /* ARM_CPU_MMU_H */ diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 04f8be35bf..f9ce70e607 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -1033,9 +1033,6 @@ void arm_cpu_do_interrupt(CPUState *cpu); void arm_v7m_cpu_do_interrupt(CPUState *cpu); bool arm_cpu_exec_interrupt(CPUState *cpu, int int_req); -hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr, - MemTxAttrs *attrs); - int arm_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg); int arm_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg); diff --git a/target/arm/internals.h b/target/arm/internals.h index 8809334228..c41f91f1c0 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -1022,23 +1022,6 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id) return valid; } -/* - * Parameters of a given virtual address, as extracted from the - * translation control register (TCR) for a given regime. 
- */ -typedef struct ARMVAParameters { - unsigned tsz : 8; - unsigned select : 1; - bool tbi : 1; - bool epd : 1; - bool hpd : 1; - bool using16k : 1; - bool using64k : 1; -} ARMVAParameters; - -ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, - ARMMMUIdx mmu_idx, bool data); - static inline int exception_target_el(CPUARMState *env) { int target_el = MAX(1, arm_current_el(env)); @@ -1086,29 +1069,12 @@ typedef struct V8M_SAttributes { bool irvalid; } V8M_SAttributes; -void v8m_security_lookup(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - V8M_SAttributes *sattrs); - -bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, MemTxAttrs *txattrs, - int *prot, bool *is_subpage, - ARMMMUFaultInfo *fi, uint32_t *mregion); - /* Cacheability and shareability attributes for a memory access */ typedef struct ARMCacheAttrs { unsigned int attrs:8; /* as in the MAIR register encoding */ unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */ } ARMCacheAttrs; -bool get_phys_addr(CPUARMState *env, target_ulong address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot, - target_ulong *page_size, - ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) - __attribute__((nonnull)); - void arm_log_exception(int idx); #endif /* !CONFIG_USER_ONLY */ diff --git a/target/arm/cpu-mmu-sysemu.c b/target/arm/cpu-mmu-sysemu.c new file mode 100644 index 0000000000..9d4735a190 --- /dev/null +++ b/target/arm/cpu-mmu-sysemu.c @@ -0,0 +1,2307 @@ +/* + * QEMU ARM CPU address translation related code (sysemu-only) + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qemu/log.h" + +#include "target/arm/idau.h" +#include "qemu/range.h" +#include "cpu-mmu.h" + +static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + bool s1_is_el0, + hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot, + target_ulong *page_size_ptr, + ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) + __attribute__((nonnull)); + +/* Return true if the specified stage of address translation is disabled */ +static inline bool regime_translation_disabled(CPUARMState *env, + ARMMMUIdx mmu_idx) +{ + uint64_t hcr_el2; + + if (arm_feature(env, ARM_FEATURE_M)) { + switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] & + (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) { + case R_V7M_MPU_CTRL_ENABLE_MASK: + /* Enabled, but not for HardFault and NMI */ + return mmu_idx & ARM_MMU_IDX_M_NEGPRI; + case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK: + /* Enabled for all cases */ + return false; + case 0: + default: + /* + * HFNMIENA set and ENABLE clear is UNPREDICTABLE, but + * we warned about that in armv7m_nvic.c when the guest set it. 
+ */ + return true; + } + } + + hcr_el2 = arm_hcr_el2_eff(env); + + if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { + /* HCR.DC means HCR.VM behaves as 1 */ + return (hcr_el2 & (HCR_DC | HCR_VM)) == 0; + } + + if (hcr_el2 & HCR_TGE) { + /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */ + if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) { + return true; + } + } + + if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) { + /* HCR.DC means SCTLR_EL1.M behaves as 0 */ + return true; + } + + return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0; +} + +static inline bool regime_translation_big_endian(CPUARMState *env, + ARMMMUIdx mmu_idx) +{ + return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0; +} + +/* Return the TTBR associated with this translation regime */ +static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, + int ttbrn) +{ + if (mmu_idx == ARMMMUIdx_Stage2) { + return env->cp15.vttbr_el2; + } + if (mmu_idx == ARMMMUIdx_Stage2_S) { + return env->cp15.vsttbr_el2; + } + if (ttbrn == 0) { + return env->cp15.ttbr0_el[regime_el(env, mmu_idx)]; + } else { + return env->cp15.ttbr1_el[regime_el(env, mmu_idx)]; + } +} + +static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx) +{ + switch (mmu_idx) { + case ARMMMUIdx_SE10_0: + case ARMMMUIdx_E20_0: + case ARMMMUIdx_SE20_0: + case ARMMMUIdx_Stage1_E0: + case ARMMMUIdx_Stage1_SE0: + case ARMMMUIdx_MUser: + case ARMMMUIdx_MSUser: + case ARMMMUIdx_MUserNegPri: + case ARMMMUIdx_MSUserNegPri: + return true; + default: + return false; + case ARMMMUIdx_E10_0: + case ARMMMUIdx_E10_1: + case ARMMMUIdx_E10_1_PAN: + g_assert_not_reached(); + } +} + +/* + * Translate section/page access permissions to page + * R/W protection flags + * + * @env: CPUARMState + * @mmu_idx: MMU index indicating required translation regime + * @ap: The 3-bit access permissions (AP[2:0]) + * @domain_prot: The 2-bit domain access permissions + */ +static inline int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, + int ap, int domain_prot) +{ + bool is_user = regime_is_user(env, mmu_idx); + + if (domain_prot == 3) { + return PAGE_READ | PAGE_WRITE; + } + + switch (ap) { + case 0: + if (arm_feature(env, ARM_FEATURE_V7)) { + return 0; + } + switch (regime_sctlr(env, mmu_idx) & (SCTLR_S | SCTLR_R)) { + case SCTLR_S: + return is_user ? 0 : PAGE_READ; + case SCTLR_R: + return PAGE_READ; + default: + return 0; + } + case 1: + return is_user ? 0 : PAGE_READ | PAGE_WRITE; + case 2: + if (is_user) { + return PAGE_READ; + } else { + return PAGE_READ | PAGE_WRITE; + } + case 3: + return PAGE_READ | PAGE_WRITE; + case 4: /* Reserved. */ + return 0; + case 5: + return is_user ? 0 : PAGE_READ; + case 6: + return PAGE_READ; + case 7: + if (!arm_feature(env, ARM_FEATURE_V6K)) { + return 0; + } + return PAGE_READ; + default: + g_assert_not_reached(); + } +} + +/* + * Translate section/page access permissions to page + * R/W protection flags. + * + * @ap: The 2-bit simple AP (AP[2:1]) + * @is_user: TRUE if accessing from PL0 + */ +static inline int simple_ap_to_rw_prot_is_user(int ap, bool is_user) +{ + switch (ap) { + case 0: + return is_user ? 0 : PAGE_READ | PAGE_WRITE; + case 1: + return PAGE_READ | PAGE_WRITE; + case 2: + return is_user ? 
0 : PAGE_READ; + case 3: + return PAGE_READ; + default: + g_assert_not_reached(); + } +} + +static inline int +simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap) +{ + return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx)); +} + +/* + * Translate S2 section/page access permissions to protection flags + * + * @env: CPUARMState + * @s2ap: The 2-bit stage2 access permissions (S2AP) + * @xn: XN (execute-never) bits + * @s1_is_el0: true if this is S2 of an S1+2 walk for EL0 + */ +static int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0) +{ + int prot = 0; + + if (s2ap & 1) { + prot |= PAGE_READ; + } + if (s2ap & 2) { + prot |= PAGE_WRITE; + } + + if (cpu_isar_feature(any_tts2uxn, env_archcpu(env))) { + switch (xn) { + case 0: + prot |= PAGE_EXEC; + break; + case 1: + if (s1_is_el0) { + prot |= PAGE_EXEC; + } + break; + case 2: + break; + case 3: + if (!s1_is_el0) { + prot |= PAGE_EXEC; + } + break; + default: + g_assert_not_reached(); + } + } else { + if (!extract32(xn, 1, 1)) { + if (arm_el_is_aa64(env, 2) || prot & PAGE_READ) { + prot |= PAGE_EXEC; + } + } + } + return prot; +} + +/* + * Translate section/page access permissions to protection flags + * + * @env: CPUARMState + * @mmu_idx: MMU index indicating required translation regime + * @is_aa64: TRUE if AArch64 + * @ap: The 2-bit simple AP (AP[2:1]) + * @ns: NS (non-secure) bit + * @xn: XN (execute-never) bit + * @pxn: PXN (privileged execute-never) bit + */ +static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64, + int ap, int ns, int xn, int pxn) +{ + bool is_user = regime_is_user(env, mmu_idx); + int prot_rw, user_rw; + bool have_wxn; + int wxn = 0; + + assert(mmu_idx != ARMMMUIdx_Stage2); + assert(mmu_idx != ARMMMUIdx_Stage2_S); + + user_rw = simple_ap_to_rw_prot_is_user(ap, true); + if (is_user) { + prot_rw = user_rw; + } else { + if (user_rw && regime_is_pan(env, mmu_idx)) { + /* PAN forbids data accesses but doesn't affect insn fetch */ + prot_rw = 0; + } else { + prot_rw = simple_ap_to_rw_prot_is_user(ap, false); + } + } + + if (ns && arm_is_secure(env) && (env->cp15.scr_el3 & SCR_SIF)) { + return prot_rw; + } + + /* + * TODO have_wxn should be replaced with + * ARM_FEATURE_V8 || (ARM_FEATURE_V7 && ARM_FEATURE_EL2) + * when ARM_FEATURE_EL2 starts getting set. For now we assume all LPAE + * compatible processors have EL2, which is required for [U]WXN. 
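
As a reference point for the AP[2:1] "simple access" encoding decoded by
simple_ap_to_rw_prot_is_user() above, the mapping can be tabulated with a
small standalone program. This is an illustrative sketch in plain C with
local PAGE_* values and a locally named helper; it is not the QEMU code.

#include <stdio.h>

#define PAGE_READ  1
#define PAGE_WRITE 2

/* Same decode as the simple-AP helper above, restated for illustration. */
static int simple_ap_decode(int ap, int is_user)
{
    switch (ap) {
    case 0: return is_user ? 0 : PAGE_READ | PAGE_WRITE; /* privileged RW */
    case 1: return PAGE_READ | PAGE_WRITE;               /* RW at any EL  */
    case 2: return is_user ? 0 : PAGE_READ;              /* privileged RO */
    case 3: return PAGE_READ;                            /* RO at any EL  */
    default: return 0;
    }
}

int main(void)
{
    for (int ap = 0; ap < 4; ap++) {
        printf("AP[2:1]=%d  priv r=%d w=%d  user r=%d w=%d\n", ap,
               !!(simple_ap_decode(ap, 0) & PAGE_READ),
               !!(simple_ap_decode(ap, 0) & PAGE_WRITE),
               !!(simple_ap_decode(ap, 1) & PAGE_READ),
               !!(simple_ap_decode(ap, 1) & PAGE_WRITE));
    }
    return 0;
}
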
+ */ + have_wxn = arm_feature(env, ARM_FEATURE_LPAE); + + if (have_wxn) { + wxn = regime_sctlr(env, mmu_idx) & SCTLR_WXN; + } + + if (is_aa64) { + if (regime_has_2_ranges(mmu_idx) && !is_user) { + xn = pxn || (user_rw & PAGE_WRITE); + } + } else if (arm_feature(env, ARM_FEATURE_V7)) { + switch (regime_el(env, mmu_idx)) { + case 1: + case 3: + if (is_user) { + xn = xn || !(user_rw & PAGE_READ); + } else { + int uwxn = 0; + if (have_wxn) { + uwxn = regime_sctlr(env, mmu_idx) & SCTLR_UWXN; + } + xn = xn || !(prot_rw & PAGE_READ) || pxn || + (uwxn && (user_rw & PAGE_WRITE)); + } + break; + case 2: + break; + } + } else { + xn = wxn = 0; + } + + if (xn || (wxn && (prot_rw & PAGE_WRITE))) { + return prot_rw; + } + return prot_rw | PAGE_EXEC; +} + +static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx, + uint32_t *table, uint32_t address) +{ + /* Note that we can only get here for an AArch32 PL0/PL1 lookup */ + TCR *tcr = regime_tcr(env, mmu_idx); + + if (address & tcr->mask) { + if (tcr->raw_tcr & TTBCR_PD1) { + /* Translation table walk disabled for TTBR1 */ + return false; + } + *table = regime_ttbr(env, mmu_idx, 1) & 0xffffc000; + } else { + if (tcr->raw_tcr & TTBCR_PD0) { + /* Translation table walk disabled for TTBR0 */ + return false; + } + *table = regime_ttbr(env, mmu_idx, 0) & tcr->base_mask; + } + *table |= (address >> 18) & 0x3ffc; + return true; +} + +/* Translate a S1 pagetable walk through S2 if needed. */ +static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, + hwaddr addr, bool *is_secure, + ARMMMUFaultInfo *fi) +{ + if (arm_mmu_idx_is_stage1_of_2(mmu_idx) && + !regime_translation_disabled(env, ARMMMUIdx_Stage2)) { + target_ulong s2size; + hwaddr s2pa; + int s2prot; + int ret; + ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S + : ARMMMUIdx_Stage2; + ARMCacheAttrs cacheattrs = {}; + MemTxAttrs txattrs = {}; + + ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false, + &s2pa, &txattrs, &s2prot, &s2size, fi, + &cacheattrs); + if (ret) { + assert(fi->type != ARMFault_None); + fi->s2addr = addr; + fi->stage2 = true; + fi->s1ptw = true; + fi->s1ns = !*is_secure; + return ~0; + } + if ((arm_hcr_el2_eff(env) & HCR_PTW) && + (cacheattrs.attrs & 0xf0) == 0) { + /* + * PTW set and S1 walk touched S2 Device memory: + * generate Permission fault. + */ + fi->type = ARMFault_Permission; + fi->s2addr = addr; + fi->stage2 = true; + fi->s1ptw = true; + fi->s1ns = !*is_secure; + return ~0; + } + + if (arm_is_secure_below_el3(env)) { + /* Check if page table walk is to secure or non-secure PA space. */ + if (*is_secure) { + *is_secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW); + } else { + *is_secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW); + } + } else { + assert(!*is_secure); + } + + addr = s2pa; + } + return addr; +} + +/* All loads done in the course of a page table walk go through here. 
*/ +static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure, + ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + MemTxAttrs attrs = {}; + MemTxResult result = MEMTX_OK; + AddressSpace *as; + uint32_t data; + + addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi); + attrs.secure = is_secure; + as = arm_addressspace(cs, attrs); + if (fi->s1ptw) { + return 0; + } + if (regime_translation_big_endian(env, mmu_idx)) { + data = address_space_ldl_be(as, addr, attrs, &result); + } else { + data = address_space_ldl_le(as, addr, attrs, &result); + } + if (result == MEMTX_OK) { + return data; + } + fi->type = ARMFault_SyncExternalOnWalk; + fi->ea = arm_extabort_type(result); + return 0; +} + +static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure, + ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + MemTxAttrs attrs = {}; + MemTxResult result = MEMTX_OK; + AddressSpace *as; + uint64_t data; + + addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi); + attrs.secure = is_secure; + as = arm_addressspace(cs, attrs); + if (fi->s1ptw) { + return 0; + } + if (regime_translation_big_endian(env, mmu_idx)) { + data = address_space_ldq_be(as, addr, attrs, &result); + } else { + data = address_space_ldq_le(as, addr, attrs, &result); + } + if (result == MEMTX_OK) { + return data; + } + fi->type = ARMFault_SyncExternalOnWalk; + fi->ea = arm_extabort_type(result); + return 0; +} + +static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, int *prot, + target_ulong *page_size, + ARMMMUFaultInfo *fi) +{ + CPUState *cs = env_cpu(env); + int level = 1; + uint32_t table; + uint32_t desc; + int type; + int ap; + int domain = 0; + int domain_prot; + hwaddr phys_addr; + uint32_t dacr; + + /* Pagetable walk. */ + /* Lookup l1 descriptor. */ + if (!get_level1_table_address(env, mmu_idx, &table, address)) { + /* Section translation fault if page walk is disabled by PD0 or PD1 */ + fi->type = ARMFault_Translation; + goto do_fault; + } + desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), + mmu_idx, fi); + if (fi->type != ARMFault_None) { + goto do_fault; + } + type = (desc & 3); + domain = (desc >> 5) & 0x0f; + if (regime_el(env, mmu_idx) == 1) { + dacr = env->cp15.dacr_ns; + } else { + dacr = env->cp15.dacr_s; + } + domain_prot = (dacr >> (domain * 2)) & 3; + if (type == 0) { + /* Section translation fault. */ + fi->type = ARMFault_Translation; + goto do_fault; + } + if (type != 2) { + level = 2; + } + if (domain_prot == 0 || domain_prot == 2) { + fi->type = ARMFault_Domain; + goto do_fault; + } + if (type == 2) { + /* 1Mb section. */ + phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); + ap = (desc >> 10) & 3; + *page_size = 1024 * 1024; + } else { + /* Lookup l2 entry. */ + if (type == 1) { + /* Coarse pagetable. */ + table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc); + } else { + /* Fine pagetable. */ + table = (desc & 0xfffff000) | ((address >> 8) & 0xffc); + } + desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), + mmu_idx, fi); + if (fi->type != ARMFault_None) { + goto do_fault; + } + switch (desc & 3) { + case 0: /* Page translation fault. */ + fi->type = ARMFault_Translation; + goto do_fault; + case 1: /* 64k page. 
*/ + phys_addr = (desc & 0xffff0000) | (address & 0xffff); + ap = (desc >> (4 + ((address >> 13) & 6))) & 3; + *page_size = 0x10000; + break; + case 2: /* 4k page. */ + phys_addr = (desc & 0xfffff000) | (address & 0xfff); + ap = (desc >> (4 + ((address >> 9) & 6))) & 3; + *page_size = 0x1000; + break; + case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */ + if (type == 1) { + /* ARMv6/XScale extended small page format */ + if (arm_feature(env, ARM_FEATURE_XSCALE) + || arm_feature(env, ARM_FEATURE_V6)) { + phys_addr = (desc & 0xfffff000) | (address & 0xfff); + *page_size = 0x1000; + } else { + /* + * UNPREDICTABLE in ARMv5; we choose to take a + * page translation fault. + */ + fi->type = ARMFault_Translation; + goto do_fault; + } + } else { + phys_addr = (desc & 0xfffffc00) | (address & 0x3ff); + *page_size = 0x400; + } + ap = (desc >> 4) & 3; + break; + default: + /* Never happens, but compiler isn't smart enough to tell. */ + abort(); + } + } + *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); + *prot |= *prot ? PAGE_EXEC : 0; + if (!(*prot & (1 << access_type))) { + /* Access permission fault. */ + fi->type = ARMFault_Permission; + goto do_fault; + } + *phys_ptr = phys_addr; + return false; +do_fault: + fi->domain = domain; + fi->level = level; + return true; +} + +static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot, + target_ulong *page_size, ARMMMUFaultInfo *fi) +{ + CPUState *cs = env_cpu(env); + ARMCPU *cpu = env_archcpu(env); + int level = 1; + uint32_t table; + uint32_t desc; + uint32_t xn; + uint32_t pxn = 0; + int type; + int ap; + int domain = 0; + int domain_prot; + hwaddr phys_addr; + uint32_t dacr; + bool ns; + + /* Pagetable walk. */ + /* Lookup l1 descriptor. */ + if (!get_level1_table_address(env, mmu_idx, &table, address)) { + /* Section translation fault if page walk is disabled by PD0 or PD1 */ + fi->type = ARMFault_Translation; + goto do_fault; + } + desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), + mmu_idx, fi); + if (fi->type != ARMFault_None) { + goto do_fault; + } + type = (desc & 3); + if (type == 0 || (type == 3 && !cpu_isar_feature(aa32_pxn, cpu))) { + /* + * Section translation fault, or attempt to use the encoding + * which is Reserved on implementations without PXN. + */ + fi->type = ARMFault_Translation; + goto do_fault; + } + if ((type == 1) || !(desc & (1 << 18))) { + /* Page or Section. */ + domain = (desc >> 5) & 0x0f; + } + if (regime_el(env, mmu_idx) == 1) { + dacr = env->cp15.dacr_ns; + } else { + dacr = env->cp15.dacr_s; + } + if (type == 1) { + level = 2; + } + domain_prot = (dacr >> (domain * 2)) & 3; + if (domain_prot == 0 || domain_prot == 2) { + /* Section or Page domain fault */ + fi->type = ARMFault_Domain; + goto do_fault; + } + if (type != 1) { + if (desc & (1 << 18)) { + /* Supersection. */ + phys_addr = (desc & 0xff000000) | (address & 0x00ffffff); + phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32; + phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36; + *page_size = 0x1000000; + } else { + /* Section. */ + phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); + *page_size = 0x100000; + } + ap = ((desc >> 10) & 3) | ((desc >> 13) & 4); + xn = desc & (1 << 4); + pxn = desc & 1; + ns = extract32(desc, 19, 1); + } else { + if (cpu_isar_feature(aa32_pxn, cpu)) { + pxn = (desc >> 2) & 1; + } + ns = extract32(desc, 3, 1); + /* Lookup l2 entry. 
*/ + table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc); + desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), + mmu_idx, fi); + if (fi->type != ARMFault_None) { + goto do_fault; + } + ap = ((desc >> 4) & 3) | ((desc >> 7) & 4); + switch (desc & 3) { + case 0: /* Page translation fault. */ + fi->type = ARMFault_Translation; + goto do_fault; + case 1: /* 64k page. */ + phys_addr = (desc & 0xffff0000) | (address & 0xffff); + xn = desc & (1 << 15); + *page_size = 0x10000; + break; + case 2: case 3: /* 4k page. */ + phys_addr = (desc & 0xfffff000) | (address & 0xfff); + xn = desc & 1; + *page_size = 0x1000; + break; + default: + /* Never happens, but compiler isn't smart enough to tell. */ + abort(); + } + } + if (domain_prot == 3) { + *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; + } else { + if (pxn && !regime_is_user(env, mmu_idx)) { + xn = 1; + } + if (xn && access_type == MMU_INST_FETCH) { + fi->type = ARMFault_Permission; + goto do_fault; + } + + if (arm_feature(env, ARM_FEATURE_V6K) && + (regime_sctlr(env, mmu_idx) & SCTLR_AFE)) { + /* The simplified model uses AP[0] as an access control bit. */ + if ((ap & 1) == 0) { + /* Access flag fault. */ + fi->type = ARMFault_AccessFlag; + goto do_fault; + } + *prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1); + } else { + *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); + } + if (*prot && !xn) { + *prot |= PAGE_EXEC; + } + if (!(*prot & (1 << access_type))) { + /* Access permission fault. */ + fi->type = ARMFault_Permission; + goto do_fault; + } + } + if (ns) { + /* + * The NS bit will (as required by the architecture) have no effect if + * the CPU doesn't support TZ or this is a non-secure translation + * regime, because the attribute will already be non-secure. + */ + attrs->secure = false; + } + *phys_ptr = phys_addr; + return false; +do_fault: + fi->domain = domain; + fi->level = level; + return true; +} + +/* + * check_s2_mmu_setup + * @cpu: ARMCPU + * @is_aa64: True if the translation regime is in AArch64 state + * @startlevel: Suggested starting level + * @inputsize: Bitsize of IPAs + * @stride: Page-table stride (See the ARM ARM) + * + * Returns true if the suggested S2 translation parameters are OK and + * false otherwise. + */ +static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level, + int inputsize, int stride) +{ + const int grainsize = stride + 3; + int startsizecheck; + + /* Negative levels are never allowed. */ + if (level < 0) { + return false; + } + + startsizecheck = inputsize - ((3 - level) * stride + grainsize); + if (startsizecheck < 1 || startsizecheck > stride + 4) { + return false; + } + + if (is_aa64) { + CPUARMState *env = &cpu->env; + unsigned int pamax = arm_pamax(cpu); + + switch (stride) { + case 13: /* 64KB Pages. */ + if (level == 0 || (level == 1 && pamax <= 42)) { + return false; + } + break; + case 11: /* 16KB Pages. */ + if (level == 0 || (level == 1 && pamax <= 40)) { + return false; + } + break; + case 9: /* 4KB Pages. */ + if (level == 0 && pamax <= 42) { + return false; + } + break; + default: + g_assert_not_reached(); + } + + /* Inputsize checks. */ + if (inputsize > pamax && + (arm_el_is_aa64(env, 1) || inputsize > 40)) { + /* This is CONSTRAINED UNPREDICTABLE and we choose to fault. */ + return false; + } + } else { + /* AArch32 only supports 4KB pages. Assert on that. 
*/ + assert(stride == 9); + + if (level == 0) { + return false; + } + } + return true; +} + +/* + * Translate from the 4-bit stage 2 representation of + * memory attributes (without cache-allocation hints) to + * the 8-bit representation of the stage 1 MAIR registers + * (which includes allocation hints). + * + * ref: shared/translation/attrs/S2AttrDecode() + * .../S2ConvertAttrsHints() + */ +static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs) +{ + uint8_t hiattr = extract32(s2attrs, 2, 2); + uint8_t loattr = extract32(s2attrs, 0, 2); + uint8_t hihint = 0, lohint = 0; + + if (hiattr != 0) { /* normal memory */ + if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */ + hiattr = loattr = 1; /* non-cacheable */ + } else { + if (hiattr != 1) { /* Write-through or write-back */ + hihint = 3; /* RW allocate */ + } + if (loattr != 1) { /* Write-through or write-back */ + lohint = 3; /* RW allocate */ + } + } + } + + return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint; +} + +static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va, + ARMMMUIdx mmu_idx) +{ + uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr; + uint32_t el = regime_el(env, mmu_idx); + int select, tsz; + bool epd, hpd; + + assert(mmu_idx != ARMMMUIdx_Stage2_S); + + if (mmu_idx == ARMMMUIdx_Stage2) { + /* VTCR */ + bool sext = extract32(tcr, 4, 1); + bool sign = extract32(tcr, 3, 1); + + /* + * If the sign-extend bit is not the same as t0sz[3], the result + * is unpredictable. Flag this as a guest error. + */ + if (sign != sext) { + qemu_log_mask(LOG_GUEST_ERROR, + "AArch32: VTCR.S / VTCR.T0SZ[3] mismatch\n"); + } + tsz = sextract32(tcr, 0, 4) + 8; + select = 0; + hpd = false; + epd = false; + } else if (el == 2) { + /* HTCR */ + tsz = extract32(tcr, 0, 3); + select = 0; + hpd = extract64(tcr, 24, 1); + epd = false; + } else { + int t0sz = extract32(tcr, 0, 3); + int t1sz = extract32(tcr, 16, 3); + + if (t1sz == 0) { + select = va > (0xffffffffu >> t0sz); + } else { + /* Note that we will detect errors later. */ + select = va >= ~(0xffffffffu >> t1sz); + } + if (!select) { + tsz = t0sz; + epd = extract32(tcr, 7, 1); + hpd = extract64(tcr, 41, 1); + } else { + tsz = t1sz; + epd = extract32(tcr, 23, 1); + hpd = extract64(tcr, 42, 1); + } + /* For aarch32, hpd0 is not enabled without t2e as well. */ + hpd &= extract32(tcr, 6, 1); + } + + return (ARMVAParameters) { + .tsz = tsz, + .select = select, + .epd = epd, + .hpd = hpd, + }; +} + +/** + * get_phys_addr_lpae: perform one stage of page table walk, LPAE format + * + * Returns false if the translation was successful. Otherwise, phys_ptr, attrs, + * prot and page_size may not be filled in, and the populated fsr value provides + * information on why the translation aborted, in the format of a long-format + * DFSR/IFSR fault register, with the following caveats: + * * the WnR bit is never set (the caller must do this). + * + * @env: CPUARMState + * @address: virtual address to get physical address for + * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH + * @mmu_idx: MMU index indicating required translation regime + * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page table + * walk), must be true if this is stage 2 of a stage 1+2 walk for an + * EL0 access). If @mmu_idx is anything else, @s1_is_el0 is ignored. 
+ * @phys_ptr: set to the physical address corresponding to the virtual address + * @attrs: set to the memory transaction attributes to use + * @prot: set to the permissions for the page containing phys_ptr + * @page_size_ptr: set to the size of the page containing phys_ptr + * @fi: set to fault info if the translation fails + * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes + */ +static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + bool s1_is_el0, + hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot, + target_ulong *page_size_ptr, + ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) +{ + ARMCPU *cpu = env_archcpu(env); + CPUState *cs = CPU(cpu); + /* Read an LPAE long-descriptor translation table. */ + ARMFaultType fault_type = ARMFault_Translation; + uint32_t level; + ARMVAParameters param; + uint64_t ttbr; + hwaddr descaddr, indexmask, indexmask_grainsize; + uint32_t tableattrs; + target_ulong page_size; + uint32_t attrs; + int32_t stride; + int addrsize, inputsize; + TCR *tcr = regime_tcr(env, mmu_idx); + int ap, ns, xn, pxn; + uint32_t el = regime_el(env, mmu_idx); + uint64_t descaddrmask; + bool aarch64 = arm_el_is_aa64(env, el); + bool guarded = false; + + /* TODO: This code does not support shareability levels. */ + if (aarch64) { + param = aa64_va_parameters(env, address, mmu_idx, + access_type != MMU_INST_FETCH); + level = 0; + addrsize = 64 - 8 * param.tbi; + inputsize = 64 - param.tsz; + } else { + param = aa32_va_parameters(env, address, mmu_idx); + level = 1; + addrsize = (mmu_idx == ARMMMUIdx_Stage2 ? 40 : 32); + inputsize = addrsize - param.tsz; + } + + /* + * We determined the region when collecting the parameters, but we + * have not yet validated that the address is valid for the region. + * Extract the top bits and verify that they all match select. + * + * For aa32, if inputsize == addrsize, then we have selected the + * region by exclusion in aa32_va_parameters and there is no more + * validation to do here. + */ + if (inputsize < addrsize) { + target_ulong top_bits = sextract64(address, inputsize, + addrsize - inputsize); + if (-top_bits != param.select) { + /* The gap between the two regions is a Translation fault */ + fault_type = ARMFault_Translation; + goto do_fault; + } + } + + if (param.using64k) { + stride = 13; + } else if (param.using16k) { + stride = 11; + } else { + stride = 9; + } + + /* + * Note that QEMU ignores shareability and cacheability attributes, + * so we don't need to do anything with the SH, ORGN, IRGN fields + * in the TTBCR. Similarly, TTBCR:A1 selects whether we get the + * ASID from TTBR0 or TTBR1, but QEMU's TLB doesn't currently + * implement any ASID-like capability so we can ignore it (instead + * we will always flush the TLB any time the ASID is changed). + */ + ttbr = regime_ttbr(env, mmu_idx, param.select); + + /* + * Here we should have set up all the parameters for the translation: + * inputsize, ttbr, epd, stride, tbi + */ + + if (param.epd) { + /* + * Translation table walk disabled => Translation fault on TLB miss + * Note: This is always 0 on 64-bit EL2 and EL3. + */ + goto do_fault; + } + + if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { + /* + * The starting level depends on the virtual address size (which can + * be up to 48 bits) and the translation granule size. It indicates + * the number of strides (stride bits at a time) needed to + * consume the bits of the input address. 
In the pseudocode this is: + * level = 4 - RoundUp((inputsize - grainsize) / stride) + * where their 'inputsize' is our 'inputsize', 'grainsize' is + * our 'stride + 3' and 'stride' is our 'stride'. + * Applying the usual "rounded up m/n is (m+n-1)/n" and simplifying: + * = 4 - (inputsize - stride - 3 + stride - 1) / stride + * = 4 - (inputsize - 4) / stride; + */ + level = 4 - (inputsize - 4) / stride; + } else { + /* + * For stage 2 translations the starting level is specified by the + * VTCR_EL2.SL0 field (whose interpretation depends on the page size) + */ + uint32_t sl0 = extract32(tcr->raw_tcr, 6, 2); + uint32_t startlevel; + bool ok; + + if (!aarch64 || stride == 9) { + /* AArch32 or 4KB pages */ + startlevel = 2 - sl0; + + if (cpu_isar_feature(aa64_st, cpu)) { + startlevel &= 3; + } + } else { + /* 16KB or 64KB pages */ + startlevel = 3 - sl0; + } + + /* Check that the starting level is valid. */ + ok = check_s2_mmu_setup(cpu, aarch64, startlevel, + inputsize, stride); + if (!ok) { + fault_type = ARMFault_Translation; + goto do_fault; + } + level = startlevel; + } + + indexmask_grainsize = (1ULL << (stride + 3)) - 1; + indexmask = (1ULL << (inputsize - (stride * (4 - level)))) - 1; + + /* Now we can extract the actual base address from the TTBR */ + descaddr = extract64(ttbr, 0, 48); + /* + * We rely on this masking to clear the RES0 bits at the bottom of the TTBR + * and also to mask out CnP (bit 0) which could validly be non-zero. + */ + descaddr &= ~indexmask; + + /* + * The address field in the descriptor goes up to bit 39 for ARMv7 + * but up to bit 47 for ARMv8, but we use the descaddrmask + * up to bit 39 for AArch32, because we don't need other bits in that case + * to construct next descriptor address (anyway they should be all zeroes). + */ + descaddrmask = ((1ull << (aarch64 ? 48 : 40)) - 1) & + ~indexmask_grainsize; + + /* + * Secure accesses start with the page table in secure memory and + * can be downgraded to non-secure at any step. Non-secure accesses + * remain non-secure. We implement this by just ORing in the NSTable/NS + * bits at each step. + */ + tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4); + for (;;) { + uint64_t descriptor; + bool nstable; + + descaddr |= (address >> (stride * (4 - level))) & indexmask; + descaddr &= ~7ULL; + nstable = extract32(tableattrs, 4, 1); + descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi); + if (fi->type != ARMFault_None) { + goto do_fault; + } + + if (!(descriptor & 1) || + (!(descriptor & 2) && (level == 3))) { + /* Invalid, or the Reserved level 3 encoding */ + goto do_fault; + } + descaddr = descriptor & descaddrmask; + + if ((descriptor & 2) && (level < 3)) { + /* + * Table entry. The top five bits are attributes which may + * propagate down through lower levels of the table (and + * which are all arranged so that 0 means "no effect", so + * we can gather them up by ORing in the bits at each level). + */ + tableattrs |= extract64(descriptor, 59, 5); + level++; + indexmask = indexmask_grainsize; + continue; + } + /* + * Block entry at level 1 or 2, or page entry at level 3. + * These are basically the same thing, although the number + * of bits we pull in from the vaddr varies. 
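
A quick numeric check of the starting-level formula above, using C integer
division: for the 4KB granule (stride 9) an inputsize of 48 gives level 0,
39 gives level 1 and 30 gives level 2, while the 64KB granule (stride 13)
with inputsize 48 starts at level 1. The following is a self-contained
sketch of the same arithmetic, also printing how much address space one
entry covers at that level (the same 1 << (stride * (4 - level) + 3)
expression used just below for the final block/page size); it is plain C
for illustration, not the QEMU code.

#include <stdio.h>

int main(void)
{
    /* stride: 9 = 4KB granule, 11 = 16KB, 13 = 64KB (grainsize = stride + 3) */
    int cases[][2] = { {48, 9}, {39, 9}, {30, 9}, {48, 13} };

    for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
        int inputsize = cases[i][0], stride = cases[i][1];
        int level = 4 - (inputsize - 4) / stride;          /* starting level */
        unsigned long long span = 1ULL << (stride * (4 - level) + 3);
        printf("inputsize=%d stride=%d -> start level %d "
               "(one entry at this level covers 0x%llx bytes)\n",
               inputsize, stride, level, span);
    }
    return 0;
}
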
+ */ + page_size = (1ULL << ((stride * (4 - level)) + 3)); + descaddr |= (address & (page_size - 1)); + /* Extract attributes from the descriptor */ + attrs = extract64(descriptor, 2, 10) + | (extract64(descriptor, 52, 12) << 10); + + if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { + /* Stage 2 table descriptors do not include any attribute fields */ + break; + } + /* Merge in attributes from table descriptors */ + attrs |= nstable << 3; /* NS */ + guarded = extract64(descriptor, 50, 1); /* GP */ + if (param.hpd) { + /* HPD disables all the table attributes except NSTable. */ + break; + } + attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */ + /* + * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1 + * means "force PL1 access only", which means forcing AP[1] to 0. + */ + attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */ + attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */ + break; + } + /* + * Here descaddr is the final physical address, + * and attributes are all in attrs. + */ + fault_type = ARMFault_AccessFlag; + if ((attrs & (1 << 8)) == 0) { + /* Access flag */ + goto do_fault; + } + + ap = extract32(attrs, 4, 2); + + if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { + ns = mmu_idx == ARMMMUIdx_Stage2; + xn = extract32(attrs, 11, 2); + *prot = get_S2prot(env, ap, xn, s1_is_el0); + } else { + ns = extract32(attrs, 3, 1); + xn = extract32(attrs, 12, 1); + pxn = extract32(attrs, 11, 1); + *prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn); + } + + fault_type = ARMFault_Permission; + if (!(*prot & (1 << access_type))) { + goto do_fault; + } + + if (ns) { + /* + * The NS bit will (as required by the architecture) have no effect if + * the CPU doesn't support TZ or this is a non-secure translation + * regime, because the attribute will already be non-secure. + */ + txattrs->secure = false; + } + /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */ + if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) { + arm_tlb_bti_gp(txattrs) = true; + } + + if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { + cacheattrs->attrs = convert_stage2_attrs(env, extract32(attrs, 0, 4)); + } else { + /* Index into MAIR registers for cache attributes */ + uint8_t attrindx = extract32(attrs, 0, 3); + uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)]; + assert(attrindx <= 7); + cacheattrs->attrs = extract64(mair, attrindx * 8, 8); + } + cacheattrs->shareability = extract32(attrs, 6, 2); + + *phys_ptr = descaddr; + *page_size_ptr = page_size; + return false; + +do_fault: + fi->type = fault_type; + fi->level = level; + /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */ + fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 || + mmu_idx == ARMMMUIdx_Stage2_S); + fi->s1ns = mmu_idx == ARMMMUIdx_Stage2; + return true; +} + +static inline void get_phys_addr_pmsav7_default(CPUARMState *env, + ARMMMUIdx mmu_idx, + int32_t address, int *prot) +{ + if (!arm_feature(env, ARM_FEATURE_M)) { + *prot = PAGE_READ | PAGE_WRITE; + switch (address) { + case 0xF0000000 ... 0xFFFFFFFF: + if (regime_sctlr(env, mmu_idx) & SCTLR_V) { + /* hivecs execing is ok */ + *prot |= PAGE_EXEC; + } + break; + case 0x00000000 ... 0x7FFFFFFF: + *prot |= PAGE_EXEC; + break; + } + } else { + /* + * Default system address map for M profile cores. + * The architecture specifies which regions are execute-never; + * at the MPU level no other checks are defined. 
+ */ + switch (address) { + case 0x00000000 ... 0x1fffffff: /* ROM */ + case 0x20000000 ... 0x3fffffff: /* SRAM */ + case 0x60000000 ... 0x7fffffff: /* RAM */ + case 0x80000000 ... 0x9fffffff: /* RAM */ + *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; + break; + case 0x40000000 ... 0x5fffffff: /* Peripheral */ + case 0xa0000000 ... 0xbfffffff: /* Device */ + case 0xc0000000 ... 0xdfffffff: /* Device */ + case 0xe0000000 ... 0xffffffff: /* System */ + *prot = PAGE_READ | PAGE_WRITE; + break; + default: + g_assert_not_reached(); + } + } +} + +static bool pmsav7_use_background_region(ARMCPU *cpu, + ARMMMUIdx mmu_idx, bool is_user) +{ + /* + * Return true if we should use the default memory map as a + * "background" region if there are no hits against any MPU regions. + */ + CPUARMState *env = &cpu->env; + + if (is_user) { + return false; + } + + if (arm_feature(env, ARM_FEATURE_M)) { + return env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] + & R_V7M_MPU_CTRL_PRIVDEFENA_MASK; + } else { + return regime_sctlr(env, mmu_idx) & SCTLR_BR; + } +} + +static inline bool m_is_ppb_region(CPUARMState *env, uint32_t address) +{ + /* True if address is in the M profile PPB region 0xe0000000 - 0xe00fffff */ + return arm_feature(env, ARM_FEATURE_M) && + extract32(address, 20, 12) == 0xe00; +} + +static inline bool m_is_system_region(CPUARMState *env, uint32_t address) +{ + /* + * True if address is in the M profile system region + * 0xe0000000 - 0xffffffff + */ + return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7; +} + +static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, int *prot, + target_ulong *page_size, + ARMMMUFaultInfo *fi) +{ + ARMCPU *cpu = env_archcpu(env); + int n; + bool is_user = regime_is_user(env, mmu_idx); + + *phys_ptr = address; + *page_size = TARGET_PAGE_SIZE; + *prot = 0; + + if (regime_translation_disabled(env, mmu_idx) || + m_is_ppb_region(env, address)) { + /* + * MPU disabled or M profile PPB access: use default memory map. + * The other case which uses the default memory map in the + * v7M ARM ARM pseudocode is exception vector reads from the vector + * table. In QEMU those accesses are done in arm_v7m_load_vector(), + * which always does a direct read using address_space_ldl(), rather + * than going via this function, so we don't need to check that here. + */ + get_phys_addr_pmsav7_default(env, mmu_idx, address, prot); + } else { /* MPU enabled */ + for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) { + /* region search */ + uint32_t base = env->pmsav7.drbar[n]; + uint32_t rsize = extract32(env->pmsav7.drsr[n], 1, 5); + uint32_t rmask; + bool srdis = false; + + if (!(env->pmsav7.drsr[n] & 0x1)) { + continue; + } + + if (!rsize) { + qemu_log_mask(LOG_GUEST_ERROR, + "DRSR[%d]: Rsize field cannot be 0\n", n); + continue; + } + rsize++; + rmask = (1ull << rsize) - 1; + + if (base & rmask) { + qemu_log_mask(LOG_GUEST_ERROR, + "DRBAR[%d]: 0x%" PRIx32 " misaligned " + "to DRSR region size, mask = 0x%" PRIx32 "\n", + n, base, rmask); + continue; + } + + if (address < base || address > base + rmask) { + /* + * Address not in this region. We must check whether the + * region covers addresses in the same page as our address. 
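
For the PMSAv7 region search above, DRSR.RSIZE encodes a power-of-two
region size of 2^(RSIZE+1) bytes, and DRBAR must be aligned to that size
(hence the base & rmask check). A small standalone illustration of the
size/mask derivation follows; the array of example RSIZE values is made up
for the demonstration and this is not the QEMU register layout.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned rsize_fields[] = { 4, 11, 31 };   /* example DRSR[5:1] values */

    for (unsigned i = 0; i < 3; i++) {
        unsigned rsize = rsize_fields[i] + 1;              /* log2(size) */
        uint64_t rmask = (1ull << rsize) - 1;              /* size - 1   */
        printf("RSIZE=%u -> region size 0x%llx bytes, "
               "base must satisfy (base & 0x%llx) == 0\n",
               rsize_fields[i],
               (unsigned long long)(rmask + 1),
               (unsigned long long)rmask);
    }
    return 0;
}

So RSIZE=4 gives a 32-byte region, RSIZE=11 a 4KB region, and RSIZE=31 the
full 4GB address space.
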
+ * In that case we must not report a size that covers the + * whole page for a subsequent hit against a different MPU + * region or the background region, because it would result in + * incorrect TLB hits for subsequent accesses to addresses that + * are in this MPU region. + */ + if (ranges_overlap(base, rmask, + address & TARGET_PAGE_MASK, + TARGET_PAGE_SIZE)) { + *page_size = 1; + } + continue; + } + + /* Region matched */ + + if (rsize >= 8) { /* no subregions for regions < 256 bytes */ + int i, snd; + uint32_t srdis_mask; + + rsize -= 3; /* sub region size (power of 2) */ + snd = ((address - base) >> rsize) & 0x7; + srdis = extract32(env->pmsav7.drsr[n], snd + 8, 1); + + srdis_mask = srdis ? 0x3 : 0x0; + for (i = 2; i <= 8 && rsize < TARGET_PAGE_BITS; i *= 2) { + /* + * This will check in groups of 2, 4 and then 8, whether + * the subregion bits are consistent. rsize is incremented + * back up to give the region size, considering consistent + * adjacent subregions as one region. Stop testing if rsize + * is already big enough for an entire QEMU page. + */ + int snd_rounded = snd & ~(i - 1); + uint32_t srdis_multi = extract32(env->pmsav7.drsr[n], + snd_rounded + 8, i); + if (srdis_mask ^ srdis_multi) { + break; + } + srdis_mask = (srdis_mask << i) | srdis_mask; + rsize++; + } + } + if (srdis) { + continue; + } + if (rsize < TARGET_PAGE_BITS) { + *page_size = 1 << rsize; + } + break; + } + + if (n == -1) { /* no hits */ + if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) { + /* background fault */ + fi->type = ARMFault_Background; + return true; + } + get_phys_addr_pmsav7_default(env, mmu_idx, address, prot); + } else { /* a MPU hit! */ + uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3); + uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1); + + if (m_is_system_region(env, address)) { + /* System space is always execute never */ + xn = 1; + } + + if (is_user) { /* User mode AP bit decoding */ + switch (ap) { + case 0: + case 1: + case 5: + break; /* no access */ + case 3: + *prot |= PAGE_WRITE; + /* fall through */ + case 2: + case 6: + *prot |= PAGE_READ | PAGE_EXEC; + break; + case 7: + /* for v7M, same as 6; for R profile a reserved value */ + if (arm_feature(env, ARM_FEATURE_M)) { + *prot |= PAGE_READ | PAGE_EXEC; + break; + } + /* fall through */ + default: + qemu_log_mask(LOG_GUEST_ERROR, + "DRACR[%d]: Bad value for AP bits: 0x%" + PRIx32 "\n", n, ap); + } + } else { /* Priv. mode AP bits decoding */ + switch (ap) { + case 0: + break; /* no access */ + case 1: + case 2: + case 3: + *prot |= PAGE_WRITE; + /* fall through */ + case 5: + case 6: + *prot |= PAGE_READ | PAGE_EXEC; + break; + case 7: + /* for v7M, same as 6; for R profile a reserved value */ + if (arm_feature(env, ARM_FEATURE_M)) { + *prot |= PAGE_READ | PAGE_EXEC; + break; + } + /* fall through */ + default: + qemu_log_mask(LOG_GUEST_ERROR, + "DRACR[%d]: Bad value for AP bits: 0x%" + PRIx32 "\n", n, ap); + } + } + + /* execute never */ + if (xn) { + *prot &= ~PAGE_EXEC; + } + } + } + + fi->type = ARMFault_Permission; + fi->level = 1; + return !(*prot & (1 << access_type)); +} + +static bool v8m_is_sau_exempt(CPUARMState *env, + uint32_t address, MMUAccessType access_type) +{ + /* + * The architecture specifies that certain address ranges are + * exempt from v8M SAU/IDAU checks. 
+ */ + return + (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) || + (address >= 0xe0000000 && address <= 0xe0002fff) || + (address >= 0xe000e000 && address <= 0xe000efff) || + (address >= 0xe002e000 && address <= 0xe002efff) || + (address >= 0xe0040000 && address <= 0xe0041fff) || + (address >= 0xe00ff000 && address <= 0xe00fffff); +} + +void v8m_security_lookup(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + V8M_SAttributes *sattrs) +{ + /* + * Look up the security attributes for this address. Compare the + * pseudocode SecurityCheck() function. + * We assume the caller has zero-initialized *sattrs. + */ + ARMCPU *cpu = env_archcpu(env); + int r; + bool idau_exempt = false, idau_ns = true, idau_nsc = true; + int idau_region = IREGION_NOTVALID; + uint32_t addr_page_base = address & TARGET_PAGE_MASK; + uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1); + + if (cpu->idau) { + IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau); + IDAUInterface *ii = IDAU_INTERFACE(cpu->idau); + + iic->check(ii, address, &idau_region, &idau_exempt, &idau_ns, + &idau_nsc); + } + + if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) { + /* 0xf0000000..0xffffffff is always S for insn fetches */ + return; + } + + if (idau_exempt || v8m_is_sau_exempt(env, address, access_type)) { + sattrs->ns = !regime_is_secure(env, mmu_idx); + return; + } + + if (idau_region != IREGION_NOTVALID) { + sattrs->irvalid = true; + sattrs->iregion = idau_region; + } + + switch (env->sau.ctrl & 3) { + case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */ + break; + case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */ + sattrs->ns = true; + break; + default: /* SAU.ENABLE == 1 */ + for (r = 0; r < cpu->sau_sregion; r++) { + if (env->sau.rlar[r] & 1) { + uint32_t base = env->sau.rbar[r] & ~0x1f; + uint32_t limit = env->sau.rlar[r] | 0x1f; + + if (base <= address && limit >= address) { + if (base > addr_page_base || limit < addr_page_limit) { + sattrs->subpage = true; + } + if (sattrs->srvalid) { + /* + * If we hit in more than one region then we must report + * as Secure, not NS-Callable, with no valid region + * number info. + */ + sattrs->ns = false; + sattrs->nsc = false; + sattrs->sregion = 0; + sattrs->srvalid = false; + break; + } else { + if (env->sau.rlar[r] & 2) { + sattrs->nsc = true; + } else { + sattrs->ns = true; + } + sattrs->srvalid = true; + sattrs->sregion = r; + } + } else { + /* + * Address not in this region. We must check whether the + * region covers addresses in the same page as our address. + * In that case we must not report a size that covers the + * whole page for a subsequent hit against a different MPU + * region or the background region, because it would result + * in incorrect TLB hits for subsequent accesses to + * addresses that are in this MPU region. + */ + if (limit >= base && + ranges_overlap(base, limit - base + 1, + addr_page_base, + TARGET_PAGE_SIZE)) { + sattrs->subpage = true; + } + } + } + } + break; + } + + /* + * The IDAU will override the SAU lookup results if it specifies + * higher security than the SAU does. 
+ */ + if (!idau_ns) { + if (sattrs->ns || (!idau_nsc && sattrs->nsc)) { + sattrs->ns = false; + sattrs->nsc = idau_nsc; + } + } +} + +bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, MemTxAttrs *txattrs, + int *prot, bool *is_subpage, + ARMMMUFaultInfo *fi, uint32_t *mregion) +{ + /* + * Perform a PMSAv8 MPU lookup (without also doing the SAU check + * that a full phys-to-virt translation does). + * mregion is (if not NULL) set to the region number which matched, + * or -1 if no region number is returned (MPU off, address did not + * hit a region, address hit in multiple regions). + * We set is_subpage to true if the region hit doesn't cover the + * entire TARGET_PAGE the address is within. + */ + ARMCPU *cpu = env_archcpu(env); + bool is_user = regime_is_user(env, mmu_idx); + uint32_t secure = regime_is_secure(env, mmu_idx); + int n; + int matchregion = -1; + bool hit = false; + uint32_t addr_page_base = address & TARGET_PAGE_MASK; + uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1); + + *is_subpage = false; + *phys_ptr = address; + *prot = 0; + if (mregion) { + *mregion = -1; + } + + /* + * Unlike the ARM ARM pseudocode, we don't need to check whether this + * was an exception vector read from the vector table (which is always + * done using the default system address map), because those accesses + * are done in arm_v7m_load_vector(), which always does a direct + * read using address_space_ldl(), rather than going via this function. + */ + if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */ + hit = true; + } else if (m_is_ppb_region(env, address)) { + hit = true; + } else { + if (pmsav7_use_background_region(cpu, mmu_idx, is_user)) { + hit = true; + } + + for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) { + /* region search */ + /* + * Note that the base address is bits [31:5] from the register + * with bits [4:0] all zeroes, but the limit address is bits + * [31:5] from the register with bits [4:0] all ones. + */ + uint32_t base = env->pmsav8.rbar[secure][n] & ~0x1f; + uint32_t limit = env->pmsav8.rlar[secure][n] | 0x1f; + + if (!(env->pmsav8.rlar[secure][n] & 0x1)) { + /* Region disabled */ + continue; + } + + if (address < base || address > limit) { + /* + * Address not in this region. We must check whether the + * region covers addresses in the same page as our address. + * In that case we must not report a size that covers the + * whole page for a subsequent hit against a different MPU + * region or the background region, because it would result in + * incorrect TLB hits for subsequent accesses to addresses that + * are in this MPU region. 
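
The PMSAv8 RBAR/RLAR decode above is worth one concrete example: the base
is RBAR with its low five bits zeroed and the limit is RLAR with its low
five bits forced to one, so regions are 32-byte granular. A self-contained
sketch with made-up register values (plain C, not the QEMU state layout):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t rbar = 0x20001000;              /* hypothetical RBAR value */
    uint32_t rlar = 0x20001fe1;              /* bit 0 = region enable   */

    uint32_t base  = rbar & ~0x1fu;          /* bits [31:5], low bits 0 */
    uint32_t limit = rlar | 0x1fu;           /* bits [31:5], low bits 1 */

    printf("region: 0x%08x .. 0x%08x (%u bytes), enabled=%u\n",
           base, limit, limit - base + 1, rlar & 1);
    return 0;
}

Here the region spans 0x20001000..0x20001fff (4 KiB) and is enabled.
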
+ */ + if (limit >= base && + ranges_overlap(base, limit - base + 1, + addr_page_base, + TARGET_PAGE_SIZE)) { + *is_subpage = true; + } + continue; + } + + if (base > addr_page_base || limit < addr_page_limit) { + *is_subpage = true; + } + + if (matchregion != -1) { + /* + * Multiple regions match -- always a failure (unlike + * PMSAv7 where highest-numbered-region wins) + */ + fi->type = ARMFault_Permission; + fi->level = 1; + return true; + } + + matchregion = n; + hit = true; + } + } + + if (!hit) { + /* background fault */ + fi->type = ARMFault_Background; + return true; + } + + if (matchregion == -1) { + /* hit using the background region */ + get_phys_addr_pmsav7_default(env, mmu_idx, address, prot); + } else { + uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2); + uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1); + bool pxn = false; + + if (arm_feature(env, ARM_FEATURE_V8_1M)) { + pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1); + } + + if (m_is_system_region(env, address)) { + /* System space is always execute never */ + xn = 1; + } + + *prot = simple_ap_to_rw_prot(env, mmu_idx, ap); + if (*prot && !xn && !(pxn && !is_user)) { + *prot |= PAGE_EXEC; + } + /* + * We don't need to look the attribute up in the MAIR0/MAIR1 + * registers because that only tells us about cacheability. + */ + if (mregion) { + *mregion = matchregion; + } + } + + fi->type = ARMFault_Permission; + fi->level = 1; + return !(*prot & (1 << access_type)); +} + + +static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, MemTxAttrs *txattrs, + int *prot, target_ulong *page_size, + ARMMMUFaultInfo *fi) +{ + uint32_t secure = regime_is_secure(env, mmu_idx); + V8M_SAttributes sattrs = {}; + bool ret; + bool mpu_is_subpage; + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs); + if (access_type == MMU_INST_FETCH) { + /* + * Instruction fetches always use the MMU bank and the + * transaction attribute determined by the fetch address, + * regardless of CPU state. This is painful for QEMU + * to handle, because it would mean we need to encode + * into the mmu_idx not just the (user, negpri) information + * for the current security state but also that for the + * other security state, which would balloon the number + * of mmu_idx values needed alarmingly. + * Fortunately we can avoid this because it's not actually + * possible to arbitrarily execute code from memory with + * the wrong security attribute: it will always generate + * an exception of some kind or another, apart from the + * special case of an NS CPU executing an SG instruction + * in S&NSC memory. So we always just fail the translation + * here and sort things out in the exception handler + * (including possibly emulating an SG instruction). + */ + if (sattrs.ns != !secure) { + if (sattrs.nsc) { + fi->type = ARMFault_QEMU_NSCExec; + } else { + fi->type = ARMFault_QEMU_SFault; + } + *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; + *phys_ptr = address; + *prot = 0; + return true; + } + } else { + /* + * For data accesses we always use the MMU bank indicated + * by the current CPU state, but the security attributes + * might downgrade a secure access to nonsecure. + */ + if (sattrs.ns) { + txattrs->secure = false; + } else if (!secure) { + /* + * NS access to S memory must fault. 
+ * Architecturally we should first check whether the + * MPU information for this address indicates that we + * are doing an unaligned access to Device memory, which + * should generate a UsageFault instead. QEMU does not + * currently check for that kind of unaligned access though. + * If we added it we would need to do so as a special case + * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt(). + */ + fi->type = ARMFault_QEMU_SFault; + *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; + *phys_ptr = address; + *prot = 0; + return true; + } + } + } + + ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr, + txattrs, prot, &mpu_is_subpage, fi, NULL); + *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE; + return ret; +} + +static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, int *prot, + ARMMMUFaultInfo *fi) +{ + int n; + uint32_t mask; + uint32_t base; + bool is_user = regime_is_user(env, mmu_idx); + + if (regime_translation_disabled(env, mmu_idx)) { + /* MPU disabled. */ + *phys_ptr = address; + *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; + return false; + } + + *phys_ptr = address; + for (n = 7; n >= 0; n--) { + base = env->cp15.c6_region[n]; + if ((base & 1) == 0) { + continue; + } + mask = 1 << ((base >> 1) & 0x1f); + /* + * Keep this shift separate from the above to avoid an + * (undefined) << 32 + */ + mask = (mask << 1) - 1; + if (((base ^ address) & ~mask) == 0) { + break; + } + } + if (n < 0) { + fi->type = ARMFault_Background; + return true; + } + + if (access_type == MMU_INST_FETCH) { + mask = env->cp15.pmsav5_insn_ap; + } else { + mask = env->cp15.pmsav5_data_ap; + } + mask = (mask >> (n * 4)) & 0xf; + switch (mask) { + case 0: + fi->type = ARMFault_Permission; + fi->level = 1; + return true; + case 1: + if (is_user) { + fi->type = ARMFault_Permission; + fi->level = 1; + return true; + } + *prot = PAGE_READ | PAGE_WRITE; + break; + case 2: + *prot = PAGE_READ; + if (!is_user) { + *prot |= PAGE_WRITE; + } + break; + case 3: + *prot = PAGE_READ | PAGE_WRITE; + break; + case 5: + if (is_user) { + fi->type = ARMFault_Permission; + fi->level = 1; + return true; + } + *prot = PAGE_READ; + break; + case 6: + *prot = PAGE_READ; + break; + default: + /* Bad permission. */ + fi->type = ARMFault_Permission; + fi->level = 1; + return true; + } + *prot |= PAGE_EXEC; + return false; +} + +/* + * Combine either inner or outer cacheability attributes for normal + * memory, according to table D4-42 and pseudocode procedure + * CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM). + * + * NB: only stage 1 includes allocation hints (RW bits), leading to + * some asymmetry. 
+ */ +static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2) +{ + if (s1 == 4 || s2 == 4) { + /* non-cacheable has precedence */ + return 4; + } else if (extract32(s1, 2, 2) == 0 || extract32(s1, 2, 2) == 2) { + /* stage 1 write-through takes precedence */ + return s1; + } else if (extract32(s2, 2, 2) == 2) { + /* + * stage 2 write-through takes precedence, but the allocation hint + * is still taken from stage 1 + */ + return (2 << 2) | extract32(s1, 0, 2); + } else { /* write-back */ + return s1; + } +} + +/* + * Combine S1 and S2 cacheability/shareability attributes, per D4.5.4 + * and CombineS1S2Desc() + * + * @s1: Attributes from stage 1 walk + * @s2: Attributes from stage 2 walk + */ +static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2) +{ + uint8_t s1lo, s2lo, s1hi, s2hi; + ARMCacheAttrs ret; + bool tagged = false; + + if (s1.attrs == 0xf0) { + tagged = true; + s1.attrs = 0xff; + } + + s1lo = extract32(s1.attrs, 0, 4); + s2lo = extract32(s2.attrs, 0, 4); + s1hi = extract32(s1.attrs, 4, 4); + s2hi = extract32(s2.attrs, 4, 4); + + /* Combine shareability attributes (table D4-43) */ + if (s1.shareability == 2 || s2.shareability == 2) { + /* if either are outer-shareable, the result is outer-shareable */ + ret.shareability = 2; + } else if (s1.shareability == 3 || s2.shareability == 3) { + /* if either are inner-shareable, the result is inner-shareable */ + ret.shareability = 3; + } else { + /* both non-shareable */ + ret.shareability = 0; + } + + /* Combine memory type and cacheability attributes */ + if (s1hi == 0 || s2hi == 0) { + /* Device has precedence over normal */ + if (s1lo == 0 || s2lo == 0) { + /* nGnRnE has precedence over anything */ + ret.attrs = 0; + } else if (s1lo == 4 || s2lo == 4) { + /* non-Reordering has precedence over Reordering */ + ret.attrs = 4; /* nGnRE */ + } else if (s1lo == 8 || s2lo == 8) { + /* non-Gathering has precedence over Gathering */ + ret.attrs = 8; /* nGRE */ + } else { + ret.attrs = 0xc; /* GRE */ + } + + /* + * Any location for which the resultant memory type is any + * type of Device memory is always treated as Outer Shareable. + */ + ret.shareability = 2; + } else { /* Normal memory */ + /* Outer/inner cacheability combine independently */ + ret.attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4 + | combine_cacheattr_nibble(s1lo, s2lo); + + if (ret.attrs == 0x44) { + /* + * Any location for which the resultant memory type is Normal + * Inner Non-cacheable, Outer Non-cacheable is always treated + * as Outer Shareable. + */ + ret.shareability = 2; + } + } + + /* TODO: CombineS1S2Desc does not consider transient, only WB, RWA. */ + if (tagged && ret.attrs == 0xff) { + ret.attrs = 0xf0; + } + + return ret; +} + + +/* + * get_phys_addr - get the physical address for this virtual address + * + * Find the physical address corresponding to the given virtual address, + * by doing a translation table walk on MMU based systems or using the + * MPU state on MPU based systems. + * + * Returns false if the translation was successful. Otherwise, phys_ptr, attrs, + * prot and page_size may not be filled in, and the populated fsr value provides + * information on why the translation aborted, in the format of a + * DFSR/IFSR fault register, with the following caveats: + * * we honour the short vs long DFSR format differences. + * * the WnR bit is never set (the caller must do this). + * * for PSMAv5 based systems we don't bother to return a full FSR format + * value. 
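
To see the per-nibble combination rule above on concrete MAIR-style
values: 0xf (write-back, RW-allocate) combined with 0x4 (non-cacheable)
yields 0x4, while 0xf combined with 0xa (write-through) yields 0xb, i.e.
write-through with the stage 1 allocation hints retained. The sketch below
is a standalone restatement of that rule in plain C for illustration; it
is not the QEMU function itself.

#include <stdio.h>

/* Non-cacheable wins; otherwise stage 1 write-through wins; otherwise
 * stage 2 write-through wins but keeps the stage 1 allocation hints;
 * otherwise both are write-back and the stage 1 value is used. */
static unsigned combine_nibble(unsigned s1, unsigned s2)
{
    if (s1 == 4 || s2 == 4) {
        return 4;                                /* non-cacheable         */
    } else if (((s1 >> 2) & 3) == 0 || ((s1 >> 2) & 3) == 2) {
        return s1;                               /* stage 1 write-through */
    } else if (((s2 >> 2) & 3) == 2) {
        return (2 << 2) | (s1 & 3);              /* stage 2 WT, S1 hints  */
    } else {
        return s1;                               /* write-back            */
    }
}

int main(void)
{
    printf("0xf + 0x4 -> 0x%x\n", combine_nibble(0xf, 0x4));  /* 0x4 */
    printf("0xf + 0xa -> 0x%x\n", combine_nibble(0xf, 0xa));  /* 0xb */
    printf("0xf + 0xf -> 0x%x\n", combine_nibble(0xf, 0xf));  /* 0xf */
    return 0;
}
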
+ * + * @env: CPUARMState + * @address: virtual address to get physical address for + * @access_type: 0 for read, 1 for write, 2 for execute + * @mmu_idx: MMU index indicating required translation regime + * @phys_ptr: set to the physical address corresponding to the virtual address + * @attrs: set to the memory transaction attributes to use + * @prot: set to the permissions for the page containing phys_ptr + * @page_size: set to the size of the page containing phys_ptr + * @fi: set to fault info if the translation fails + * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes + */ +bool get_phys_addr(CPUARMState *env, target_ulong address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot, + target_ulong *page_size, + ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) +{ + ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx); + + if (mmu_idx != s1_mmu_idx) { + /* + * Call ourselves recursively to do the stage 1 and then stage 2 + * translations if mmu_idx is a two-stage regime. + */ + if (arm_feature(env, ARM_FEATURE_EL2)) { + hwaddr ipa; + int s2_prot; + int ret; + ARMCacheAttrs cacheattrs2 = {}; + ARMMMUIdx s2_mmu_idx; + bool is_el0; + + ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa, + attrs, prot, page_size, fi, cacheattrs); + + /* If S1 fails or S2 is disabled, return early. */ + if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) { + *phys_ptr = ipa; + return ret; + } + + s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2; + is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0; + + /* S1 is done. Now do S2 translation. */ + ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0, + phys_ptr, attrs, &s2_prot, + page_size, fi, &cacheattrs2); + fi->s2addr = ipa; + /* Combine the S1 and S2 perms. */ + *prot &= s2_prot; + + /* If S2 fails, return early. */ + if (ret) { + return ret; + } + + /* Combine the S1 and S2 cache attributes. */ + if (arm_hcr_el2_eff(env) & HCR_DC) { + /* + * HCR.DC forces the first stage attributes to + * Normal Non-Shareable, + * Inner Write-Back Read-Allocate Write-Allocate, + * Outer Write-Back Read-Allocate Write-Allocate. + * Do not overwrite Tagged within attrs. + */ + if (cacheattrs->attrs != 0xf0) { + cacheattrs->attrs = 0xff; + } + cacheattrs->shareability = 0; + } + *cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2); + + /* Check if IPA translates to secure or non-secure PA space. */ + if (arm_is_secure_below_el3(env)) { + if (attrs->secure) { + attrs->secure = + !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)); + } else { + attrs->secure = + !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW)) + || (env->cp15.vstcr_el2.raw_tcr & VSTCR_SA)); + } + } + return 0; + } else { + /* + * For non-EL2 CPUs a stage1+stage2 translation is just stage 1. + */ + mmu_idx = stage_1_mmu_idx(mmu_idx); + } + } + + /* + * The page table entries may downgrade secure to non-secure, but + * cannot upgrade an non-secure translation regime's attributes + * to secure. + */ + attrs->secure = regime_is_secure(env, mmu_idx); + attrs->user = regime_is_user(env, mmu_idx); + + /* + * Fast Context Switch Extension. This doesn't exist at all in v8. + * In v7 and earlier it affects all stage 1 translations. 
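+ * For example, with the FCSE PID field holding 3 (FCSEIDR = 0x06000000),
+ * a virtual address of 0x00001000 below the 32MB limit is presented to
+ * the rest of the translation as 0x06001000.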
+ */ + if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2 + && !arm_feature(env, ARM_FEATURE_V8)) { + if (regime_el(env, mmu_idx) == 3) { + address += env->cp15.fcseidr_s; + } else { + address += env->cp15.fcseidr_ns; + } + } + + if (arm_feature(env, ARM_FEATURE_PMSA)) { + bool ret; + *page_size = TARGET_PAGE_SIZE; + + if (arm_feature(env, ARM_FEATURE_V8)) { + /* PMSAv8 */ + ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx, + phys_ptr, attrs, prot, page_size, fi); + } else if (arm_feature(env, ARM_FEATURE_V7)) { + /* PMSAv7 */ + ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx, + phys_ptr, prot, page_size, fi); + } else { + /* Pre-v7 MPU */ + ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx, + phys_ptr, prot, fi); + } + qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32 + " mmu_idx %u -> %s (prot %c%c%c)\n", + access_type == MMU_DATA_LOAD ? "reading" : + (access_type == MMU_DATA_STORE ? "writing" : "execute"), + (uint32_t)address, mmu_idx, + ret ? "Miss" : "Hit", + *prot & PAGE_READ ? 'r' : '-', + *prot & PAGE_WRITE ? 'w' : '-', + *prot & PAGE_EXEC ? 'x' : '-'); + + return ret; + } + + /* Definitely a real MMU, not an MPU */ + + if (regime_translation_disabled(env, mmu_idx)) { + uint64_t hcr; + uint8_t memattr; + + /* + * MMU disabled. S1 addresses within aa64 translation regimes are + * still checked for bounds -- see AArch64.TranslateAddressS1Off. + */ + if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { + int r_el = regime_el(env, mmu_idx); + if (arm_el_is_aa64(env, r_el)) { + int pamax = arm_pamax(env_archcpu(env)); + uint64_t tcr = env->cp15.tcr_el[r_el].raw_tcr; + int addrtop, tbi; + + tbi = aa64_va_parameter_tbi(tcr, mmu_idx); + if (access_type == MMU_INST_FETCH) { + tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); + } + tbi = (tbi >> extract64(address, 55, 1)) & 1; + addrtop = (tbi ? 55 : 63); + + if (extract64(address, pamax, addrtop - pamax + 1) != 0) { + fi->type = ARMFault_AddressSize; + fi->level = 0; + fi->stage2 = false; + return 1; + } + + /* + * When TBI is disabled, we've just validated that all of the + * bits above PAMax are zero, so logically we only need to + * clear the top byte for TBI. But it's clearer to follow + * the pseudocode set of addrdesc.paddress. + */ + address = extract64(address, 0, 52); + } + } + *phys_ptr = address; + *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; + *page_size = TARGET_PAGE_SIZE; + + /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. 
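+ * That is, choose the attributes the pseudocode assigns when stage 1
+ * translation is off: Normal WB (optionally Tagged) when HCR.DC forces
+ * it, Normal WT or Non-cacheable for instruction fetches depending on
+ * SCTLR.I, and Device-nGnRnE for data accesses.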
*/ + hcr = arm_hcr_el2_eff(env); + cacheattrs->shareability = 0; + if (hcr & HCR_DC) { + if (hcr & HCR_DCT) { + memattr = 0xf0; /* Tagged, Normal, WB, RWA */ + } else { + memattr = 0xff; /* Normal, WB, RWA */ + } + } else if (access_type == MMU_INST_FETCH) { + if (regime_sctlr(env, mmu_idx) & SCTLR_I) { + memattr = 0xee; /* Normal, WT, RA, NT */ + } else { + memattr = 0x44; /* Normal, NC, No */ + } + cacheattrs->shareability = 2; /* outer sharable */ + } else { + memattr = 0x00; /* Device, nGnRnE */ + } + cacheattrs->attrs = memattr; + return 0; + } + + if (regime_using_lpae_format(env, mmu_idx)) { + return get_phys_addr_lpae(env, address, access_type, mmu_idx, false, + phys_ptr, attrs, prot, page_size, + fi, cacheattrs); + } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) { + return get_phys_addr_v6(env, address, access_type, mmu_idx, + phys_ptr, attrs, prot, page_size, fi); + } else { + return get_phys_addr_v5(env, address, access_type, mmu_idx, + phys_ptr, prot, page_size, fi); + } +} + +hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, + MemTxAttrs *attrs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + hwaddr phys_addr; + target_ulong page_size; + int prot; + bool ret; + ARMMMUFaultInfo fi = {}; + ARMMMUIdx mmu_idx = arm_mmu_idx(env); + ARMCacheAttrs cacheattrs = {}; + + *attrs = (MemTxAttrs) {}; + + ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &phys_addr, + attrs, &prot, &page_size, &fi, &cacheattrs); + + if (ret) { + return -1; + } + return phys_addr; +} diff --git a/target/arm/cpu-mmu.c b/target/arm/cpu-mmu.c new file mode 100644 index 0000000000..f463f8458e --- /dev/null +++ b/target/arm/cpu-mmu.c @@ -0,0 +1,124 @@ +/* + * QEMU ARM CPU address translation related code + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "cpu-mmu.h" + +int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx) +{ + if (regime_has_2_ranges(mmu_idx)) { + return extract64(tcr, 37, 2); + } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { + return 0; /* VTCR_EL2 */ + } else { + /* Replicate the single TBI bit so we always have 2 bits. */ + return extract32(tcr, 20, 1) * 3; + } +} + +int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx) +{ + if (regime_has_2_ranges(mmu_idx)) { + return extract64(tcr, 51, 2); + } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { + return 0; /* VTCR_EL2 */ + } else { + /* Replicate the single TBID bit so we always have 2 bits. */ + return extract32(tcr, 29, 1) * 3; + } +} + +int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx) +{ + if (regime_has_2_ranges(mmu_idx)) { + return extract64(tcr, 57, 2); + } else { + /* Replicate the single TCMA bit so we always have 2 bits. 
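+ * (Multiplying the single bit by 3 yields 0b00 or 0b11, so callers can
+ * index the result with the top-byte select bit exactly as they do for
+ * the two-range TBI/TBID/TCMA fields.)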
*/ + return extract32(tcr, 30, 1) * 3; + } +} + +ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, + ARMMMUIdx mmu_idx, bool data) +{ + uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr; + bool epd, hpd, using16k, using64k; + int select, tsz, tbi, max_tsz; + + if (!regime_has_2_ranges(mmu_idx)) { + select = 0; + tsz = extract32(tcr, 0, 6); + using64k = extract32(tcr, 14, 1); + using16k = extract32(tcr, 15, 1); + if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { + /* VTCR_EL2 */ + hpd = false; + } else { + hpd = extract32(tcr, 24, 1); + } + epd = false; + } else { + /* + * Bit 55 is always between the two regions, and is canonical for + * determining if address tagging is enabled. + */ + select = extract64(va, 55, 1); + if (!select) { + tsz = extract32(tcr, 0, 6); + epd = extract32(tcr, 7, 1); + using64k = extract32(tcr, 14, 1); + using16k = extract32(tcr, 15, 1); + hpd = extract64(tcr, 41, 1); + } else { + int tg = extract32(tcr, 30, 2); + using16k = tg == 1; + using64k = tg == 3; + tsz = extract32(tcr, 16, 6); + epd = extract32(tcr, 23, 1); + hpd = extract64(tcr, 42, 1); + } + } + + if (cpu_isar_feature(aa64_st, env_archcpu(env))) { + max_tsz = 48 - using64k; + } else { + max_tsz = 39; + } + + tsz = MIN(tsz, max_tsz); + tsz = MAX(tsz, 16); /* TODO: ARMv8.2-LVA */ + + /* Present TBI as a composite with TBID. */ + tbi = aa64_va_parameter_tbi(tcr, mmu_idx); + if (!data) { + tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); + } + tbi = (tbi >> select) & 1; + + return (ARMVAParameters) { + .tsz = tsz, + .select = select, + .tbi = tbi, + .epd = epd, + .hpd = hpd, + .using16k = using16k, + .using64k = using64k, + }; +} diff --git a/target/arm/cpu.c b/target/arm/cpu.c index bd8413c161..17dc0d4255 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -41,6 +41,7 @@ #include "kvm_arm.h" #include "disas/capstone.h" #include "fpu/softfloat.h" +#include "cpu-mmu.h" static void arm_cpu_set_pc(CPUState *cs, vaddr value) { diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 2a5022032c..7f818e5860 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -36,23 +36,12 @@ #include "exec/cpu_ldst.h" #include "semihosting/common-semi.h" #endif +#include "cpu-mmu.h" #define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */ #define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */ -#ifndef CONFIG_USER_ONLY - -static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - bool s1_is_el0, - hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot, - target_ulong *page_size_ptr, - ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) - __attribute__((nonnull)); -#endif - static void switch_mode(CPUARMState *env, int mode); -static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx); static int vfp_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg) { @@ -10452,125 +10441,6 @@ uint64_t arm_sctlr(CPUARMState *env, int el) return env->cp15.sctlr_el[el]; } -/* Return the SCTLR value which controls this address translation regime */ -static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx) -{ - return env->cp15.sctlr_el[regime_el(env, mmu_idx)]; -} - -#ifndef CONFIG_USER_ONLY - -/* Return true if the specified stage of address translation is disabled */ -static inline bool regime_translation_disabled(CPUARMState *env, - ARMMMUIdx mmu_idx) -{ - uint64_t hcr_el2; - - if (arm_feature(env, ARM_FEATURE_M)) { - switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] & - 
(R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) { - case R_V7M_MPU_CTRL_ENABLE_MASK: - /* Enabled, but not for HardFault and NMI */ - return mmu_idx & ARM_MMU_IDX_M_NEGPRI; - case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK: - /* Enabled for all cases */ - return false; - case 0: - default: - /* - * HFNMIENA set and ENABLE clear is UNPREDICTABLE, but - * we warned about that in armv7m_nvic.c when the guest set it. - */ - return true; - } - } - - hcr_el2 = arm_hcr_el2_eff(env); - - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { - /* HCR.DC means HCR.VM behaves as 1 */ - return (hcr_el2 & (HCR_DC | HCR_VM)) == 0; - } - - if (hcr_el2 & HCR_TGE) { - /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */ - if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) { - return true; - } - } - - if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) { - /* HCR.DC means SCTLR_EL1.M behaves as 0 */ - return true; - } - - return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0; -} - -static inline bool regime_translation_big_endian(CPUARMState *env, - ARMMMUIdx mmu_idx) -{ - return (regime_sctlr(env, mmu_idx) & SCTLR_EE) != 0; -} - -/* Return the TTBR associated with this translation regime */ -static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, - int ttbrn) -{ - if (mmu_idx == ARMMMUIdx_Stage2) { - return env->cp15.vttbr_el2; - } - if (mmu_idx == ARMMMUIdx_Stage2_S) { - return env->cp15.vsttbr_el2; - } - if (ttbrn == 0) { - return env->cp15.ttbr0_el[regime_el(env, mmu_idx)]; - } else { - return env->cp15.ttbr1_el[regime_el(env, mmu_idx)]; - } -} - -#endif /* !CONFIG_USER_ONLY */ - -/* - * Convert a possible stage1+2 MMU index into the appropriate - * stage 1 MMU index - */ -static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx) -{ - switch (mmu_idx) { - case ARMMMUIdx_SE10_0: - return ARMMMUIdx_Stage1_SE0; - case ARMMMUIdx_SE10_1: - return ARMMMUIdx_Stage1_SE1; - case ARMMMUIdx_SE10_1_PAN: - return ARMMMUIdx_Stage1_SE1_PAN; - case ARMMMUIdx_E10_0: - return ARMMMUIdx_Stage1_E0; - case ARMMMUIdx_E10_1: - return ARMMMUIdx_Stage1_E1; - case ARMMMUIdx_E10_1_PAN: - return ARMMMUIdx_Stage1_E1_PAN; - default: - return mmu_idx; - } -} - -/* Return true if the translation regime is using LPAE format page tables */ -static inline bool regime_using_lpae_format(CPUARMState *env, - ARMMMUIdx mmu_idx) -{ - int el = regime_el(env, mmu_idx); - if (el == 2 || arm_el_is_aa64(env, el)) { - return true; - } - if (arm_feature(env, ARM_FEATURE_LPAE) - && (regime_tcr(env, mmu_idx)->raw_tcr & TTBCR_EAE)) { - return true; - } - return false; -} - /* Returns true if the stage 1 translation regime is using LPAE format page * tables. Used when raising alignment exceptions, whose FSR changes depending * on whether the long or short descriptor format is in use. 
*/ @@ -10581,2316 +10451,6 @@ bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx) return regime_using_lpae_format(env, mmu_idx); } -#ifndef CONFIG_USER_ONLY -static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx) -{ - switch (mmu_idx) { - case ARMMMUIdx_SE10_0: - case ARMMMUIdx_E20_0: - case ARMMMUIdx_SE20_0: - case ARMMMUIdx_Stage1_E0: - case ARMMMUIdx_Stage1_SE0: - case ARMMMUIdx_MUser: - case ARMMMUIdx_MSUser: - case ARMMMUIdx_MUserNegPri: - case ARMMMUIdx_MSUserNegPri: - return true; - default: - return false; - case ARMMMUIdx_E10_0: - case ARMMMUIdx_E10_1: - case ARMMMUIdx_E10_1_PAN: - g_assert_not_reached(); - } -} - -/* - * Translate section/page access permissions to page - * R/W protection flags - * - * @env: CPUARMState - * @mmu_idx: MMU index indicating required translation regime - * @ap: The 3-bit access permissions (AP[2:0]) - * @domain_prot: The 2-bit domain access permissions - */ -static inline int ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, - int ap, int domain_prot) -{ - bool is_user = regime_is_user(env, mmu_idx); - - if (domain_prot == 3) { - return PAGE_READ | PAGE_WRITE; - } - - switch (ap) { - case 0: - if (arm_feature(env, ARM_FEATURE_V7)) { - return 0; - } - switch (regime_sctlr(env, mmu_idx) & (SCTLR_S | SCTLR_R)) { - case SCTLR_S: - return is_user ? 0 : PAGE_READ; - case SCTLR_R: - return PAGE_READ; - default: - return 0; - } - case 1: - return is_user ? 0 : PAGE_READ | PAGE_WRITE; - case 2: - if (is_user) { - return PAGE_READ; - } else { - return PAGE_READ | PAGE_WRITE; - } - case 3: - return PAGE_READ | PAGE_WRITE; - case 4: /* Reserved. */ - return 0; - case 5: - return is_user ? 0 : PAGE_READ; - case 6: - return PAGE_READ; - case 7: - if (!arm_feature(env, ARM_FEATURE_V6K)) { - return 0; - } - return PAGE_READ; - default: - g_assert_not_reached(); - } -} - -/* - * Translate section/page access permissions to page - * R/W protection flags. - * - * @ap: The 2-bit simple AP (AP[2:1]) - * @is_user: TRUE if accessing from PL0 - */ -static inline int simple_ap_to_rw_prot_is_user(int ap, bool is_user) -{ - switch (ap) { - case 0: - return is_user ? 0 : PAGE_READ | PAGE_WRITE; - case 1: - return PAGE_READ | PAGE_WRITE; - case 2: - return is_user ? 
0 : PAGE_READ; - case 3: - return PAGE_READ; - default: - g_assert_not_reached(); - } -} - -static inline int -simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap) -{ - return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx)); -} - -/* - * Translate S2 section/page access permissions to protection flags - * - * @env: CPUARMState - * @s2ap: The 2-bit stage2 access permissions (S2AP) - * @xn: XN (execute-never) bits - * @s1_is_el0: true if this is S2 of an S1+2 walk for EL0 - */ -static int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0) -{ - int prot = 0; - - if (s2ap & 1) { - prot |= PAGE_READ; - } - if (s2ap & 2) { - prot |= PAGE_WRITE; - } - - if (cpu_isar_feature(any_tts2uxn, env_archcpu(env))) { - switch (xn) { - case 0: - prot |= PAGE_EXEC; - break; - case 1: - if (s1_is_el0) { - prot |= PAGE_EXEC; - } - break; - case 2: - break; - case 3: - if (!s1_is_el0) { - prot |= PAGE_EXEC; - } - break; - default: - g_assert_not_reached(); - } - } else { - if (!extract32(xn, 1, 1)) { - if (arm_el_is_aa64(env, 2) || prot & PAGE_READ) { - prot |= PAGE_EXEC; - } - } - } - return prot; -} - -/* - * Translate section/page access permissions to protection flags - * - * @env: CPUARMState - * @mmu_idx: MMU index indicating required translation regime - * @is_aa64: TRUE if AArch64 - * @ap: The 2-bit simple AP (AP[2:1]) - * @ns: NS (non-secure) bit - * @xn: XN (execute-never) bit - * @pxn: PXN (privileged execute-never) bit - */ -static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64, - int ap, int ns, int xn, int pxn) -{ - bool is_user = regime_is_user(env, mmu_idx); - int prot_rw, user_rw; - bool have_wxn; - int wxn = 0; - - assert(mmu_idx != ARMMMUIdx_Stage2); - assert(mmu_idx != ARMMMUIdx_Stage2_S); - - user_rw = simple_ap_to_rw_prot_is_user(ap, true); - if (is_user) { - prot_rw = user_rw; - } else { - if (user_rw && regime_is_pan(env, mmu_idx)) { - /* PAN forbids data accesses but doesn't affect insn fetch */ - prot_rw = 0; - } else { - prot_rw = simple_ap_to_rw_prot_is_user(ap, false); - } - } - - if (ns && arm_is_secure(env) && (env->cp15.scr_el3 & SCR_SIF)) { - return prot_rw; - } - - /* - * TODO have_wxn should be replaced with - * ARM_FEATURE_V8 || (ARM_FEATURE_V7 && ARM_FEATURE_EL2) - * when ARM_FEATURE_EL2 starts getting set. For now we assume all LPAE - * compatible processors have EL2, which is required for [U]WXN. 
- */ - have_wxn = arm_feature(env, ARM_FEATURE_LPAE); - - if (have_wxn) { - wxn = regime_sctlr(env, mmu_idx) & SCTLR_WXN; - } - - if (is_aa64) { - if (regime_has_2_ranges(mmu_idx) && !is_user) { - xn = pxn || (user_rw & PAGE_WRITE); - } - } else if (arm_feature(env, ARM_FEATURE_V7)) { - switch (regime_el(env, mmu_idx)) { - case 1: - case 3: - if (is_user) { - xn = xn || !(user_rw & PAGE_READ); - } else { - int uwxn = 0; - if (have_wxn) { - uwxn = regime_sctlr(env, mmu_idx) & SCTLR_UWXN; - } - xn = xn || !(prot_rw & PAGE_READ) || pxn || - (uwxn && (user_rw & PAGE_WRITE)); - } - break; - case 2: - break; - } - } else { - xn = wxn = 0; - } - - if (xn || (wxn && (prot_rw & PAGE_WRITE))) { - return prot_rw; - } - return prot_rw | PAGE_EXEC; -} - -static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx, - uint32_t *table, uint32_t address) -{ - /* Note that we can only get here for an AArch32 PL0/PL1 lookup */ - TCR *tcr = regime_tcr(env, mmu_idx); - - if (address & tcr->mask) { - if (tcr->raw_tcr & TTBCR_PD1) { - /* Translation table walk disabled for TTBR1 */ - return false; - } - *table = regime_ttbr(env, mmu_idx, 1) & 0xffffc000; - } else { - if (tcr->raw_tcr & TTBCR_PD0) { - /* Translation table walk disabled for TTBR0 */ - return false; - } - *table = regime_ttbr(env, mmu_idx, 0) & tcr->base_mask; - } - *table |= (address >> 18) & 0x3ffc; - return true; -} - -/* Translate a S1 pagetable walk through S2 if needed. */ -static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, - hwaddr addr, bool *is_secure, - ARMMMUFaultInfo *fi) -{ - if (arm_mmu_idx_is_stage1_of_2(mmu_idx) && - !regime_translation_disabled(env, ARMMMUIdx_Stage2)) { - target_ulong s2size; - hwaddr s2pa; - int s2prot; - int ret; - ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S - : ARMMMUIdx_Stage2; - ARMCacheAttrs cacheattrs = {}; - MemTxAttrs txattrs = {}; - - ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false, - &s2pa, &txattrs, &s2prot, &s2size, fi, - &cacheattrs); - if (ret) { - assert(fi->type != ARMFault_None); - fi->s2addr = addr; - fi->stage2 = true; - fi->s1ptw = true; - fi->s1ns = !*is_secure; - return ~0; - } - if ((arm_hcr_el2_eff(env) & HCR_PTW) && - (cacheattrs.attrs & 0xf0) == 0) { - /* - * PTW set and S1 walk touched S2 Device memory: - * generate Permission fault. - */ - fi->type = ARMFault_Permission; - fi->s2addr = addr; - fi->stage2 = true; - fi->s1ptw = true; - fi->s1ns = !*is_secure; - return ~0; - } - - if (arm_is_secure_below_el3(env)) { - /* Check if page table walk is to secure or non-secure PA space. */ - if (*is_secure) { - *is_secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW); - } else { - *is_secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW); - } - } else { - assert(!*is_secure); - } - - addr = s2pa; - } - return addr; -} - -/* All loads done in the course of a page table walk go through here. 
*/ -static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure, - ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - MemTxAttrs attrs = {}; - MemTxResult result = MEMTX_OK; - AddressSpace *as; - uint32_t data; - - addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi); - attrs.secure = is_secure; - as = arm_addressspace(cs, attrs); - if (fi->s1ptw) { - return 0; - } - if (regime_translation_big_endian(env, mmu_idx)) { - data = address_space_ldl_be(as, addr, attrs, &result); - } else { - data = address_space_ldl_le(as, addr, attrs, &result); - } - if (result == MEMTX_OK) { - return data; - } - fi->type = ARMFault_SyncExternalOnWalk; - fi->ea = arm_extabort_type(result); - return 0; -} - -static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure, - ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - MemTxAttrs attrs = {}; - MemTxResult result = MEMTX_OK; - AddressSpace *as; - uint64_t data; - - addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi); - attrs.secure = is_secure; - as = arm_addressspace(cs, attrs); - if (fi->s1ptw) { - return 0; - } - if (regime_translation_big_endian(env, mmu_idx)) { - data = address_space_ldq_be(as, addr, attrs, &result); - } else { - data = address_space_ldq_le(as, addr, attrs, &result); - } - if (result == MEMTX_OK) { - return data; - } - fi->type = ARMFault_SyncExternalOnWalk; - fi->ea = arm_extabort_type(result); - return 0; -} - -static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, int *prot, - target_ulong *page_size, - ARMMMUFaultInfo *fi) -{ - CPUState *cs = env_cpu(env); - int level = 1; - uint32_t table; - uint32_t desc; - int type; - int ap; - int domain = 0; - int domain_prot; - hwaddr phys_addr; - uint32_t dacr; - - /* Pagetable walk. */ - /* Lookup l1 descriptor. */ - if (!get_level1_table_address(env, mmu_idx, &table, address)) { - /* Section translation fault if page walk is disabled by PD0 or PD1 */ - fi->type = ARMFault_Translation; - goto do_fault; - } - desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), - mmu_idx, fi); - if (fi->type != ARMFault_None) { - goto do_fault; - } - type = (desc & 3); - domain = (desc >> 5) & 0x0f; - if (regime_el(env, mmu_idx) == 1) { - dacr = env->cp15.dacr_ns; - } else { - dacr = env->cp15.dacr_s; - } - domain_prot = (dacr >> (domain * 2)) & 3; - if (type == 0) { - /* Section translation fault. */ - fi->type = ARMFault_Translation; - goto do_fault; - } - if (type != 2) { - level = 2; - } - if (domain_prot == 0 || domain_prot == 2) { - fi->type = ARMFault_Domain; - goto do_fault; - } - if (type == 2) { - /* 1Mb section. */ - phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); - ap = (desc >> 10) & 3; - *page_size = 1024 * 1024; - } else { - /* Lookup l2 entry. */ - if (type == 1) { - /* Coarse pagetable. */ - table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc); - } else { - /* Fine pagetable. */ - table = (desc & 0xfffff000) | ((address >> 8) & 0xffc); - } - desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), - mmu_idx, fi); - if (fi->type != ARMFault_None) { - goto do_fault; - } - switch (desc & 3) { - case 0: /* Page translation fault. */ - fi->type = ARMFault_Translation; - goto do_fault; - case 1: /* 64k page. 
*/ - phys_addr = (desc & 0xffff0000) | (address & 0xffff); - ap = (desc >> (4 + ((address >> 13) & 6))) & 3; - *page_size = 0x10000; - break; - case 2: /* 4k page. */ - phys_addr = (desc & 0xfffff000) | (address & 0xfff); - ap = (desc >> (4 + ((address >> 9) & 6))) & 3; - *page_size = 0x1000; - break; - case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */ - if (type == 1) { - /* ARMv6/XScale extended small page format */ - if (arm_feature(env, ARM_FEATURE_XSCALE) - || arm_feature(env, ARM_FEATURE_V6)) { - phys_addr = (desc & 0xfffff000) | (address & 0xfff); - *page_size = 0x1000; - } else { - /* - * UNPREDICTABLE in ARMv5; we choose to take a - * page translation fault. - */ - fi->type = ARMFault_Translation; - goto do_fault; - } - } else { - phys_addr = (desc & 0xfffffc00) | (address & 0x3ff); - *page_size = 0x400; - } - ap = (desc >> 4) & 3; - break; - default: - /* Never happens, but compiler isn't smart enough to tell. */ - abort(); - } - } - *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); - *prot |= *prot ? PAGE_EXEC : 0; - if (!(*prot & (1 << access_type))) { - /* Access permission fault. */ - fi->type = ARMFault_Permission; - goto do_fault; - } - *phys_ptr = phys_addr; - return false; -do_fault: - fi->domain = domain; - fi->level = level; - return true; -} - -static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot, - target_ulong *page_size, ARMMMUFaultInfo *fi) -{ - CPUState *cs = env_cpu(env); - ARMCPU *cpu = env_archcpu(env); - int level = 1; - uint32_t table; - uint32_t desc; - uint32_t xn; - uint32_t pxn = 0; - int type; - int ap; - int domain = 0; - int domain_prot; - hwaddr phys_addr; - uint32_t dacr; - bool ns; - - /* Pagetable walk. */ - /* Lookup l1 descriptor. */ - if (!get_level1_table_address(env, mmu_idx, &table, address)) { - /* Section translation fault if page walk is disabled by PD0 or PD1 */ - fi->type = ARMFault_Translation; - goto do_fault; - } - desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), - mmu_idx, fi); - if (fi->type != ARMFault_None) { - goto do_fault; - } - type = (desc & 3); - if (type == 0 || (type == 3 && !cpu_isar_feature(aa32_pxn, cpu))) { - /* - * Section translation fault, or attempt to use the encoding - * which is Reserved on implementations without PXN. - */ - fi->type = ARMFault_Translation; - goto do_fault; - } - if ((type == 1) || !(desc & (1 << 18))) { - /* Page or Section. */ - domain = (desc >> 5) & 0x0f; - } - if (regime_el(env, mmu_idx) == 1) { - dacr = env->cp15.dacr_ns; - } else { - dacr = env->cp15.dacr_s; - } - if (type == 1) { - level = 2; - } - domain_prot = (dacr >> (domain * 2)) & 3; - if (domain_prot == 0 || domain_prot == 2) { - /* Section or Page domain fault */ - fi->type = ARMFault_Domain; - goto do_fault; - } - if (type != 1) { - if (desc & (1 << 18)) { - /* Supersection. */ - phys_addr = (desc & 0xff000000) | (address & 0x00ffffff); - phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32; - phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36; - *page_size = 0x1000000; - } else { - /* Section. */ - phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); - *page_size = 0x100000; - } - ap = ((desc >> 10) & 3) | ((desc >> 13) & 4); - xn = desc & (1 << 4); - pxn = desc & 1; - ns = extract32(desc, 19, 1); - } else { - if (cpu_isar_feature(aa32_pxn, cpu)) { - pxn = (desc >> 2) & 1; - } - ns = extract32(desc, 3, 1); - /* Lookup l2 entry. 
*/ - table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc); - desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx), - mmu_idx, fi); - if (fi->type != ARMFault_None) { - goto do_fault; - } - ap = ((desc >> 4) & 3) | ((desc >> 7) & 4); - switch (desc & 3) { - case 0: /* Page translation fault. */ - fi->type = ARMFault_Translation; - goto do_fault; - case 1: /* 64k page. */ - phys_addr = (desc & 0xffff0000) | (address & 0xffff); - xn = desc & (1 << 15); - *page_size = 0x10000; - break; - case 2: case 3: /* 4k page. */ - phys_addr = (desc & 0xfffff000) | (address & 0xfff); - xn = desc & 1; - *page_size = 0x1000; - break; - default: - /* Never happens, but compiler isn't smart enough to tell. */ - abort(); - } - } - if (domain_prot == 3) { - *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; - } else { - if (pxn && !regime_is_user(env, mmu_idx)) { - xn = 1; - } - if (xn && access_type == MMU_INST_FETCH) { - fi->type = ARMFault_Permission; - goto do_fault; - } - - if (arm_feature(env, ARM_FEATURE_V6K) && - (regime_sctlr(env, mmu_idx) & SCTLR_AFE)) { - /* The simplified model uses AP[0] as an access control bit. */ - if ((ap & 1) == 0) { - /* Access flag fault. */ - fi->type = ARMFault_AccessFlag; - goto do_fault; - } - *prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1); - } else { - *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); - } - if (*prot && !xn) { - *prot |= PAGE_EXEC; - } - if (!(*prot & (1 << access_type))) { - /* Access permission fault. */ - fi->type = ARMFault_Permission; - goto do_fault; - } - } - if (ns) { - /* - * The NS bit will (as required by the architecture) have no effect if - * the CPU doesn't support TZ or this is a non-secure translation - * regime, because the attribute will already be non-secure. - */ - attrs->secure = false; - } - *phys_ptr = phys_addr; - return false; -do_fault: - fi->domain = domain; - fi->level = level; - return true; -} - -/* - * check_s2_mmu_setup - * @cpu: ARMCPU - * @is_aa64: True if the translation regime is in AArch64 state - * @startlevel: Suggested starting level - * @inputsize: Bitsize of IPAs - * @stride: Page-table stride (See the ARM ARM) - * - * Returns true if the suggested S2 translation parameters are OK and - * false otherwise. - */ -static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level, - int inputsize, int stride) -{ - const int grainsize = stride + 3; - int startsizecheck; - - /* Negative levels are never allowed. */ - if (level < 0) { - return false; - } - - startsizecheck = inputsize - ((3 - level) * stride + grainsize); - if (startsizecheck < 1 || startsizecheck > stride + 4) { - return false; - } - - if (is_aa64) { - CPUARMState *env = &cpu->env; - unsigned int pamax = arm_pamax(cpu); - - switch (stride) { - case 13: /* 64KB Pages. */ - if (level == 0 || (level == 1 && pamax <= 42)) { - return false; - } - break; - case 11: /* 16KB Pages. */ - if (level == 0 || (level == 1 && pamax <= 40)) { - return false; - } - break; - case 9: /* 4KB Pages. */ - if (level == 0 && pamax <= 42) { - return false; - } - break; - default: - g_assert_not_reached(); - } - - /* Inputsize checks. */ - if (inputsize > pamax && - (arm_el_is_aa64(env, 1) || inputsize > 40)) { - /* This is CONSTRAINED UNPREDICTABLE and we choose to fault. */ - return false; - } - } else { - /* AArch32 only supports 4KB pages. Assert on that. 
*/ - assert(stride == 9); - - if (level == 0) { - return false; - } - } - return true; -} - -/* - * Translate from the 4-bit stage 2 representation of - * memory attributes (without cache-allocation hints) to - * the 8-bit representation of the stage 1 MAIR registers - * (which includes allocation hints). - * - * ref: shared/translation/attrs/S2AttrDecode() - * .../S2ConvertAttrsHints() - */ -static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs) -{ - uint8_t hiattr = extract32(s2attrs, 2, 2); - uint8_t loattr = extract32(s2attrs, 0, 2); - uint8_t hihint = 0, lohint = 0; - - if (hiattr != 0) { /* normal memory */ - if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */ - hiattr = loattr = 1; /* non-cacheable */ - } else { - if (hiattr != 1) { /* Write-through or write-back */ - hihint = 3; /* RW allocate */ - } - if (loattr != 1) { /* Write-through or write-back */ - lohint = 3; /* RW allocate */ - } - } - } - - return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint; -} -#endif /* !CONFIG_USER_ONLY */ - -static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx) -{ - if (regime_has_2_ranges(mmu_idx)) { - return extract64(tcr, 37, 2); - } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { - return 0; /* VTCR_EL2 */ - } else { - /* Replicate the single TBI bit so we always have 2 bits. */ - return extract32(tcr, 20, 1) * 3; - } -} - -static int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx) -{ - if (regime_has_2_ranges(mmu_idx)) { - return extract64(tcr, 51, 2); - } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { - return 0; /* VTCR_EL2 */ - } else { - /* Replicate the single TBID bit so we always have 2 bits. */ - return extract32(tcr, 29, 1) * 3; - } -} - -static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx) -{ - if (regime_has_2_ranges(mmu_idx)) { - return extract64(tcr, 57, 2); - } else { - /* Replicate the single TCMA bit so we always have 2 bits. */ - return extract32(tcr, 30, 1) * 3; - } -} - -ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, - ARMMMUIdx mmu_idx, bool data) -{ - uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr; - bool epd, hpd, using16k, using64k; - int select, tsz, tbi, max_tsz; - - if (!regime_has_2_ranges(mmu_idx)) { - select = 0; - tsz = extract32(tcr, 0, 6); - using64k = extract32(tcr, 14, 1); - using16k = extract32(tcr, 15, 1); - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { - /* VTCR_EL2 */ - hpd = false; - } else { - hpd = extract32(tcr, 24, 1); - } - epd = false; - } else { - /* - * Bit 55 is always between the two regions, and is canonical for - * determining if address tagging is enabled. - */ - select = extract64(va, 55, 1); - if (!select) { - tsz = extract32(tcr, 0, 6); - epd = extract32(tcr, 7, 1); - using64k = extract32(tcr, 14, 1); - using16k = extract32(tcr, 15, 1); - hpd = extract64(tcr, 41, 1); - } else { - int tg = extract32(tcr, 30, 2); - using16k = tg == 1; - using64k = tg == 3; - tsz = extract32(tcr, 16, 6); - epd = extract32(tcr, 23, 1); - hpd = extract64(tcr, 42, 1); - } - } - - if (cpu_isar_feature(aa64_st, env_archcpu(env))) { - max_tsz = 48 - using64k; - } else { - max_tsz = 39; - } - - tsz = MIN(tsz, max_tsz); - tsz = MAX(tsz, 16); /* TODO: ARMv8.2-LVA */ - - /* Present TBI as a composite with TBID. 
*/ - tbi = aa64_va_parameter_tbi(tcr, mmu_idx); - if (!data) { - tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); - } - tbi = (tbi >> select) & 1; - - return (ARMVAParameters) { - .tsz = tsz, - .select = select, - .tbi = tbi, - .epd = epd, - .hpd = hpd, - .using16k = using16k, - .using64k = using64k, - }; -} - -#ifndef CONFIG_USER_ONLY -static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va, - ARMMMUIdx mmu_idx) -{ - uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr; - uint32_t el = regime_el(env, mmu_idx); - int select, tsz; - bool epd, hpd; - - assert(mmu_idx != ARMMMUIdx_Stage2_S); - - if (mmu_idx == ARMMMUIdx_Stage2) { - /* VTCR */ - bool sext = extract32(tcr, 4, 1); - bool sign = extract32(tcr, 3, 1); - - /* - * If the sign-extend bit is not the same as t0sz[3], the result - * is unpredictable. Flag this as a guest error. - */ - if (sign != sext) { - qemu_log_mask(LOG_GUEST_ERROR, - "AArch32: VTCR.S / VTCR.T0SZ[3] mismatch\n"); - } - tsz = sextract32(tcr, 0, 4) + 8; - select = 0; - hpd = false; - epd = false; - } else if (el == 2) { - /* HTCR */ - tsz = extract32(tcr, 0, 3); - select = 0; - hpd = extract64(tcr, 24, 1); - epd = false; - } else { - int t0sz = extract32(tcr, 0, 3); - int t1sz = extract32(tcr, 16, 3); - - if (t1sz == 0) { - select = va > (0xffffffffu >> t0sz); - } else { - /* Note that we will detect errors later. */ - select = va >= ~(0xffffffffu >> t1sz); - } - if (!select) { - tsz = t0sz; - epd = extract32(tcr, 7, 1); - hpd = extract64(tcr, 41, 1); - } else { - tsz = t1sz; - epd = extract32(tcr, 23, 1); - hpd = extract64(tcr, 42, 1); - } - /* For aarch32, hpd0 is not enabled without t2e as well. */ - hpd &= extract32(tcr, 6, 1); - } - - return (ARMVAParameters) { - .tsz = tsz, - .select = select, - .epd = epd, - .hpd = hpd, - }; -} - -/** - * get_phys_addr_lpae: perform one stage of page table walk, LPAE format - * - * Returns false if the translation was successful. Otherwise, phys_ptr, attrs, - * prot and page_size may not be filled in, and the populated fsr value provides - * information on why the translation aborted, in the format of a long-format - * DFSR/IFSR fault register, with the following caveats: - * * the WnR bit is never set (the caller must do this). - * - * @env: CPUARMState - * @address: virtual address to get physical address for - * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH - * @mmu_idx: MMU index indicating required translation regime - * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page table - * walk), must be true if this is stage 2 of a stage 1+2 walk for an - * EL0 access). If @mmu_idx is anything else, @s1_is_el0 is ignored. - * @phys_ptr: set to the physical address corresponding to the virtual address - * @attrs: set to the memory transaction attributes to use - * @prot: set to the permissions for the page containing phys_ptr - * @page_size_ptr: set to the size of the page containing phys_ptr - * @fi: set to fault info if the translation fails - * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes - */ -static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - bool s1_is_el0, - hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot, - target_ulong *page_size_ptr, - ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) -{ - ARMCPU *cpu = env_archcpu(env); - CPUState *cs = CPU(cpu); - /* Read an LPAE long-descriptor translation table. 
*/ - ARMFaultType fault_type = ARMFault_Translation; - uint32_t level; - ARMVAParameters param; - uint64_t ttbr; - hwaddr descaddr, indexmask, indexmask_grainsize; - uint32_t tableattrs; - target_ulong page_size; - uint32_t attrs; - int32_t stride; - int addrsize, inputsize; - TCR *tcr = regime_tcr(env, mmu_idx); - int ap, ns, xn, pxn; - uint32_t el = regime_el(env, mmu_idx); - uint64_t descaddrmask; - bool aarch64 = arm_el_is_aa64(env, el); - bool guarded = false; - - /* TODO: This code does not support shareability levels. */ - if (aarch64) { - param = aa64_va_parameters(env, address, mmu_idx, - access_type != MMU_INST_FETCH); - level = 0; - addrsize = 64 - 8 * param.tbi; - inputsize = 64 - param.tsz; - } else { - param = aa32_va_parameters(env, address, mmu_idx); - level = 1; - addrsize = (mmu_idx == ARMMMUIdx_Stage2 ? 40 : 32); - inputsize = addrsize - param.tsz; - } - - /* - * We determined the region when collecting the parameters, but we - * have not yet validated that the address is valid for the region. - * Extract the top bits and verify that they all match select. - * - * For aa32, if inputsize == addrsize, then we have selected the - * region by exclusion in aa32_va_parameters and there is no more - * validation to do here. - */ - if (inputsize < addrsize) { - target_ulong top_bits = sextract64(address, inputsize, - addrsize - inputsize); - if (-top_bits != param.select) { - /* The gap between the two regions is a Translation fault */ - fault_type = ARMFault_Translation; - goto do_fault; - } - } - - if (param.using64k) { - stride = 13; - } else if (param.using16k) { - stride = 11; - } else { - stride = 9; - } - - /* - * Note that QEMU ignores shareability and cacheability attributes, - * so we don't need to do anything with the SH, ORGN, IRGN fields - * in the TTBCR. Similarly, TTBCR:A1 selects whether we get the - * ASID from TTBR0 or TTBR1, but QEMU's TLB doesn't currently - * implement any ASID-like capability so we can ignore it (instead - * we will always flush the TLB any time the ASID is changed). - */ - ttbr = regime_ttbr(env, mmu_idx, param.select); - - /* - * Here we should have set up all the parameters for the translation: - * inputsize, ttbr, epd, stride, tbi - */ - - if (param.epd) { - /* - * Translation table walk disabled => Translation fault on TLB miss - * Note: This is always 0 on 64-bit EL2 and EL3. - */ - goto do_fault; - } - - if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { - /* - * The starting level depends on the virtual address size (which can - * be up to 48 bits) and the translation granule size. It indicates - * the number of strides (stride bits at a time) needed to - * consume the bits of the input address. In the pseudocode this is: - * level = 4 - RoundUp((inputsize - grainsize) / stride) - * where their 'inputsize' is our 'inputsize', 'grainsize' is - * our 'stride + 3' and 'stride' is our 'stride'. 
- * Applying the usual "rounded up m/n is (m+n-1)/n" and simplifying: - * = 4 - (inputsize - stride - 3 + stride - 1) / stride - * = 4 - (inputsize - 4) / stride; - */ - level = 4 - (inputsize - 4) / stride; - } else { - /* - * For stage 2 translations the starting level is specified by the - * VTCR_EL2.SL0 field (whose interpretation depends on the page size) - */ - uint32_t sl0 = extract32(tcr->raw_tcr, 6, 2); - uint32_t startlevel; - bool ok; - - if (!aarch64 || stride == 9) { - /* AArch32 or 4KB pages */ - startlevel = 2 - sl0; - - if (cpu_isar_feature(aa64_st, cpu)) { - startlevel &= 3; - } - } else { - /* 16KB or 64KB pages */ - startlevel = 3 - sl0; - } - - /* Check that the starting level is valid. */ - ok = check_s2_mmu_setup(cpu, aarch64, startlevel, - inputsize, stride); - if (!ok) { - fault_type = ARMFault_Translation; - goto do_fault; - } - level = startlevel; - } - - indexmask_grainsize = (1ULL << (stride + 3)) - 1; - indexmask = (1ULL << (inputsize - (stride * (4 - level)))) - 1; - - /* Now we can extract the actual base address from the TTBR */ - descaddr = extract64(ttbr, 0, 48); - /* - * We rely on this masking to clear the RES0 bits at the bottom of the TTBR - * and also to mask out CnP (bit 0) which could validly be non-zero. - */ - descaddr &= ~indexmask; - - /* - * The address field in the descriptor goes up to bit 39 for ARMv7 - * but up to bit 47 for ARMv8, but we use the descaddrmask - * up to bit 39 for AArch32, because we don't need other bits in that case - * to construct next descriptor address (anyway they should be all zeroes). - */ - descaddrmask = ((1ull << (aarch64 ? 48 : 40)) - 1) & - ~indexmask_grainsize; - - /* - * Secure accesses start with the page table in secure memory and - * can be downgraded to non-secure at any step. Non-secure accesses - * remain non-secure. We implement this by just ORing in the NSTable/NS - * bits at each step. - */ - tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4); - for (;;) { - uint64_t descriptor; - bool nstable; - - descaddr |= (address >> (stride * (4 - level))) & indexmask; - descaddr &= ~7ULL; - nstable = extract32(tableattrs, 4, 1); - descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi); - if (fi->type != ARMFault_None) { - goto do_fault; - } - - if (!(descriptor & 1) || - (!(descriptor & 2) && (level == 3))) { - /* Invalid, or the Reserved level 3 encoding */ - goto do_fault; - } - descaddr = descriptor & descaddrmask; - - if ((descriptor & 2) && (level < 3)) { - /* - * Table entry. The top five bits are attributes which may - * propagate down through lower levels of the table (and - * which are all arranged so that 0 means "no effect", so - * we can gather them up by ORing in the bits at each level). - */ - tableattrs |= extract64(descriptor, 59, 5); - level++; - indexmask = indexmask_grainsize; - continue; - } - /* - * Block entry at level 1 or 2, or page entry at level 3. - * These are basically the same thing, although the number - * of bits we pull in from the vaddr varies. 
- */ - page_size = (1ULL << ((stride * (4 - level)) + 3)); - descaddr |= (address & (page_size - 1)); - /* Extract attributes from the descriptor */ - attrs = extract64(descriptor, 2, 10) - | (extract64(descriptor, 52, 12) << 10); - - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { - /* Stage 2 table descriptors do not include any attribute fields */ - break; - } - /* Merge in attributes from table descriptors */ - attrs |= nstable << 3; /* NS */ - guarded = extract64(descriptor, 50, 1); /* GP */ - if (param.hpd) { - /* HPD disables all the table attributes except NSTable. */ - break; - } - attrs |= extract32(tableattrs, 0, 2) << 11; /* XN, PXN */ - /* - * The sense of AP[1] vs APTable[0] is reversed, as APTable[0] == 1 - * means "force PL1 access only", which means forcing AP[1] to 0. - */ - attrs &= ~(extract32(tableattrs, 2, 1) << 4); /* !APT[0] => AP[1] */ - attrs |= extract32(tableattrs, 3, 1) << 5; /* APT[1] => AP[2] */ - break; - } - /* - * Here descaddr is the final physical address, - * and attributes are all in attrs. - */ - fault_type = ARMFault_AccessFlag; - if ((attrs & (1 << 8)) == 0) { - /* Access flag */ - goto do_fault; - } - - ap = extract32(attrs, 4, 2); - - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { - ns = mmu_idx == ARMMMUIdx_Stage2; - xn = extract32(attrs, 11, 2); - *prot = get_S2prot(env, ap, xn, s1_is_el0); - } else { - ns = extract32(attrs, 3, 1); - xn = extract32(attrs, 12, 1); - pxn = extract32(attrs, 11, 1); - *prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn); - } - - fault_type = ARMFault_Permission; - if (!(*prot & (1 << access_type))) { - goto do_fault; - } - - if (ns) { - /* - * The NS bit will (as required by the architecture) have no effect if - * the CPU doesn't support TZ or this is a non-secure translation - * regime, because the attribute will already be non-secure. - */ - txattrs->secure = false; - } - /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */ - if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) { - arm_tlb_bti_gp(txattrs) = true; - } - - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { - cacheattrs->attrs = convert_stage2_attrs(env, extract32(attrs, 0, 4)); - } else { - /* Index into MAIR registers for cache attributes */ - uint8_t attrindx = extract32(attrs, 0, 3); - uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)]; - assert(attrindx <= 7); - cacheattrs->attrs = extract64(mair, attrindx * 8, 8); - } - cacheattrs->shareability = extract32(attrs, 6, 2); - - *phys_ptr = descaddr; - *page_size_ptr = page_size; - return false; - -do_fault: - fi->type = fault_type; - fi->level = level; - /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */ - fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 || - mmu_idx == ARMMMUIdx_Stage2_S); - fi->s1ns = mmu_idx == ARMMMUIdx_Stage2; - return true; -} - -static inline void get_phys_addr_pmsav7_default(CPUARMState *env, - ARMMMUIdx mmu_idx, - int32_t address, int *prot) -{ - if (!arm_feature(env, ARM_FEATURE_M)) { - *prot = PAGE_READ | PAGE_WRITE; - switch (address) { - case 0xF0000000 ... 0xFFFFFFFF: - if (regime_sctlr(env, mmu_idx) & SCTLR_V) { - /* hivecs execing is ok */ - *prot |= PAGE_EXEC; - } - break; - case 0x00000000 ... 0x7FFFFFFF: - *prot |= PAGE_EXEC; - break; - } - } else { - /* - * Default system address map for M profile cores. - * The architecture specifies which regions are execute-never; - * at the MPU level no other checks are defined. 
- */ - switch (address) { - case 0x00000000 ... 0x1fffffff: /* ROM */ - case 0x20000000 ... 0x3fffffff: /* SRAM */ - case 0x60000000 ... 0x7fffffff: /* RAM */ - case 0x80000000 ... 0x9fffffff: /* RAM */ - *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; - break; - case 0x40000000 ... 0x5fffffff: /* Peripheral */ - case 0xa0000000 ... 0xbfffffff: /* Device */ - case 0xc0000000 ... 0xdfffffff: /* Device */ - case 0xe0000000 ... 0xffffffff: /* System */ - *prot = PAGE_READ | PAGE_WRITE; - break; - default: - g_assert_not_reached(); - } - } -} - -static bool pmsav7_use_background_region(ARMCPU *cpu, - ARMMMUIdx mmu_idx, bool is_user) -{ - /* - * Return true if we should use the default memory map as a - * "background" region if there are no hits against any MPU regions. - */ - CPUARMState *env = &cpu->env; - - if (is_user) { - return false; - } - - if (arm_feature(env, ARM_FEATURE_M)) { - return env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] - & R_V7M_MPU_CTRL_PRIVDEFENA_MASK; - } else { - return regime_sctlr(env, mmu_idx) & SCTLR_BR; - } -} - -static inline bool m_is_ppb_region(CPUARMState *env, uint32_t address) -{ - /* True if address is in the M profile PPB region 0xe0000000 - 0xe00fffff */ - return arm_feature(env, ARM_FEATURE_M) && - extract32(address, 20, 12) == 0xe00; -} - -static inline bool m_is_system_region(CPUARMState *env, uint32_t address) -{ - /* - * True if address is in the M profile system region - * 0xe0000000 - 0xffffffff - */ - return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7; -} - -static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, int *prot, - target_ulong *page_size, - ARMMMUFaultInfo *fi) -{ - ARMCPU *cpu = env_archcpu(env); - int n; - bool is_user = regime_is_user(env, mmu_idx); - - *phys_ptr = address; - *page_size = TARGET_PAGE_SIZE; - *prot = 0; - - if (regime_translation_disabled(env, mmu_idx) || - m_is_ppb_region(env, address)) { - /* - * MPU disabled or M profile PPB access: use default memory map. - * The other case which uses the default memory map in the - * v7M ARM ARM pseudocode is exception vector reads from the vector - * table. In QEMU those accesses are done in arm_v7m_load_vector(), - * which always does a direct read using address_space_ldl(), rather - * than going via this function, so we don't need to check that here. - */ - get_phys_addr_pmsav7_default(env, mmu_idx, address, prot); - } else { /* MPU enabled */ - for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) { - /* region search */ - uint32_t base = env->pmsav7.drbar[n]; - uint32_t rsize = extract32(env->pmsav7.drsr[n], 1, 5); - uint32_t rmask; - bool srdis = false; - - if (!(env->pmsav7.drsr[n] & 0x1)) { - continue; - } - - if (!rsize) { - qemu_log_mask(LOG_GUEST_ERROR, - "DRSR[%d]: Rsize field cannot be 0\n", n); - continue; - } - rsize++; - rmask = (1ull << rsize) - 1; - - if (base & rmask) { - qemu_log_mask(LOG_GUEST_ERROR, - "DRBAR[%d]: 0x%" PRIx32 " misaligned " - "to DRSR region size, mask = 0x%" PRIx32 "\n", - n, base, rmask); - continue; - } - - if (address < base || address > base + rmask) { - /* - * Address not in this region. We must check whether the - * region covers addresses in the same page as our address. 
- * In that case we must not report a size that covers the - * whole page for a subsequent hit against a different MPU - * region or the background region, because it would result in - * incorrect TLB hits for subsequent accesses to addresses that - * are in this MPU region. - */ - if (ranges_overlap(base, rmask, - address & TARGET_PAGE_MASK, - TARGET_PAGE_SIZE)) { - *page_size = 1; - } - continue; - } - - /* Region matched */ - - if (rsize >= 8) { /* no subregions for regions < 256 bytes */ - int i, snd; - uint32_t srdis_mask; - - rsize -= 3; /* sub region size (power of 2) */ - snd = ((address - base) >> rsize) & 0x7; - srdis = extract32(env->pmsav7.drsr[n], snd + 8, 1); - - srdis_mask = srdis ? 0x3 : 0x0; - for (i = 2; i <= 8 && rsize < TARGET_PAGE_BITS; i *= 2) { - /* - * This will check in groups of 2, 4 and then 8, whether - * the subregion bits are consistent. rsize is incremented - * back up to give the region size, considering consistent - * adjacent subregions as one region. Stop testing if rsize - * is already big enough for an entire QEMU page. - */ - int snd_rounded = snd & ~(i - 1); - uint32_t srdis_multi = extract32(env->pmsav7.drsr[n], - snd_rounded + 8, i); - if (srdis_mask ^ srdis_multi) { - break; - } - srdis_mask = (srdis_mask << i) | srdis_mask; - rsize++; - } - } - if (srdis) { - continue; - } - if (rsize < TARGET_PAGE_BITS) { - *page_size = 1 << rsize; - } - break; - } - - if (n == -1) { /* no hits */ - if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) { - /* background fault */ - fi->type = ARMFault_Background; - return true; - } - get_phys_addr_pmsav7_default(env, mmu_idx, address, prot); - } else { /* a MPU hit! */ - uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3); - uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1); - - if (m_is_system_region(env, address)) { - /* System space is always execute never */ - xn = 1; - } - - if (is_user) { /* User mode AP bit decoding */ - switch (ap) { - case 0: - case 1: - case 5: - break; /* no access */ - case 3: - *prot |= PAGE_WRITE; - /* fall through */ - case 2: - case 6: - *prot |= PAGE_READ | PAGE_EXEC; - break; - case 7: - /* for v7M, same as 6; for R profile a reserved value */ - if (arm_feature(env, ARM_FEATURE_M)) { - *prot |= PAGE_READ | PAGE_EXEC; - break; - } - /* fall through */ - default: - qemu_log_mask(LOG_GUEST_ERROR, - "DRACR[%d]: Bad value for AP bits: 0x%" - PRIx32 "\n", n, ap); - } - } else { /* Priv. mode AP bits decoding */ - switch (ap) { - case 0: - break; /* no access */ - case 1: - case 2: - case 3: - *prot |= PAGE_WRITE; - /* fall through */ - case 5: - case 6: - *prot |= PAGE_READ | PAGE_EXEC; - break; - case 7: - /* for v7M, same as 6; for R profile a reserved value */ - if (arm_feature(env, ARM_FEATURE_M)) { - *prot |= PAGE_READ | PAGE_EXEC; - break; - } - /* fall through */ - default: - qemu_log_mask(LOG_GUEST_ERROR, - "DRACR[%d]: Bad value for AP bits: 0x%" - PRIx32 "\n", n, ap); - } - } - - /* execute never */ - if (xn) { - *prot &= ~PAGE_EXEC; - } - } - } - - fi->type = ARMFault_Permission; - fi->level = 1; - return !(*prot & (1 << access_type)); -} - -static bool v8m_is_sau_exempt(CPUARMState *env, - uint32_t address, MMUAccessType access_type) -{ - /* - * The architecture specifies that certain address ranges are - * exempt from v8M SAU/IDAU checks. 
- */ - return - (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) || - (address >= 0xe0000000 && address <= 0xe0002fff) || - (address >= 0xe000e000 && address <= 0xe000efff) || - (address >= 0xe002e000 && address <= 0xe002efff) || - (address >= 0xe0040000 && address <= 0xe0041fff) || - (address >= 0xe00ff000 && address <= 0xe00fffff); -} - -void v8m_security_lookup(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - V8M_SAttributes *sattrs) -{ - /* - * Look up the security attributes for this address. Compare the - * pseudocode SecurityCheck() function. - * We assume the caller has zero-initialized *sattrs. - */ - ARMCPU *cpu = env_archcpu(env); - int r; - bool idau_exempt = false, idau_ns = true, idau_nsc = true; - int idau_region = IREGION_NOTVALID; - uint32_t addr_page_base = address & TARGET_PAGE_MASK; - uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1); - - if (cpu->idau) { - IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau); - IDAUInterface *ii = IDAU_INTERFACE(cpu->idau); - - iic->check(ii, address, &idau_region, &idau_exempt, &idau_ns, - &idau_nsc); - } - - if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) { - /* 0xf0000000..0xffffffff is always S for insn fetches */ - return; - } - - if (idau_exempt || v8m_is_sau_exempt(env, address, access_type)) { - sattrs->ns = !regime_is_secure(env, mmu_idx); - return; - } - - if (idau_region != IREGION_NOTVALID) { - sattrs->irvalid = true; - sattrs->iregion = idau_region; - } - - switch (env->sau.ctrl & 3) { - case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */ - break; - case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */ - sattrs->ns = true; - break; - default: /* SAU.ENABLE == 1 */ - for (r = 0; r < cpu->sau_sregion; r++) { - if (env->sau.rlar[r] & 1) { - uint32_t base = env->sau.rbar[r] & ~0x1f; - uint32_t limit = env->sau.rlar[r] | 0x1f; - - if (base <= address && limit >= address) { - if (base > addr_page_base || limit < addr_page_limit) { - sattrs->subpage = true; - } - if (sattrs->srvalid) { - /* - * If we hit in more than one region then we must report - * as Secure, not NS-Callable, with no valid region - * number info. - */ - sattrs->ns = false; - sattrs->nsc = false; - sattrs->sregion = 0; - sattrs->srvalid = false; - break; - } else { - if (env->sau.rlar[r] & 2) { - sattrs->nsc = true; - } else { - sattrs->ns = true; - } - sattrs->srvalid = true; - sattrs->sregion = r; - } - } else { - /* - * Address not in this region. We must check whether the - * region covers addresses in the same page as our address. - * In that case we must not report a size that covers the - * whole page for a subsequent hit against a different MPU - * region or the background region, because it would result - * in incorrect TLB hits for subsequent accesses to - * addresses that are in this MPU region. - */ - if (limit >= base && - ranges_overlap(base, limit - base + 1, - addr_page_base, - TARGET_PAGE_SIZE)) { - sattrs->subpage = true; - } - } - } - } - break; - } - - /* - * The IDAU will override the SAU lookup results if it specifies - * higher security than the SAU does. 
- */ - if (!idau_ns) { - if (sattrs->ns || (!idau_nsc && sattrs->nsc)) { - sattrs->ns = false; - sattrs->nsc = idau_nsc; - } - } -} - -bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, MemTxAttrs *txattrs, - int *prot, bool *is_subpage, - ARMMMUFaultInfo *fi, uint32_t *mregion) -{ - /* - * Perform a PMSAv8 MPU lookup (without also doing the SAU check - * that a full phys-to-virt translation does). - * mregion is (if not NULL) set to the region number which matched, - * or -1 if no region number is returned (MPU off, address did not - * hit a region, address hit in multiple regions). - * We set is_subpage to true if the region hit doesn't cover the - * entire TARGET_PAGE the address is within. - */ - ARMCPU *cpu = env_archcpu(env); - bool is_user = regime_is_user(env, mmu_idx); - uint32_t secure = regime_is_secure(env, mmu_idx); - int n; - int matchregion = -1; - bool hit = false; - uint32_t addr_page_base = address & TARGET_PAGE_MASK; - uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1); - - *is_subpage = false; - *phys_ptr = address; - *prot = 0; - if (mregion) { - *mregion = -1; - } - - /* - * Unlike the ARM ARM pseudocode, we don't need to check whether this - * was an exception vector read from the vector table (which is always - * done using the default system address map), because those accesses - * are done in arm_v7m_load_vector(), which always does a direct - * read using address_space_ldl(), rather than going via this function. - */ - if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */ - hit = true; - } else if (m_is_ppb_region(env, address)) { - hit = true; - } else { - if (pmsav7_use_background_region(cpu, mmu_idx, is_user)) { - hit = true; - } - - for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) { - /* region search */ - /* - * Note that the base address is bits [31:5] from the register - * with bits [4:0] all zeroes, but the limit address is bits - * [31:5] from the register with bits [4:0] all ones. - */ - uint32_t base = env->pmsav8.rbar[secure][n] & ~0x1f; - uint32_t limit = env->pmsav8.rlar[secure][n] | 0x1f; - - if (!(env->pmsav8.rlar[secure][n] & 0x1)) { - /* Region disabled */ - continue; - } - - if (address < base || address > limit) { - /* - * Address not in this region. We must check whether the - * region covers addresses in the same page as our address. - * In that case we must not report a size that covers the - * whole page for a subsequent hit against a different MPU - * region or the background region, because it would result in - * incorrect TLB hits for subsequent accesses to addresses that - * are in this MPU region. 
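For illustration, a minimal sketch of the overlap case described above, using made-up addresses and assuming a 1 KiB target page:

    uint32_t base = 0x20000040, limit = 0x200000ff;   /* hypothetical MPU region      */
    uint32_t addr_page_base = 0x20000000;             /* page containing the access   */
    bool subpage = limit >= base &&
                   ranges_overlap(base, limit - base + 1,
                                  addr_page_base, 0x400 /* assumed 1 KiB page */);
    /*
     * subpage is true here: the region covers only part of the page, so the
     * resulting TLB entry must be a single byte and later accesses to
     * 0x20000040..0x200000ff re-walk the MPU instead of reusing a stale entry.
     */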
- */ - if (limit >= base && - ranges_overlap(base, limit - base + 1, - addr_page_base, - TARGET_PAGE_SIZE)) { - *is_subpage = true; - } - continue; - } - - if (base > addr_page_base || limit < addr_page_limit) { - *is_subpage = true; - } - - if (matchregion != -1) { - /* - * Multiple regions match -- always a failure (unlike - * PMSAv7 where highest-numbered-region wins) - */ - fi->type = ARMFault_Permission; - fi->level = 1; - return true; - } - - matchregion = n; - hit = true; - } - } - - if (!hit) { - /* background fault */ - fi->type = ARMFault_Background; - return true; - } - - if (matchregion == -1) { - /* hit using the background region */ - get_phys_addr_pmsav7_default(env, mmu_idx, address, prot); - } else { - uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2); - uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1); - bool pxn = false; - - if (arm_feature(env, ARM_FEATURE_V8_1M)) { - pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1); - } - - if (m_is_system_region(env, address)) { - /* System space is always execute never */ - xn = 1; - } - - *prot = simple_ap_to_rw_prot(env, mmu_idx, ap); - if (*prot && !xn && !(pxn && !is_user)) { - *prot |= PAGE_EXEC; - } - /* - * We don't need to look the attribute up in the MAIR0/MAIR1 - * registers because that only tells us about cacheability. - */ - if (mregion) { - *mregion = matchregion; - } - } - - fi->type = ARMFault_Permission; - fi->level = 1; - return !(*prot & (1 << access_type)); -} - - -static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, MemTxAttrs *txattrs, - int *prot, target_ulong *page_size, - ARMMMUFaultInfo *fi) -{ - uint32_t secure = regime_is_secure(env, mmu_idx); - V8M_SAttributes sattrs = {}; - bool ret; - bool mpu_is_subpage; - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs); - if (access_type == MMU_INST_FETCH) { - /* - * Instruction fetches always use the MMU bank and the - * transaction attribute determined by the fetch address, - * regardless of CPU state. This is painful for QEMU - * to handle, because it would mean we need to encode - * into the mmu_idx not just the (user, negpri) information - * for the current security state but also that for the - * other security state, which would balloon the number - * of mmu_idx values needed alarmingly. - * Fortunately we can avoid this because it's not actually - * possible to arbitrarily execute code from memory with - * the wrong security attribute: it will always generate - * an exception of some kind or another, apart from the - * special case of an NS CPU executing an SG instruction - * in S&NSC memory. So we always just fail the translation - * here and sort things out in the exception handler - * (including possibly emulating an SG instruction). - */ - if (sattrs.ns != !secure) { - if (sattrs.nsc) { - fi->type = ARMFault_QEMU_NSCExec; - } else { - fi->type = ARMFault_QEMU_SFault; - } - *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; - *phys_ptr = address; - *prot = 0; - return true; - } - } else { - /* - * For data accesses we always use the MMU bank indicated - * by the current CPU state, but the security attributes - * might downgrade a secure access to nonsecure. - */ - if (sattrs.ns) { - txattrs->secure = false; - } else if (!secure) { - /* - * NS access to S memory must fault. 
- * Architecturally we should first check whether the - * MPU information for this address indicates that we - * are doing an unaligned access to Device memory, which - * should generate a UsageFault instead. QEMU does not - * currently check for that kind of unaligned access though. - * If we added it we would need to do so as a special case - * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt(). - */ - fi->type = ARMFault_QEMU_SFault; - *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; - *phys_ptr = address; - *prot = 0; - return true; - } - } - } - - ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr, - txattrs, prot, &mpu_is_subpage, fi, NULL); - *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE; - return ret; -} - -static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, int *prot, - ARMMMUFaultInfo *fi) -{ - int n; - uint32_t mask; - uint32_t base; - bool is_user = regime_is_user(env, mmu_idx); - - if (regime_translation_disabled(env, mmu_idx)) { - /* MPU disabled. */ - *phys_ptr = address; - *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; - return false; - } - - *phys_ptr = address; - for (n = 7; n >= 0; n--) { - base = env->cp15.c6_region[n]; - if ((base & 1) == 0) { - continue; - } - mask = 1 << ((base >> 1) & 0x1f); - /* - * Keep this shift separate from the above to avoid an - * (undefined) << 32 - */ - mask = (mask << 1) - 1; - if (((base ^ address) & ~mask) == 0) { - break; - } - } - if (n < 0) { - fi->type = ARMFault_Background; - return true; - } - - if (access_type == MMU_INST_FETCH) { - mask = env->cp15.pmsav5_insn_ap; - } else { - mask = env->cp15.pmsav5_data_ap; - } - mask = (mask >> (n * 4)) & 0xf; - switch (mask) { - case 0: - fi->type = ARMFault_Permission; - fi->level = 1; - return true; - case 1: - if (is_user) { - fi->type = ARMFault_Permission; - fi->level = 1; - return true; - } - *prot = PAGE_READ | PAGE_WRITE; - break; - case 2: - *prot = PAGE_READ; - if (!is_user) { - *prot |= PAGE_WRITE; - } - break; - case 3: - *prot = PAGE_READ | PAGE_WRITE; - break; - case 5: - if (is_user) { - fi->type = ARMFault_Permission; - fi->level = 1; - return true; - } - *prot = PAGE_READ; - break; - case 6: - *prot = PAGE_READ; - break; - default: - /* Bad permission. */ - fi->type = ARMFault_Permission; - fi->level = 1; - return true; - } - *prot |= PAGE_EXEC; - return false; -} - -/* - * Combine either inner or outer cacheability attributes for normal - * memory, according to table D4-42 and pseudocode procedure - * CombineS1S2AttrHints() of ARM DDI 0487B.b (the ARMv8 ARM). - * - * NB: only stage 1 includes allocation hints (RW bits), leading to - * some asymmetry. 
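For illustration, a few worked values for the combine_cacheattr_nibble() function that follows, assuming the standard MAIR nibble encodings (0xf Write-Back with R/W-allocate hints, 0xa Write-Through read-allocate, 0x4 Non-cacheable):

    combine_cacheattr_nibble(0x4, 0xf);  /* -> 0x4: non-cacheable wins              */
    combine_cacheattr_nibble(0xa, 0xf);  /* -> 0xa: stage 1 write-through wins      */
    combine_cacheattr_nibble(0xf, 0xa);  /* -> 0xb: stage 2 forces write-through,   */
                                         /*    allocation hints kept from stage 1   */
    combine_cacheattr_nibble(0xf, 0xf);  /* -> 0xf: both write-back, stage 1 result */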
- */ -static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2) -{ - if (s1 == 4 || s2 == 4) { - /* non-cacheable has precedence */ - return 4; - } else if (extract32(s1, 2, 2) == 0 || extract32(s1, 2, 2) == 2) { - /* stage 1 write-through takes precedence */ - return s1; - } else if (extract32(s2, 2, 2) == 2) { - /* - * stage 2 write-through takes precedence, but the allocation hint - * is still taken from stage 1 - */ - return (2 << 2) | extract32(s1, 0, 2); - } else { /* write-back */ - return s1; - } -} - -/* - * Combine S1 and S2 cacheability/shareability attributes, per D4.5.4 - * and CombineS1S2Desc() - * - * @s1: Attributes from stage 1 walk - * @s2: Attributes from stage 2 walk - */ -static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2) -{ - uint8_t s1lo, s2lo, s1hi, s2hi; - ARMCacheAttrs ret; - bool tagged = false; - - if (s1.attrs == 0xf0) { - tagged = true; - s1.attrs = 0xff; - } - - s1lo = extract32(s1.attrs, 0, 4); - s2lo = extract32(s2.attrs, 0, 4); - s1hi = extract32(s1.attrs, 4, 4); - s2hi = extract32(s2.attrs, 4, 4); - - /* Combine shareability attributes (table D4-43) */ - if (s1.shareability == 2 || s2.shareability == 2) { - /* if either are outer-shareable, the result is outer-shareable */ - ret.shareability = 2; - } else if (s1.shareability == 3 || s2.shareability == 3) { - /* if either are inner-shareable, the result is inner-shareable */ - ret.shareability = 3; - } else { - /* both non-shareable */ - ret.shareability = 0; - } - - /* Combine memory type and cacheability attributes */ - if (s1hi == 0 || s2hi == 0) { - /* Device has precedence over normal */ - if (s1lo == 0 || s2lo == 0) { - /* nGnRnE has precedence over anything */ - ret.attrs = 0; - } else if (s1lo == 4 || s2lo == 4) { - /* non-Reordering has precedence over Reordering */ - ret.attrs = 4; /* nGnRE */ - } else if (s1lo == 8 || s2lo == 8) { - /* non-Gathering has precedence over Gathering */ - ret.attrs = 8; /* nGRE */ - } else { - ret.attrs = 0xc; /* GRE */ - } - - /* - * Any location for which the resultant memory type is any - * type of Device memory is always treated as Outer Shareable. - */ - ret.shareability = 2; - } else { /* Normal memory */ - /* Outer/inner cacheability combine independently */ - ret.attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4 - | combine_cacheattr_nibble(s1lo, s2lo); - - if (ret.attrs == 0x44) { - /* - * Any location for which the resultant memory type is Normal - * Inner Non-cacheable, Outer Non-cacheable is always treated - * as Outer Shareable. - */ - ret.shareability = 2; - } - } - - /* TODO: CombineS1S2Desc does not consider transient, only WB, RWA. */ - if (tagged && ret.attrs == 0xff) { - ret.attrs = 0xf0; - } - - return ret; -} - - -/* - * get_phys_addr - get the physical address for this virtual address - * - * Find the physical address corresponding to the given virtual address, - * by doing a translation table walk on MMU based systems or using the - * MPU state on MPU based systems. - * - * Returns false if the translation was successful. Otherwise, phys_ptr, attrs, - * prot and page_size may not be filled in, and the populated fsr value provides - * information on why the translation aborted, in the format of a - * DFSR/IFSR fault register, with the following caveats: - * * we honour the short vs long DFSR format differences. - * * the WnR bit is never set (the caller must do this). - * * for PSMAv5 based systems we don't bother to return a full FSR format - * value. 
- * - * @env: CPUARMState - * @address: virtual address to get physical address for - * @access_type: 0 for read, 1 for write, 2 for execute - * @mmu_idx: MMU index indicating required translation regime - * @phys_ptr: set to the physical address corresponding to the virtual address - * @attrs: set to the memory transaction attributes to use - * @prot: set to the permissions for the page containing phys_ptr - * @page_size: set to the size of the page containing phys_ptr - * @fi: set to fault info if the translation fails - * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes - */ -bool get_phys_addr(CPUARMState *env, target_ulong address, - MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot, - target_ulong *page_size, - ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs) -{ - ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx); - - if (mmu_idx != s1_mmu_idx) { - /* - * Call ourselves recursively to do the stage 1 and then stage 2 - * translations if mmu_idx is a two-stage regime. - */ - if (arm_feature(env, ARM_FEATURE_EL2)) { - hwaddr ipa; - int s2_prot; - int ret; - ARMCacheAttrs cacheattrs2 = {}; - ARMMMUIdx s2_mmu_idx; - bool is_el0; - - ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa, - attrs, prot, page_size, fi, cacheattrs); - - /* If S1 fails or S2 is disabled, return early. */ - if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) { - *phys_ptr = ipa; - return ret; - } - - s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2; - is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0; - - /* S1 is done. Now do S2 translation. */ - ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0, - phys_ptr, attrs, &s2_prot, - page_size, fi, &cacheattrs2); - fi->s2addr = ipa; - /* Combine the S1 and S2 perms. */ - *prot &= s2_prot; - - /* If S2 fails, return early. */ - if (ret) { - return ret; - } - - /* Combine the S1 and S2 cache attributes. */ - if (arm_hcr_el2_eff(env) & HCR_DC) { - /* - * HCR.DC forces the first stage attributes to - * Normal Non-Shareable, - * Inner Write-Back Read-Allocate Write-Allocate, - * Outer Write-Back Read-Allocate Write-Allocate. - * Do not overwrite Tagged within attrs. - */ - if (cacheattrs->attrs != 0xf0) { - cacheattrs->attrs = 0xff; - } - cacheattrs->shareability = 0; - } - *cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2); - - /* Check if IPA translates to secure or non-secure PA space. */ - if (arm_is_secure_below_el3(env)) { - if (attrs->secure) { - attrs->secure = - !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)); - } else { - attrs->secure = - !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW)) - || (env->cp15.vstcr_el2.raw_tcr & VSTCR_SA)); - } - } - return 0; - } else { - /* - * For non-EL2 CPUs a stage1+stage2 translation is just stage 1. - */ - mmu_idx = stage_1_mmu_idx(mmu_idx); - } - } - - /* - * The page table entries may downgrade secure to non-secure, but - * cannot upgrade an non-secure translation regime's attributes - * to secure. - */ - attrs->secure = regime_is_secure(env, mmu_idx); - attrs->user = regime_is_user(env, mmu_idx); - - /* - * Fast Context Switch Extension. This doesn't exist at all in v8. - * In v7 and earlier it affects all stage 1 translations. 
- */ - if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2 - && !arm_feature(env, ARM_FEATURE_V8)) { - if (regime_el(env, mmu_idx) == 3) { - address += env->cp15.fcseidr_s; - } else { - address += env->cp15.fcseidr_ns; - } - } - - if (arm_feature(env, ARM_FEATURE_PMSA)) { - bool ret; - *page_size = TARGET_PAGE_SIZE; - - if (arm_feature(env, ARM_FEATURE_V8)) { - /* PMSAv8 */ - ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx, - phys_ptr, attrs, prot, page_size, fi); - } else if (arm_feature(env, ARM_FEATURE_V7)) { - /* PMSAv7 */ - ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx, - phys_ptr, prot, page_size, fi); - } else { - /* Pre-v7 MPU */ - ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx, - phys_ptr, prot, fi); - } - qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32 - " mmu_idx %u -> %s (prot %c%c%c)\n", - access_type == MMU_DATA_LOAD ? "reading" : - (access_type == MMU_DATA_STORE ? "writing" : "execute"), - (uint32_t)address, mmu_idx, - ret ? "Miss" : "Hit", - *prot & PAGE_READ ? 'r' : '-', - *prot & PAGE_WRITE ? 'w' : '-', - *prot & PAGE_EXEC ? 'x' : '-'); - - return ret; - } - - /* Definitely a real MMU, not an MPU */ - - if (regime_translation_disabled(env, mmu_idx)) { - uint64_t hcr; - uint8_t memattr; - - /* - * MMU disabled. S1 addresses within aa64 translation regimes are - * still checked for bounds -- see AArch64.TranslateAddressS1Off. - */ - if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { - int r_el = regime_el(env, mmu_idx); - if (arm_el_is_aa64(env, r_el)) { - int pamax = arm_pamax(env_archcpu(env)); - uint64_t tcr = env->cp15.tcr_el[r_el].raw_tcr; - int addrtop, tbi; - - tbi = aa64_va_parameter_tbi(tcr, mmu_idx); - if (access_type == MMU_INST_FETCH) { - tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); - } - tbi = (tbi >> extract64(address, 55, 1)) & 1; - addrtop = (tbi ? 55 : 63); - - if (extract64(address, pamax, addrtop - pamax + 1) != 0) { - fi->type = ARMFault_AddressSize; - fi->level = 0; - fi->stage2 = false; - return 1; - } - - /* - * When TBI is disabled, we've just validated that all of the - * bits above PAMax are zero, so logically we only need to - * clear the top byte for TBI. But it's clearer to follow - * the pseudocode set of addrdesc.paddress. - */ - address = extract64(address, 0, 52); - } - } - *phys_ptr = address; - *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; - *page_size = TARGET_PAGE_SIZE; - - /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. 
*/ - hcr = arm_hcr_el2_eff(env); - cacheattrs->shareability = 0; - if (hcr & HCR_DC) { - if (hcr & HCR_DCT) { - memattr = 0xf0; /* Tagged, Normal, WB, RWA */ - } else { - memattr = 0xff; /* Normal, WB, RWA */ - } - } else if (access_type == MMU_INST_FETCH) { - if (regime_sctlr(env, mmu_idx) & SCTLR_I) { - memattr = 0xee; /* Normal, WT, RA, NT */ - } else { - memattr = 0x44; /* Normal, NC, No */ - } - cacheattrs->shareability = 2; /* outer sharable */ - } else { - memattr = 0x00; /* Device, nGnRnE */ - } - cacheattrs->attrs = memattr; - return 0; - } - - if (regime_using_lpae_format(env, mmu_idx)) { - return get_phys_addr_lpae(env, address, access_type, mmu_idx, false, - phys_ptr, attrs, prot, page_size, - fi, cacheattrs); - } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) { - return get_phys_addr_v6(env, address, access_type, mmu_idx, - phys_ptr, attrs, prot, page_size, fi); - } else { - return get_phys_addr_v5(env, address, access_type, mmu_idx, - phys_ptr, prot, page_size, fi); - } -} - -hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, - MemTxAttrs *attrs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - hwaddr phys_addr; - target_ulong page_size; - int prot; - bool ret; - ARMMMUFaultInfo fi = {}; - ARMMMUIdx mmu_idx = arm_mmu_idx(env); - ARMCacheAttrs cacheattrs = {}; - - *attrs = (MemTxAttrs) {}; - - ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &phys_addr, - attrs, &prot, &page_size, &fi, &cacheattrs); - - if (ret) { - return -1; - } - return phys_addr; -} - -#endif - /* Note that signed overflow is undefined in C. The following routines are careful to use unsigned types where modulo arithmetic is required. Failure to do so _will_ break on newer gcc. */ diff --git a/target/arm/tcg/pauth_helper.c b/target/arm/tcg/pauth_helper.c index cd6df18150..11021d1a2f 100644 --- a/target/arm/tcg/pauth_helper.c +++ b/target/arm/tcg/pauth_helper.c @@ -25,7 +25,7 @@ #include "exec/helper-proto.h" #include "tcg/tcg-gvec-desc.h" #include "qemu/xxhash.h" - +#include "cpu-mmu.h" static uint64_t pac_cell_shuffle(uint64_t i) { diff --git a/target/arm/tcg/sysemu/m_helper.c b/target/arm/tcg/sysemu/m_helper.c index 77c9fd0b6e..59787c5650 100644 --- a/target/arm/tcg/sysemu/m_helper.c +++ b/target/arm/tcg/sysemu/m_helper.c @@ -13,7 +13,7 @@ #include "qemu/main-loop.h" #include "exec/exec-all.h" #include "semihosting/common-semi.h" - +#include "cpu-mmu.h" #include "tcg/m_helper.h" /* diff --git a/target/arm/tcg/sysemu/tlb_helper.c b/target/arm/tcg/sysemu/tlb_helper.c index 586f602989..1290612ed9 100644 --- a/target/arm/tcg/sysemu/tlb_helper.c +++ b/target/arm/tcg/sysemu/tlb_helper.c @@ -9,6 +9,7 @@ #include "cpu.h" #include "internals.h" #include "exec/exec-all.h" +#include "cpu-mmu.h" #include "tcg/tlb_helper.h" /* diff --git a/target/arm/meson.build b/target/arm/meson.build index b75392e3e9..3e7cea7604 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -1,6 +1,7 @@ arm_ss = ss.source_set() arm_ss.add(files( 'cpu.c', + 'cpu-mmu.c', 'gdbstub.c', 'cpu_tcg.c', )) @@ -17,6 +18,7 @@ arm_softmmu_ss = ss.source_set() arm_softmmu_ss.add(files( 'arch_dump.c', 'arm-powerctl.c', + 'cpu-mmu-sysemu.c', 'cpu-sysemu.c', 'machine.c', 'monitor.c', From patchwork Fri Jun 4 15:52:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454123 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp604127jae; Fri, 4 Jun 2021 
10:16:13 -0700 (PDT) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 27/99] target/arm: fix style in preparation of new cpregs module Date: Fri, 4 Jun 2021 16:52:00 +0100 Message-Id: <20210604155312.15902-28-alex.bennee@linaro.org> X-Mailer:
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32f; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32f.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana in preparation of the creation of a new cpregs module, fix the style for the to-be-exported code. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/cpu.h | 54 ++++--- target/arm/tcg/helper.c | 310 ++++++++++++++++++++++++++-------------- 2 files changed, 239 insertions(+), 125 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu.h b/target/arm/cpu.h index f9ce70e607..af788c7801 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -2709,14 +2709,16 @@ typedef struct ARMCPRegInfo ARMCPRegInfo; typedef enum CPAccessResult { /* Access is permitted */ CP_ACCESS_OK = 0, - /* Access fails due to a configurable trap or enable which would + /* + * Access fails due to a configurable trap or enable which would * result in a categorized exception syndrome giving information about * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6, * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or * PL1 if in EL0, otherwise to the current EL). */ CP_ACCESS_TRAP = 1, - /* Access fails and results in an exception syndrome 0x0 ("uncategorized"). + /* + * Access fails and results in an exception syndrome 0x0 ("uncategorized"). * Note that this is not a catch-all case -- the set of cases which may * result in this failure is specifically defined by the architecture. */ @@ -2727,14 +2729,16 @@ typedef enum CPAccessResult { /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */ CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5, CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6, - /* Access fails and results in an exception syndrome for an FP access, + /* + * Access fails and results in an exception syndrome for an FP access, * trapped directly to EL2 or EL3 */ CP_ACCESS_TRAP_FP_EL2 = 7, CP_ACCESS_TRAP_FP_EL3 = 8, } CPAccessResult; -/* Access functions for coprocessor registers. These cannot fail and +/* + * Access functions for coprocessor registers. These cannot fail and * may not raise exceptions. */ typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque); @@ -2753,7 +2757,8 @@ typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque); struct ARMCPRegInfo { /* Name of register (useful mainly for debugging, need not be unique) */ const char *name; - /* Location of register: coprocessor number and (crn,crm,opc1,opc2) + /* + * Location of register: coprocessor number and (crn,crm,opc1,opc2) * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a * 'wildcard' field -- any value of that field in the MRC/MCR insn * will be decoded to this register. 
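Most hunks in this patch make the same mechanical change: the opening /* of a multi-line comment moves onto a line of its own and the text is aligned under a column of '*' (a few hunks additionally just add spaces around operators such as '|'). A minimal before/after sketch of the comment reflow:

    /* Comment text used to start on the opening line
     * and continue like this.
     */

    /*
     * Comment text now starts on the second line, with the opening
     * and closing markers on lines of their own.
     */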
The register read and write @@ -2784,16 +2789,19 @@ struct ARMCPRegInfo { int access; /* Security state: ARM_CP_SECSTATE_* bits/values */ int secure; - /* The opaque pointer passed to define_arm_cp_regs_with_opaque() when + /* + * The opaque pointer passed to define_arm_cp_regs_with_opaque() when * this register was defined: can be used to hand data through to the * register read/write functions, since they are passed the ARMCPRegInfo*. */ void *opaque; - /* Value of this register, if it is ARM_CP_CONST. Otherwise, if + /* + * Value of this register, if it is ARM_CP_CONST. Otherwise, if * fieldoffset is non-zero, the reset value of the register. */ uint64_t resetvalue; - /* Offset of the field in CPUARMState for this register. + /* + * Offset of the field in CPUARMState for this register. * * This is not needed if either: * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs @@ -2801,7 +2809,8 @@ struct ARMCPRegInfo { */ ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */ - /* Offsets of the secure and non-secure fields in CPUARMState for the + /* + * Offsets of the secure and non-secure fields in CPUARMState for the * register if it is banked. These fields are only used during the static * registration of a register. During hashing the bank associated * with a given security state is copied to fieldoffset which is used from @@ -2814,36 +2823,42 @@ struct ARMCPRegInfo { */ ptrdiff_t bank_fieldoffsets[2]; - /* Function for making any access checks for this register in addition to + /* + * Function for making any access checks for this register in addition to * those specified by the 'access' permissions bits. If NULL, no extra * checks required. The access check is performed at runtime, not at * translate time. */ CPAccessFn *accessfn; - /* Function for handling reads of this register. If NULL, then reads + /* + * Function for handling reads of this register. If NULL, then reads * will be done by loading from the offset into CPUARMState specified * by fieldoffset. */ CPReadFn *readfn; - /* Function for handling writes of this register. If NULL, then writes + /* + * Function for handling writes of this register. If NULL, then writes * will be done by writing to the offset into CPUARMState specified * by fieldoffset. */ CPWriteFn *writefn; - /* Function for doing a "raw" read; used when we need to copy + /* + * Function for doing a "raw" read; used when we need to copy * coprocessor state to the kernel for KVM or out for * migration. This only needs to be provided if there is also a * readfn and it has side effects (for instance clear-on-read bits). */ CPReadFn *raw_readfn; - /* Function for doing a "raw" write; used when we need to copy KVM + /* + * Function for doing a "raw" write; used when we need to copy KVM * kernel coprocessor state into userspace, or for inbound * migration. This only needs to be provided if there is also a * writefn and it masks out "unwritable" bits or has write-one-to-clear * or similar behaviour. */ CPWriteFn *raw_writefn; - /* Function for resetting the register. If NULL, then reset will be done + /* + * Function for resetting the register. If NULL, then reset will be done * by writing resetvalue to the field specified in fieldoffset. If * fieldoffset is 0 then no reset will be done. */ @@ -2863,7 +2878,8 @@ struct ARMCPRegInfo { CPWriteFn *orig_writefn; }; -/* Macros which are lvalues for the field in CPUARMState for the +/* + * Macros which are lvalues for the field in CPUARMState for the * ARMCPRegInfo *ri. 
*/ #define CPREG_FIELD32(env, ri) \ @@ -2917,12 +2933,14 @@ void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri, /* CPReadFn that can be used for read-as-zero behaviour */ uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri); -/* CPResetFn that does nothing, for use if no reset is required even +/* + * CPResetFn that does nothing, for use if no reset is required even * if fieldoffset is non zero. */ void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque); -/* Return true if this reginfo struct's field in the cpu state struct +/* + * Return true if this reginfo struct's field in the cpu state struct * is 64 bits wide. */ static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri) diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 7f818e5860..0f4ebcc46f 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -327,7 +327,8 @@ static int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg) static bool raw_accessors_invalid(const ARMCPRegInfo *ri) { - /* Return true if the regdef would cause an assertion if you called + /* + * Return true if the regdef would cause an assertion if you called * read_raw_cp_reg() or write_raw_cp_reg() on it (ie if it is a * program bug for it not to have the NO_RAW flag). * NB that returning false here doesn't necessarily mean that calling @@ -431,7 +432,7 @@ static void add_cpreg_to_list(gpointer key, gpointer opaque) regidx = *(uint32_t *)key; ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); - if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) { + if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) { cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx); /* The value array need not be initialized at this point */ cpu->cpreg_array_len++; @@ -447,7 +448,7 @@ static void count_cpreg(gpointer key, gpointer opaque) regidx = *(uint32_t *)key; ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); - if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) { + if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) { cpu->cpreg_array_len++; } } @@ -468,7 +469,8 @@ static gint cpreg_key_compare(gconstpointer a, gconstpointer b) void init_cpreg_list(ARMCPU *cpu) { - /* Initialise the cpreg_tuples[] array based on the cp_regs hash. + /* + * Initialise the cpreg_tuples[] array based on the cp_regs hash. * Note that we require cpreg_tuples[] to be sorted by key ID. */ GList *keys; @@ -510,7 +512,8 @@ static CPAccessResult access_el3_aa32ns(CPUARMState *env, return CP_ACCESS_OK; } -/* Some secure-only AArch32 registers trap to EL3 if used from +/* + * Some secure-only AArch32 registers trap to EL3 if used from * Secure EL1 (but are just ordinary UNDEF in other non-EL3 contexts). * Note that an access from Secure EL1 can only happen if EL3 is AArch64. * We assume that the .access field is set to PL1_RW. @@ -537,7 +540,8 @@ static uint64_t arm_mdcr_el2_eff(CPUARMState *env) return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0; } -/* Check for traps to "powerdown debug" registers, which are controlled +/* + * Check for traps to "powerdown debug" registers, which are controlled * by MDCR.TDOSA */ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri, @@ -557,7 +561,8 @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri, return CP_ACCESS_OK; } -/* Check for traps to "debug ROM" registers, which are controlled +/* + * Check for traps to "debug ROM" registers, which are controlled * by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3. 
*/ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri, @@ -577,7 +582,8 @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri, return CP_ACCESS_OK; } -/* Check for traps to general debug registers, which are controlled +/* + * Check for traps to general debug registers, which are controlled * by MDCR_EL2.TDA for EL2 and MDCR_EL3.TDA for EL3. */ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri, @@ -597,7 +603,8 @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri, return CP_ACCESS_OK; } -/* Check for traps to performance monitor registers, which are controlled +/* + * Check for traps to performance monitor registers, which are controlled * by MDCR_EL2.TPM for EL2 and MDCR_EL3.TPM for EL3. */ static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri, @@ -671,7 +678,8 @@ static void fcse_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) ARMCPU *cpu = env_archcpu(env); if (raw_read(env, ri) != value) { - /* Unlike real hardware the qemu TLB uses virtual addresses, + /* + * Unlike real hardware the qemu TLB uses virtual addresses, * not modified virtual addresses, so this causes a TLB flush. */ tlb_flush(CPU(cpu)); @@ -686,7 +694,8 @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri, if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA) && !extended_addresses_enabled(env)) { - /* For VMSA (when not using the LPAE long descriptor page table + /* + * For VMSA (when not using the LPAE long descriptor page table * format) this register includes the ASID, so do a TLB flush. * For PMSA it is purely a process ID and no action is needed. */ @@ -851,7 +860,8 @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, } static const ARMCPRegInfo cp_reginfo[] = { - /* Define the secure and non-secure FCSE identifier CP registers + /* + * Define the secure and non-secure FCSE identifier CP registers * separately because there is no secure bank in V8 (no _EL3). This allows * the secure register to be properly reset and migrated. There is also no * v8 EL1 version of the register so the non-secure instance stands alone. @@ -866,7 +876,8 @@ static const ARMCPRegInfo cp_reginfo[] = { .access = PL1_RW, .secure = ARM_CP_SECSTATE_S, .fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s), .resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, }, - /* Define the secure and non-secure context identifier CP registers + /* + * Define the secure and non-secure context identifier CP registers * separately because there is no secure bank in V8 (no _EL3). This allows * the secure register to be properly reset and migrated. In the * non-secure case, the 32-bit register will have reset and migration @@ -888,7 +899,8 @@ static const ARMCPRegInfo cp_reginfo[] = { }; static const ARMCPRegInfo not_v8_cp_reginfo[] = { - /* NB: Some of these registers exist in v8 but with more precise + /* + * NB: Some of these registers exist in v8 but with more precise * definitions that don't use CP_ANY wildcards (mostly in v8_cp_reginfo[]). */ /* MMU Domain access control / MPU write buffer control */ @@ -898,7 +910,8 @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = { .writefn = dacr_write, .raw_writefn = raw_write, .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s), offsetoflow32(CPUARMState, cp15.dacr_ns) } }, - /* ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs. 
+ /* + * ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs. * For v6 and v5, these mappings are overly broad. */ { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 0, @@ -917,7 +930,8 @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = { }; static const ARMCPRegInfo not_v6_cp_reginfo[] = { - /* Not all pre-v6 cores implemented this WFI, so this is slightly + /* + * Not all pre-v6 cores implemented this WFI, so this is slightly * over-broad. */ { .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2, @@ -926,12 +940,14 @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = { }; static const ARMCPRegInfo not_v7_cp_reginfo[] = { - /* Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which + /* + * Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which * is UNPREDICTABLE; we choose to NOP as most implementations do). */ { .name = "WFI_v6", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4, .access = PL1_W, .type = ARM_CP_WFI }, - /* L1 cache lockdown. Not architectural in v6 and earlier but in practice + /* + * L1 cache lockdown. Not architectural in v6 and earlier but in practice * implemented in 926, 946, 1026, 1136, 1176 and 11MPCore. StrongARM and * OMAPCP will override this space. */ @@ -945,14 +961,16 @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = { { .name = "DUMMY", .cp = 15, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = CP_ANY, .access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, .resetvalue = 0 }, - /* We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR; + /* + * We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR; * implementing it as RAZ means the "debug architecture version" bits * will read as a reserved value, which should cause Linux to not try * to use the debug hardware. */ { .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0, .access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 }, - /* MMU TLB control. Note that the wildcarding means we cover not just + /* + * MMU TLB control. Note that the wildcarding means we cover not just * the unified TLB ops but also the dside/iside/inner-shareable variants. */ { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY, @@ -981,7 +999,8 @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri, /* In ARMv8 most bits of CPACR_EL1 are RES0. */ if (!arm_feature(env, ARM_FEATURE_V8)) { - /* ARMv7 defines bits for unimplemented coprocessors as RAZ/WI. + /* + * ARMv7 defines bits for unimplemented coprocessors as RAZ/WI. * ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP. * TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell. */ @@ -994,7 +1013,8 @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri, value |= (1 << 31); } - /* VFPv3 and upwards with NEON implement 32 double precision + /* + * VFPv3 and upwards with NEON implement 32 double precision * registers (D0-D31). */ if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) { @@ -1036,7 +1056,8 @@ static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri) static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri) { - /* Call cpacr_write() so that we reset with the correct RAO bits set + /* + * Call cpacr_write() so that we reset with the correct RAO bits set * for our CPU features. 
*/ cpacr_write(env, ri, 0); @@ -1076,7 +1097,8 @@ static const ARMCPRegInfo v6_cp_reginfo[] = { { .name = "MVA_prefetch", .cp = 15, .crn = 7, .crm = 13, .opc1 = 0, .opc2 = 1, .access = PL1_W, .type = ARM_CP_NOP }, - /* We need to break the TB after ISB to execute self-modifying code + /* + * We need to break the TB after ISB to execute self-modifying code * correctly and also to take any pending interrupts immediately. * So use arm_cp_write_ignore() function instead of ARM_CP_NOP flag. */ @@ -1091,7 +1113,8 @@ static const ARMCPRegInfo v6_cp_reginfo[] = { .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ifar_s), offsetof(CPUARMState, cp15.ifar_ns) }, .resetvalue = 0, }, - /* Watchpoint Fault Address Register : should actually only be present + /* + * Watchpoint Fault Address Register : should actually only be present * for 1136, 1176, 11MPCore. */ { .name = "WFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 1, @@ -2492,7 +2515,8 @@ static const ARMCPRegInfo v6k_cp_reginfo[] = { static CPAccessResult gt_cntfrq_access(CPUARMState *env, const ARMCPRegInfo *ri, bool isread) { - /* CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero. + /* + * CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero. * Writable only at the highest implemented exception level. */ int el = arm_current_el(env); @@ -2651,7 +2675,8 @@ static CPAccessResult gt_stimer_access(CPUARMState *env, const ARMCPRegInfo *ri, bool isread) { - /* The AArch64 register view of the secure physical timer is + /* + * The AArch64 register view of the secure physical timer is * always accessible from EL3, and configurably accessible from * Secure EL1. */ @@ -2686,7 +2711,8 @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx) ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx]; if (gt->ctl & 1) { - /* Timer enabled: calculate and set current ISTATUS, irq, and + /* + * Timer enabled: calculate and set current ISTATUS, irq, and * reset timer to when ISTATUS next has to change */ uint64_t offset = timeridx == GTIMER_VIRT ? @@ -2709,7 +2735,8 @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx) /* Next transition is when we hit cval */ nexttick = gt->cval + offset; } - /* Note that the desired next expiry time might be beyond the + /* + * Note that the desired next expiry time might be beyond the * signed-64-bit range of a QEMUTimer -- in this case we just * set the timer for as far in the future as possible. When the * timer expires we will reset the timer for any remaining period. @@ -2826,7 +2853,8 @@ static void gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, /* Enable toggled */ gt_recalc_timer(cpu, timeridx); } else if ((oldval ^ value) & 2) { - /* IMASK toggled: don't need to recalculate, + /* + * IMASK toggled: don't need to recalculate, * just set the interrupt line based on ISTATUS */ int irqstate = (oldval & 4) && !(value & 2); @@ -3143,7 +3171,8 @@ static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque) } static const ARMCPRegInfo generic_timer_cp_reginfo[] = { - /* Note that CNTFRQ is purely reads-as-written for the benefit + /* + * Note that CNTFRQ is purely reads-as-written for the benefit * of software; writing it doesn't actually change the timer frequency. * Our reset value matches the fixed frequency we implement the timer at. 
*/ @@ -3306,7 +3335,8 @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = { .readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read, .writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write, }, - /* Secure timer -- this is actually restricted to only EL3 + /* + * Secure timer -- this is actually restricted to only EL3 * and configurably Secure-EL1 via the accessfn. */ { .name = "CNTPS_TVAL_EL1", .state = ARM_CP_STATE_AA64, @@ -3346,7 +3376,8 @@ static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri, #else -/* In user-mode most of the generic timer registers are inaccessible +/* + * In user-mode most of the generic timer registers are inaccessible * however modern kernels (4.12+) allow access to cntvct_el0 */ @@ -3354,7 +3385,8 @@ static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri) { ARMCPU *cpu = env_archcpu(env); - /* Currently we have no support for QEMUTimer in linux-user so we + /* + * Currently we have no support for QEMUTimer in linux-user so we * can't call gt_get_countervalue(env), instead we directly * call the lower level functions. */ @@ -3396,7 +3428,8 @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri, bool isread) { if (ri->opc2 & 4) { - /* The ATS12NSO* operations must trap to EL3 or EL2 if executed in + /* + * The ATS12NSO* operations must trap to EL3 or EL2 if executed in * Secure EL1 (which can only happen if EL3 is AArch64). * They are simply UNDEF if executed from NS EL1. * They function normally from EL2 or EL3. @@ -3554,7 +3587,8 @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value, } } } else { - /* fsr is a DFSR/IFSR value for the short descriptor + /* + * fsr is a DFSR/IFSR value for the short descriptor * translation table format (with WnR always clear). * Convert it to a 32-bit PAR. */ @@ -3836,7 +3870,8 @@ static void pmsav7_rgnr_write(CPUARMState *env, const ARMCPRegInfo *ri, } static const ARMCPRegInfo pmsav7_cp_reginfo[] = { - /* Reset for all these registers is handled in arm_cpu_reset(), + /* + * Reset for all these registers is handled in arm_cpu_reset(), * because the PMSAv7 is also used by M-profile CPUs, which do * not register cpregs but still need the state to be reset. */ @@ -3922,11 +3957,14 @@ static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri, if (!arm_feature(env, ARM_FEATURE_V8)) { if (arm_feature(env, ARM_FEATURE_LPAE) && (value & TTBCR_EAE)) { - /* Pre ARMv8 bits [21:19], [15:14] and [6:3] are UNK/SBZP when - * using Long-desciptor translation table format */ + /* + * Pre ARMv8 bits [21:19], [15:14] and [6:3] are UNK/SBZP when + * using Long-desciptor translation table format + */ value &= ~((7 << 19) | (3 << 14) | (0xf << 3)); } else if (arm_feature(env, ARM_FEATURE_EL3)) { - /* In an implementation that includes the Security Extensions + /* + * In an implementation that includes the Security Extensions * TTBCR has additional fields PD0 [4] and PD1 [5] for * Short-descriptor translation table format. 
*/ @@ -3936,7 +3974,8 @@ static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri, } } - /* Update the masks corresponding to the TCR bank being written + /* + * Update the masks corresponding to the TCR bank being written * Note that we always calculate mask and base_mask, but * they are only used for short-descriptor tables (ie if EAE is 0); * for long-descriptor tables the TCR fields are used differently @@ -3954,7 +3993,8 @@ static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri, TCR *tcr = raw_ptr(env, ri); if (arm_feature(env, ARM_FEATURE_LPAE)) { - /* With LPAE the TTBCR could result in a change of ASID + /* + * With LPAE the TTBCR could result in a change of ASID * via the TTBCR.A1 bit, so do a TLB flush. */ tlb_flush(CPU(cpu)); @@ -3968,7 +4008,8 @@ static void vmsa_ttbcr_reset(CPUARMState *env, const ARMCPRegInfo *ri) { TCR *tcr = raw_ptr(env, ri); - /* Reset both the TCR as well as the masks corresponding to the bank of + /* + * Reset both the TCR as well as the masks corresponding to the bank of * the TCR being reset. */ tcr->raw_tcr = 0; @@ -4100,7 +4141,8 @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = { REGINFO_SENTINEL }; -/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing +/* + * Note that unlike TTBCR, writing to TTBCR2 does not require flushing * qemu tlbs nor adjusting cached masks. */ static const ARMCPRegInfo ttbcr2_reginfo = { @@ -4136,7 +4178,8 @@ static void omap_wfi_write(CPUARMState *env, const ARMCPRegInfo *ri, static void omap_cachemaint_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { - /* On OMAP there are registers indicating the max/min index of dcache lines + /* + * On OMAP there are registers indicating the max/min index of dcache lines * containing a dirty line; cache flush operations have to reset these. */ env->cp15.c15_i_max = 0x000; @@ -4168,7 +4211,8 @@ static const ARMCPRegInfo omap_cp_reginfo[] = { .crm = 8, .opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NO_RAW, .readfn = arm_cp_read_zero, .writefn = omap_wfi_write, }, - /* TODO: Peripheral port remap register: + /* + * TODO: Peripheral port remap register: * On OMAP2 mcr p15, 0, rn, c15, c2, 4 sets up the interrupt controller * base address at $rn & ~0xfff and map size of 0x200 << ($rn & 0xfff), * when MMU is off. @@ -4198,7 +4242,8 @@ static const ARMCPRegInfo xscale_cp_reginfo[] = { .cp = 15, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 1, .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.c1_xscaleauxcr), .resetvalue = 0, }, - /* XScale specific cache-lockdown: since we have no cache we NOP these + /* + * XScale specific cache-lockdown: since we have no cache we NOP these * and hope the guest does not really rely on cache behaviour. */ { .name = "XSCALE_LOCK_ICACHE_LINE", @@ -4217,7 +4262,8 @@ static const ARMCPRegInfo xscale_cp_reginfo[] = { }; static const ARMCPRegInfo dummy_c15_cp_reginfo[] = { - /* RAZ/WI the whole crn=15 space, when we don't have a more specific + /* + * RAZ/WI the whole crn=15 space, when we don't have a more specific * implementation of this implementation-defined space. * Ideally this should eventually disappear in favour of actually * implementing the correct behaviour for all cores. 
@@ -4260,7 +4306,8 @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = { }; static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = { - /* The cache test-and-clean instructions always return (1 << 30) + /* + * The cache test-and-clean instructions always return (1 << 30) * to indicate that there are no dirty cache lines. */ { .name = "TC_DCACHE", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 3, @@ -4298,7 +4345,8 @@ static uint64_t mpidr_read_val(CPUARMState *env) if (arm_feature(env, ARM_FEATURE_V7MP)) { mpidr |= (1U << 31); - /* Cores which are uniprocessor (non-coherent) + /* + * Cores which are uniprocessor (non-coherent) * but still implement the MP extensions set * bit 30. (For instance, Cortex-R5). */ @@ -4501,7 +4549,8 @@ static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env, return CP_ACCESS_OK; } -/* See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions +/* + * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions * Page D4-1736 (DDI0487A.b) */ @@ -4668,7 +4717,8 @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri, static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { - /* Invalidate by VA, EL2 + /* + * Invalidate by VA, EL2 * Currently handles both VAE2 and VALE2, since we don't support * flush-last-level-only. */ @@ -4682,7 +4732,8 @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri, static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { - /* Invalidate by VA, EL3 + /* + * Invalidate by VA, EL3 * Currently handles both VAE3 and VALE3, since we don't support * flush-last-level-only. */ @@ -4707,7 +4758,8 @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri, static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { - /* Invalidate by VA, EL1&0 (AArch64 version). + /* + * Invalidate by VA, EL1&0 (AArch64 version). * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1, * since we don't support flush-for-specific-ASID-only or * flush-last-level-only. @@ -4958,7 +5010,8 @@ static CPAccessResult sp_el0_access(CPUARMState *env, const ARMCPRegInfo *ri, bool isread) { if (!(env->pstate & PSTATE_SP)) { - /* Access to SP_EL0 is undefined if it's being used as + /* + * Access to SP_EL0 is undefined if it's being used as * the stack pointer. */ return CP_ACCESS_TRAP_UNCATEGORIZED; @@ -4998,7 +5051,8 @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri, } if (raw_read(env, ri) == value) { - /* Skip the TLB flush if nothing actually changed; Linux likes + /* + * Skip the TLB flush if nothing actually changed; Linux likes * to do a lot of pointless SCTLR writes. */ return; @@ -5039,7 +5093,8 @@ static void sdcr_write(CPUARMState *env, const ARMCPRegInfo *ri, } static const ARMCPRegInfo v8_cp_reginfo[] = { - /* Minimal set of EL0-visible registers. This will need to be expanded + /* + * Minimal set of EL0-visible registers. This will need to be expanded * significantly for system emulation of AArch64 CPUs. 
*/ { .name = "NZCV", .state = ARM_CP_STATE_AA64, @@ -5314,7 +5369,8 @@ static const ARMCPRegInfo v8_cp_reginfo[] = { .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 0, .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_SVC]) }, - /* We rely on the access checks not allowing the guest to write to the + /* + * We rely on the access checks not allowing the guest to write to the * state field when SPSel indicates that it's being used as the stack * pointer. */ @@ -5510,7 +5566,8 @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask) if (arm_feature(env, ARM_FEATURE_EL3)) { valid_mask &= ~HCR_HCD; } else if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) { - /* Architecturally HCR.TSC is RES0 if EL3 is not implemented. + /* + * Architecturally HCR.TSC is RES0 if EL3 is not implemented. * However, if we're using the SMC PSCI conduit then QEMU is * effectively acting like EL3 firmware and so the guest at * EL2 should retain the ability to prevent EL1 from being @@ -5772,7 +5829,8 @@ static const ARMCPRegInfo el2_cp_reginfo[] = { { .name = "VTCR_EL2", .state = ARM_CP_STATE_AA64, .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2, .access = PL2_RW, - /* no .writefn needed as this can't cause an ASID change; + /* + * no .writefn needed as this can't cause an ASID change; * no .raw_writefn or .resetfn needed as we never use mask/base_mask */ .fieldoffset = offsetof(CPUARMState, cp15.vtcr_el2) }, @@ -5846,7 +5904,8 @@ static const ARMCPRegInfo el2_cp_reginfo[] = { .access = PL2_W, .type = ARM_CP_NO_RAW, .writefn = tlbi_aa64_vae2is_write }, #ifndef CONFIG_USER_ONLY - /* Unlike the other EL2-related AT operations, these must + /* + * Unlike the other EL2-related AT operations, these must * UNDEF from EL3 if EL2 is not implemented, which is why we * define them here rather than with the rest of the AT ops. */ @@ -5858,7 +5917,8 @@ static const ARMCPRegInfo el2_cp_reginfo[] = { .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1, .access = PL2_W, .accessfn = at_s1e2_access, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 }, - /* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE + /* + * The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE * if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3 * with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose * to behave as if SCR.NS was 1. @@ -5871,7 +5931,8 @@ static const ARMCPRegInfo el2_cp_reginfo[] = { .writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC }, { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH, .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0, - /* ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the + /* + * ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the * reset values as IMPDEF. We choose to reset to 3 to comply with * both ARMv7 and ARMv8. */ @@ -5964,7 +6025,8 @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = { static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri, bool isread) { - /* The NSACR is RW at EL3, and RO for NS EL1 and NS EL2. + /* + * The NSACR is RW at EL3, and RO for NS EL1 and NS EL2. * At Secure EL1 it traps to EL3 or EL2. 
*/ if (arm_current_el(env) == 3) { @@ -6012,7 +6074,8 @@ static const ARMCPRegInfo el3_cp_reginfo[] = { { .name = "TCR_EL3", .state = ARM_CP_STATE_AA64, .opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2, .access = PL3_RW, - /* no .writefn needed as this can't cause an ASID change; + /* + * no .writefn needed as this can't cause an ASID change; * we must provide a .raw_writefn and .resetfn because we handle * reset and migration for the AArch32 TTBCR(S), which might be * using mask and base_mask. @@ -6278,7 +6341,8 @@ static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri, static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { - /* Writes to OSLAR_EL1 may update the OS lock status, which can be + /* + * Writes to OSLAR_EL1 may update the OS lock status, which can be * read via a bit in OSLSR_EL1. */ int oslock; @@ -6293,7 +6357,8 @@ static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri, } static const ARMCPRegInfo debug_cp_reginfo[] = { - /* DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped + /* + * DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped * debug components. The AArch64 version of DBGDRAR is named MDRAR_EL1; * unlike DBGDRAR it is never accessible from EL0. * DBGDSAR is deprecated and must RAZ from v8 anyway, so it has no AArch64 @@ -6315,7 +6380,8 @@ static const ARMCPRegInfo debug_cp_reginfo[] = { .access = PL1_RW, .accessfn = access_tda, .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1), .resetvalue = 0 }, - /* MDCCSR_EL0, aka DBGDSCRint. This is a read-only mirror of MDSCR_EL1. + /* + * MDCCSR_EL0, aka DBGDSCRint. This is a read-only mirror of MDSCR_EL1. * We don't implement the configurable EL0 access. */ { .name = "MDCCSR_EL0", .state = ARM_CP_STATE_BOTH, @@ -6338,21 +6404,24 @@ static const ARMCPRegInfo debug_cp_reginfo[] = { .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4, .access = PL1_RW, .accessfn = access_tdosa, .type = ARM_CP_NOP }, - /* Dummy DBGVCR: Linux wants to clear this on startup, but we don't + /* + * Dummy DBGVCR: Linux wants to clear this on startup, but we don't * implement vector catch debug events yet. */ { .name = "DBGVCR", .cp = 14, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0, .access = PL1_RW, .accessfn = access_tda, .type = ARM_CP_NOP }, - /* Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor + /* + * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor * to save and restore a 32-bit guest's DBGVCR) */ { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64, .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0, .access = PL2_RW, .accessfn = access_tda, .type = ARM_CP_NOP }, - /* Dummy MDCCINT_EL1, since we don't implement the Debug Communications + /* + * Dummy MDCCINT_EL1, since we don't implement the Debug Communications * Channel but Linux may try to access this register. The 32-bit * alias is DBGDCCINT. */ @@ -6624,7 +6693,8 @@ static void dbgwvr_write(CPUARMState *env, const ARMCPRegInfo *ri, ARMCPU *cpu = env_archcpu(env); int i = ri->crm; - /* Bits [63:49] are hardwired to the value of bit [48]; that is, the + /* + * Bits [63:49] are hardwired to the value of bit [48]; that is, the * register reads and behaves as if values written are sign extended. * Bits [1:0] are RES0. 
*/ @@ -6752,7 +6822,8 @@ static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri, ARMCPU *cpu = env_archcpu(env); int i = ri->crm; - /* BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only + /* + * BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only * copy of BAS[0]. */ value = deposit64(value, 6, 1, extract64(value, 5, 1)); @@ -6764,7 +6835,8 @@ static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri, static void define_debug_regs(ARMCPU *cpu) { - /* Define v7 and v8 architectural debug registers. + /* + * Define v7 and v8 architectural debug registers. * These are just dummy implementations for now. */ int i; @@ -6927,7 +6999,8 @@ static void define_pmu_regs(ARMCPU *cpu) } } -/* We don't know until after realize whether there's a GICv3 +/* + * We don't know until after realize whether there's a GICv3 * attached, and that is what registers the gicv3 sysregs. * So we have to fill in the GIC fields in ID_PFR/ID_PFR1_EL1/ID_AA64PFR0_EL1 * at runtime. @@ -6956,7 +7029,8 @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri) } #endif -/* Shared logic between LORID and the rest of the LOR* registers. +/* + * Shared logic between LORID and the rest of the LOR* registers. * Secure state exclusion has already been dealt with. */ static CPAccessResult access_lor_ns(CPUARMState *env, @@ -7699,7 +7773,8 @@ void register_cp_regs_for_features(ARMCPU *cpu) define_arm_cp_regs(cpu, cp_reginfo); if (!arm_feature(env, ARM_FEATURE_V8)) { - /* Must go early as it is full of wildcards that may be + /* + * Must go early as it is full of wildcards that may be * overridden by later definitions. */ define_arm_cp_regs(cpu, not_v8_cp_reginfo); @@ -7713,7 +7788,8 @@ void register_cp_regs_for_features(ARMCPU *cpu) .access = PL1_R, .type = ARM_CP_CONST, .accessfn = access_aa32_tid3, .resetvalue = cpu->isar.id_pfr0 }, - /* ID_PFR1 is not a plain ARM_CP_CONST because we don't know + /* + * ID_PFR1 is not a plain ARM_CP_CONST because we don't know * the value of the GIC field until after we define these regs. */ { .name = "ID_PFR1", .state = ARM_CP_STATE_BOTH, @@ -7825,7 +7901,8 @@ void register_cp_regs_for_features(ARMCPU *cpu) define_arm_cp_regs(cpu, not_v7_cp_reginfo); } if (arm_feature(env, ARM_FEATURE_V8)) { - /* AArch64 ID registers, which all have impdef reset values. + /* + * AArch64 ID registers, which all have impdef reset values. * Note that within the ID register ranges the unused slots * must all RAZ, not UNDEF; future architecture versions may * define new registers here. @@ -8149,11 +8226,13 @@ void register_cp_regs_for_features(ARMCPU *cpu) define_one_arm_cp_reg(cpu, &rvbar); } } else { - /* If EL2 is missing but higher ELs are enabled, we need to + /* + * If EL2 is missing but higher ELs are enabled, we need to * register the no_el2 reginfos. */ if (arm_feature(env, ARM_FEATURE_EL3)) { - /* When EL3 exists but not EL2, VPIDR and VMPIDR take the value + /* + * When EL3 exists but not EL2, VPIDR and VMPIDR take the value * of MIDR_EL1 and MPIDR_EL1. 
*/ ARMCPRegInfo vpidr_regs[] = { @@ -8193,7 +8272,8 @@ void register_cp_regs_for_features(ARMCPU *cpu) define_arm_cp_regs(cpu, el3_regs); } - /* The behaviour of NSACR is sufficiently various that we don't + /* + * The behaviour of NSACR is sufficiently various that we don't * try to describe it in a single reginfo: * if EL3 is 64 bit, then trap to EL3 from S EL1, * reads as constant 0xc00 from NS EL1 and NS EL2 @@ -8285,13 +8365,15 @@ void register_cp_regs_for_features(ARMCPU *cpu) if (cpu_isar_feature(aa32_jazelle, cpu)) { define_arm_cp_regs(cpu, jazelle_regs); } - /* Slightly awkwardly, the OMAP and StrongARM cores need all of + /* + * Slightly awkwardly, the OMAP and StrongARM cores need all of * cp15 crn=0 to be writes-ignored, whereas for other cores they should * be read-only (ie write causes UNDEF exception). */ { ARMCPRegInfo id_pre_v8_midr_cp_reginfo[] = { - /* Pre-v8 MIDR space. + /* + * Pre-v8 MIDR space. * Note that the MIDR isn't a simple constant register because * of the TI925 behaviour where writes to another register can * cause the MIDR value to change. @@ -8395,7 +8477,8 @@ void register_cp_regs_for_features(ARMCPU *cpu) if (arm_feature(env, ARM_FEATURE_OMAPCP) || arm_feature(env, ARM_FEATURE_STRONGARM)) { ARMCPRegInfo *r; - /* Register the blanket "writes ignored" value first to cover the + /* + * Register the blanket "writes ignored" value first to cover the * whole space. Then update the specific ID registers to allow write * access, so that they ignore writes rather than causing them to * UNDEF. @@ -8757,7 +8840,8 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, int crm, int opc1, int opc2, const char *name) { - /* Private utility function for define_one_arm_cp_reg_with_opaque(): + /* + * Private utility function for define_one_arm_cp_reg_with_opaque(): * add a single reginfo struct to the hash table. */ uint32_t *key = g_new(uint32_t, 1); @@ -8766,13 +8850,15 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0; r2->name = g_strdup(name); - /* Reset the secure state to the specific incoming state. This is + /* + * Reset the secure state to the specific incoming state. This is * necessary as the register may have been defined with both states. */ r2->secure = secstate; if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) { - /* Register is banked (using both entries in array). + /* + * Register is banked (using both entries in array). * Overwriting fieldoffset as the array is only used to define * banked registers but later only fieldoffset is used. */ @@ -8781,7 +8867,8 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, if (state == ARM_CP_STATE_AA32) { if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) { - /* If the register is banked then we don't need to migrate or + /* + * If the register is banked then we don't need to migrate or * reset the 32-bit instance in certain cases: * * 1) If the register has both 32-bit and 64-bit instances then we @@ -8796,15 +8883,15 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, r2->type |= ARM_CP_ALIAS; } } else if ((secstate != r->secure) && !ns) { - /* The register is not banked so we only want to allow migration of + /* + * The register is not banked so we only want to allow migration of * the non-secure instance. */ r2->type |= ARM_CP_ALIAS; } if (r->state == ARM_CP_STATE_BOTH) { - /* We assume it is a cp15 register if the .cp field is left unset. 
- */ + /* We assume it is a cp15 register if the .cp field is left unset */ if (r2->cp == 0) { r2->cp = 15; } @@ -8817,7 +8904,8 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, } } if (state == ARM_CP_STATE_AA64) { - /* To allow abbreviation of ARMCPRegInfo + /* + * To allow abbreviation of ARMCPRegInfo * definitions, we treat cp == 0 as equivalent to * the value for "standard guest-visible sysreg". * STATE_BOTH definitions are also always "standard @@ -8835,17 +8923,20 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, if (opaque) { r2->opaque = opaque; } - /* reginfo passed to helpers is correct for the actual access, + /* + * reginfo passed to helpers is correct for the actual access, * and is never ARM_CP_STATE_BOTH: */ r2->state = state; - /* Make sure reginfo passed to helpers for wildcarded regs + /* + * Make sure reginfo passed to helpers for wildcarded regs * has the correct crm/opc1/opc2 for this reg, not CP_ANY: */ r2->crm = crm; r2->opc1 = opc1; r2->opc2 = opc2; - /* By convention, for wildcarded registers only the first + /* + * By convention, for wildcarded registers only the first * entry is used for migration; the others are marked as * ALIAS so we don't try to transfer the register * multiple times. Special registers (ie NOP/WFI) are @@ -8860,7 +8951,8 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB; } - /* Check that raw accesses are either forbidden or handled. Note that + /* + * Check that raw accesses are either forbidden or handled. Note that * we can't assert this earlier because the setup of fieldoffset for * banked registers has to be done first. */ @@ -8868,9 +8960,7 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, assert(!raw_accessors_invalid(r2)); } - /* Overriding of an existing definition must be explicitly - * requested. - */ + /* Overriding of an existing definition must be explicitly requested. */ if (!(r->type & ARM_CP_OVERRIDE)) { ARMCPRegInfo *oldreg; oldreg = g_hash_table_lookup(cpu->cp_regs, key); @@ -8890,7 +8980,8 @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, const ARMCPRegInfo *r, void *opaque) { - /* Define implementations of coprocessor registers. + /* + * Define implementations of coprocessor registers. * We store these in a hashtable because typically * there are less than 150 registers in a space which * is 16*16*16*8*8 = 262144 in size. @@ -8955,7 +9046,8 @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, default: g_assert_not_reached(); } - /* The AArch64 pseudocode CheckSystemAccess() specifies that op1 + /* + * The AArch64 pseudocode CheckSystemAccess() specifies that op1 * encodes a minimum access level for the register. We roll this * runtime check into our general permission check code, so check * here that the reginfo's specified permissions are strict enough @@ -8998,10 +9090,11 @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, assert((r->access & ~mask) == 0); } - /* Check that the register definition has enough info to handle + /* + * Check that the register definition has enough info to handle * reads and writes if they are permitted. 
*/ - if (!(r->type & (ARM_CP_SPECIAL|ARM_CP_CONST))) { + if (!(r->type & (ARM_CP_SPECIAL | ARM_CP_CONST))) { if (r->access & PL3_R) { assert((r->fieldoffset || (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) || @@ -9024,7 +9117,8 @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, continue; } if (state == ARM_CP_STATE_AA32) { - /* Under AArch32 CP registers can be common + /* + * Under AArch32 CP registers can be common * (same for secure and non-secure world) or banked. */ char *name; @@ -9048,8 +9142,10 @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, break; } } else { - /* AArch64 registers get mapped to non-secure instance - * of AArch32 */ + /* + * AArch64 registers get mapped to non-secure + * instance of AArch32 + */ add_cpreg_to_hashtable(cpu, r, opaque, state, ARM_CP_SECSTATE_NS, crm, opc1, opc2, r->name); From patchwork Fri Jun 4 15:52:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454091 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp571762jae; Fri, 4 Jun 2021 09:36:41 -0700 (PDT) X-Google-Smtp-Source: ABdhPJy4EdsslqSShxKbxPe+dekORG3JKtB2+NYZg0vUXYH2nw1kUsmqsqt7TEUBfUIxCKNru0Sf X-Received: by 2002:a05:6402:1c83:: with SMTP id cy3mr5556840edb.108.1622824600828; Fri, 04 Jun 2021 09:36:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824600; cv=none; d=google.com; s=arc-20160816; b=jBpq8Ba7SOn+umjN2jZaG98DBiFQKK46pFMCfWZfcIysFnUuLrTP+GReIaImf4GwE8 9sJx5e8TrirO6JlmMyoHiTg78i1/dj1rWqed25TefB6Trd3XqU2GNdH+9qkB2AYAd2pa FVbWDrVj2j+lUSBXfJ40lSqQwek5lcczKbtHwwTjIMexwslPOQ7RT25iKR0nkVyDWIQh 87tzoxcpOoOCo/1l+AaVfylMnKY+/c4KFgYh9J8ntVn7+VAwjvVXaq515KX4gTPiucJW TBJ9U3RaIOeutY+E5CBvi2jNyCJfZeYdg/KxXiFzyvswWA3P7xK/p/+POV1ist9mQC74 7tIg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=9IAZoH1FQVFYKQUbTP+HG3rskhG4h312NYkS47dna/I=; b=cBv8ON6oewCyLOdMyNLTCS/x1qdRUhTnftWepOGWf19elbHtaMQPEeepIqHeoWrr89 XGkLUT9VsgxO1TdGWYQJWX76d3O410tw+ewoGIIxWVBceGJ1EKUWKJ4o/Tv2pHPBjeR8 hox10KsbOMsn0mG7bELqR3dP6s2NTw651NQC7zHaNA9QF0JUvUX07V12sLmKJN1dXG+Y 4rd3T0sNq9eR7bvGo7tDuAc1aWwrhNeuLEq9TShQThD/tCHtcUkkog/tZJT8sF/haJYI 5WHzZT8dzKyiJekLJB8QKuOipjGxENJT/SY2y9KOYhuPw0DkOi2o9oi7kq2Ur3K+PA/p 7Lsg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=KyX7j3Ww; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id c23si752724ejc.734.2021.06.04.09.36.38 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:36:40 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=KyX7j3Ww; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:42304 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCoH-0007AG-Ew for patch@linaro.org; Fri, 04 Jun 2021 12:36:37 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:48804) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpCIL-0000Qg-GY for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:03:37 -0400 Received: from mail-wm1-x32d.google.com ([2a00:1450:4864:20::32d]:54128) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpCHo-0005vt-PD for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:03:37 -0400 Received: by mail-wm1-x32d.google.com with SMTP id h3so5688409wmq.3 for ; Fri, 04 Jun 2021 09:03:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=9IAZoH1FQVFYKQUbTP+HG3rskhG4h312NYkS47dna/I=; b=KyX7j3WwfsSw17NIdi5Qk4YGnD0DAB/Yz6xbKS2pdYFjJgf1JSSQevF3XvG5Hq/JXg uZA61OnBowgdtUyz1TE+NytVVKCUE17xMd3xFgGqW4yjJv4ONjjVxfLg1Ei8B9FCy1NC BnGtkaXaj/vBqvNoI8BdqJVLiIvf1UgoEkFqreCC7bkKAJOvx7G5xTImI7gNq2Z2HHAo 4btqRb8klVaEVyNNGhnXBtZGs9y4NTiW77oby4vSuNk8G9piGLF9vxvI/0rhPUW25ZrQ 0jldm/HsvP6i2xP4QzjJwQEGCuAv1ANH09vv1Ucc/SIyCjPRQ24nA+9tdF9eswToC+HK VQsg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=9IAZoH1FQVFYKQUbTP+HG3rskhG4h312NYkS47dna/I=; b=fvyChdXcPh/qTMFEUZq1NKJEss1fqqVIrixeZ3ZDLDuxbvlS0E8mNlH6+BS44Ip9zD oL6VjKO/B0o5IAc6cV6V+GyVTHotylN/ya/Fw/pFPrz1TUDfW1zvd1nl3wQl2W6Ek/Db 7KbV1kzZubgF6cgsNTA1ZnyBnYqWxwlWPMHvmigIIka85TkA7Ts06Pd6w/wzhWcbFKAc YATVqLoS3rhPKSvMPup7CE6EJ8ZZLbEglEFNTnozgcomxDpO/b9Q9OrWAald6fK3nWd8 9v68obRZ7/+lKEcfLTRo9rDGaUw3rD6kFd1P1/f/v54E5Lkkr0PQDplw7/nSCNNUIreS S0vg== X-Gm-Message-State: AOAM5311gCk/pzloLtsWtGGF4Ga0UG9umpNumoRZqqqy04djenrVONWn eGiFYOpgRVirZNzXQQ41xG5Acw== X-Received: by 2002:a1c:282:: with SMTP id 124mr4485073wmc.82.1622822579539; Fri, 04 Jun 2021 09:02:59 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id x11sm7147165wru.87.2021.06.04.09.02.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:02:54 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 3D4361FFB3; Fri, 4 Jun 2021 16:53:16 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 28/99] target/arm: split cpregs from tcg/helper.c Date: Fri, 4 Jun 2021 16:52:01 +0100 Message-Id: <20210604155312.15902-29-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32d; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Richard Henderson , qemu-arm@nongnu.org, Claudio Fontana , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana give them their own common module cpregs.c, and an interface cpregs.h. Extract the raw cpustate list to its own module. This is more or less needed for KVM too. For the tcg-specific registers, stuff them into tcg/cpregs.c As a result, the monster that is tcg/helper.c is a bit less scary, and a lot of stuff is removed from cpu.h too, relegated to cpregs.h. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/cpregs.h | 515 ++ target/arm/cpu.h | 480 -- hw/arm/pxa2xx.c | 1 + hw/arm/pxa2xx_pic.c | 1 + hw/intc/arm_gicv3_cpuif.c | 1 + hw/intc/arm_gicv3_kvm.c | 1 + target/arm/cpregs.c | 380 ++ target/arm/cpu.c | 1 + target/arm/cpu64.c | 2 +- target/arm/cpu_tcg.c | 1 + target/arm/cpustate-list.c | 146 + target/arm/gdbstub.c | 1 + target/arm/machine.c | 1 + target/arm/tcg/cpregs.c | 7674 +++++++++++++++++++++++++++ target/arm/tcg/helper.c | 9051 +------------------------------- target/arm/tcg/op_helper.c | 1 + target/arm/tcg/translate-a64.c | 1 + target/arm/tcg/translate.c | 1 + target/arm/meson.build | 2 + target/arm/tcg/meson.build | 1 + 20 files changed, 9005 insertions(+), 9257 deletions(-) create mode 100644 target/arm/cpregs.h create mode 100644 target/arm/cpregs.c create mode 100644 target/arm/cpustate-list.c create mode 100644 target/arm/tcg/cpregs.c -- 2.20.1 diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h new file mode 100644 index 0000000000..a4e62d8f3d --- /dev/null +++ b/target/arm/cpregs.h @@ -0,0 +1,515 @@ +/* + * ARM CP registers + * + * This code is licensed under the GNU GPL v2 or later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#ifndef CPREGS_H +#define CPREGS_H + +/* + * Interface for defining coprocessor registers. + * Registers are defined in tables of arm_cp_reginfo structs + * which are passed to define_arm_cp_regs(). + */ + +/* + * When looking up a coprocessor register we look for it + * via an integer which encodes all of: + * coprocessor number + * Crn, Crm, opc1, opc2 fields + * 32 or 64 bit register (ie is it accessed via MRC/MCR + * or via MRRC/MCRR?) + * non-secure/secure bank (AArch32 only) + * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field. + * (In this case crn and opc2 should be zero.) + * For AArch64, there is no 32/64 bit size distinction; + * instead all registers have a 2 bit op0, 3 bit op1 and op2, + * and 4 bit CRn and CRm. 
The encoding patterns are chosen + * to be easy to convert to and from the KVM encodings, and also + * so that the hashtable can contain both AArch32 and AArch64 + * registers (to allow for interprocessing where we might run + * 32 bit code on a 64 bit core). + */ +/* + * This bit is private to our hashtable cpreg; in KVM register + * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64 + * in the upper bits of the 64 bit ID. + */ +#define CP_REG_AA64_SHIFT 28 +#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT) + +/* + * To enable banking of coprocessor registers depending on ns-bit we + * add a bit to distinguish between secure and non-secure cpregs in the + * hashtable. + */ +#define CP_REG_NS_SHIFT 29 +#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT) + +#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \ + ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \ + ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2)) + +#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \ + (CP_REG_AA64_MASK | \ + ((cp) << CP_REG_ARM_COPROC_SHIFT) | \ + ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \ + ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \ + ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \ + ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \ + ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT)) + +/* + * Convert a full 64 bit KVM register ID to the truncated 32 bit + * version used as a key for the coprocessor register hashtable + */ +static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid) +{ + uint32_t cpregid = kvmid; + if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) { + cpregid |= CP_REG_AA64_MASK; + } else { + if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) { + cpregid |= (1 << 15); + } + + /* + * KVM is always non-secure so add the NS flag on AArch32 register + * entries. + */ + cpregid |= 1 << CP_REG_NS_SHIFT; + } + return cpregid; +} + +/* + * Convert a truncated 32 bit hashtable key into the full + * 64 bit KVM register ID. + */ +static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid) +{ + uint64_t kvmid; + + if (cpregid & CP_REG_AA64_MASK) { + kvmid = cpregid & ~CP_REG_AA64_MASK; + kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64; + } else { + kvmid = cpregid & ~(1 << 15); + if (cpregid & (1 << 15)) { + kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM; + } else { + kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM; + } + } + return kvmid; +} + +/* + * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a + * special-behaviour cp reg and bits [11..8] indicate what behaviour + * it has. Otherwise it is a simple cp reg, where CONST indicates that + * TCG can assume the value to be constant (ie load at translate time) + * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END + * indicates that the TB should not be ended after a write to this register + * (the default is that the TB ends after cp writes). OVERRIDE permits + * a register definition to override a previous definition for the + * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the + * old must have the OVERRIDE bit set. + * ALIAS indicates that this register is an alias view of some underlying + * state which is also visible via another register, and that the other + * register is handling migration and reset; registers marked ALIAS will not be + * migrated but may have their state set by syncing of register state from KVM. 
+ * NO_RAW indicates that this register has no underlying state and does not + * support raw access for state saving/loading; it will not be used for either + * migration or KVM state synchronization. (Typically this is for "registers" + * which are actually used as instructions for cache maintenance and so on.) + * IO indicates that this register does I/O and therefore its accesses + * need to be marked with gen_io_start() and also end the TB. In particular, + * registers which implement clocks or timers require this. + * RAISES_EXC is for when the read or write hook might raise an exception; + * the generated code will synchronize the CPU state before calling the hook + * so that it is safe for the hook to call raise_exception(). + * NEWEL is for writes to registers that might change the exception + * level - typically on older ARM chips. For those cases we need to + * re-read the new el when recomputing the translation flags. + */ +#define ARM_CP_SPECIAL 0x0001 +#define ARM_CP_CONST 0x0002 +#define ARM_CP_64BIT 0x0004 +#define ARM_CP_SUPPRESS_TB_END 0x0008 +#define ARM_CP_OVERRIDE 0x0010 +#define ARM_CP_ALIAS 0x0020 +#define ARM_CP_IO 0x0040 +#define ARM_CP_NO_RAW 0x0080 +#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100) +#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200) +#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300) +#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400) +#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500) +#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600) +#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700) +#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA +#define ARM_CP_FPU 0x1000 +#define ARM_CP_SVE 0x2000 +#define ARM_CP_NO_GDB 0x4000 +#define ARM_CP_RAISES_EXC 0x8000 +#define ARM_CP_NEWEL 0x10000 +/* Used only as a terminator for ARMCPRegInfo lists */ +#define ARM_CP_SENTINEL 0xfffff +/* Mask of only the flag bits in a type field */ +#define ARM_CP_FLAG_MASK 0x1f0ff + +/* + * Valid values for ARMCPRegInfo state field, indicating which of + * the AArch32 and AArch64 execution states this register is visible in. + * If the reginfo doesn't explicitly specify then it is AArch32 only. + * If the reginfo is declared to be visible in both states then a second + * reginfo is synthesised for the AArch32 view of the AArch64 register, + * such that the AArch32 view is the lower 32 bits of the AArch64 one. + * Note that we rely on the values of these enums as we iterate through + * the various states in some places. + */ +enum { + ARM_CP_STATE_AA32 = 0, + ARM_CP_STATE_AA64 = 1, + ARM_CP_STATE_BOTH = 2, +}; + +/* + * ARM CP register secure state flags. These flags identify security state + * attributes for a given CP register entry. + * The existence of both or neither secure and non-secure flags indicates that + * the register has both a secure and non-secure hash entry. A single one of + * these flags causes the register to only be hashed for the specified + * security state. + * Although definitions may have any combination of the S/NS bits, each + * registered entry will only have one to identify whether the entry is secure + * or non-secure. + */ +enum { + ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */ + ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */ +}; + +/* + * Return true if cptype is a valid type field. This is used to try to + * catch errors where the sentinel has been accidentally left off the end + * of a list of registers. 
+ */ +static inline bool cptype_valid(int cptype) +{ + return ((cptype & ~ARM_CP_FLAG_MASK) == 0) + || ((cptype & ARM_CP_SPECIAL) && + ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL)); +} + +/* + * Access rights: + * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM + * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and + * PL2 (hyp). The other level which has Read and Write bits is Secure PL1 + * (ie any of the privileged modes in Secure state, or Monitor mode). + * If a register is accessible in one privilege level it's always accessible + * in higher privilege levels too. Since "Secure PL1" also follows this rule + * (ie anything visible in PL2 is visible in S-PL1, some things are only + * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the + * terminology a little and call this PL3. + * In AArch64 things are somewhat simpler as the PLx bits line up exactly + * with the ELx exception levels. + * + * If access permissions for a register are more complex than can be + * described with these bits, then use a laxer set of restrictions, and + * do the more restrictive/complex check inside a helper function. + */ +#define PL3_R 0x80 +#define PL3_W 0x40 +#define PL2_R (0x20 | PL3_R) +#define PL2_W (0x10 | PL3_W) +#define PL1_R (0x08 | PL2_R) +#define PL1_W (0x04 | PL2_W) +#define PL0_R (0x02 | PL1_R) +#define PL0_W (0x01 | PL1_W) + +/* + * For user-mode some registers are accessible to EL0 via a kernel + * trap-and-emulate ABI. In this case we define the read permissions + * as actually being PL0_R. However some bits of any given register + * may still be masked. + */ +#ifdef CONFIG_USER_ONLY +#define PL0U_R PL0_R +#else +#define PL0U_R PL1_R +#endif + +#define PL3_RW (PL3_R | PL3_W) +#define PL2_RW (PL2_R | PL2_W) +#define PL1_RW (PL1_R | PL1_W) +#define PL0_RW (PL0_R | PL0_W) + +typedef enum CPAccessResult { + /* Access is permitted */ + CP_ACCESS_OK = 0, + /* + * Access fails due to a configurable trap or enable which would + * result in a categorized exception syndrome giving information about + * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6, + * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or + * PL1 if in EL0, otherwise to the current EL). + */ + CP_ACCESS_TRAP = 1, + /* + * Access fails and results in an exception syndrome 0x0 ("uncategorized"). + * Note that this is not a catch-all case -- the set of cases which may + * result in this failure is specifically defined by the architecture. + */ + CP_ACCESS_TRAP_UNCATEGORIZED = 2, + /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */ + CP_ACCESS_TRAP_EL2 = 3, + CP_ACCESS_TRAP_EL3 = 4, + /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */ + CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5, + CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6, + /* + * Access fails and results in an exception syndrome for an FP access, + * trapped directly to EL2 or EL3 + */ + CP_ACCESS_TRAP_FP_EL2 = 7, + CP_ACCESS_TRAP_FP_EL3 = 8, +} CPAccessResult; + +/* + * Access functions for coprocessor registers. These cannot fail and + * may not raise exceptions. + */ +typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque); +typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque, + uint64_t value); +/* Access permission check functions for coprocessor registers. 
*/ +typedef CPAccessResult CPAccessFn(CPUARMState *env, + const ARMCPRegInfo *opaque, + bool isread); +/* Hook function for register reset */ +typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque); + +#define CP_ANY 0xff + +/* Definition of an ARM coprocessor register */ +struct ARMCPRegInfo { + /* Name of register (useful mainly for debugging, need not be unique) */ + const char *name; + /* + * Location of register: coprocessor number and (crn,crm,opc1,opc2) + * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a + * 'wildcard' field -- any value of that field in the MRC/MCR insn + * will be decoded to this register. The register read and write + * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2 + * used by the program, so it is possible to register a wildcard and + * then behave differently on read/write if necessary. + * For 64 bit registers, only crm and opc1 are relevant; crn and opc2 + * must both be zero. + * For AArch64-visible registers, opc0 is also used. + * Since there are no "coprocessors" in AArch64, cp is purely used as a + * way to distinguish (for KVM's benefit) guest-visible system registers + * from demuxed ones provided to preserve the "no side effects on + * KVM register read/write from QEMU" semantics. cp==0x13 is guest + * visible (to match KVM's encoding); cp==0 will be converted to + * cp==0x13 when the ARMCPRegInfo is registered, for convenience. + */ + uint8_t cp; + uint8_t crn; + uint8_t crm; + uint8_t opc0; + uint8_t opc1; + uint8_t opc2; + /* Execution state in which this register is visible: ARM_CP_STATE_* */ + int state; + /* Register type: ARM_CP_* bits/values */ + int type; + /* Access rights: PL*_[RW] */ + int access; + /* Security state: ARM_CP_SECSTATE_* bits/values */ + int secure; + /* + * The opaque pointer passed to define_arm_cp_regs_with_opaque() when + * this register was defined: can be used to hand data through to the + * register read/write functions, since they are passed the ARMCPRegInfo*. + */ + void *opaque; + /* + * Value of this register, if it is ARM_CP_CONST. Otherwise, if + * fieldoffset is non-zero, the reset value of the register. + */ + uint64_t resetvalue; + /* + * Offset of the field in CPUARMState for this register. + * + * This is not needed if either: + * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs + * 2. both readfn and writefn are specified + */ + ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */ + + /* + * Offsets of the secure and non-secure fields in CPUARMState for the + * register if it is banked. These fields are only used during the static + * registration of a register. During hashing the bank associated + * with a given security state is copied to fieldoffset which is used from + * there on out. + * + * It is expected that register definitions use either fieldoffset or + * bank_fieldoffsets in the definition but not both. It is also expected + * that both bank offsets are set when defining a banked register. This + * use indicates that a register is banked. + */ + ptrdiff_t bank_fieldoffsets[2]; + + /* + * Function for making any access checks for this register in addition to + * those specified by the 'access' permissions bits. If NULL, no extra + * checks required. The access check is performed at runtime, not at + * translate time. + */ + CPAccessFn *accessfn; + /* + * Function for handling reads of this register. If NULL, then reads + * will be done by loading from the offset into CPUARMState specified + * by fieldoffset. 
+ */ + CPReadFn *readfn; + /* + * Function for handling writes of this register. If NULL, then writes + * will be done by writing to the offset into CPUARMState specified + * by fieldoffset. + */ + CPWriteFn *writefn; + /* + * Function for doing a "raw" read; used when we need to copy + * coprocessor state to the kernel for KVM or out for + * migration. This only needs to be provided if there is also a + * readfn and it has side effects (for instance clear-on-read bits). + */ + CPReadFn *raw_readfn; + /* + * Function for doing a "raw" write; used when we need to copy KVM + * kernel coprocessor state into userspace, or for inbound + * migration. This only needs to be provided if there is also a + * writefn and it masks out "unwritable" bits or has write-one-to-clear + * or similar behaviour. + */ + CPWriteFn *raw_writefn; + /* + * Function for resetting the register. If NULL, then reset will be done + * by writing resetvalue to the field specified in fieldoffset. If + * fieldoffset is 0 then no reset will be done. + */ + CPResetFn *resetfn; + + /* + * "Original" writefn and readfn. + * For ARMv8.1-VHE register aliases, we overwrite the read/write + * accessor functions of various EL1/EL0 to perform the runtime + * check for which sysreg should actually be modified, and then + * forwards the operation. Before overwriting the accessors, + * the original function is copied here, so that accesses that + * really do go to the EL1/EL0 version proceed normally. + * (The corresponding EL2 register is linked via opaque.) + */ + CPReadFn *orig_readfn; + CPWriteFn *orig_writefn; +}; + +/* + * Macros which are lvalues for the field in CPUARMState for the + * ARMCPRegInfo *ri. + */ +#define CPREG_FIELD32(env, ri) \ + (*(uint32_t *)((char *)(env) + (ri)->fieldoffset)) +#define CPREG_FIELD64(env, ri) \ + (*(uint64_t *)((char *)(env) + (ri)->fieldoffset)) + +#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL } + +void define_arm_cp_regs_with_opaque(ARMCPU *cpu, + const ARMCPRegInfo *regs, void *opaque); +void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, + const ARMCPRegInfo *regs, void *opaque); +static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs) +{ + define_arm_cp_regs_with_opaque(cpu, regs, 0); +} +static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs) +{ + define_one_arm_cp_reg_with_opaque(cpu, regs, 0); +} +const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp); + +/* + * Definition of an ARM co-processor register as viewed from + * userspace. This is used for presenting sanitised versions of + * registers to userspace when emulating the Linux AArch64 CPU + * ID/feature ABI (advertised as HWCAP_CPUID). + */ +typedef struct ARMCPRegUserSpaceInfo { + /* Name of register */ + const char *name; + + /* Is the name actually a glob pattern */ + bool is_glob; + + /* Only some bits are exported to user space */ + uint64_t exported_bits; + + /* Fixed bits are applied after the mask */ + uint64_t fixed_bits; +} ARMCPRegUserSpaceInfo; + +#define REGUSERINFO_SENTINEL { .name = NULL } + +/* CPWriteFn that can be used to implement writes-ignored behaviour */ +void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value); +/* CPReadFn that can be used for read-as-zero behaviour */ +uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri); + +/* + * CPResetFn that does nothing, for use if no reset is required even + * if fieldoffset is non zero. 
+ */ +void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque); + +/* + * Return true if this reginfo struct's field in the cpu state struct + * is 64 bits wide. + */ +static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri) +{ + return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT); +} + +static inline bool cp_access_ok(int current_el, + const ARMCPRegInfo *ri, int isread) +{ + return (ri->access >> ((current_el * 2) + isread)) & 1; +} + +/* Raw read of a coprocessor register (as needed for migration, etc) */ +uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri); + +#ifdef CONFIG_TCG +/* Modify ARMCPRegInfo for access from userspace. */ +void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods); +#endif + +/* + * default raw read/write of coprocessor register field, + * behavior if no other function defined, and not const. + */ +uint64_t raw_read(CPUARMState *env, const ARMCPRegInfo *ri); +void raw_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value); + +#endif /* CPREGS_H */ diff --git a/target/arm/cpu.h b/target/arm/cpu.h index af788c7801..adb9d2828d 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -2424,235 +2424,6 @@ static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure) } #endif -/* Interface for defining coprocessor registers. - * Registers are defined in tables of arm_cp_reginfo structs - * which are passed to define_arm_cp_regs(). - */ - -/* When looking up a coprocessor register we look for it - * via an integer which encodes all of: - * coprocessor number - * Crn, Crm, opc1, opc2 fields - * 32 or 64 bit register (ie is it accessed via MRC/MCR - * or via MRRC/MCRR?) - * non-secure/secure bank (AArch32 only) - * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field. - * (In this case crn and opc2 should be zero.) - * For AArch64, there is no 32/64 bit size distinction; - * instead all registers have a 2 bit op0, 3 bit op1 and op2, - * and 4 bit CRn and CRm. The encoding patterns are chosen - * to be easy to convert to and from the KVM encodings, and also - * so that the hashtable can contain both AArch32 and AArch64 - * registers (to allow for interprocessing where we might run - * 32 bit code on a 64 bit core). - */ -/* This bit is private to our hashtable cpreg; in KVM register - * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64 - * in the upper bits of the 64 bit ID. - */ -#define CP_REG_AA64_SHIFT 28 -#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT) - -/* To enable banking of coprocessor registers depending on ns-bit we - * add a bit to distinguish between secure and non-secure cpregs in the - * hashtable. 
- */ -#define CP_REG_NS_SHIFT 29 -#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT) - -#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \ - ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \ - ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2)) - -#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \ - (CP_REG_AA64_MASK | \ - ((cp) << CP_REG_ARM_COPROC_SHIFT) | \ - ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \ - ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \ - ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \ - ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \ - ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT)) - -/* Convert a full 64 bit KVM register ID to the truncated 32 bit - * version used as a key for the coprocessor register hashtable - */ -static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid) -{ - uint32_t cpregid = kvmid; - if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) { - cpregid |= CP_REG_AA64_MASK; - } else { - if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) { - cpregid |= (1 << 15); - } - - /* KVM is always non-secure so add the NS flag on AArch32 register - * entries. - */ - cpregid |= 1 << CP_REG_NS_SHIFT; - } - return cpregid; -} - -/* Convert a truncated 32 bit hashtable key into the full - * 64 bit KVM register ID. - */ -static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid) -{ - uint64_t kvmid; - - if (cpregid & CP_REG_AA64_MASK) { - kvmid = cpregid & ~CP_REG_AA64_MASK; - kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64; - } else { - kvmid = cpregid & ~(1 << 15); - if (cpregid & (1 << 15)) { - kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM; - } else { - kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM; - } - } - return kvmid; -} - -/* ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a - * special-behaviour cp reg and bits [11..8] indicate what behaviour - * it has. Otherwise it is a simple cp reg, where CONST indicates that - * TCG can assume the value to be constant (ie load at translate time) - * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END - * indicates that the TB should not be ended after a write to this register - * (the default is that the TB ends after cp writes). OVERRIDE permits - * a register definition to override a previous definition for the - * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the - * old must have the OVERRIDE bit set. - * ALIAS indicates that this register is an alias view of some underlying - * state which is also visible via another register, and that the other - * register is handling migration and reset; registers marked ALIAS will not be - * migrated but may have their state set by syncing of register state from KVM. - * NO_RAW indicates that this register has no underlying state and does not - * support raw access for state saving/loading; it will not be used for either - * migration or KVM state synchronization. (Typically this is for "registers" - * which are actually used as instructions for cache maintenance and so on.) - * IO indicates that this register does I/O and therefore its accesses - * need to be marked with gen_io_start() and also end the TB. In particular, - * registers which implement clocks or timers require this. - * RAISES_EXC is for when the read or write hook might raise an exception; - * the generated code will synchronize the CPU state before calling the hook - * so that it is safe for the hook to call raise_exception(). - * NEWEL is for writes to registers that might change the exception - * level - typically on older ARM chips. 
For those cases we need to - * re-read the new el when recomputing the translation flags. - */ -#define ARM_CP_SPECIAL 0x0001 -#define ARM_CP_CONST 0x0002 -#define ARM_CP_64BIT 0x0004 -#define ARM_CP_SUPPRESS_TB_END 0x0008 -#define ARM_CP_OVERRIDE 0x0010 -#define ARM_CP_ALIAS 0x0020 -#define ARM_CP_IO 0x0040 -#define ARM_CP_NO_RAW 0x0080 -#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100) -#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200) -#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300) -#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400) -#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500) -#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600) -#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700) -#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA -#define ARM_CP_FPU 0x1000 -#define ARM_CP_SVE 0x2000 -#define ARM_CP_NO_GDB 0x4000 -#define ARM_CP_RAISES_EXC 0x8000 -#define ARM_CP_NEWEL 0x10000 -/* Used only as a terminator for ARMCPRegInfo lists */ -#define ARM_CP_SENTINEL 0xfffff -/* Mask of only the flag bits in a type field */ -#define ARM_CP_FLAG_MASK 0x1f0ff - -/* Valid values for ARMCPRegInfo state field, indicating which of - * the AArch32 and AArch64 execution states this register is visible in. - * If the reginfo doesn't explicitly specify then it is AArch32 only. - * If the reginfo is declared to be visible in both states then a second - * reginfo is synthesised for the AArch32 view of the AArch64 register, - * such that the AArch32 view is the lower 32 bits of the AArch64 one. - * Note that we rely on the values of these enums as we iterate through - * the various states in some places. - */ -enum { - ARM_CP_STATE_AA32 = 0, - ARM_CP_STATE_AA64 = 1, - ARM_CP_STATE_BOTH = 2, -}; - -/* ARM CP register secure state flags. These flags identify security state - * attributes for a given CP register entry. - * The existence of both or neither secure and non-secure flags indicates that - * the register has both a secure and non-secure hash entry. A single one of - * these flags causes the register to only be hashed for the specified - * security state. - * Although definitions may have any combination of the S/NS bits, each - * registered entry will only have one to identify whether the entry is secure - * or non-secure. - */ -enum { - ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */ - ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */ -}; - -/* Return true if cptype is a valid type field. This is used to try to - * catch errors where the sentinel has been accidentally left off the end - * of a list of registers. - */ -static inline bool cptype_valid(int cptype) -{ - return ((cptype & ~ARM_CP_FLAG_MASK) == 0) - || ((cptype & ARM_CP_SPECIAL) && - ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL)); -} - -/* Access rights: - * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM - * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and - * PL2 (hyp). The other level which has Read and Write bits is Secure PL1 - * (ie any of the privileged modes in Secure state, or Monitor mode). - * If a register is accessible in one privilege level it's always accessible - * in higher privilege levels too. Since "Secure PL1" also follows this rule - * (ie anything visible in PL2 is visible in S-PL1, some things are only - * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the - * terminology a little and call this PL3. - * In AArch64 things are somewhat simpler as the PLx bits line up exactly - * with the ELx exception levels. 
- * - * If access permissions for a register are more complex than can be - * described with these bits, then use a laxer set of restrictions, and - * do the more restrictive/complex check inside a helper function. - */ -#define PL3_R 0x80 -#define PL3_W 0x40 -#define PL2_R (0x20 | PL3_R) -#define PL2_W (0x10 | PL3_W) -#define PL1_R (0x08 | PL2_R) -#define PL1_W (0x04 | PL2_W) -#define PL0_R (0x02 | PL1_R) -#define PL0_W (0x01 | PL1_W) - -/* - * For user-mode some registers are accessible to EL0 via a kernel - * trap-and-emulate ABI. In this case we define the read permissions - * as actually being PL0_R. However some bits of any given register - * may still be masked. - */ -#ifdef CONFIG_USER_ONLY -#define PL0U_R PL0_R -#else -#define PL0U_R PL1_R -#endif - -#define PL3_RW (PL3_R | PL3_W) -#define PL2_RW (PL2_R | PL2_W) -#define PL1_RW (PL1_R | PL1_W) -#define PL0_RW (PL0_R | PL0_W) - /* Return the highest implemented Exception Level */ static inline int arm_highest_el(CPUARMState *env) { @@ -2706,257 +2477,6 @@ static inline int arm_current_el(CPUARMState *env) typedef struct ARMCPRegInfo ARMCPRegInfo; -typedef enum CPAccessResult { - /* Access is permitted */ - CP_ACCESS_OK = 0, - /* - * Access fails due to a configurable trap or enable which would - * result in a categorized exception syndrome giving information about - * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6, - * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or - * PL1 if in EL0, otherwise to the current EL). - */ - CP_ACCESS_TRAP = 1, - /* - * Access fails and results in an exception syndrome 0x0 ("uncategorized"). - * Note that this is not a catch-all case -- the set of cases which may - * result in this failure is specifically defined by the architecture. - */ - CP_ACCESS_TRAP_UNCATEGORIZED = 2, - /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */ - CP_ACCESS_TRAP_EL2 = 3, - CP_ACCESS_TRAP_EL3 = 4, - /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */ - CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5, - CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6, - /* - * Access fails and results in an exception syndrome for an FP access, - * trapped directly to EL2 or EL3 - */ - CP_ACCESS_TRAP_FP_EL2 = 7, - CP_ACCESS_TRAP_FP_EL3 = 8, -} CPAccessResult; - -/* - * Access functions for coprocessor registers. These cannot fail and - * may not raise exceptions. - */ -typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque); -typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque, - uint64_t value); -/* Access permission check functions for coprocessor registers. */ -typedef CPAccessResult CPAccessFn(CPUARMState *env, - const ARMCPRegInfo *opaque, - bool isread); -/* Hook function for register reset */ -typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque); - -#define CP_ANY 0xff - -/* Definition of an ARM coprocessor register */ -struct ARMCPRegInfo { - /* Name of register (useful mainly for debugging, need not be unique) */ - const char *name; - /* - * Location of register: coprocessor number and (crn,crm,opc1,opc2) - * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a - * 'wildcard' field -- any value of that field in the MRC/MCR insn - * will be decoded to this register. The register read and write - * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2 - * used by the program, so it is possible to register a wildcard and - * then behave differently on read/write if necessary. 
- * For 64 bit registers, only crm and opc1 are relevant; crn and opc2 - * must both be zero. - * For AArch64-visible registers, opc0 is also used. - * Since there are no "coprocessors" in AArch64, cp is purely used as a - * way to distinguish (for KVM's benefit) guest-visible system registers - * from demuxed ones provided to preserve the "no side effects on - * KVM register read/write from QEMU" semantics. cp==0x13 is guest - * visible (to match KVM's encoding); cp==0 will be converted to - * cp==0x13 when the ARMCPRegInfo is registered, for convenience. - */ - uint8_t cp; - uint8_t crn; - uint8_t crm; - uint8_t opc0; - uint8_t opc1; - uint8_t opc2; - /* Execution state in which this register is visible: ARM_CP_STATE_* */ - int state; - /* Register type: ARM_CP_* bits/values */ - int type; - /* Access rights: PL*_[RW] */ - int access; - /* Security state: ARM_CP_SECSTATE_* bits/values */ - int secure; - /* - * The opaque pointer passed to define_arm_cp_regs_with_opaque() when - * this register was defined: can be used to hand data through to the - * register read/write functions, since they are passed the ARMCPRegInfo*. - */ - void *opaque; - /* - * Value of this register, if it is ARM_CP_CONST. Otherwise, if - * fieldoffset is non-zero, the reset value of the register. - */ - uint64_t resetvalue; - /* - * Offset of the field in CPUARMState for this register. - * - * This is not needed if either: - * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs - * 2. both readfn and writefn are specified - */ - ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */ - - /* - * Offsets of the secure and non-secure fields in CPUARMState for the - * register if it is banked. These fields are only used during the static - * registration of a register. During hashing the bank associated - * with a given security state is copied to fieldoffset which is used from - * there on out. - * - * It is expected that register definitions use either fieldoffset or - * bank_fieldoffsets in the definition but not both. It is also expected - * that both bank offsets are set when defining a banked register. This - * use indicates that a register is banked. - */ - ptrdiff_t bank_fieldoffsets[2]; - - /* - * Function for making any access checks for this register in addition to - * those specified by the 'access' permissions bits. If NULL, no extra - * checks required. The access check is performed at runtime, not at - * translate time. - */ - CPAccessFn *accessfn; - /* - * Function for handling reads of this register. If NULL, then reads - * will be done by loading from the offset into CPUARMState specified - * by fieldoffset. - */ - CPReadFn *readfn; - /* - * Function for handling writes of this register. If NULL, then writes - * will be done by writing to the offset into CPUARMState specified - * by fieldoffset. - */ - CPWriteFn *writefn; - /* - * Function for doing a "raw" read; used when we need to copy - * coprocessor state to the kernel for KVM or out for - * migration. This only needs to be provided if there is also a - * readfn and it has side effects (for instance clear-on-read bits). - */ - CPReadFn *raw_readfn; - /* - * Function for doing a "raw" write; used when we need to copy KVM - * kernel coprocessor state into userspace, or for inbound - * migration. This only needs to be provided if there is also a - * writefn and it masks out "unwritable" bits or has write-one-to-clear - * or similar behaviour. - */ - CPWriteFn *raw_writefn; - /* - * Function for resetting the register. 
If NULL, then reset will be done - * by writing resetvalue to the field specified in fieldoffset. If - * fieldoffset is 0 then no reset will be done. - */ - CPResetFn *resetfn; - - /* - * "Original" writefn and readfn. - * For ARMv8.1-VHE register aliases, we overwrite the read/write - * accessor functions of various EL1/EL0 to perform the runtime - * check for which sysreg should actually be modified, and then - * forwards the operation. Before overwriting the accessors, - * the original function is copied here, so that accesses that - * really do go to the EL1/EL0 version proceed normally. - * (The corresponding EL2 register is linked via opaque.) - */ - CPReadFn *orig_readfn; - CPWriteFn *orig_writefn; -}; - -/* - * Macros which are lvalues for the field in CPUARMState for the - * ARMCPRegInfo *ri. - */ -#define CPREG_FIELD32(env, ri) \ - (*(uint32_t *)((char *)(env) + (ri)->fieldoffset)) -#define CPREG_FIELD64(env, ri) \ - (*(uint64_t *)((char *)(env) + (ri)->fieldoffset)) - -#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL } - -void define_arm_cp_regs_with_opaque(ARMCPU *cpu, - const ARMCPRegInfo *regs, void *opaque); -void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, - const ARMCPRegInfo *regs, void *opaque); -static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs) -{ - define_arm_cp_regs_with_opaque(cpu, regs, 0); -} -static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs) -{ - define_one_arm_cp_reg_with_opaque(cpu, regs, 0); -} -const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp); - -/* - * Definition of an ARM co-processor register as viewed from - * userspace. This is used for presenting sanitised versions of - * registers to userspace when emulating the Linux AArch64 CPU - * ID/feature ABI (advertised as HWCAP_CPUID). - */ -typedef struct ARMCPRegUserSpaceInfo { - /* Name of register */ - const char *name; - - /* Is the name actually a glob pattern */ - bool is_glob; - - /* Only some bits are exported to user space */ - uint64_t exported_bits; - - /* Fixed bits are applied after the mask */ - uint64_t fixed_bits; -} ARMCPRegUserSpaceInfo; - -#define REGUSERINFO_SENTINEL { .name = NULL } - -void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods); - -/* CPWriteFn that can be used to implement writes-ignored behaviour */ -void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value); -/* CPReadFn that can be used for read-as-zero behaviour */ -uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri); - -/* - * CPResetFn that does nothing, for use if no reset is required even - * if fieldoffset is non zero. - */ -void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque); - -/* - * Return true if this reginfo struct's field in the cpu state struct - * is 64 bits wide. 
- */ -static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri) -{ - return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT); -} - -static inline bool cp_access_ok(int current_el, - const ARMCPRegInfo *ri, int isread) -{ - return (ri->access >> ((current_el * 2) + isread)) & 1; -} - -/* Raw read of a coprocessor register (as needed for migration, etc) */ -uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri); - /** * write_list_to_cpustate * @cpu: ARMCPU diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c index fdc4955e95..2f7a4e90ad 100644 --- a/hw/arm/pxa2xx.c +++ b/hw/arm/pxa2xx.c @@ -30,6 +30,7 @@ #include "qemu/cutils.h" #include "qemu/log.h" #include "qom/object.h" +#include "cpregs.h" static struct { hwaddr io_base; diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c index ed032fed54..934ff6716a 100644 --- a/hw/arm/pxa2xx_pic.c +++ b/hw/arm/pxa2xx_pic.c @@ -17,6 +17,7 @@ #include "hw/sysbus.h" #include "migration/vmstate.h" #include "qom/object.h" +#include "cpregs.h" #define ICIP 0x00 /* Interrupt Controller IRQ Pending register */ #define ICMR 0x04 /* Interrupt Controller Mask register */ diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c index 81f94c7f4a..fa374d5ecc 100644 --- a/hw/intc/arm_gicv3_cpuif.c +++ b/hw/intc/arm_gicv3_cpuif.c @@ -19,6 +19,7 @@ #include "gicv3_internal.h" #include "hw/irq.h" #include "cpu.h" +#include "cpregs.h" void gicv3_set_gicv3state(CPUState *cpu, GICv3CPUState *s) { diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c index 5c09f00dec..96c7e8b80c 100644 --- a/hw/intc/arm_gicv3_kvm.c +++ b/hw/intc/arm_gicv3_kvm.c @@ -31,6 +31,7 @@ #include "vgic_common.h" #include "migration/blocker.h" #include "qom/object.h" +#include "cpregs.h" #ifdef DEBUG_GICV3_KVM #define DPRINTF(fmt, ...) \ diff --git a/target/arm/cpregs.c b/target/arm/cpregs.c new file mode 100644 index 0000000000..3fbfbfb35a --- /dev/null +++ b/target/arm/cpregs.c @@ -0,0 +1,380 @@ +/* + * ARM CP registers - common functionality + * + * This code is licensed under the GNU GPL v2 or later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "cpregs.h" + +static bool raw_accessors_invalid(const ARMCPRegInfo *ri) +{ + /* + * Return true if the regdef would cause an assertion if you called + * read_raw_cp_reg() or write_raw_cp_reg() on it (ie if it is a + * program bug for it not to have the NO_RAW flag). + * NB that returning false here doesn't necessarily mean that calling + * read/write_raw_cp_reg() is safe, because we can't distinguish "has + * read/write access functions which are safe for raw use" from "has + * read/write access functions which have side effects but has forgotten + * to provide raw access functions". + * The tests here line up with the conditions in read/write_raw_cp_reg() + * and assertions in raw_read()/raw_write(). + */ + if ((ri->type & ARM_CP_CONST) || + ri->fieldoffset || + ((ri->raw_writefn || ri->writefn) && (ri->raw_readfn || ri->readfn))) { + return false; + } + return true; +} + +static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, + void *opaque, int state, int secstate, + int crm, int opc1, int opc2, + const char *name) +{ + /* + * Private utility function for define_one_arm_cp_reg_with_opaque(): + * add a single reginfo struct to the hash table. + */ + uint32_t *key = g_new(uint32_t, 1); + ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo)); + int is64 = (r->type & ARM_CP_64BIT) ? 
1 : 0; + int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0; + + r2->name = g_strdup(name); + /* + * Reset the secure state to the specific incoming state. This is + * necessary as the register may have been defined with both states. + */ + r2->secure = secstate; + + if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) { + /* + * Register is banked (using both entries in array). + * Overwriting fieldoffset as the array is only used to define + * banked registers but later only fieldoffset is used. + */ + r2->fieldoffset = r->bank_fieldoffsets[ns]; + } + + if (state == ARM_CP_STATE_AA32) { + if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) { + /* + * If the register is banked then we don't need to migrate or + * reset the 32-bit instance in certain cases: + * + * 1) If the register has both 32-bit and 64-bit instances then we + * can count on the 64-bit instance taking care of the + * non-secure bank. + * 2) If ARMv8 is enabled then we can count on a 64-bit version + * taking care of the secure bank. This requires that separate + * 32 and 64-bit definitions are provided. + */ + if ((r->state == ARM_CP_STATE_BOTH && ns) || + (arm_feature(&cpu->env, ARM_FEATURE_V8) && !ns)) { + r2->type |= ARM_CP_ALIAS; + } + } else if ((secstate != r->secure) && !ns) { + /* + * The register is not banked so we only want to allow migration of + * the non-secure instance. + */ + r2->type |= ARM_CP_ALIAS; + } + + if (r->state == ARM_CP_STATE_BOTH) { + /* We assume it is a cp15 register if the .cp field is left unset */ + if (r2->cp == 0) { + r2->cp = 15; + } + +#ifdef HOST_WORDS_BIGENDIAN + if (r2->fieldoffset) { + r2->fieldoffset += sizeof(uint32_t); + } +#endif + } + } + if (state == ARM_CP_STATE_AA64) { + /* + * To allow abbreviation of ARMCPRegInfo + * definitions, we treat cp == 0 as equivalent to + * the value for "standard guest-visible sysreg". + * STATE_BOTH definitions are also always "standard + * sysreg" in their AArch64 view (the .cp value may + * be non-zero for the benefit of the AArch32 view). + */ + if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) { + r2->cp = CP_REG_ARM64_SYSREG_CP; + } + *key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm, + r2->opc0, opc1, opc2); + } else { + *key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2); + } + if (opaque) { + r2->opaque = opaque; + } + /* + * reginfo passed to helpers is correct for the actual access, + * and is never ARM_CP_STATE_BOTH: + */ + r2->state = state; + /* + * Make sure reginfo passed to helpers for wildcarded regs + * has the correct crm/opc1/opc2 for this reg, not CP_ANY: + */ + r2->crm = crm; + r2->opc1 = opc1; + r2->opc2 = opc2; + /* + * By convention, for wildcarded registers only the first + * entry is used for migration; the others are marked as + * ALIAS so we don't try to transfer the register + * multiple times. Special registers (ie NOP/WFI) are + * never migratable and not even raw-accessible. + */ + if ((r->type & ARM_CP_SPECIAL)) { + r2->type |= ARM_CP_NO_RAW; + } + if (((r->crm == CP_ANY) && crm != 0) || + ((r->opc1 == CP_ANY) && opc1 != 0) || + ((r->opc2 == CP_ANY) && opc2 != 0)) { + r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB; + } + + /* + * Check that raw accesses are either forbidden or handled. Note that + * we can't assert this earlier because the setup of fieldoffset for + * banked registers has to be done first. + */ + if (!(r2->type & ARM_CP_NO_RAW)) { + assert(!raw_accessors_invalid(r2)); + } + + /* Overriding of an existing definition must be explicitly requested. 
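+ * At least one of the two colliding definitions must set ARM_CP_OVERRIDE
+ * in its .type; otherwise the clash is reported below and treated as a bug.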
*/ + if (!(r->type & ARM_CP_OVERRIDE)) { + ARMCPRegInfo *oldreg; + oldreg = g_hash_table_lookup(cpu->cp_regs, key); + if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) { + fprintf(stderr, "Register redefined: cp=%d %d bit " + "crn=%d crm=%d opc1=%d opc2=%d, " + "was %s, now %s\n", r2->cp, 32 + 32 * is64, + r2->crn, r2->crm, r2->opc1, r2->opc2, + oldreg->name, r2->name); + g_assert_not_reached(); + } + } + g_hash_table_insert(cpu->cp_regs, key, r2); +} + +void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, + const ARMCPRegInfo *r, void *opaque) +{ + /* + * Define implementations of coprocessor registers. + * We store these in a hashtable because typically + * there are less than 150 registers in a space which + * is 16*16*16*8*8 = 262144 in size. + * Wildcarding is supported for the crm, opc1 and opc2 fields. + * If a register is defined twice then the second definition is + * used, so this can be used to define some generic registers and + * then override them with implementation specific variations. + * At least one of the original and the second definition should + * include ARM_CP_OVERRIDE in its type bits -- this is just a guard + * against accidental use. + * + * The state field defines whether the register is to be + * visible in the AArch32 or AArch64 execution state. If the + * state is set to ARM_CP_STATE_BOTH then we synthesise a + * reginfo structure for the AArch32 view, which sees the lower + * 32 bits of the 64 bit register. + * + * Only registers visible in AArch64 may set r->opc0; opc0 cannot + * be wildcarded. AArch64 registers are always considered to be 64 + * bits; the ARM_CP_64BIT* flag applies only to the AArch32 view of + * the register, if any. + */ + int crm, opc1, opc2, state; + int crmmin = (r->crm == CP_ANY) ? 0 : r->crm; + int crmmax = (r->crm == CP_ANY) ? 15 : r->crm; + int opc1min = (r->opc1 == CP_ANY) ? 0 : r->opc1; + int opc1max = (r->opc1 == CP_ANY) ? 7 : r->opc1; + int opc2min = (r->opc2 == CP_ANY) ? 0 : r->opc2; + int opc2max = (r->opc2 == CP_ANY) ? 7 : r->opc2; + /* 64 bit registers have only CRm and Opc1 fields */ + assert(!((r->type & ARM_CP_64BIT) && (r->opc2 || r->crn))); + /* op0 only exists in the AArch64 encodings */ + assert((r->state != ARM_CP_STATE_AA32) || (r->opc0 == 0)); + /* AArch64 regs are all 64 bit so ARM_CP_64BIT is meaningless */ + assert((r->state != ARM_CP_STATE_AA64) || !(r->type & ARM_CP_64BIT)); + /* + * This API is only for Arm's system coprocessors (14 and 15) or + * (M-profile or v7A-and-earlier only) for implementation defined + * coprocessors in the range 0..7. Our decode assumes this, since + * 8..13 can be used for other insns including VFP and Neon. See + * valid_cp() in translate.c. Assert here that we haven't tried + * to use an invalid coprocessor number. + */ + switch (r->state) { + case ARM_CP_STATE_BOTH: + /* 0 has a special meaning, but otherwise the same rules as AA32. */ + if (r->cp == 0) { + break; + } + /* fall through */ + case ARM_CP_STATE_AA32: + if (arm_feature(&cpu->env, ARM_FEATURE_V8) && + !arm_feature(&cpu->env, ARM_FEATURE_M)) { + assert(r->cp >= 14 && r->cp <= 15); + } else { + assert(r->cp < 8 || (r->cp >= 14 && r->cp <= 15)); + } + break; + case ARM_CP_STATE_AA64: + assert(r->cp == 0 || r->cp == CP_REG_ARM64_SYSREG_CP); + break; + default: + g_assert_not_reached(); + } + /* + * The AArch64 pseudocode CheckSystemAccess() specifies that op1 + * encodes a minimum access level for the register. 
We roll this + * runtime check into our general permission check code, so check + * here that the reginfo's specified permissions are strict enough + * to encompass the generic architectural permission check. + */ + if (r->state != ARM_CP_STATE_AA32) { + int mask = 0; + switch (r->opc1) { + case 0: + /* min_EL EL1, but some accessible to EL0 via kernel ABI */ + mask = PL0U_R | PL1_RW; + break; + case 1: case 2: + /* min_EL EL1 */ + mask = PL1_RW; + break; + case 3: + /* min_EL EL0 */ + mask = PL0_RW; + break; + case 4: + case 5: + /* min_EL EL2 */ + mask = PL2_RW; + break; + case 6: + /* min_EL EL3 */ + mask = PL3_RW; + break; + case 7: + /* min_EL EL1, secure mode only (we don't check the latter) */ + mask = PL1_RW; + break; + default: + /* broken reginfo with out-of-range opc1 */ + assert(false); + break; + } + /* assert our permissions are not too lax (stricter is fine) */ + assert((r->access & ~mask) == 0); + } + + /* + * Check that the register definition has enough info to handle + * reads and writes if they are permitted. + */ + if (!(r->type & (ARM_CP_SPECIAL | ARM_CP_CONST))) { + if (r->access & PL3_R) { + assert((r->fieldoffset || + (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) || + r->readfn); + } + if (r->access & PL3_W) { + assert((r->fieldoffset || + (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) || + r->writefn); + } + } + /* Bad type field probably means missing sentinel at end of reg list */ + assert(cptype_valid(r->type)); + for (crm = crmmin; crm <= crmmax; crm++) { + for (opc1 = opc1min; opc1 <= opc1max; opc1++) { + for (opc2 = opc2min; opc2 <= opc2max; opc2++) { + for (state = ARM_CP_STATE_AA32; + state <= ARM_CP_STATE_AA64; state++) { + if (r->state != state && r->state != ARM_CP_STATE_BOTH) { + continue; + } + if (state == ARM_CP_STATE_AA32) { + /* + * Under AArch32 CP registers can be common + * (same for secure and non-secure world) or banked. 
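+ * Banked definitions (those not pinned to a single security state) are
+ * entered twice below, once per security state, with an "_S" suffix on
+ * the secure copy's name; single-state definitions are entered once.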
+ */ + char *name; + + switch (r->secure) { + case ARM_CP_SECSTATE_S: + case ARM_CP_SECSTATE_NS: + add_cpreg_to_hashtable(cpu, r, opaque, state, + r->secure, crm, opc1, opc2, + r->name); + break; + default: + name = g_strdup_printf("%s_S", r->name); + add_cpreg_to_hashtable(cpu, r, opaque, state, + ARM_CP_SECSTATE_S, + crm, opc1, opc2, name); + g_free(name); + add_cpreg_to_hashtable(cpu, r, opaque, state, + ARM_CP_SECSTATE_NS, + crm, opc1, opc2, r->name); + break; + } + } else { + /* + * AArch64 registers get mapped to non-secure + * instance of AArch32 + */ + add_cpreg_to_hashtable(cpu, r, opaque, state, + ARM_CP_SECSTATE_NS, + crm, opc1, opc2, r->name); + } + } + } + } + } +} + +void define_arm_cp_regs_with_opaque(ARMCPU *cpu, + const ARMCPRegInfo *regs, void *opaque) +{ + /* Define a whole list of registers */ + const ARMCPRegInfo *r; + for (r = regs; r->type != ARM_CP_SENTINEL; r++) { + define_one_arm_cp_reg_with_opaque(cpu, r, opaque); + } +} + +void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* Helper coprocessor write function for write-ignore registers */ +} + +uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri) +{ + /* Helper coprocessor write function for read-as-zero registers */ + return 0; +} + +void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque) +{ + /* Helper coprocessor reset function for do-nothing-on-reset registers */ +} diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 17dc0d4255..9e616a15e1 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -26,6 +26,7 @@ #include "qapi/error.h" #include "qapi/visitor.h" #include "cpu.h" +#include "cpregs.h" #ifdef CONFIG_TCG #include "hw/core/tcg-cpu-ops.h" #endif /* CONFIG_TCG */ diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index d561dc7acc..5354069c63 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -32,7 +32,7 @@ #include "kvm_arm.h" #include "qapi/visitor.h" #include "hw/qdev-properties.h" - +#include "cpregs.h" #ifndef CONFIG_USER_ONLY static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri) diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c index 2e0e508f0e..d973239d78 100644 --- a/target/arm/cpu_tcg.c +++ b/target/arm/cpu_tcg.c @@ -18,6 +18,7 @@ #if !defined(CONFIG_USER_ONLY) #include "hw/boards.h" #endif +#include "cpregs.h" /* CPU models. These are not needed for the AArch64 linux-user build. */ #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64) diff --git a/target/arm/cpustate-list.c b/target/arm/cpustate-list.c new file mode 100644 index 0000000000..7885806f78 --- /dev/null +++ b/target/arm/cpustate-list.c @@ -0,0 +1,146 @@ +/* + * ARM CPUState list read/write + * + * This code is licensed under the GNU GPL v2 or later. 
+ * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "cpregs.h" + +uint64_t raw_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + assert(ri->fieldoffset); + if (cpreg_field_is_64bit(ri)) { + return CPREG_FIELD64(env, ri); + } else { + return CPREG_FIELD32(env, ri); + } +} + +void raw_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + assert(ri->fieldoffset); + if (cpreg_field_is_64bit(ri)) { + CPREG_FIELD64(env, ri) = value; + } else { + CPREG_FIELD32(env, ri) = value; + } +} + +const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp) +{ + return g_hash_table_lookup(cpregs, &encoded_cp); +} + +uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri) +{ + /* Raw read of a coprocessor register (as needed for migration, etc). */ + if (ri->type & ARM_CP_CONST) { + return ri->resetvalue; + } else if (ri->raw_readfn) { + return ri->raw_readfn(env, ri); + } else if (ri->readfn) { + return ri->readfn(env, ri); + } else { + return raw_read(env, ri); + } +} + +static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t v) +{ + /* Raw write of a coprocessor register (as needed for migration, etc). + * Note that constant registers are treated as write-ignored; the + * caller should check for success by whether a readback gives the + * value written. + */ + if (ri->type & ARM_CP_CONST) { + return; + } else if (ri->raw_writefn) { + ri->raw_writefn(env, ri, v); + } else if (ri->writefn) { + ri->writefn(env, ri, v); + } else { + raw_write(env, ri, v); + } +} + +bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync) +{ + /* Write the coprocessor state from cpu->env to the (index,value) list. */ + int i; + bool ok = true; + + for (i = 0; i < cpu->cpreg_array_len; i++) { + uint32_t regidx = kvm_to_cpreg_id(cpu->cpreg_indexes[i]); + const ARMCPRegInfo *ri; + uint64_t newval; + + ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); + if (!ri) { + ok = false; + continue; + } + if (ri->type & ARM_CP_NO_RAW) { + continue; + } + + newval = read_raw_cp_reg(&cpu->env, ri); + if (kvm_sync) { + /* + * Only sync if the previous list->cpustate sync succeeded. + * Rather than tracking the success/failure state for every + * item in the list, we just recheck "does the raw write we must + * have made in write_list_to_cpustate() read back OK" here. 
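+ * If that readback fails we skip this register entirely, leaving
+ * cpreg_values[i] holding the old value rather than the new one.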
+ */ + uint64_t oldval = cpu->cpreg_values[i]; + + if (oldval == newval) { + continue; + } + + write_raw_cp_reg(&cpu->env, ri, oldval); + if (read_raw_cp_reg(&cpu->env, ri) != oldval) { + continue; + } + + write_raw_cp_reg(&cpu->env, ri, newval); + } + cpu->cpreg_values[i] = newval; + } + return ok; +} + +bool write_list_to_cpustate(ARMCPU *cpu) +{ + int i; + bool ok = true; + + for (i = 0; i < cpu->cpreg_array_len; i++) { + uint32_t regidx = kvm_to_cpreg_id(cpu->cpreg_indexes[i]); + uint64_t v = cpu->cpreg_values[i]; + const ARMCPRegInfo *ri; + + ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); + if (!ri) { + ok = false; + continue; + } + if (ri->type & ARM_CP_NO_RAW) { + continue; + } + /* Write value and confirm it reads back as written + * (to catch read-only registers and partially read-only + * registers where the incoming migration value doesn't match) + */ + write_raw_cp_reg(&cpu->env, ri, v); + if (read_raw_cp_reg(&cpu->env, ri) != v) { + ok = false; + } + } + return ok; +} diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c index a8fff2a3d0..0645415f44 100644 --- a/target/arm/gdbstub.c +++ b/target/arm/gdbstub.c @@ -20,6 +20,7 @@ #include "qemu/osdep.h" #include "cpu.h" #include "exec/gdbstub.h" +#include "cpregs.h" typedef struct RegisterSysregXmlParam { CPUState *cs; diff --git a/target/arm/machine.c b/target/arm/machine.c index 6ad1d306b1..e568662cca 100644 --- a/target/arm/machine.c +++ b/target/arm/machine.c @@ -5,6 +5,7 @@ #include "kvm_arm.h" #include "internals.h" #include "migration/cpu.h" +#include "cpregs.h" static bool vfp_needed(void *opaque) { diff --git a/target/arm/tcg/cpregs.c b/target/arm/tcg/cpregs.c new file mode 100644 index 0000000000..8422da4335 --- /dev/null +++ b/target/arm/tcg/cpregs.c @@ -0,0 +1,7674 @@ +/* + * ARM CP registers + * + * This code is licensed under the GNU GPL v2 or later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "trace.h" +#include "qemu/main-loop.h" +#include "exec/exec-all.h" +#include "hw/irq.h" +#include "qapi/error.h" +#include "qemu/guest-random.h" +#include "cpu-mmu.h" +#include "cpregs.h" + +#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */ +#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */ + +static void *raw_ptr(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return (char *)env + ri->fieldoffset; +} + +static void add_cpreg_to_list(gpointer key, gpointer opaque) +{ + ARMCPU *cpu = opaque; + uint64_t regidx; + const ARMCPRegInfo *ri; + + regidx = *(uint32_t *)key; + ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); + + if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) { + cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx); + /* The value array need not be initialized at this point */ + cpu->cpreg_array_len++; + } +} + +static void count_cpreg(gpointer key, gpointer opaque) +{ + ARMCPU *cpu = opaque; + uint64_t regidx; + const ARMCPRegInfo *ri; + + regidx = *(uint32_t *)key; + ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); + + if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) { + cpu->cpreg_array_len++; + } +} + +static gint cpreg_key_compare(gconstpointer a, gconstpointer b) +{ + uint64_t aidx = cpreg_to_kvm_id(*(uint32_t *)a); + uint64_t bidx = cpreg_to_kvm_id(*(uint32_t *)b); + + if (aidx > bidx) { + return 1; + } + if (aidx < bidx) { + return -1; + } + return 0; +} + +void init_cpreg_list(ARMCPU *cpu) +{ + /* + * Initialise the cpreg_tuples[] array based on the cp_regs hash. 
+ * Note that we require cpreg_tuples[] to be sorted by key ID. + */ + GList *keys; + int arraylen; + + keys = g_hash_table_get_keys(cpu->cp_regs); + keys = g_list_sort(keys, cpreg_key_compare); + + cpu->cpreg_array_len = 0; + + g_list_foreach(keys, count_cpreg, cpu); + + arraylen = cpu->cpreg_array_len; + cpu->cpreg_indexes = g_new(uint64_t, arraylen); + cpu->cpreg_values = g_new(uint64_t, arraylen); + cpu->cpreg_vmstate_indexes = g_new(uint64_t, arraylen); + cpu->cpreg_vmstate_values = g_new(uint64_t, arraylen); + cpu->cpreg_vmstate_array_len = cpu->cpreg_array_len; + cpu->cpreg_array_len = 0; + + g_list_foreach(keys, add_cpreg_to_list, cpu); + + assert(cpu->cpreg_array_len == arraylen); + + g_list_free(keys); +} + +/* + * Some registers are not accessible from AArch32 EL3 if SCR.NS == 0. + */ +static CPAccessResult access_el3_aa32ns(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + if (!is_a64(env) && arm_current_el(env) == 3 && + arm_is_secure_below_el3(env)) { + return CP_ACCESS_TRAP_UNCATEGORIZED; + } + return CP_ACCESS_OK; +} + +/* + * Some secure-only AArch32 registers trap to EL3 if used from + * Secure EL1 (but are just ordinary UNDEF in other non-EL3 contexts). + * Note that an access from Secure EL1 can only happen if EL3 is AArch64. + * We assume that the .access field is set to PL1_RW. + */ +static CPAccessResult access_trap_aa32s_el1(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 3) { + return CP_ACCESS_OK; + } + if (arm_is_secure_below_el3(env)) { + if (env->cp15.scr_el3 & SCR_EEL2) { + return CP_ACCESS_TRAP_EL2; + } + return CP_ACCESS_TRAP_EL3; + } + /* This will be EL1 NS and EL2 NS, which just UNDEF */ + return CP_ACCESS_TRAP_UNCATEGORIZED; +} + +static uint64_t arm_mdcr_el2_eff(CPUARMState *env) +{ + return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0; +} + +/* + * Check for traps to "powerdown debug" registers, which are controlled + * by MDCR.TDOSA + */ +static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int el = arm_current_el(env); + uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); + bool mdcr_el2_tdosa = (mdcr_el2 & MDCR_TDOSA) || (mdcr_el2 & MDCR_TDE) || + (arm_hcr_el2_eff(env) & HCR_TGE); + + if (el < 2 && mdcr_el2_tdosa) { + return CP_ACCESS_TRAP_EL2; + } + if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; +} + +/* + * Check for traps to "debug ROM" registers, which are controlled + * by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3. + */ +static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int el = arm_current_el(env); + uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); + bool mdcr_el2_tdra = (mdcr_el2 & MDCR_TDRA) || (mdcr_el2 & MDCR_TDE) || + (arm_hcr_el2_eff(env) & HCR_TGE); + + if (el < 2 && mdcr_el2_tdra) { + return CP_ACCESS_TRAP_EL2; + } + if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; +} + +/* + * Check for traps to general debug registers, which are controlled + * by MDCR_EL2.TDA for EL2 and MDCR_EL3.TDA for EL3. 
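+ * As with the other debug trap checks above, MDCR_EL2.TDE and HCR_EL2.TGE
+ * also force the trap to EL2.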
+ */ +static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int el = arm_current_el(env); + uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); + bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) || + (arm_hcr_el2_eff(env) & HCR_TGE); + + if (el < 2 && mdcr_el2_tda) { + return CP_ACCESS_TRAP_EL2; + } + if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; +} + +/* + * Check for traps to performance monitor registers, which are controlled + * by MDCR_EL2.TPM for EL2 and MDCR_EL3.TPM for EL3. + */ +static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int el = arm_current_el(env); + uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); + + if (el < 2 && (mdcr_el2 & MDCR_TPM)) { + return CP_ACCESS_TRAP_EL2; + } + if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; +} + +/* Check for traps from EL1 due to HCR_EL2.TVM and HCR_EL2.TRVM. */ +static CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 1) { + uint64_t trap = isread ? HCR_TRVM : HCR_TVM; + if (arm_hcr_el2_eff(env) & trap) { + return CP_ACCESS_TRAP_EL2; + } + } + return CP_ACCESS_OK; +} + +/* Check for traps from EL1 due to HCR_EL2.TSW. */ +static CPAccessResult access_tsw(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TSW)) { + return CP_ACCESS_TRAP_EL2; + } + return CP_ACCESS_OK; +} + +/* Check for traps from EL1 due to HCR_EL2.TACR. */ +static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TACR)) { + return CP_ACCESS_TRAP_EL2; + } + return CP_ACCESS_OK; +} + +/* Check for traps from EL1 due to HCR_EL2.TTLB. */ +static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) { + return CP_ACCESS_TRAP_EL2; + } + return CP_ACCESS_OK; +} + +static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + + raw_write(env, ri, value); + tlb_flush(CPU(cpu)); /* Flush TLB as domain not tracked in TLB */ +} + +static void fcse_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + + if (raw_read(env, ri) != value) { + /* + * Unlike real hardware the qemu TLB uses virtual addresses, + * not modified virtual addresses, so this causes a TLB flush. + */ + tlb_flush(CPU(cpu)); + raw_write(env, ri, value); + } +} + +static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + + if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA) + && !extended_addresses_enabled(env)) { + /* + * For VMSA (when not using the LPAE long descriptor page table + * format) this register includes the ASID, so do a TLB flush. + * For PMSA it is purely a process ID and no action is needed. 
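+ * (The LPAE case is excluded via extended_addresses_enabled() in the
+ * condition above, since CONTEXTIDR does not hold the ASID there.)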
+ */ + tlb_flush(CPU(cpu)); + } + raw_write(env, ri, value); +} + +/* IS variants of TLB operations must affect all cores */ +static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_all_cpus_synced(cs); +} + +static void tlbiasid_is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_all_cpus_synced(cs); +} + +static void tlbimva_is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK); +} + +static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK); +} + +/* + * Non-IS variants of TLB operations are upgraded to + * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to + * force broadcast of these operations. + */ +static bool tlb_force_broadcast(CPUARMState *env) +{ + return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB); +} + +static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* Invalidate all (TLBIALL) */ + CPUState *cs = env_cpu(env); + + if (tlb_force_broadcast(env)) { + tlb_flush_all_cpus_synced(cs); + } else { + tlb_flush(cs); + } +} + +static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */ + CPUState *cs = env_cpu(env); + + value &= TARGET_PAGE_MASK; + if (tlb_force_broadcast(env)) { + tlb_flush_page_all_cpus_synced(cs, value); + } else { + tlb_flush_page(cs, value); + } +} + +static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* Invalidate by ASID (TLBIASID) */ + CPUState *cs = env_cpu(env); + + if (tlb_force_broadcast(env)) { + tlb_flush_all_cpus_synced(cs); + } else { + tlb_flush(cs); + } +} + +static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */ + CPUState *cs = env_cpu(env); + + value &= TARGET_PAGE_MASK; + if (tlb_force_broadcast(env)) { + tlb_flush_page_all_cpus_synced(cs, value); + } else { + tlb_flush_page(cs, value); + } +} + +static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_by_mmuidx(cs, + ARMMMUIdxBit_E10_1 | + ARMMMUIdxBit_E10_1_PAN | + ARMMMUIdxBit_E10_0); +} + +static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_by_mmuidx_all_cpus_synced(cs, + ARMMMUIdxBit_E10_1 | + ARMMMUIdxBit_E10_1_PAN | + ARMMMUIdxBit_E10_0); +} + + +static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2); +} + +static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2); +} + +static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12); + + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2); +} + +static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) 
+{ + CPUState *cs = env_cpu(env); + uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12); + + tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, + ARMMMUIdxBit_E2); +} + +static const ARMCPRegInfo cp_reginfo[] = { + /* + * Define the secure and non-secure FCSE identifier CP registers + * separately because there is no secure bank in V8 (no _EL3). This allows + * the secure register to be properly reset and migrated. There is also no + * v8 EL1 version of the register so the non-secure instance stands alone. + */ + { .name = "FCSEIDR", + .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0, + .access = PL1_RW, .secure = ARM_CP_SECSTATE_NS, + .fieldoffset = offsetof(CPUARMState, cp15.fcseidr_ns), + .resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, }, + { .name = "FCSEIDR_S", + .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0, + .access = PL1_RW, .secure = ARM_CP_SECSTATE_S, + .fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s), + .resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, }, + /* + * Define the secure and non-secure context identifier CP registers + * separately because there is no secure bank in V8 (no _EL3). This allows + * the secure register to be properly reset and migrated. In the + * non-secure case, the 32-bit register will have reset and migration + * disabled during registration as it is handled by the 64-bit instance. + */ + { .name = "CONTEXTIDR_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .secure = ARM_CP_SECSTATE_NS, + .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[1]), + .resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, }, + { .name = "CONTEXTIDR_S", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .secure = ARM_CP_SECSTATE_S, + .fieldoffset = offsetof(CPUARMState, cp15.contextidr_s), + .resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo not_v8_cp_reginfo[] = { + /* + * NB: Some of these registers exist in v8 but with more precise + * definitions that don't use CP_ANY wildcards (mostly in v8_cp_reginfo[]). + */ + /* MMU Domain access control / MPU write buffer control */ + { .name = "DACR", + .cp = 15, .opc1 = CP_ANY, .crn = 3, .crm = CP_ANY, .opc2 = CP_ANY, + .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, + .writefn = dacr_write, .raw_writefn = raw_write, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s), + offsetoflow32(CPUARMState, cp15.dacr_ns) } }, + /* + * ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs. + * For v6 and v5, these mappings are overly broad. + */ + { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 0, + .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, + { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 1, + .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, + { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 4, + .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, + { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 8, + .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, + /* Cache maintenance ops; some of this space may be overridden later. 
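+ * The catch-all NOP below carries ARM_CP_OVERRIDE so that later, more
+ * precise definitions can replace it without tripping the redefinition check.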
*/ + { .name = "CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY, + .opc1 = 0, .opc2 = CP_ANY, .access = PL1_W, + .type = ARM_CP_NOP | ARM_CP_OVERRIDE }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo not_v6_cp_reginfo[] = { + /* + * Not all pre-v6 cores implemented this WFI, so this is slightly + * over-broad. + */ + { .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2, + .access = PL1_W, .type = ARM_CP_WFI }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo not_v7_cp_reginfo[] = { + /* + * Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which + * is UNPREDICTABLE; we choose to NOP as most implementations do). + */ + { .name = "WFI_v6", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4, + .access = PL1_W, .type = ARM_CP_WFI }, + /* + * L1 cache lockdown. Not architectural in v6 and earlier but in practice + * implemented in 926, 946, 1026, 1136, 1176 and 11MPCore. StrongARM and + * OMAPCP will override this space. + */ + { .name = "DLOCKDOWN", .cp = 15, .crn = 9, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.c9_data), + .resetvalue = 0 }, + { .name = "ILOCKDOWN", .cp = 15, .crn = 9, .crm = 0, .opc1 = 0, .opc2 = 1, + .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.c9_insn), + .resetvalue = 0 }, + /* v6 doesn't have the cache ID registers but Linux reads them anyway */ + { .name = "DUMMY", .cp = 15, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = CP_ANY, + .access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, + .resetvalue = 0 }, + /* + * We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR; + * implementing it as RAZ means the "debug architecture version" bits + * will read as a reserved value, which should cause Linux to not try + * to use the debug hardware. + */ + { .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 }, + /* + * MMU TLB control. Note that the wildcarding means we cover not just + * the unified TLB ops but also the dside/iside/inner-shareable variants. + */ + { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY, + .opc1 = CP_ANY, .opc2 = 0, .access = PL1_W, .writefn = tlbiall_write, + .type = ARM_CP_NO_RAW }, + { .name = "TLBIMVA", .cp = 15, .crn = 8, .crm = CP_ANY, + .opc1 = CP_ANY, .opc2 = 1, .access = PL1_W, .writefn = tlbimva_write, + .type = ARM_CP_NO_RAW }, + { .name = "TLBIASID", .cp = 15, .crn = 8, .crm = CP_ANY, + .opc1 = CP_ANY, .opc2 = 2, .access = PL1_W, .writefn = tlbiasid_write, + .type = ARM_CP_NO_RAW }, + { .name = "TLBIMVAA", .cp = 15, .crn = 8, .crm = CP_ANY, + .opc1 = CP_ANY, .opc2 = 3, .access = PL1_W, .writefn = tlbimvaa_write, + .type = ARM_CP_NO_RAW }, + { .name = "PRRR", .cp = 15, .crn = 10, .crm = 2, + .opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP }, + { .name = "NMRR", .cp = 15, .crn = 10, .crm = 2, + .opc1 = 0, .opc2 = 1, .access = PL1_RW, .type = ARM_CP_NOP }, + REGINFO_SENTINEL +}; + +static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + uint32_t mask = 0; + + /* In ARMv8 most bits of CPACR_EL1 are RES0. */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + /* + * ARMv7 defines bits for unimplemented coprocessors as RAZ/WI. + * ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP. + * TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell. 
+ */ + if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) { + /* VFP coprocessor: cp10 & cp11 [23:20] */ + mask |= (1 << 31) | (1 << 30) | (0xf << 20); + + if (!arm_feature(env, ARM_FEATURE_NEON)) { + /* ASEDIS [31] bit is RAO/WI */ + value |= (1 << 31); + } + + /* + * VFPv3 and upwards with NEON implement 32 double precision + * registers (D0-D31). + */ + if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) { + /* D32DIS [30] is RAO/WI if D16-31 are not implemented. */ + value |= (1 << 30); + } + } + value &= mask; + } + + /* + * For A-profile AArch32 EL3 (but not M-profile secure mode), if NSACR.CP10 + * is 0 then CPACR.{CP11,CP10} ignore writes and read as 0b00. + */ + if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && + !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) { + value &= ~(0xf << 20); + value |= env->cp15.cpacr_el1 & (0xf << 20); + } + + env->cp15.cpacr_el1 = value; +} + +static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + /* + * For A-profile AArch32 EL3 (but not M-profile secure mode), if NSACR.CP10 + * is 0 then CPACR.{CP11,CP10} ignore writes and read as 0b00. + */ + uint64_t value = env->cp15.cpacr_el1; + + if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && + !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) { + value &= ~(0xf << 20); + } + return value; +} + + +static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + /* + * Call cpacr_write() so that we reset with the correct RAO bits set + * for our CPU features. + */ + cpacr_write(env, ri, 0); +} + +static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_feature(env, ARM_FEATURE_V8)) { + /* Check if CPACR accesses are to be trapped to EL2 */ + if (arm_current_el(env) == 1 && arm_is_el2_enabled(env) && + (env->cp15.cptr_el[2] & CPTR_TCPAC)) { + return CP_ACCESS_TRAP_EL2; + /* Check if CPACR accesses are to be trapped to EL3 */ + } else if (arm_current_el(env) < 3 && + (env->cp15.cptr_el[3] & CPTR_TCPAC)) { + return CP_ACCESS_TRAP_EL3; + } + } + + return CP_ACCESS_OK; +} + +static CPAccessResult cptr_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + /* Check if CPTR accesses are set to trap to EL3 */ + if (arm_current_el(env) == 2 && (env->cp15.cptr_el[3] & CPTR_TCPAC)) { + return CP_ACCESS_TRAP_EL3; + } + + return CP_ACCESS_OK; +} + +static const ARMCPRegInfo v6_cp_reginfo[] = { + /* prefetch by MVA in v6, NOP in v7 */ + { .name = "MVA_prefetch", + .cp = 15, .crn = 7, .crm = 13, .opc1 = 0, .opc2 = 1, + .access = PL1_W, .type = ARM_CP_NOP }, + /* + * We need to break the TB after ISB to execute self-modifying code + * correctly and also to take any pending interrupts immediately. + * So use arm_cp_write_ignore() function instead of ARM_CP_NOP flag. 
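+ * (An ARM_CP_NOP register generates no code for the access, so the TB
+ * would simply continue; the normal write path ends the TB after the insn.)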
+ */ + { .name = "ISB", .cp = 15, .crn = 7, .crm = 5, .opc1 = 0, .opc2 = 4, + .access = PL0_W, .type = ARM_CP_NO_RAW, .writefn = arm_cp_write_ignore }, + { .name = "DSB", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 4, + .access = PL0_W, .type = ARM_CP_NOP }, + { .name = "DMB", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 5, + .access = PL0_W, .type = ARM_CP_NOP }, + { .name = "IFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 2, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ifar_s), + offsetof(CPUARMState, cp15.ifar_ns) }, + .resetvalue = 0, }, + /* + * Watchpoint Fault Address Register : should actually only be present + * for 1136, 1176, 11MPCore. + */ + { .name = "WFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 1, + .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0, }, + { .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3, + .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access, + .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1), + .resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read }, + REGINFO_SENTINEL +}; + +/* Definitions for the PMU registers */ +#define PMCRN_MASK 0xf800 +#define PMCRN_SHIFT 11 +#define PMCRLC 0x40 +#define PMCRDP 0x20 +#define PMCRX 0x10 +#define PMCRD 0x8 +#define PMCRC 0x4 +#define PMCRP 0x2 +#define PMCRE 0x1 +/* + * Mask of PMCR bits writeable by guest (not including WO bits like C, P, + * which can be written as 1 to trigger behaviour but which stay RAZ). + */ +#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE) + +#define PMXEVTYPER_P 0x80000000 +#define PMXEVTYPER_U 0x40000000 +#define PMXEVTYPER_NSK 0x20000000 +#define PMXEVTYPER_NSU 0x10000000 +#define PMXEVTYPER_NSH 0x08000000 +#define PMXEVTYPER_M 0x04000000 +#define PMXEVTYPER_MT 0x02000000 +#define PMXEVTYPER_EVTCOUNT 0x0000ffff +#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \ + PMXEVTYPER_NSU | PMXEVTYPER_NSH | \ + PMXEVTYPER_M | PMXEVTYPER_MT | \ + PMXEVTYPER_EVTCOUNT) + +#define PMCCFILTR 0xf8000000 +#define PMCCFILTR_M PMXEVTYPER_M +#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M) + +static inline uint32_t pmu_num_counters(CPUARMState *env) +{ + return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT; +} + +/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */ +static inline uint64_t pmu_counter_mask(CPUARMState *env) +{ + return (1 << 31) | ((1 << pmu_num_counters(env)) - 1); +} + +typedef struct pm_event { + uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */ + /* If the event is supported on this CPU (used to generate PMCEID[01]) */ + bool (*supported)(CPUARMState *); + /* + * Retrieve the current count of the underlying event. The programmed + * counters hold a difference from the return value from this function + */ + uint64_t (*get_count)(CPUARMState *); + /* + * Return how many nanoseconds it will take (at a minimum) for count events + * to occur. A negative value indicates the counter will never overflow, or + * that the counter has otherwise arranged for the overflow bit to be set + * and the PMU interrupt to be raised on overflow. 
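+ * (Callers only re-arm cpu->pmu_timer when this returns a positive value.)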
+ */ + int64_t (*ns_per_count)(uint64_t); +} pm_event; + +static bool event_always_supported(CPUARMState *env) +{ + return true; +} + +static uint64_t swinc_get_count(CPUARMState *env) +{ + /* + * SW_INCR events are written directly to the pmevcntr's by writes to + * PMSWINC, so there is no underlying count maintained by the PMU itself + */ + return 0; +} + +static int64_t swinc_ns_per(uint64_t ignored) +{ + return -1; +} + +/* + * Return the underlying cycle count for the PMU cycle counters. If we're in + * usermode, simply return 0. + */ +static uint64_t cycles_get_count(CPUARMState *env) +{ +#ifndef CONFIG_USER_ONLY + return muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), + ARM_CPU_FREQ, NANOSECONDS_PER_SECOND); +#else + return cpu_get_host_ticks(); +#endif +} + +#ifndef CONFIG_USER_ONLY +static int64_t cycles_ns_per(uint64_t cycles) +{ + return (ARM_CPU_FREQ / NANOSECONDS_PER_SECOND) * cycles; +} + +static bool instructions_supported(CPUARMState *env) +{ + return icount_enabled() == 1; /* Precise instruction counting */ +} + +static uint64_t instructions_get_count(CPUARMState *env) +{ + return (uint64_t)icount_get_raw(); +} + +static int64_t instructions_ns_per(uint64_t icount) +{ + return icount_to_ns((int64_t)icount); +} +#endif + +static bool pmu_8_1_events_supported(CPUARMState *env) +{ + /* For events which are supported in any v8.1 PMU */ + return cpu_isar_feature(any_pmu_8_1, env_archcpu(env)); +} + +static bool pmu_8_4_events_supported(CPUARMState *env) +{ + /* For events which are supported in any v8.1 PMU */ + return cpu_isar_feature(any_pmu_8_4, env_archcpu(env)); +} + +static uint64_t zero_event_get_count(CPUARMState *env) +{ + /* For events which on QEMU never fire, so their count is always zero */ + return 0; +} + +static int64_t zero_event_ns_per(uint64_t cycles) +{ + /* An event which never fires can never overflow */ + return -1; +} + +static const pm_event pm_events[] = { + { .number = 0x000, /* SW_INCR */ + .supported = event_always_supported, + .get_count = swinc_get_count, + .ns_per_count = swinc_ns_per, + }, +#ifndef CONFIG_USER_ONLY + { .number = 0x008, /* INST_RETIRED, Instruction architecturally executed */ + .supported = instructions_supported, + .get_count = instructions_get_count, + .ns_per_count = instructions_ns_per, + }, + { .number = 0x011, /* CPU_CYCLES, Cycle */ + .supported = event_always_supported, + .get_count = cycles_get_count, + .ns_per_count = cycles_ns_per, + }, +#endif + { .number = 0x023, /* STALL_FRONTEND */ + .supported = pmu_8_1_events_supported, + .get_count = zero_event_get_count, + .ns_per_count = zero_event_ns_per, + }, + { .number = 0x024, /* STALL_BACKEND */ + .supported = pmu_8_1_events_supported, + .get_count = zero_event_get_count, + .ns_per_count = zero_event_ns_per, + }, + { .number = 0x03c, /* STALL */ + .supported = pmu_8_4_events_supported, + .get_count = zero_event_get_count, + .ns_per_count = zero_event_ns_per, + }, +}; + +/* + * Note: Before increasing MAX_EVENT_ID beyond 0x3f into the 0x40xx range of + * events (i.e. the statistical profiling extension), this implementation + * should first be updated to something sparse instead of the current + * supported_event_map[] array. + */ +#define MAX_EVENT_ID 0x3c +#define UNSUPPORTED_EVENT UINT16_MAX +static uint16_t supported_event_map[MAX_EVENT_ID + 1]; + +/* + * Called upon CPU initialization to initialize PMCEID[01]_EL0 and build a map + * of ARM event numbers to indices in our pm_events array. + * + * Note: Events in the 0x40XX range are not currently supported. 
+ */ +void pmu_init(ARMCPU *cpu) +{ + unsigned int i; + + /* + * Empty supported_event_map and cpu->pmceid[01] before adding supported + * events to them + */ + for (i = 0; i < ARRAY_SIZE(supported_event_map); i++) { + supported_event_map[i] = UNSUPPORTED_EVENT; + } + cpu->pmceid0 = 0; + cpu->pmceid1 = 0; + + for (i = 0; i < ARRAY_SIZE(pm_events); i++) { + const pm_event *cnt = &pm_events[i]; + assert(cnt->number <= MAX_EVENT_ID); + /* We do not currently support events in the 0x40xx range */ + assert(cnt->number <= 0x3f); + + if (cnt->supported(&cpu->env)) { + supported_event_map[cnt->number] = i; + uint64_t event_mask = 1ULL << (cnt->number & 0x1f); + if (cnt->number & 0x20) { + cpu->pmceid1 |= event_mask; + } else { + cpu->pmceid0 |= event_mask; + } + } + } +} + +/* + * Check at runtime whether a PMU event is supported for the current machine + */ +static bool event_supported(uint16_t number) +{ + if (number > MAX_EVENT_ID) { + return false; + } + return supported_event_map[number] != UNSUPPORTED_EVENT; +} + +static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + /* Performance monitor registers user accessibility is controlled + * by PMUSERENR. MDCR_EL2.TPM and MDCR_EL3.TPM allow configurable + * trapping to EL2 or EL3 for other accesses. + */ + int el = arm_current_el(env); + uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); + + if (el == 0 && !(env->cp15.c9_pmuserenr & 1)) { + return CP_ACCESS_TRAP; + } + if (el < 2 && (mdcr_el2 & MDCR_TPM)) { + return CP_ACCESS_TRAP_EL2; + } + if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) { + return CP_ACCESS_TRAP_EL3; + } + + return CP_ACCESS_OK; +} + +static CPAccessResult pmreg_access_xevcntr(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + /* ER: event counter read trap control */ + if (arm_feature(env, ARM_FEATURE_V8) + && arm_current_el(env) == 0 + && (env->cp15.c9_pmuserenr & (1 << 3)) != 0 + && isread) { + return CP_ACCESS_OK; + } + + return pmreg_access(env, ri, isread); +} + +static CPAccessResult pmreg_access_swinc(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + /* SW: software increment write trap control */ + if (arm_feature(env, ARM_FEATURE_V8) + && arm_current_el(env) == 0 + && (env->cp15.c9_pmuserenr & (1 << 1)) != 0 + && !isread) { + return CP_ACCESS_OK; + } + + return pmreg_access(env, ri, isread); +} + +static CPAccessResult pmreg_access_selr(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + /* ER: event counter read trap control */ + if (arm_feature(env, ARM_FEATURE_V8) + && arm_current_el(env) == 0 + && (env->cp15.c9_pmuserenr & (1 << 3)) != 0) { + return CP_ACCESS_OK; + } + + return pmreg_access(env, ri, isread); +} + +static CPAccessResult pmreg_access_ccntr(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + /* CR: cycle counter read trap control */ + if (arm_feature(env, ARM_FEATURE_V8) + && arm_current_el(env) == 0 + && (env->cp15.c9_pmuserenr & (1 << 2)) != 0 + && isread) { + return CP_ACCESS_OK; + } + + return pmreg_access(env, ri, isread); +} + +/* Returns true if the counter (pass 31 for PMCCNTR) should count events using + * the current EL, security state, and register configuration. 
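+ * That is: the counter is enabled (PMCR.E or MDCR_EL2.HPME, plus its
+ * PMCNTENSET bit), counting is not prohibited by MDCR for this EL and
+ * security state, and the PMEVTYPER/PMCCFILTR filter bits do not exclude
+ * the current EL.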
+ */ +static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter) +{ + uint64_t filter; + bool e, p, u, nsk, nsu, nsh, m; + bool enabled, prohibited, filtered; + bool secure = arm_is_secure(env); + int el = arm_current_el(env); + uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); + uint8_t hpmn = mdcr_el2 & MDCR_HPMN; + + if (!arm_feature(env, ARM_FEATURE_PMU)) { + return false; + } + + if (!arm_feature(env, ARM_FEATURE_EL2) || + (counter < hpmn || counter == 31)) { + e = env->cp15.c9_pmcr & PMCRE; + } else { + e = mdcr_el2 & MDCR_HPME; + } + enabled = e && (env->cp15.c9_pmcnten & (1 << counter)); + + if (!secure) { + if (el == 2 && (counter < hpmn || counter == 31)) { + prohibited = mdcr_el2 & MDCR_HPMD; + } else { + prohibited = false; + } + } else { + prohibited = arm_feature(env, ARM_FEATURE_EL3) && + !(env->cp15.mdcr_el3 & MDCR_SPME); + } + + if (prohibited && counter == 31) { + prohibited = env->cp15.c9_pmcr & PMCRDP; + } + + if (counter == 31) { + filter = env->cp15.pmccfiltr_el0; + } else { + filter = env->cp15.c14_pmevtyper[counter]; + } + + p = filter & PMXEVTYPER_P; + u = filter & PMXEVTYPER_U; + nsk = arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_NSK); + nsu = arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_NSU); + nsh = arm_feature(env, ARM_FEATURE_EL2) && (filter & PMXEVTYPER_NSH); + m = arm_el_is_aa64(env, 1) && + arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_M); + + if (el == 0) { + filtered = secure ? u : u != nsu; + } else if (el == 1) { + filtered = secure ? p : p != nsk; + } else if (el == 2) { + filtered = !nsh; + } else { /* EL3 */ + filtered = m != p; + } + + if (counter != 31) { + /* + * If not checking PMCCNTR, ensure the counter is setup to an event we + * support + */ + uint16_t event = filter & PMXEVTYPER_EVTCOUNT; + if (!event_supported(event)) { + return false; + } + } + + return enabled && !prohibited && !filtered; +} + +static void pmu_update_irq(CPUARMState *env) +{ + ARMCPU *cpu = env_archcpu(env); + qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) && + (env->cp15.c9_pminten & env->cp15.c9_pmovsr)); +} + +/* + * Ensure c15_ccnt is the guest-visible count so that operations such as + * enabling/disabling the counter or filtering, modifying the count itself, + * etc. can be done logically. This is essentially a no-op if the counter is + * not enabled at the time of the call. + */ +static void pmccntr_op_start(CPUARMState *env) +{ + uint64_t cycles = cycles_get_count(env); + + if (pmu_counter_enabled(env, 31)) { + uint64_t eff_cycles = cycles; + if (env->cp15.c9_pmcr & PMCRD) { + /* Increment once every 64 processor clock cycles */ + eff_cycles /= 64; + } + + uint64_t new_pmccntr = eff_cycles - env->cp15.c15_ccnt_delta; + + uint64_t overflow_mask = env->cp15.c9_pmcr & PMCRLC ? \ + 1ull << 63 : 1ull << 31; + if (env->cp15.c15_ccnt & ~new_pmccntr & overflow_mask) { + env->cp15.c9_pmovsr |= (1 << 31); + pmu_update_irq(env); + } + + env->cp15.c15_ccnt = new_pmccntr; + } + env->cp15.c15_ccnt_delta = cycles; +} + +/* + * If PMCCNTR is enabled, recalculate the delta between the clock and the + * guest-visible count. A call to pmccntr_op_finish should follow every call to + * pmccntr_op_start. 
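+ * Outside of user-only builds this also re-arms cpu->pmu_timer for the
+ * point at which PMCCNTR is next expected to overflow.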
+ */ +static void pmccntr_op_finish(CPUARMState *env) +{ + if (pmu_counter_enabled(env, 31)) { +#ifndef CONFIG_USER_ONLY + /* Calculate when the counter will next overflow */ + uint64_t remaining_cycles = -env->cp15.c15_ccnt; + if (!(env->cp15.c9_pmcr & PMCRLC)) { + remaining_cycles = (uint32_t)remaining_cycles; + } + int64_t overflow_in = cycles_ns_per(remaining_cycles); + + if (overflow_in > 0) { + int64_t overflow_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + + overflow_in; + ARMCPU *cpu = env_archcpu(env); + timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at); + } +#endif + + uint64_t prev_cycles = env->cp15.c15_ccnt_delta; + if (env->cp15.c9_pmcr & PMCRD) { + /* Increment once every 64 processor clock cycles */ + prev_cycles /= 64; + } + env->cp15.c15_ccnt_delta = prev_cycles - env->cp15.c15_ccnt; + } +} + +static void pmevcntr_op_start(CPUARMState *env, uint8_t counter) +{ + + uint16_t event = env->cp15.c14_pmevtyper[counter] & PMXEVTYPER_EVTCOUNT; + uint64_t count = 0; + if (event_supported(event)) { + uint16_t event_idx = supported_event_map[event]; + count = pm_events[event_idx].get_count(env); + } + + if (pmu_counter_enabled(env, counter)) { + uint32_t new_pmevcntr = count - env->cp15.c14_pmevcntr_delta[counter]; + + if (env->cp15.c14_pmevcntr[counter] & ~new_pmevcntr & INT32_MIN) { + env->cp15.c9_pmovsr |= (1 << counter); + pmu_update_irq(env); + } + env->cp15.c14_pmevcntr[counter] = new_pmevcntr; + } + env->cp15.c14_pmevcntr_delta[counter] = count; +} + +static void pmevcntr_op_finish(CPUARMState *env, uint8_t counter) +{ + if (pmu_counter_enabled(env, counter)) { +#ifndef CONFIG_USER_ONLY + uint16_t event = env->cp15.c14_pmevtyper[counter] & PMXEVTYPER_EVTCOUNT; + uint16_t event_idx = supported_event_map[event]; + uint64_t delta = UINT32_MAX - + (uint32_t)env->cp15.c14_pmevcntr[counter] + 1; + int64_t overflow_in = pm_events[event_idx].ns_per_count(delta); + + if (overflow_in > 0) { + int64_t overflow_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + + overflow_in; + ARMCPU *cpu = env_archcpu(env); + timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at); + } +#endif + + env->cp15.c14_pmevcntr_delta[counter] -= + env->cp15.c14_pmevcntr[counter]; + } +} + +void pmu_op_start(CPUARMState *env) +{ + unsigned int i; + pmccntr_op_start(env); + for (i = 0; i < pmu_num_counters(env); i++) { + pmevcntr_op_start(env, i); + } +} + +void pmu_op_finish(CPUARMState *env) +{ + unsigned int i; + pmccntr_op_finish(env); + for (i = 0; i < pmu_num_counters(env); i++) { + pmevcntr_op_finish(env, i); + } +} + +void pmu_pre_el_change(ARMCPU *cpu, void *ignored) +{ + pmu_op_start(&cpu->env); +} + +void pmu_post_el_change(ARMCPU *cpu, void *ignored) +{ + pmu_op_finish(&cpu->env); +} + +void arm_pmu_timer_cb(void *opaque) +{ + ARMCPU *cpu = opaque; + + /* + * Update all the counter values based on the current underlying counts, + * triggering interrupts to be raised, if necessary. pmu_op_finish() also + * has the effect of setting the cpu->pmu_timer to the next earliest time a + * counter may expire. 
+ */ + pmu_op_start(&cpu->env); + pmu_op_finish(&cpu->env); +} + +static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + pmu_op_start(env); + + if (value & PMCRC) { + /* The counter has been reset */ + env->cp15.c15_ccnt = 0; + } + + if (value & PMCRP) { + unsigned int i; + for (i = 0; i < pmu_num_counters(env); i++) { + env->cp15.c14_pmevcntr[i] = 0; + } + } + + env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK; + env->cp15.c9_pmcr |= (value & PMCR_WRITEABLE_MASK); + + pmu_op_finish(env); +} + +static void pmswinc_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + unsigned int i; + for (i = 0; i < pmu_num_counters(env); i++) { + /* Increment a counter's count iff: */ + if ((value & (1 << i)) && /* counter's bit is set */ + /* counter is enabled and not filtered */ + pmu_counter_enabled(env, i) && + /* counter is SW_INCR */ + (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) { + pmevcntr_op_start(env, i); + + /* + * Detect if this write causes an overflow since we can't predict + * PMSWINC overflows like we can for other events + */ + uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1; + + if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) { + env->cp15.c9_pmovsr |= (1 << i); + pmu_update_irq(env); + } + + env->cp15.c14_pmevcntr[i] = new_pmswinc; + + pmevcntr_op_finish(env, i); + } + } +} + +static uint64_t pmccntr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + uint64_t ret; + pmccntr_op_start(env); + ret = env->cp15.c15_ccnt; + pmccntr_op_finish(env); + return ret; +} + +static void pmselr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* The value of PMSELR.SEL affects the behavior of PMXEVTYPER and + * PMXEVCNTR. We allow [0..31] to be written to PMSELR here; in the + * meanwhile, we check PMSELR.SEL when PMXEVTYPER and PMXEVCNTR are + * accessed. 
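+ * For example, a write of 0x25 leaves PMSELR.SEL == 5, and selecting 0x1f
+ * makes PMXEVTYPER access PMCCFILTR rather than an event type register
+ * (see pmxevtyper_read/write below).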
+ */ + env->cp15.c9_pmselr = value & 0x1f; +} + +static void pmccntr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + pmccntr_op_start(env); + env->cp15.c15_ccnt = value; + pmccntr_op_finish(env); +} + +static void pmccntr_write32(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + uint64_t cur_val = pmccntr_read(env, NULL); + + pmccntr_write(env, ri, deposit64(cur_val, 0, 32, value)); +} + +static void pmccfiltr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + pmccntr_op_start(env); + env->cp15.pmccfiltr_el0 = value & PMCCFILTR_EL0; + pmccntr_op_finish(env); +} + +static void pmccfiltr_write_a32(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + pmccntr_op_start(env); + /* M is not accessible from AArch32 */ + env->cp15.pmccfiltr_el0 = (env->cp15.pmccfiltr_el0 & PMCCFILTR_M) | + (value & PMCCFILTR); + pmccntr_op_finish(env); +} + +static uint64_t pmccfiltr_read_a32(CPUARMState *env, const ARMCPRegInfo *ri) +{ + /* M is not visible in AArch32 */ + return env->cp15.pmccfiltr_el0 & PMCCFILTR; +} + +static void pmcntenset_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + value &= pmu_counter_mask(env); + env->cp15.c9_pmcnten |= value; +} + +static void pmcntenclr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + value &= pmu_counter_mask(env); + env->cp15.c9_pmcnten &= ~value; +} + +static void pmovsr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + value &= pmu_counter_mask(env); + env->cp15.c9_pmovsr &= ~value; + pmu_update_irq(env); +} + +static void pmovsset_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + value &= pmu_counter_mask(env); + env->cp15.c9_pmovsr |= value; + pmu_update_irq(env); +} + +static void pmevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value, const uint8_t counter) +{ + if (counter == 31) { + pmccfiltr_write(env, ri, value); + } else if (counter < pmu_num_counters(env)) { + pmevcntr_op_start(env, counter); + + /* + * If this counter's event type is changing, store the current + * underlying count for the new type in c14_pmevcntr_delta[counter] so + * pmevcntr_op_finish has the correct baseline when it converts back to + * a delta. + */ + uint16_t old_event = env->cp15.c14_pmevtyper[counter] & + PMXEVTYPER_EVTCOUNT; + uint16_t new_event = value & PMXEVTYPER_EVTCOUNT; + if (old_event != new_event) { + uint64_t count = 0; + if (event_supported(new_event)) { + uint16_t event_idx = supported_event_map[new_event]; + count = pm_events[event_idx].get_count(env); + } + env->cp15.c14_pmevcntr_delta[counter] = count; + } + + env->cp15.c14_pmevtyper[counter] = value & PMXEVTYPER_MASK; + pmevcntr_op_finish(env, counter); + } + /* Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when + * PMSELR value is equal to or greater than the number of implemented + * counters, but not equal to 0x1f. We opt to behave as a RAZ/WI. + */ +} + +static uint64_t pmevtyper_read(CPUARMState *env, const ARMCPRegInfo *ri, + const uint8_t counter) +{ + if (counter == 31) { + return env->cp15.pmccfiltr_el0; + } else if (counter < pmu_num_counters(env)) { + return env->cp15.c14_pmevtyper[counter]; + } else { + /* + * We opt to behave as a RAZ/WI when attempts to access PMXEVTYPER + * are CONSTRAINED UNPREDICTABLE. See comments in pmevtyper_write(). 
+ */ + return 0; + } +} + +static void pmevtyper_writefn(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); + pmevtyper_write(env, ri, value, counter); +} + +static void pmevtyper_rawwrite(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); + env->cp15.c14_pmevtyper[counter] = value; + + /* + * pmevtyper_rawwrite is called between a pair of pmu_op_start and + * pmu_op_finish calls when loading saved state for a migration. Because + * we're potentially updating the type of event here, the value written to + * c14_pmevcntr_delta by the preceeding pmu_op_start call may be for a + * different counter type. Therefore, we need to set this value to the + * current count for the counter type we're writing so that pmu_op_finish + * has the correct count for its calculation. + */ + uint16_t event = value & PMXEVTYPER_EVTCOUNT; + if (event_supported(event)) { + uint16_t event_idx = supported_event_map[event]; + env->cp15.c14_pmevcntr_delta[counter] = + pm_events[event_idx].get_count(env); + } +} + +static uint64_t pmevtyper_readfn(CPUARMState *env, const ARMCPRegInfo *ri) +{ + uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); + return pmevtyper_read(env, ri, counter); +} + +static void pmxevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + pmevtyper_write(env, ri, value, env->cp15.c9_pmselr & 31); +} + +static uint64_t pmxevtyper_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return pmevtyper_read(env, ri, env->cp15.c9_pmselr & 31); +} + +static void pmevcntr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value, uint8_t counter) +{ + if (counter < pmu_num_counters(env)) { + pmevcntr_op_start(env, counter); + env->cp15.c14_pmevcntr[counter] = value; + pmevcntr_op_finish(env, counter); + } + /* + * We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR + * are CONSTRAINED UNPREDICTABLE. + */ +} + +static uint64_t pmevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri, + uint8_t counter) +{ + if (counter < pmu_num_counters(env)) { + uint64_t ret; + pmevcntr_op_start(env, counter); + ret = env->cp15.c14_pmevcntr[counter]; + pmevcntr_op_finish(env, counter); + return ret; + } else { + /* We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR + * are CONSTRAINED UNPREDICTABLE. 
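+ * (For instance, on a core with four event counters, a read of PMXEVCNTR
+ * with PMSELR.SEL == 20 returns 0 and a write is ignored.)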
*/ + return 0; + } +} + +static void pmevcntr_writefn(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); + pmevcntr_write(env, ri, value, counter); +} + +static uint64_t pmevcntr_readfn(CPUARMState *env, const ARMCPRegInfo *ri) +{ + uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); + return pmevcntr_read(env, ri, counter); +} + +static void pmevcntr_rawwrite(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); + assert(counter < pmu_num_counters(env)); + env->cp15.c14_pmevcntr[counter] = value; + pmevcntr_write(env, ri, value, counter); +} + +static uint64_t pmevcntr_rawread(CPUARMState *env, const ARMCPRegInfo *ri) +{ + uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); + assert(counter < pmu_num_counters(env)); + return env->cp15.c14_pmevcntr[counter]; +} + +static void pmxevcntr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + pmevcntr_write(env, ri, value, env->cp15.c9_pmselr & 31); +} + +static uint64_t pmxevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return pmevcntr_read(env, ri, env->cp15.c9_pmselr & 31); +} + +static void pmuserenr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + if (arm_feature(env, ARM_FEATURE_V8)) { + env->cp15.c9_pmuserenr = value & 0xf; + } else { + env->cp15.c9_pmuserenr = value & 1; + } +} + +static void pmintenset_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* We have no event counters so only the C bit can be changed */ + value &= pmu_counter_mask(env); + env->cp15.c9_pminten |= value; + pmu_update_irq(env); +} + +static void pmintenclr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + value &= pmu_counter_mask(env); + env->cp15.c9_pminten &= ~value; + pmu_update_irq(env); +} + +static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* Note that even though the AArch64 view of this register has bits + * [10:0] all RES0 we can only mask the bottom 5, to comply with the + * architectural requirements for bits which are RES0 only in some + * contexts. (ARMv8 would permit us to do no masking at all, but ARMv7 + * requires the bottom five bits to be RAZ/WI because they're UNK/SBZP.) + */ + raw_write(env, ri, value & ~0x1FULL); +} + +static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) +{ + /* Begin with base v8.0 state. */ + uint32_t valid_mask = 0x3fff; + ARMCPU *cpu = env_archcpu(env); + + if (ri->state == ARM_CP_STATE_AA64) { + if (arm_feature(env, ARM_FEATURE_AARCH64) && + !cpu_isar_feature(aa64_aa32_el1, cpu)) { + value |= SCR_FW | SCR_AW; /* these two bits are RES1. */ + } + valid_mask &= ~SCR_NET; + + if (cpu_isar_feature(aa64_lor, cpu)) { + valid_mask |= SCR_TLOR; + } + if (cpu_isar_feature(aa64_pauth, cpu)) { + valid_mask |= SCR_API | SCR_APK; + } + if (cpu_isar_feature(aa64_sel2, cpu)) { + valid_mask |= SCR_EEL2; + } + if (cpu_isar_feature(aa64_mte, cpu)) { + valid_mask |= SCR_ATA; + } + } else { + valid_mask &= ~(SCR_RW | SCR_ST); + } + + if (!arm_feature(env, ARM_FEATURE_EL2)) { + valid_mask &= ~SCR_HCE; + + /* On ARMv7, SMD (or SCD as it is called in v7) is only + * supported if EL2 exists. The bit is UNK/SBZP when + * EL2 is unavailable. In QEMU ARMv7, we force it to always zero + * when EL2 is unavailable. + * On ARMv8, this bit is always available. 
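+ * For example, on a v7-only CPU with no EL2, an AArch32 write of all-ones
+ * to SCR is stored with SCR_RW, SCR_ST, SCR_HCE and SCR_SMD all masked out
+ * by the valid_mask adjustments around this comment.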
+ */ + if (arm_feature(env, ARM_FEATURE_V7) && + !arm_feature(env, ARM_FEATURE_V8)) { + valid_mask &= ~SCR_SMD; + } + } + + /* Clear all-context RES0 bits. */ + value &= valid_mask; + raw_write(env, ri, value); +} + +static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + /* + * scr_write will set the RES1 bits on an AArch64-only CPU. + * The reset value will be 0x30 on an AArch64-only CPU and 0 otherwise. + */ + scr_write(env, ri, 0); +} + +static CPAccessResult access_aa64_tid2(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID2)) { + return CP_ACCESS_TRAP_EL2; + } + + return CP_ACCESS_OK; +} + +static uint64_t ccsidr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + ARMCPU *cpu = env_archcpu(env); + + /* Acquire the CSSELR index from the bank corresponding to the CCSIDR + * bank + */ + uint32_t index = A32_BANKED_REG_GET(env, csselr, + ri->secure & ARM_CP_SECSTATE_S); + + return cpu->ccsidr[index]; +} + +static void csselr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + raw_write(env, ri, value & 0xf); +} + +static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + CPUState *cs = env_cpu(env); + bool el1 = arm_current_el(env) == 1; + uint64_t hcr_el2 = el1 ? arm_hcr_el2_eff(env) : 0; + uint64_t ret = 0; + + if (hcr_el2 & HCR_IMO) { + if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) { + ret |= CPSR_I; + } + } else { + if (cs->interrupt_request & CPU_INTERRUPT_HARD) { + ret |= CPSR_I; + } + } + + if (hcr_el2 & HCR_FMO) { + if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) { + ret |= CPSR_F; + } + } else { + if (cs->interrupt_request & CPU_INTERRUPT_FIQ) { + ret |= CPSR_F; + } + } + + /* External aborts are not possible in QEMU so A bit is always clear */ + return ret; +} + +static CPAccessResult access_aa64_tid1(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID1)) { + return CP_ACCESS_TRAP_EL2; + } + + return CP_ACCESS_OK; +} + +static CPAccessResult access_aa32_tid1(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_feature(env, ARM_FEATURE_V8)) { + return access_aa64_tid1(env, ri, isread); + } + + return CP_ACCESS_OK; +} + +static const ARMCPRegInfo v7_cp_reginfo[] = { + /* the old v6 WFI, UNPREDICTABLE in v7 but we choose to NOP */ + { .name = "NOP", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4, + .access = PL1_W, .type = ARM_CP_NOP }, + /* Performance monitors are implementation defined in v7, + * but with an ARM recommended set of registers, which we + * follow. + * + * Performance registers fall into three categories: + * (a) always UNDEF in PL0, RW in PL1 (PMINTENSET, PMINTENCLR) + * (b) RO in PL0 (ie UNDEF on write), RW in PL1 (PMUSERENR) + * (c) UNDEF in PL0 if PMUSERENR.EN==0, otherwise accessible (all others) + * For the cases controlled by PMUSERENR we must set .access to PL0_RW + * or PL0_RO as appropriate and then check PMUSERENR in the helper fn. 
+ */ + { .name = "PMCNTENSET", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 1, + .access = PL0_RW, .type = ARM_CP_ALIAS, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten), + .writefn = pmcntenset_write, + .accessfn = pmreg_access, + .raw_writefn = raw_write }, + { .name = "PMCNTENSET_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 1, + .access = PL0_RW, .accessfn = pmreg_access, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten), .resetvalue = 0, + .writefn = pmcntenset_write, .raw_writefn = raw_write }, + { .name = "PMCNTENCLR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 2, + .access = PL0_RW, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten), + .accessfn = pmreg_access, + .writefn = pmcntenclr_write, + .type = ARM_CP_ALIAS }, + { .name = "PMCNTENCLR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 2, + .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten), + .writefn = pmcntenclr_write }, + { .name = "PMOVSR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 3, + .access = PL0_RW, .type = ARM_CP_IO, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr), + .accessfn = pmreg_access, + .writefn = pmovsr_write, + .raw_writefn = raw_write }, + { .name = "PMOVSCLR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 3, + .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_ALIAS | ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr), + .writefn = pmovsr_write, + .raw_writefn = raw_write }, + { .name = "PMSWINC", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 4, + .access = PL0_W, .accessfn = pmreg_access_swinc, + .type = ARM_CP_NO_RAW | ARM_CP_IO, + .writefn = pmswinc_write }, + { .name = "PMSWINC_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 4, + .access = PL0_W, .accessfn = pmreg_access_swinc, + .type = ARM_CP_NO_RAW | ARM_CP_IO, + .writefn = pmswinc_write }, + { .name = "PMSELR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 5, + .access = PL0_RW, .type = ARM_CP_ALIAS, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmselr), + .accessfn = pmreg_access_selr, .writefn = pmselr_write, + .raw_writefn = raw_write}, + { .name = "PMSELR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 5, + .access = PL0_RW, .accessfn = pmreg_access_selr, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pmselr), + .writefn = pmselr_write, .raw_writefn = raw_write, }, + { .name = "PMCCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 0, + .access = PL0_RW, .resetvalue = 0, .type = ARM_CP_ALIAS | ARM_CP_IO, + .readfn = pmccntr_read, .writefn = pmccntr_write32, + .accessfn = pmreg_access_ccntr }, + { .name = "PMCCNTR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 0, + .access = PL0_RW, .accessfn = pmreg_access_ccntr, + .type = ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.c15_ccnt), + .readfn = pmccntr_read, .writefn = pmccntr_write, + .raw_readfn = raw_read, .raw_writefn = raw_write, }, + { .name = "PMCCFILTR", .cp = 15, .opc1 = 0, .crn = 14, .crm = 15, .opc2 = 7, + .writefn = pmccfiltr_write_a32, .readfn = pmccfiltr_read_a32, + .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_ALIAS | ARM_CP_IO, + .resetvalue = 0, }, + { .name = "PMCCFILTR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 15, 
.opc2 = 7, + .writefn = pmccfiltr_write, .raw_writefn = raw_write, + .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.pmccfiltr_el0), + .resetvalue = 0, }, + { .name = "PMXEVTYPER", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 1, + .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = pmreg_access, + .writefn = pmxevtyper_write, .readfn = pmxevtyper_read }, + { .name = "PMXEVTYPER_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 1, + .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = pmreg_access, + .writefn = pmxevtyper_write, .readfn = pmxevtyper_read }, + { .name = "PMXEVCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 2, + .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = pmreg_access_xevcntr, + .writefn = pmxevcntr_write, .readfn = pmxevcntr_read }, + { .name = "PMXEVCNTR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 2, + .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = pmreg_access_xevcntr, + .writefn = pmxevcntr_write, .readfn = pmxevcntr_read }, + { .name = "PMUSERENR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 0, + .access = PL0_R | PL1_RW, .accessfn = access_tpm, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmuserenr), + .resetvalue = 0, + .writefn = pmuserenr_write, .raw_writefn = raw_write }, + { .name = "PMUSERENR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 14, .opc2 = 0, + .access = PL0_R | PL1_RW, .accessfn = access_tpm, .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pmuserenr), + .resetvalue = 0, + .writefn = pmuserenr_write, .raw_writefn = raw_write }, + { .name = "PMINTENSET", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tpm, + .type = ARM_CP_ALIAS | ARM_CP_IO, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pminten), + .resetvalue = 0, + .writefn = pmintenset_write, .raw_writefn = raw_write }, + { .name = "PMINTENSET_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tpm, + .type = ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pminten), + .writefn = pmintenset_write, .raw_writefn = raw_write, + .resetvalue = 0x0 }, + { .name = "PMINTENCLR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 2, + .access = PL1_RW, .accessfn = access_tpm, + .type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pminten), + .writefn = pmintenclr_write, }, + { .name = "PMINTENCLR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 2, + .access = PL1_RW, .accessfn = access_tpm, + .type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pminten), + .writefn = pmintenclr_write }, + { .name = "CCSIDR", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 0, + .access = PL1_R, + .accessfn = access_aa64_tid2, + .readfn = ccsidr_read, .type = ARM_CP_NO_RAW }, + { .name = "CSSELR", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 2, .opc2 = 0, + .access = PL1_RW, + .accessfn = access_aa64_tid2, + .writefn = csselr_write, .resetvalue = 0, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s), + offsetof(CPUARMState, cp15.csselr_ns) } }, + /* Auxiliary ID register: this actually has an IMPDEF value but for now + * 
just RAZ for all cores: + */ + { .name = "AIDR", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid1, + .resetvalue = 0 }, + /* Auxiliary fault status registers: these also are IMPDEF, and we + * choose to RAZ/WI for all cores. + */ + { .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "AFSR1_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_CONST, .resetvalue = 0 }, + /* MAIR can just read-as-written because we don't implement caches + * and so don't need to care about memory attributes. + */ + { .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .fieldoffset = offsetof(CPUARMState, cp15.mair_el[1]), + .resetvalue = 0 }, + { .name = "MAIR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 10, .crm = 2, .opc2 = 0, + .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[3]), + .resetvalue = 0 }, + /* For non-long-descriptor page tables these are PRRR and NMRR; + * regardless they still act as reads-as-written for QEMU. + */ + /* MAIR0/1 are defined separately from their 64-bit counterpart which + * allows them to assign the correct fieldoffset based on the endianness + * handled in the field definitions. + */ + { .name = "MAIR0", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.mair0_s), + offsetof(CPUARMState, cp15.mair0_ns) }, + .resetfn = arm_cp_reset_ignore }, + { .name = "MAIR1", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.mair1_s), + offsetof(CPUARMState, cp15.mair1_ns) }, + .resetfn = arm_cp_reset_ignore }, + { .name = "ISR_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL1_R, .readfn = isr_read }, + /* 32 bit ITLB invalidates */ + { .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiall_write }, + { .name = "ITLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimva_write }, + { .name = "ITLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 2, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiasid_write }, + /* 32 bit DTLB invalidates */ + { .name = "DTLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiall_write }, + { .name = "DTLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimva_write }, + { .name = "DTLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 2, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiasid_write }, + /* 32 bit TLB invalidates */ + { .name = "TLBIALL", .cp = 15, 
.opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiall_write }, + { .name = "TLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimva_write }, + { .name = "TLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiasid_write }, + { .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimvaa_write }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo v7mp_cp_reginfo[] = { + /* 32 bit TLB invalidates, Inner Shareable */ + { .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiall_is_write }, + { .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimva_is_write }, + { .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbiasid_is_write }, + { .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimvaa_is_write }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo pmovsset_cp_reginfo[] = { + /* PMOVSSET is not implemented in v7 before v7ve */ + { .name = "PMOVSSET", .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 3, + .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_ALIAS | ARM_CP_IO, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr), + .writefn = pmovsset_write, + .raw_writefn = raw_write }, + { .name = "PMOVSSET_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 14, .opc2 = 3, + .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_ALIAS | ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr), + .writefn = pmovsset_write, + .raw_writefn = raw_write }, + REGINFO_SENTINEL +}; + +static void teecr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + value &= 1; + env->teecr = value; +} + +static CPAccessResult teehbr_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 0 && (env->teecr & 1)) { + return CP_ACCESS_TRAP; + } + return CP_ACCESS_OK; +} + +static const ARMCPRegInfo t2ee_cp_reginfo[] = { + { .name = "TEECR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 6, .opc2 = 0, + .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, teecr), + .resetvalue = 0, + .writefn = teecr_write }, + { .name = "TEEHBR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 6, .opc2 = 0, + .access = PL0_RW, .fieldoffset = offsetof(CPUARMState, teehbr), + .accessfn = teehbr_access, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo v6k_cp_reginfo[] = { + { .name = "TPIDR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 2, .crn = 13, .crm = 0, + .access = PL0_RW, + .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[0]), .resetvalue = 0 }, + { .name = "TPIDRURW", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 2, + .access = PL0_RW, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrurw_s), + offsetoflow32(CPUARMState, cp15.tpidrurw_ns) }, + .resetfn = arm_cp_reset_ignore }, + { .name = 
"TPIDRRO_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 3, .crn = 13, .crm = 0, + .access = PL0_R|PL1_W, + .fieldoffset = offsetof(CPUARMState, cp15.tpidrro_el[0]), + .resetvalue = 0}, + { .name = "TPIDRURO", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 3, + .access = PL0_R|PL1_W, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidruro_s), + offsetoflow32(CPUARMState, cp15.tpidruro_ns) }, + .resetfn = arm_cp_reset_ignore }, + { .name = "TPIDR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .opc2 = 4, .crn = 13, .crm = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[1]), .resetvalue = 0 }, + { .name = "TPIDRPRW", .opc1 = 0, .cp = 15, .crn = 13, .crm = 0, .opc2 = 4, + .access = PL1_RW, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrprw_s), + offsetoflow32(CPUARMState, cp15.tpidrprw_ns) }, + .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +#ifndef CONFIG_USER_ONLY + +static CPAccessResult gt_cntfrq_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + /* + * CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero. + * Writable only at the highest implemented exception level. + */ + int el = arm_current_el(env); + uint64_t hcr; + uint32_t cntkctl; + + switch (el) { + case 0: + hcr = arm_hcr_el2_eff(env); + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + cntkctl = env->cp15.cnthctl_el2; + } else { + cntkctl = env->cp15.c14_cntkctl; + } + if (!extract32(cntkctl, 0, 2)) { + return CP_ACCESS_TRAP; + } + break; + case 1: + if (!isread && ri->state == ARM_CP_STATE_AA32 && + arm_is_secure_below_el3(env)) { + /* Accesses from 32-bit Secure EL1 UNDEF (*not* trap to EL3!) */ + return CP_ACCESS_TRAP_UNCATEGORIZED; + } + break; + case 2: + case 3: + break; + } + + if (!isread && el < arm_highest_el(env)) { + return CP_ACCESS_TRAP_UNCATEGORIZED; + } + + return CP_ACCESS_OK; +} + +static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx, + bool isread) +{ + unsigned int cur_el = arm_current_el(env); + bool has_el2 = arm_is_el2_enabled(env); + uint64_t hcr = arm_hcr_el2_eff(env); + + switch (cur_el) { + case 0: + /* If HCR_EL2. == '11': check CNTHCTL_EL2.EL0[PV]CTEN. */ + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + return (extract32(env->cp15.cnthctl_el2, timeridx, 1) + ? CP_ACCESS_OK : CP_ACCESS_TRAP_EL2); + } + + /* CNT[PV]CT: not visible from PL0 if EL0[PV]CTEN is zero */ + if (!extract32(env->cp15.c14_cntkctl, timeridx, 1)) { + return CP_ACCESS_TRAP; + } + + /* If HCR_EL2. == '10': check CNTHCTL_EL2.EL1PCTEN. */ + if (hcr & HCR_E2H) { + if (timeridx == GTIMER_PHYS && + !extract32(env->cp15.cnthctl_el2, 10, 1)) { + return CP_ACCESS_TRAP_EL2; + } + } else { + /* If HCR_EL2. == 0: check CNTHCTL_EL2.EL1PCEN. */ + if (has_el2 && timeridx == GTIMER_PHYS && + !extract32(env->cp15.cnthctl_el2, 1, 1)) { + return CP_ACCESS_TRAP_EL2; + } + } + break; + + case 1: + /* Check CNTHCTL_EL2.EL1PCTEN, which changes location based on E2H. */ + if (has_el2 && timeridx == GTIMER_PHYS && + (hcr & HCR_E2H + ? 
!extract32(env->cp15.cnthctl_el2, 10, 1) + : !extract32(env->cp15.cnthctl_el2, 0, 1))) { + return CP_ACCESS_TRAP_EL2; + } + break; + } + return CP_ACCESS_OK; +} + +static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx, + bool isread) +{ + unsigned int cur_el = arm_current_el(env); + bool has_el2 = arm_is_el2_enabled(env); + uint64_t hcr = arm_hcr_el2_eff(env); + + switch (cur_el) { + case 0: + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + /* If HCR_EL2. == '11': check CNTHCTL_EL2.EL0[PV]TEN. */ + return (extract32(env->cp15.cnthctl_el2, 9 - timeridx, 1) + ? CP_ACCESS_OK : CP_ACCESS_TRAP_EL2); + } + + /* + * CNT[PV]_CVAL, CNT[PV]_CTL, CNT[PV]_TVAL: not visible from + * EL0 if EL0[PV]TEN is zero. + */ + if (!extract32(env->cp15.c14_cntkctl, 9 - timeridx, 1)) { + return CP_ACCESS_TRAP; + } + /* fall through */ + + case 1: + if (has_el2 && timeridx == GTIMER_PHYS) { + if (hcr & HCR_E2H) { + /* If HCR_EL2. == '10': check CNTHCTL_EL2.EL1PTEN. */ + if (!extract32(env->cp15.cnthctl_el2, 11, 1)) { + return CP_ACCESS_TRAP_EL2; + } + } else { + /* If HCR_EL2. == 0: check CNTHCTL_EL2.EL1PCEN. */ + if (!extract32(env->cp15.cnthctl_el2, 1, 1)) { + return CP_ACCESS_TRAP_EL2; + } + } + } + break; + } + return CP_ACCESS_OK; +} + +static CPAccessResult gt_pct_access(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + return gt_counter_access(env, GTIMER_PHYS, isread); +} + +static CPAccessResult gt_vct_access(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + return gt_counter_access(env, GTIMER_VIRT, isread); +} + +static CPAccessResult gt_ptimer_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + return gt_timer_access(env, GTIMER_PHYS, isread); +} + +static CPAccessResult gt_vtimer_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + return gt_timer_access(env, GTIMER_VIRT, isread); +} + +static CPAccessResult gt_stimer_access(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + /* + * The AArch64 register view of the secure physical timer is + * always accessible from EL3, and configurably accessible from + * Secure EL1. + */ + switch (arm_current_el(env)) { + case 1: + if (!arm_is_secure(env)) { + return CP_ACCESS_TRAP; + } + if (!(env->cp15.scr_el3 & SCR_ST)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; + case 0: + case 2: + return CP_ACCESS_TRAP; + case 3: + return CP_ACCESS_OK; + default: + g_assert_not_reached(); + } +} + +static uint64_t gt_get_countervalue(CPUARMState *env) +{ + ARMCPU *cpu = env_archcpu(env); + + return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / gt_cntfrq_period_ns(cpu); +} + +static void gt_recalc_timer(ARMCPU *cpu, int timeridx) +{ + ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx]; + + if (gt->ctl & 1) { + /* + * Timer enabled: calculate and set current ISTATUS, irq, and + * reset timer to when ISTATUS next has to change + */ + uint64_t offset = timeridx == GTIMER_VIRT ? 
+ cpu->env.cp15.cntvoff_el2 : 0; + uint64_t count = gt_get_countervalue(&cpu->env); + /* Note that this must be unsigned 64 bit arithmetic: */ + int istatus = count - offset >= gt->cval; + uint64_t nexttick; + int irqstate; + + gt->ctl = deposit32(gt->ctl, 2, 1, istatus); + + irqstate = (istatus && !(gt->ctl & 2)); + qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate); + + if (istatus) { + /* Next transition is when count rolls back over to zero */ + nexttick = UINT64_MAX; + } else { + /* Next transition is when we hit cval */ + nexttick = gt->cval + offset; + } + /* + * Note that the desired next expiry time might be beyond the + * signed-64-bit range of a QEMUTimer -- in this case we just + * set the timer for as far in the future as possible. When the + * timer expires we will reset the timer for any remaining period. + */ + if (nexttick > INT64_MAX / gt_cntfrq_period_ns(cpu)) { + timer_mod_ns(cpu->gt_timer[timeridx], INT64_MAX); + } else { + timer_mod(cpu->gt_timer[timeridx], nexttick); + } + trace_arm_gt_recalc(timeridx, irqstate, nexttick); + } else { + /* Timer disabled: ISTATUS and timer output always clear */ + gt->ctl &= ~4; + qemu_set_irq(cpu->gt_timer_outputs[timeridx], 0); + timer_del(cpu->gt_timer[timeridx]); + trace_arm_gt_recalc_disabled(timeridx); + } +} + +static void gt_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri, + int timeridx) +{ + ARMCPU *cpu = env_archcpu(env); + + timer_del(cpu->gt_timer[timeridx]); +} + +static uint64_t gt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return gt_get_countervalue(env); +} + +static uint64_t gt_virt_cnt_offset(CPUARMState *env) +{ + uint64_t hcr; + + switch (arm_current_el(env)) { + case 2: + hcr = arm_hcr_el2_eff(env); + if (hcr & HCR_E2H) { + return 0; + } + break; + case 0: + hcr = arm_hcr_el2_eff(env); + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + return 0; + } + break; + } + + return env->cp15.cntvoff_el2; +} + +static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return gt_get_countervalue(env) - gt_virt_cnt_offset(env); +} + +static void gt_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + int timeridx, + uint64_t value) +{ + trace_arm_gt_cval_write(timeridx, value); + env->cp15.c14_timer[timeridx].cval = value; + gt_recalc_timer(env_archcpu(env), timeridx); +} + +static uint64_t gt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri, + int timeridx) +{ + uint64_t offset = 0; + + switch (timeridx) { + case GTIMER_VIRT: + case GTIMER_HYPVIRT: + offset = gt_virt_cnt_offset(env); + break; + } + + return (uint32_t)(env->cp15.c14_timer[timeridx].cval - + (gt_get_countervalue(env) - offset)); +} + +static void gt_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + int timeridx, + uint64_t value) +{ + uint64_t offset = 0; + + switch (timeridx) { + case GTIMER_VIRT: + case GTIMER_HYPVIRT: + offset = gt_virt_cnt_offset(env); + break; + } + + trace_arm_gt_tval_write(timeridx, value); + env->cp15.c14_timer[timeridx].cval = gt_get_countervalue(env) - offset + + sextract64(value, 0, 32); + gt_recalc_timer(env_archcpu(env), timeridx); +} + +static void gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + int timeridx, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + uint32_t oldval = env->cp15.c14_timer[timeridx].ctl; + + trace_arm_gt_ctl_write(timeridx, value); + env->cp15.c14_timer[timeridx].ctl = deposit64(oldval, 0, 2, value); + if ((oldval ^ value) & 1) { + /* Enable toggled */ + gt_recalc_timer(cpu, timeridx); + } else if ((oldval ^ value) & 2) { + /* 
+ * IMASK toggled: don't need to recalculate, + * just set the interrupt line based on ISTATUS + */ + int irqstate = (oldval & 4) && !(value & 2); + + trace_arm_gt_imask_toggle(timeridx, irqstate); + qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate); + } +} + +static void gt_phys_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + gt_timer_reset(env, ri, GTIMER_PHYS); +} + +static void gt_phys_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_cval_write(env, ri, GTIMER_PHYS, value); +} + +static uint64_t gt_phys_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return gt_tval_read(env, ri, GTIMER_PHYS); +} + +static void gt_phys_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_tval_write(env, ri, GTIMER_PHYS, value); +} + +static void gt_phys_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_ctl_write(env, ri, GTIMER_PHYS, value); +} + +static int gt_phys_redir_timeridx(CPUARMState *env) +{ + switch (arm_mmu_idx(env)) { + case ARMMMUIdx_E20_0: + case ARMMMUIdx_E20_2: + case ARMMMUIdx_E20_2_PAN: + case ARMMMUIdx_SE20_0: + case ARMMMUIdx_SE20_2: + case ARMMMUIdx_SE20_2_PAN: + return GTIMER_HYP; + default: + return GTIMER_PHYS; + } +} + +static int gt_virt_redir_timeridx(CPUARMState *env) +{ + switch (arm_mmu_idx(env)) { + case ARMMMUIdx_E20_0: + case ARMMMUIdx_E20_2: + case ARMMMUIdx_E20_2_PAN: + case ARMMMUIdx_SE20_0: + case ARMMMUIdx_SE20_2: + case ARMMMUIdx_SE20_2_PAN: + return GTIMER_HYPVIRT; + default: + return GTIMER_VIRT; + } +} + +static uint64_t gt_phys_redir_cval_read(CPUARMState *env, + const ARMCPRegInfo *ri) +{ + int timeridx = gt_phys_redir_timeridx(env); + return env->cp15.c14_timer[timeridx].cval; +} + +static void gt_phys_redir_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + int timeridx = gt_phys_redir_timeridx(env); + gt_cval_write(env, ri, timeridx, value); +} + +static uint64_t gt_phys_redir_tval_read(CPUARMState *env, + const ARMCPRegInfo *ri) +{ + int timeridx = gt_phys_redir_timeridx(env); + return gt_tval_read(env, ri, timeridx); +} + +static void gt_phys_redir_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + int timeridx = gt_phys_redir_timeridx(env); + gt_tval_write(env, ri, timeridx, value); +} + +static uint64_t gt_phys_redir_ctl_read(CPUARMState *env, + const ARMCPRegInfo *ri) +{ + int timeridx = gt_phys_redir_timeridx(env); + return env->cp15.c14_timer[timeridx].ctl; +} + +static void gt_phys_redir_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + int timeridx = gt_phys_redir_timeridx(env); + gt_ctl_write(env, ri, timeridx, value); +} + +static void gt_virt_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + gt_timer_reset(env, ri, GTIMER_VIRT); +} + +static void gt_virt_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_cval_write(env, ri, GTIMER_VIRT, value); +} + +static uint64_t gt_virt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return gt_tval_read(env, ri, GTIMER_VIRT); +} + +static void gt_virt_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_tval_write(env, ri, GTIMER_VIRT, value); +} + +static void gt_virt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_ctl_write(env, ri, GTIMER_VIRT, value); +} + +static void gt_cntvoff_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + + trace_arm_gt_cntvoff_write(value); + 
raw_write(env, ri, value); + gt_recalc_timer(cpu, GTIMER_VIRT); +} + +static uint64_t gt_virt_redir_cval_read(CPUARMState *env, + const ARMCPRegInfo *ri) +{ + int timeridx = gt_virt_redir_timeridx(env); + return env->cp15.c14_timer[timeridx].cval; +} + +static void gt_virt_redir_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + int timeridx = gt_virt_redir_timeridx(env); + gt_cval_write(env, ri, timeridx, value); +} + +static uint64_t gt_virt_redir_tval_read(CPUARMState *env, + const ARMCPRegInfo *ri) +{ + int timeridx = gt_virt_redir_timeridx(env); + return gt_tval_read(env, ri, timeridx); +} + +static void gt_virt_redir_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + int timeridx = gt_virt_redir_timeridx(env); + gt_tval_write(env, ri, timeridx, value); +} + +static uint64_t gt_virt_redir_ctl_read(CPUARMState *env, + const ARMCPRegInfo *ri) +{ + int timeridx = gt_virt_redir_timeridx(env); + return env->cp15.c14_timer[timeridx].ctl; +} + +static void gt_virt_redir_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + int timeridx = gt_virt_redir_timeridx(env); + gt_ctl_write(env, ri, timeridx, value); +} + +static void gt_hyp_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + gt_timer_reset(env, ri, GTIMER_HYP); +} + +static void gt_hyp_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_cval_write(env, ri, GTIMER_HYP, value); +} + +static uint64_t gt_hyp_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return gt_tval_read(env, ri, GTIMER_HYP); +} + +static void gt_hyp_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_tval_write(env, ri, GTIMER_HYP, value); +} + +static void gt_hyp_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_ctl_write(env, ri, GTIMER_HYP, value); +} + +static void gt_sec_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + gt_timer_reset(env, ri, GTIMER_SEC); +} + +static void gt_sec_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_cval_write(env, ri, GTIMER_SEC, value); +} + +static uint64_t gt_sec_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return gt_tval_read(env, ri, GTIMER_SEC); +} + +static void gt_sec_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_tval_write(env, ri, GTIMER_SEC, value); +} + +static void gt_sec_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_ctl_write(env, ri, GTIMER_SEC, value); +} + +static void gt_hv_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + gt_timer_reset(env, ri, GTIMER_HYPVIRT); +} + +static void gt_hv_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_cval_write(env, ri, GTIMER_HYPVIRT, value); +} + +static uint64_t gt_hv_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return gt_tval_read(env, ri, GTIMER_HYPVIRT); +} + +static void gt_hv_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_tval_write(env, ri, GTIMER_HYPVIRT, value); +} + +static void gt_hv_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + gt_ctl_write(env, ri, GTIMER_HYPVIRT, value); +} + +void arm_gt_ptimer_cb(void *opaque) +{ + ARMCPU *cpu = opaque; + + gt_recalc_timer(cpu, GTIMER_PHYS); +} + +void arm_gt_vtimer_cb(void *opaque) +{ + ARMCPU *cpu = opaque; + + gt_recalc_timer(cpu, GTIMER_VIRT); +} + +void arm_gt_htimer_cb(void *opaque) +{ + ARMCPU *cpu = opaque; + + 
gt_recalc_timer(cpu, GTIMER_HYP); +} + +void arm_gt_stimer_cb(void *opaque) +{ + ARMCPU *cpu = opaque; + + gt_recalc_timer(cpu, GTIMER_SEC); +} + +void arm_gt_hvtimer_cb(void *opaque) +{ + ARMCPU *cpu = opaque; + + gt_recalc_timer(cpu, GTIMER_HYPVIRT); +} + +static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque) +{ + ARMCPU *cpu = env_archcpu(env); + + cpu->env.cp15.c14_cntfrq = cpu->gt_cntfrq_hz; +} + +static const ARMCPRegInfo generic_timer_cp_reginfo[] = { + /* + * Note that CNTFRQ is purely reads-as-written for the benefit + * of software; writing it doesn't actually change the timer frequency. + * Our reset value matches the fixed frequency we implement the timer at. + */ + { .name = "CNTFRQ", .cp = 15, .crn = 14, .crm = 0, .opc1 = 0, .opc2 = 0, + .type = ARM_CP_ALIAS, + .access = PL1_RW | PL0_R, .accessfn = gt_cntfrq_access, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c14_cntfrq), + }, + { .name = "CNTFRQ_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 0, + .access = PL1_RW | PL0_R, .accessfn = gt_cntfrq_access, + .fieldoffset = offsetof(CPUARMState, cp15.c14_cntfrq), + .resetfn = arm_gt_cntfrq_reset, + }, + /* overall control: mostly access permissions */ + { .name = "CNTKCTL", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 14, .crm = 1, .opc2 = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c14_cntkctl), + .resetvalue = 0, + }, + /* per-timer control */ + { .name = "CNTP_CTL", .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 1, + .secure = ARM_CP_SECSTATE_NS, + .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL0_RW, + .accessfn = gt_ptimer_access, + .fieldoffset = offsetoflow32(CPUARMState, + cp15.c14_timer[GTIMER_PHYS].ctl), + .readfn = gt_phys_redir_ctl_read, .raw_readfn = raw_read, + .writefn = gt_phys_redir_ctl_write, .raw_writefn = raw_write, + }, + { .name = "CNTP_CTL_S", + .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 1, + .secure = ARM_CP_SECSTATE_S, + .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL0_RW, + .accessfn = gt_ptimer_access, + .fieldoffset = offsetoflow32(CPUARMState, + cp15.c14_timer[GTIMER_SEC].ctl), + .writefn = gt_sec_ctl_write, .raw_writefn = raw_write, + }, + { .name = "CNTP_CTL_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 2, .opc2 = 1, + .type = ARM_CP_IO, .access = PL0_RW, + .accessfn = gt_ptimer_access, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].ctl), + .resetvalue = 0, + .readfn = gt_phys_redir_ctl_read, .raw_readfn = raw_read, + .writefn = gt_phys_redir_ctl_write, .raw_writefn = raw_write, + }, + { .name = "CNTV_CTL", .cp = 15, .crn = 14, .crm = 3, .opc1 = 0, .opc2 = 1, + .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL0_RW, + .accessfn = gt_vtimer_access, + .fieldoffset = offsetoflow32(CPUARMState, + cp15.c14_timer[GTIMER_VIRT].ctl), + .readfn = gt_virt_redir_ctl_read, .raw_readfn = raw_read, + .writefn = gt_virt_redir_ctl_write, .raw_writefn = raw_write, + }, + { .name = "CNTV_CTL_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 1, + .type = ARM_CP_IO, .access = PL0_RW, + .accessfn = gt_vtimer_access, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].ctl), + .resetvalue = 0, + .readfn = gt_virt_redir_ctl_read, .raw_readfn = raw_read, + .writefn = gt_virt_redir_ctl_write, .raw_writefn = raw_write, + }, + /* TimerValue views: a 32 bit downcounting view of the underlying state */ + { .name = "CNTP_TVAL", .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, 
.opc2 = 0, + .secure = ARM_CP_SECSTATE_NS, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, + .accessfn = gt_ptimer_access, + .readfn = gt_phys_redir_tval_read, .writefn = gt_phys_redir_tval_write, + }, + { .name = "CNTP_TVAL_S", + .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 0, + .secure = ARM_CP_SECSTATE_S, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, + .accessfn = gt_ptimer_access, + .readfn = gt_sec_tval_read, .writefn = gt_sec_tval_write, + }, + { .name = "CNTP_TVAL_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 2, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, + .accessfn = gt_ptimer_access, .resetfn = gt_phys_timer_reset, + .readfn = gt_phys_redir_tval_read, .writefn = gt_phys_redir_tval_write, + }, + { .name = "CNTV_TVAL", .cp = 15, .crn = 14, .crm = 3, .opc1 = 0, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, + .accessfn = gt_vtimer_access, + .readfn = gt_virt_redir_tval_read, .writefn = gt_virt_redir_tval_write, + }, + { .name = "CNTV_TVAL_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, + .accessfn = gt_vtimer_access, .resetfn = gt_virt_timer_reset, + .readfn = gt_virt_redir_tval_read, .writefn = gt_virt_redir_tval_write, + }, + /* The counter itself */ + { .name = "CNTPCT", .cp = 15, .crm = 14, .opc1 = 0, + .access = PL0_R, .type = ARM_CP_64BIT | ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = gt_pct_access, + .readfn = gt_cnt_read, .resetfn = arm_cp_reset_ignore, + }, + { .name = "CNTPCT_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 1, + .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = gt_pct_access, .readfn = gt_cnt_read, + }, + { .name = "CNTVCT", .cp = 15, .crm = 14, .opc1 = 1, + .access = PL0_R, .type = ARM_CP_64BIT | ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = gt_vct_access, + .readfn = gt_virt_cnt_read, .resetfn = arm_cp_reset_ignore, + }, + { .name = "CNTVCT_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 2, + .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO, + .accessfn = gt_vct_access, .readfn = gt_virt_cnt_read, + }, + /* Comparison value, indicating when the timer goes off */ + { .name = "CNTP_CVAL", .cp = 15, .crm = 14, .opc1 = 2, + .secure = ARM_CP_SECSTATE_NS, + .access = PL0_RW, + .type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval), + .accessfn = gt_ptimer_access, + .readfn = gt_phys_redir_cval_read, .raw_readfn = raw_read, + .writefn = gt_phys_redir_cval_write, .raw_writefn = raw_write, + }, + { .name = "CNTP_CVAL_S", .cp = 15, .crm = 14, .opc1 = 2, + .secure = ARM_CP_SECSTATE_S, + .access = PL0_RW, + .type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval), + .accessfn = gt_ptimer_access, + .writefn = gt_sec_cval_write, .raw_writefn = raw_write, + }, + { .name = "CNTP_CVAL_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 2, .opc2 = 2, + .access = PL0_RW, + .type = ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval), + .resetvalue = 0, .accessfn = gt_ptimer_access, + .readfn = gt_phys_redir_cval_read, .raw_readfn = raw_read, + .writefn = gt_phys_redir_cval_write, .raw_writefn = raw_write, + }, + { .name = "CNTV_CVAL", .cp = 15, .crm = 14, .opc1 = 3, + .access = PL0_RW, + .type = ARM_CP_64BIT | ARM_CP_IO 
| ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval), + .accessfn = gt_vtimer_access, + .readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read, + .writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write, + }, + { .name = "CNTV_CVAL_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 2, + .access = PL0_RW, + .type = ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval), + .resetvalue = 0, .accessfn = gt_vtimer_access, + .readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read, + .writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write, + }, + /* + * Secure timer -- this is actually restricted to only EL3 + * and configurably Secure-EL1 via the accessfn. + */ + { .name = "CNTPS_TVAL_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 7, .crn = 14, .crm = 2, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL1_RW, + .accessfn = gt_stimer_access, + .readfn = gt_sec_tval_read, + .writefn = gt_sec_tval_write, + .resetfn = gt_sec_timer_reset, + }, + { .name = "CNTPS_CTL_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 7, .crn = 14, .crm = 2, .opc2 = 1, + .type = ARM_CP_IO, .access = PL1_RW, + .accessfn = gt_stimer_access, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].ctl), + .resetvalue = 0, + .writefn = gt_sec_ctl_write, .raw_writefn = raw_write, + }, + { .name = "CNTPS_CVAL_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 7, .crn = 14, .crm = 2, .opc2 = 2, + .type = ARM_CP_IO, .access = PL1_RW, + .accessfn = gt_stimer_access, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval), + .writefn = gt_sec_cval_write, .raw_writefn = raw_write, + }, + REGINFO_SENTINEL +}; + +static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (!(arm_hcr_el2_eff(env) & HCR_E2H)) { + return CP_ACCESS_TRAP; + } + return CP_ACCESS_OK; +} + +#else + +/* + * In user-mode most of the generic timer registers are inaccessible + * however modern kernels (4.12+) allow access to cntvct_el0 + */ + +static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + ARMCPU *cpu = env_archcpu(env); + + /* + * Currently we have no support for QEMUTimer in linux-user so we + * can't call gt_get_countervalue(env), instead we directly + * call the lower level functions. 
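+ * The arithmetic below mirrors gt_get_countervalue(): e.g. a CNTFRQ of
+ * 62.5 MHz gives a period of 16 ns, so the returned count advances by one
+ * for every 16 ns of cpu_get_clock() time.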
+ */ + return cpu_get_clock() / gt_cntfrq_period_ns(cpu); +} + +static const ARMCPRegInfo generic_timer_cp_reginfo[] = { + { .name = "CNTFRQ_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 0, + .type = ARM_CP_CONST, .access = PL0_R /* no PL1_RW in linux-user */, + .fieldoffset = offsetof(CPUARMState, cp15.c14_cntfrq), + .resetvalue = NANOSECONDS_PER_SECOND / GTIMER_SCALE, + }, + { .name = "CNTVCT_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 2, + .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO, + .readfn = gt_virt_cnt_read, + }, + REGINFO_SENTINEL +}; + +#endif + +static void par_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) +{ + if (arm_feature(env, ARM_FEATURE_LPAE)) { + raw_write(env, ri, value); + } else if (arm_feature(env, ARM_FEATURE_V7)) { + raw_write(env, ri, value & 0xfffff6ff); + } else { + raw_write(env, ri, value & 0xfffff1ff); + } +} + +#ifndef CONFIG_USER_ONLY +/* get_phys_addr() isn't present for user-mode-only targets */ + +static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (ri->opc2 & 4) { + /* + * The ATS12NSO* operations must trap to EL3 or EL2 if executed in + * Secure EL1 (which can only happen if EL3 is AArch64). + * They are simply UNDEF if executed from NS EL1. + * They function normally from EL2 or EL3. + */ + if (arm_current_el(env) == 1) { + if (arm_is_secure_below_el3(env)) { + if (env->cp15.scr_el3 & SCR_EEL2) { + return CP_ACCESS_TRAP_UNCATEGORIZED_EL2; + } + return CP_ACCESS_TRAP_UNCATEGORIZED_EL3; + } + return CP_ACCESS_TRAP_UNCATEGORIZED; + } + } + return CP_ACCESS_OK; +} + +static uint64_t do_ats_write(CPUARMState *env, uint64_t value, + MMUAccessType access_type, ARMMMUIdx mmu_idx) +{ + hwaddr phys_addr; + target_ulong page_size; + int prot; + bool ret; + uint64_t par64; + bool format64 = false; + MemTxAttrs attrs = {}; + ARMMMUFaultInfo fi = {}; + ARMCacheAttrs cacheattrs = {}; + + ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs, + &prot, &page_size, &fi, &cacheattrs); + + if (ret) { + /* + * Some kinds of translation fault must cause exceptions rather + * than being reported in the PAR. + */ + int current_el = arm_current_el(env); + int target_el; + uint32_t syn, fsr, fsc; + bool take_exc = false; + + if (fi.s1ptw && current_el == 1 + && arm_mmu_idx_is_stage1_of_2(mmu_idx)) { + /* + * Synchronous stage 2 fault on an access made as part of the + * translation table walk for AT S1E0* or AT S1E1* insn + * executed from NS EL1. If this is a synchronous external abort + * and SCR_EL3.EA == 1, then we take a synchronous external abort + * to EL3. Otherwise the fault is taken as an exception to EL2, + * and HPFAR_EL2 holds the faulting IPA. + */ + if (fi.type == ARMFault_SyncExternalOnWalk && + (env->cp15.scr_el3 & SCR_EA)) { + target_el = 3; + } else { + env->cp15.hpfar_el2 = extract64(fi.s2addr, 12, 47) << 4; + if (arm_is_secure_below_el3(env) && fi.s1ns) { + env->cp15.hpfar_el2 |= HPFAR_NS; + } + target_el = 2; + } + take_exc = true; + } else if (fi.type == ARMFault_SyncExternalOnWalk) { + /* + * Synchronous external aborts during a translation table walk + * are taken as Data Abort exceptions. 
+ */ + if (fi.stage2) { + if (current_el == 3) { + target_el = 3; + } else { + target_el = 2; + } + } else { + target_el = exception_target_el(env); + } + take_exc = true; + } + + if (take_exc) { + /* Construct FSR and FSC using same logic as arm_deliver_fault() */ + if (target_el == 2 || arm_el_is_aa64(env, target_el) || + arm_s1_regime_using_lpae_format(env, mmu_idx)) { + fsr = arm_fi_to_lfsc(&fi); + fsc = extract32(fsr, 0, 6); + } else { + fsr = arm_fi_to_sfsc(&fi); + fsc = 0x3f; + } + /* + * Report exception with ESR indicating a fault due to a + * translation table walk for a cache maintenance instruction. + */ + syn = syn_data_abort_no_iss(current_el == target_el, 0, + fi.ea, 1, fi.s1ptw, 1, fsc); + env->exception.vaddress = value; + env->exception.fsr = fsr; + raise_exception(env, EXCP_DATA_ABORT, syn, target_el); + } + } + + if (is_a64(env)) { + format64 = true; + } else if (arm_feature(env, ARM_FEATURE_LPAE)) { + /* + * ATS1Cxx: + * * TTBCR.EAE determines whether the result is returned using the + * 32-bit or the 64-bit PAR format + * * Instructions executed in Hyp mode always use the 64bit format + * + * ATS1S2NSOxx uses the 64bit format if any of the following is true: + * * The Non-secure TTBCR.EAE bit is set to 1 + * * The implementation includes EL2, and the value of HCR.VM is 1 + * + * (Note that HCR.DC makes HCR.VM behave as if it is 1.) + * + * ATS1Hx always uses the 64bit format. + */ + format64 = arm_s1_regime_using_lpae_format(env, mmu_idx); + + if (arm_feature(env, ARM_FEATURE_EL2)) { + if (mmu_idx == ARMMMUIdx_E10_0 || + mmu_idx == ARMMMUIdx_E10_1 || + mmu_idx == ARMMMUIdx_E10_1_PAN) { + format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC); + } else { + format64 |= arm_current_el(env) == 2; + } + } + } + + if (format64) { + /* Create a 64-bit PAR */ + par64 = (1 << 11); /* LPAE bit always set */ + if (!ret) { + par64 |= phys_addr & ~0xfffULL; + if (!attrs.secure) { + par64 |= (1 << 9); /* NS */ + } + par64 |= (uint64_t)cacheattrs.attrs << 56; /* ATTR */ + par64 |= cacheattrs.shareability << 7; /* SH */ + } else { + uint32_t fsr = arm_fi_to_lfsc(&fi); + + par64 |= 1; /* F */ + par64 |= (fsr & 0x3f) << 1; /* FS */ + if (fi.stage2) { + par64 |= (1 << 9); /* S */ + } + if (fi.s1ptw) { + par64 |= (1 << 8); /* PTW */ + } + } + } else { + /* + * fsr is a DFSR/IFSR value for the short descriptor + * translation table format (with WnR always clear). + * Convert it to a 32-bit PAR. + */ + if (!ret) { + /* We do not set any attribute bits in the PAR */ + if (page_size == (1 << 24) + && arm_feature(env, ARM_FEATURE_V7)) { + par64 = (phys_addr & 0xff000000) | (1 << 1); + } else { + par64 = phys_addr & 0xfffff000; + } + if (!attrs.secure) { + par64 |= (1 << 9); /* NS */ + } + } else { + uint32_t fsr = arm_fi_to_sfsc(&fi); + + par64 = ((fsr & (1 << 10)) >> 5) | ((fsr & (1 << 12)) >> 6) | + ((fsr & 0xf) << 1) | 1; + } + } + return par64; +} + +static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) +{ + MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD; + uint64_t par64; + ARMMMUIdx mmu_idx; + int el = arm_current_el(env); + bool secure = arm_is_secure_below_el3(env); + + switch (ri->opc2 & 6) { + case 0: + /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */ + switch (el) { + case 3: + mmu_idx = ARMMMUIdx_SE3; + break; + case 2: + g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ + /* fall through */ + case 1: + if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) { + mmu_idx = (secure ? 
ARMMMUIdx_Stage1_SE1_PAN + : ARMMMUIdx_Stage1_E1_PAN); + } else { + mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; + } + break; + default: + g_assert_not_reached(); + } + break; + case 2: + /* stage 1 current state PL0: ATS1CUR, ATS1CUW */ + switch (el) { + case 3: + mmu_idx = ARMMMUIdx_SE10_0; + break; + case 2: + g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ + mmu_idx = ARMMMUIdx_Stage1_E0; + break; + case 1: + mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; + break; + default: + g_assert_not_reached(); + } + break; + case 4: + /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */ + mmu_idx = ARMMMUIdx_E10_1; + break; + case 6: + /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */ + mmu_idx = ARMMMUIdx_E10_0; + break; + default: + g_assert_not_reached(); + } + + par64 = do_ats_write(env, value, access_type, mmu_idx); + + A32_BANKED_CURRENT_REG_SET(env, par, par64); +} + +static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD; + uint64_t par64; + + par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2); + + A32_BANKED_CURRENT_REG_SET(env, par, par64); +} + +static CPAccessResult at_s1e2_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 3 && + !(env->cp15.scr_el3 & (SCR_NS | SCR_EEL2))) { + return CP_ACCESS_TRAP; + } + return CP_ACCESS_OK; +} + +static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD; + ARMMMUIdx mmu_idx; + int secure = arm_is_secure_below_el3(env); + + switch (ri->opc2 & 6) { + case 0: + switch (ri->opc1) { + case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */ + if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) { + mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN + : ARMMMUIdx_Stage1_E1_PAN); + } else { + mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; + } + break; + case 4: /* AT S1E2R, AT S1E2W */ + mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2; + break; + case 6: /* AT S1E3R, AT S1E3W */ + mmu_idx = ARMMMUIdx_SE3; + break; + default: + g_assert_not_reached(); + } + break; + case 2: /* AT S1E0R, AT S1E0W */ + mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; + break; + case 4: /* AT S12E1R, AT S12E1W */ + mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1; + break; + case 6: /* AT S12E0R, AT S12E0W */ + mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0; + break; + default: + g_assert_not_reached(); + } + + env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx); +} +#endif + +static const ARMCPRegInfo vapa_cp_reginfo[] = { + { .name = "PAR", .cp = 15, .crn = 7, .crm = 4, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .resetvalue = 0, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.par_s), + offsetoflow32(CPUARMState, cp15.par_ns) }, + .writefn = par_write }, +#ifndef CONFIG_USER_ONLY + /* This underdecoding is safe because the reginfo is NO_RAW. */ + { .name = "ATS", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = CP_ANY, + .access = PL1_W, .accessfn = ats_access, + .writefn = ats_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC }, +#endif + REGINFO_SENTINEL +}; + +/* Return basic MPU access permission bits. 
*/ +static uint32_t simple_mpu_ap_bits(uint32_t val) +{ + uint32_t ret; + uint32_t mask; + int i; + ret = 0; + mask = 3; + for (i = 0; i < 16; i += 2) { + ret |= (val >> i) & mask; + mask <<= 2; + } + return ret; +} + +/* Pad basic MPU access permission bits to extended format. */ +static uint32_t extended_mpu_ap_bits(uint32_t val) +{ + uint32_t ret; + uint32_t mask; + int i; + ret = 0; + mask = 3; + for (i = 0; i < 16; i += 2) { + ret |= (val & mask) << i; + mask <<= 2; + } + return ret; +} + +static void pmsav5_data_ap_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->cp15.pmsav5_data_ap = extended_mpu_ap_bits(value); +} + +static uint64_t pmsav5_data_ap_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return simple_mpu_ap_bits(env->cp15.pmsav5_data_ap); +} + +static void pmsav5_insn_ap_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->cp15.pmsav5_insn_ap = extended_mpu_ap_bits(value); +} + +static uint64_t pmsav5_insn_ap_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return simple_mpu_ap_bits(env->cp15.pmsav5_insn_ap); +} + +static uint64_t pmsav7_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + uint32_t *u32p = *(uint32_t **)raw_ptr(env, ri); + + if (!u32p) { + return 0; + } + + u32p += env->pmsav7.rnr[M_REG_NS]; + return *u32p; +} + +static void pmsav7_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + uint32_t *u32p = *(uint32_t **)raw_ptr(env, ri); + + if (!u32p) { + return; + } + + u32p += env->pmsav7.rnr[M_REG_NS]; + tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */ + *u32p = value; +} + +static void pmsav7_rgnr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + uint32_t nrgs = cpu->pmsav7_dregion; + + if (value >= nrgs) { + qemu_log_mask(LOG_GUEST_ERROR, + "PMSAv7 RGNR write >= # supported regions, %" PRIu32 + " > %" PRIu32 "\n", (uint32_t)value, nrgs); + return; + } + + raw_write(env, ri, value); +} + +static const ARMCPRegInfo pmsav7_cp_reginfo[] = { + /* + * Reset for all these registers is handled in arm_cpu_reset(), + * because the PMSAv7 is also used by M-profile CPUs, which do + * not register cpregs but still need the state to be reset. 
+ */ + { .name = "DRBAR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 1, .opc2 = 0, + .access = PL1_RW, .type = ARM_CP_NO_RAW, + .fieldoffset = offsetof(CPUARMState, pmsav7.drbar), + .readfn = pmsav7_read, .writefn = pmsav7_write, + .resetfn = arm_cp_reset_ignore }, + { .name = "DRSR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 1, .opc2 = 2, + .access = PL1_RW, .type = ARM_CP_NO_RAW, + .fieldoffset = offsetof(CPUARMState, pmsav7.drsr), + .readfn = pmsav7_read, .writefn = pmsav7_write, + .resetfn = arm_cp_reset_ignore }, + { .name = "DRACR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 1, .opc2 = 4, + .access = PL1_RW, .type = ARM_CP_NO_RAW, + .fieldoffset = offsetof(CPUARMState, pmsav7.dracr), + .readfn = pmsav7_read, .writefn = pmsav7_write, + .resetfn = arm_cp_reset_ignore }, + { .name = "RGNR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 2, .opc2 = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]), + .writefn = pmsav7_rgnr_write, + .resetfn = arm_cp_reset_ignore }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo pmsav5_cp_reginfo[] = { + { .name = "DATA_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_data_ap), + .readfn = pmsav5_data_ap_read, .writefn = pmsav5_data_ap_write, }, + { .name = "INSN_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 1, + .access = PL1_RW, .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_insn_ap), + .readfn = pmsav5_insn_ap_read, .writefn = pmsav5_insn_ap_write, }, + { .name = "DATA_EXT_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 2, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_data_ap), + .resetvalue = 0, }, + { .name = "INSN_EXT_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 3, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_insn_ap), + .resetvalue = 0, }, + { .name = "DCACHE_CFG", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c2_data), .resetvalue = 0, }, + { .name = "ICACHE_CFG", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 1, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c2_insn), .resetvalue = 0, }, + /* Protection region base and size registers */ + { .name = "946_PRBS0", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.c6_region[0]) }, + { .name = "946_PRBS1", .cp = 15, .crn = 6, .crm = 1, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.c6_region[1]) }, + { .name = "946_PRBS2", .cp = 15, .crn = 6, .crm = 2, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.c6_region[2]) }, + { .name = "946_PRBS3", .cp = 15, .crn = 6, .crm = 3, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.c6_region[3]) }, + { .name = "946_PRBS4", .cp = 15, .crn = 6, .crm = 4, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.c6_region[4]) }, + { .name = "946_PRBS5", .cp = 15, .crn = 6, .crm = 5, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.c6_region[5]) }, + { .name = "946_PRBS6", .cp = 15, .crn = 6, .crm = 6, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = 
offsetof(CPUARMState, cp15.c6_region[6]) }, + { .name = "946_PRBS7", .cp = 15, .crn = 6, .crm = 7, .opc1 = 0, + .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.c6_region[7]) }, + REGINFO_SENTINEL +}; + +static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + TCR *tcr = raw_ptr(env, ri); + int maskshift = extract32(value, 0, 3); + + if (!arm_feature(env, ARM_FEATURE_V8)) { + if (arm_feature(env, ARM_FEATURE_LPAE) && (value & TTBCR_EAE)) { + /* + * Pre ARMv8 bits [21:19], [15:14] and [6:3] are UNK/SBZP when + * using Long-descriptor translation table format + */ + value &= ~((7 << 19) | (3 << 14) | (0xf << 3)); + } else if (arm_feature(env, ARM_FEATURE_EL3)) { + /* + * In an implementation that includes the Security Extensions + * TTBCR has additional fields PD0 [4] and PD1 [5] for + * Short-descriptor translation table format. + */ + value &= TTBCR_PD1 | TTBCR_PD0 | TTBCR_N; + } else { + value &= TTBCR_N; + } + } + + /* + * Update the masks corresponding to the TCR bank being written + * Note that we always calculate mask and base_mask, but + * they are only used for short-descriptor tables (ie if EAE is 0); + * for long-descriptor tables the TCR fields are used differently + * and the mask and base_mask values are meaningless. + */ + tcr->raw_tcr = value; + tcr->mask = ~(((uint32_t)0xffffffffu) >> maskshift); + tcr->base_mask = ~((uint32_t)0x3fffu >> maskshift); +} + +static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + TCR *tcr = raw_ptr(env, ri); + + if (arm_feature(env, ARM_FEATURE_LPAE)) { + /* + * With LPAE the TTBCR could result in a change of ASID + * via the TTBCR.A1 bit, so do a TLB flush. + */ + tlb_flush(CPU(cpu)); + } + /* Preserve the high half of TCR_EL1, set via TTBCR2. */ + value = deposit64(tcr->raw_tcr, 0, 32, value); + vmsa_ttbcr_raw_write(env, ri, value); +} + +static void vmsa_ttbcr_reset(CPUARMState *env, const ARMCPRegInfo *ri) +{ + TCR *tcr = raw_ptr(env, ri); + + /* + * Reset both the TCR as well as the masks corresponding to the bank of + * the TCR being reset. + */ + tcr->raw_tcr = 0; + tcr->mask = 0; + tcr->base_mask = 0xffffc000u; +} + +static void vmsa_tcr_el12_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + TCR *tcr = raw_ptr(env, ri); + + /* For AArch64 the A1 bit could result in a change of ASID, so TLB flush. */ + tlb_flush(CPU(cpu)); + tcr->raw_tcr = value; +} + +static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* If the ASID changes (with a 64-bit write), we must flush the TLB. */ + if (cpreg_field_is_64bit(ri) && + extract64(raw_read(env, ri) ^ value, 48, 16) != 0) { + ARMCPU *cpu = env_archcpu(env); + tlb_flush(CPU(cpu)); + } + raw_write(env, ri, value); +} + +static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* + * If we are running with E2&0 regime, then an ASID is active. + * Flush if that might be changing. Note we're not checking + * TCR_EL2.A1 to know if this is really the TTBRx_EL2 that + * holds the active ASID, only checking the field that might.
+ */ + if (extract64(raw_read(env, ri) ^ value, 48, 16) && + (arm_hcr_el2_eff(env) & HCR_E2H)) { + uint16_t mask = ARMMMUIdxBit_E20_2 | + ARMMMUIdxBit_E20_2_PAN | + ARMMMUIdxBit_E20_0; + + if (arm_is_secure_below_el3(env)) { + mask >>= ARM_MMU_IDX_A_NS; + } + + tlb_flush_by_mmuidx(env_cpu(env), mask); + } + raw_write(env, ri, value); +} + +static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + CPUState *cs = CPU(cpu); + + /* + * A change in VMID to the stage2 page table (Stage2) invalidates + * the combined stage 1&2 tlbs (EL10_1 and EL10_0). + */ + if (raw_read(env, ri) != value) { + uint16_t mask = ARMMMUIdxBit_E10_1 | + ARMMMUIdxBit_E10_1_PAN | + ARMMMUIdxBit_E10_0; + + if (arm_is_secure_below_el3(env)) { + mask >>= ARM_MMU_IDX_A_NS; + } + + tlb_flush_by_mmuidx(cs, mask); + raw_write(env, ri, value); + } +} + +static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = { + { .name = "DFSR", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, .type = ARM_CP_ALIAS, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dfsr_s), + offsetoflow32(CPUARMState, cp15.dfsr_ns) }, }, + { .name = "IFSR", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.ifsr_s), + offsetoflow32(CPUARMState, cp15.ifsr_ns) } }, + { .name = "DFAR", .cp = 15, .opc1 = 0, .crn = 6, .crm = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.dfar_s), + offsetof(CPUARMState, cp15.dfar_ns) } }, + { .name = "FAR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .fieldoffset = offsetof(CPUARMState, cp15.far_el[1]), + .resetvalue = 0, }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo vmsa_cp_reginfo[] = { + { .name = "ESR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .crn = 5, .crm = 2, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .fieldoffset = offsetof(CPUARMState, cp15.esr_el[1]), .resetvalue = 0, }, + { .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .writefn = vmsa_ttbr_write, .resetvalue = 0, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s), + offsetof(CPUARMState, cp15.ttbr0_ns) } }, + { .name = "TTBR1_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .writefn = vmsa_ttbr_write, .resetvalue = 0, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s), + offsetof(CPUARMState, cp15.ttbr1_ns) } }, + { .name = "TCR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .writefn = vmsa_tcr_el12_write, + .resetfn = vmsa_ttbcr_reset, .raw_writefn = raw_write, + .fieldoffset = offsetof(CPUARMState, cp15.tcr_el[1]) }, + { .name = "TTBCR", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_ALIAS, .writefn = vmsa_ttbcr_write, + .raw_writefn = vmsa_ttbcr_raw_write, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tcr_el[3]), + offsetoflow32(CPUARMState, cp15.tcr_el[1])} }, + REGINFO_SENTINEL +}; + +/* + * Note that unlike TTBCR, 
writing to TTBCR2 does not require flushing + * qemu tlbs nor adjusting cached masks. + */ +static const ARMCPRegInfo ttbcr2_reginfo = { + .name = "TTBCR2", .cp = 15, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 3, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_ALIAS, + .bank_fieldoffsets = { offsetofhigh32(CPUARMState, cp15.tcr_el[3]), + offsetofhigh32(CPUARMState, cp15.tcr_el[1]) }, +}; + +static void omap_ticonfig_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->cp15.c15_ticonfig = value & 0xe7; + /* The OS_TYPE bit in this register changes the reported CPUID! */ + env->cp15.c0_cpuid = (value & (1 << 5)) ? + ARM_CPUID_TI915T : ARM_CPUID_TI925T; +} + +static void omap_threadid_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->cp15.c15_threadid = value & 0xffff; +} + +static void omap_wfi_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* Wait-for-interrupt (deprecated) */ + cpu_interrupt(env_cpu(env), CPU_INTERRUPT_HALT); +} + +static void omap_cachemaint_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* + * On OMAP there are registers indicating the max/min index of dcache lines + * containing a dirty line; cache flush operations have to reset these. + */ + env->cp15.c15_i_max = 0x000; + env->cp15.c15_i_min = 0xff0; +} + +static const ARMCPRegInfo omap_cp_reginfo[] = { + { .name = "DFSR", .cp = 15, .crn = 5, .crm = CP_ANY, + .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_OVERRIDE, + .fieldoffset = offsetoflow32(CPUARMState, cp15.esr_el[1]), + .resetvalue = 0, }, + { .name = "", .cp = 15, .crn = 15, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .type = ARM_CP_NOP }, + { .name = "TICONFIG", .cp = 15, .crn = 15, .crm = 1, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c15_ticonfig), .resetvalue = 0, + .writefn = omap_ticonfig_write }, + { .name = "IMAX", .cp = 15, .crn = 15, .crm = 2, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c15_i_max), .resetvalue = 0, }, + { .name = "IMIN", .cp = 15, .crn = 15, .crm = 3, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .resetvalue = 0xff0, + .fieldoffset = offsetof(CPUARMState, cp15.c15_i_min) }, + { .name = "THREADID", .cp = 15, .crn = 15, .crm = 4, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c15_threadid), .resetvalue = 0, + .writefn = omap_threadid_write }, + { .name = "TI925T_STATUS", .cp = 15, .crn = 15, + .crm = 8, .opc1 = 0, .opc2 = 0, .access = PL1_RW, + .type = ARM_CP_NO_RAW, + .readfn = arm_cp_read_zero, .writefn = omap_wfi_write, }, + /* + * TODO: Peripheral port remap register: + * On OMAP2 mcr p15, 0, rn, c15, c2, 4 sets up the interrupt controller + * base address at $rn & ~0xfff and map size of 0x200 << ($rn & 0xfff), + * when MMU is off. 
+ */ + { .name = "OMAP_CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY, + .opc1 = 0, .opc2 = CP_ANY, .access = PL1_W, + .type = ARM_CP_OVERRIDE | ARM_CP_NO_RAW, + .writefn = omap_cachemaint_write }, + { .name = "C9", .cp = 15, .crn = 9, + .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, + .type = ARM_CP_CONST | ARM_CP_OVERRIDE, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +static void xscale_cpar_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->cp15.c15_cpar = value & 0x3fff; +} + +static const ARMCPRegInfo xscale_cp_reginfo[] = { + { .name = "XSCALE_CPAR", + .cp = 15, .crn = 15, .crm = 1, .opc1 = 0, .opc2 = 0, .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c15_cpar), .resetvalue = 0, + .writefn = xscale_cpar_write, }, + { .name = "XSCALE_AUXCR", + .cp = 15, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 1, .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c1_xscaleauxcr), + .resetvalue = 0, }, + /* + * XScale specific cache-lockdown: since we have no cache we NOP these + * and hope the guest does not really rely on cache behaviour. + */ + { .name = "XSCALE_LOCK_ICACHE_LINE", + .cp = 15, .opc1 = 0, .crn = 9, .crm = 1, .opc2 = 0, + .access = PL1_W, .type = ARM_CP_NOP }, + { .name = "XSCALE_UNLOCK_ICACHE", + .cp = 15, .opc1 = 0, .crn = 9, .crm = 1, .opc2 = 1, + .access = PL1_W, .type = ARM_CP_NOP }, + { .name = "XSCALE_DCACHE_LOCK", + .cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 0, + .access = PL1_RW, .type = ARM_CP_NOP }, + { .name = "XSCALE_UNLOCK_DCACHE", + .cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 1, + .access = PL1_W, .type = ARM_CP_NOP }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo dummy_c15_cp_reginfo[] = { + /* + * RAZ/WI the whole crn=15 space, when we don't have a more specific + * implementation of this implementation-defined space. + * Ideally this should eventually disappear in favour of actually + * implementing the correct behaviour for all cores. 
+ */ + { .name = "C15_IMPDEF", .cp = 15, .crn = 15, + .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, + .access = PL1_RW, + .type = ARM_CP_CONST | ARM_CP_NO_RAW | ARM_CP_OVERRIDE, + .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = { + /* Cache status: RAZ because we have no cache so it's always clean */ + { .name = "CDSR", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, + .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = { + /* We never have a a block transfer operation in progress */ + { .name = "BXSR", .cp = 15, .crn = 7, .crm = 12, .opc1 = 0, .opc2 = 4, + .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, + .resetvalue = 0 }, + /* The cache ops themselves: these all NOP for QEMU */ + { .name = "IICR", .cp = 15, .crm = 5, .opc1 = 0, + .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, + { .name = "IDCR", .cp = 15, .crm = 6, .opc1 = 0, + .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, + { .name = "CDCR", .cp = 15, .crm = 12, .opc1 = 0, + .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, + { .name = "PIR", .cp = 15, .crm = 12, .opc1 = 1, + .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, + { .name = "PDR", .cp = 15, .crm = 12, .opc1 = 2, + .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, + { .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0, + .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = { + /* + * The cache test-and-clean instructions always return (1 << 30) + * to indicate that there are no dirty cache lines. + */ + { .name = "TC_DCACHE", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 3, + .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, + .resetvalue = (1 << 30) }, + { .name = "TCI_DCACHE", .cp = 15, .crn = 7, .crm = 14, .opc1 = 0, .opc2 = 3, + .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, + .resetvalue = (1 << 30) }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo strongarm_cp_reginfo[] = { + /* Ignore ReadBuffer accesses */ + { .name = "C9_READBUFFER", .cp = 15, .crn = 9, + .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, + .access = PL1_RW, .resetvalue = 0, + .type = ARM_CP_CONST | ARM_CP_OVERRIDE | ARM_CP_NO_RAW }, + REGINFO_SENTINEL +}; + +static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + unsigned int cur_el = arm_current_el(env); + + if (arm_is_el2_enabled(env) && cur_el == 1) { + return env->cp15.vpidr_el2; + } + return raw_read(env, ri); +} + +static uint64_t mpidr_read_val(CPUARMState *env) +{ + ARMCPU *cpu = env_archcpu(env); + uint64_t mpidr = cpu->mp_affinity; + + if (arm_feature(env, ARM_FEATURE_V7MP)) { + mpidr |= (1U << 31); + /* + * Cores which are uniprocessor (non-coherent) + * but still implement the MP extensions set + * bit 30. (For instance, Cortex-R5). 
+ */ + if (cpu->mp_is_up) { + mpidr |= (1u << 30); + } + } + return mpidr; +} + +static uint64_t mpidr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + unsigned int cur_el = arm_current_el(env); + + if (arm_is_el2_enabled(env) && cur_el == 1) { + return env->cp15.vmpidr_el2; + } + return mpidr_read_val(env); +} + +static const ARMCPRegInfo lpae_cp_reginfo[] = { + /* NOP AMAIR0/1 */ + { .name = "AMAIR0", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_CONST, .resetvalue = 0 }, + /* AMAIR1 is mapped to AMAIR_EL1[63:32] */ + { .name = "AMAIR1", .cp = 15, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "PAR", .cp = 15, .crm = 7, .opc1 = 0, + .access = PL1_RW, .type = ARM_CP_64BIT, .resetvalue = 0, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.par_s), + offsetof(CPUARMState, cp15.par_ns)} }, + { .name = "TTBR0", .cp = 15, .crm = 2, .opc1 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_64BIT | ARM_CP_ALIAS, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s), + offsetof(CPUARMState, cp15.ttbr0_ns) }, + .writefn = vmsa_ttbr_write, }, + { .name = "TTBR1", .cp = 15, .crm = 2, .opc1 = 1, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .type = ARM_CP_64BIT | ARM_CP_ALIAS, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s), + offsetof(CPUARMState, cp15.ttbr1_ns) }, + .writefn = vmsa_ttbr_write, }, + REGINFO_SENTINEL +}; + +static uint64_t aa64_fpcr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return vfp_get_fpcr(env); +} + +static void aa64_fpcr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + vfp_set_fpcr(env, value); +} + +static uint64_t aa64_fpsr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return vfp_get_fpsr(env); +} + +static void aa64_fpsr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + vfp_set_fpsr(env, value); +} + +static CPAccessResult aa64_daif_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UMA)) { + return CP_ACCESS_TRAP; + } + return CP_ACCESS_OK; +} + +static void aa64_daif_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->daif = value & PSTATE_DAIF; +} + +static uint64_t aa64_pan_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return env->pstate & PSTATE_PAN; +} + +static void aa64_pan_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->pstate = (env->pstate & ~PSTATE_PAN) | (value & PSTATE_PAN); +} + +static const ARMCPRegInfo pan_reginfo = { + .name = "PAN", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 3, + .type = ARM_CP_NO_RAW, .access = PL1_RW, + .readfn = aa64_pan_read, .writefn = aa64_pan_write +}; + +static uint64_t aa64_uao_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return env->pstate & PSTATE_UAO; +} + +static void aa64_uao_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->pstate = (env->pstate & ~PSTATE_UAO) | (value & PSTATE_UAO); +} + +static const ARMCPRegInfo uao_reginfo = { + .name = "UAO", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 4, + .type = ARM_CP_NO_RAW, .access = PL1_RW, + .readfn = aa64_uao_read, .writefn = aa64_uao_write +}; + +static uint64_t aa64_dit_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return 
env->pstate & PSTATE_DIT; +} + +static void aa64_dit_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->pstate = (env->pstate & ~PSTATE_DIT) | (value & PSTATE_DIT); +} + +static const ARMCPRegInfo dit_reginfo = { + .name = "DIT", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 5, + .type = ARM_CP_NO_RAW, .access = PL0_RW, + .readfn = aa64_dit_read, .writefn = aa64_dit_write +}; + +static uint64_t aa64_ssbs_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return env->pstate & PSTATE_SSBS; +} + +static void aa64_ssbs_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->pstate = (env->pstate & ~PSTATE_SSBS) | (value & PSTATE_SSBS); +} + +static const ARMCPRegInfo ssbs_reginfo = { + .name = "SSBS", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 6, + .type = ARM_CP_NO_RAW, .access = PL0_RW, + .readfn = aa64_ssbs_read, .writefn = aa64_ssbs_write +}; + +static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + /* Cache invalidate/clean to Point of Coherency or Persistence... */ + switch (arm_current_el(env)) { + case 0: + /* ... EL0 must UNDEF unless SCTLR_EL1.UCI is set. */ + if (!(arm_sctlr(env, 0) & SCTLR_UCI)) { + return CP_ACCESS_TRAP; + } + /* fall through */ + case 1: + /* ... EL1 must trap to EL2 if HCR_EL2.TPCP is set. */ + if (arm_hcr_el2_eff(env) & HCR_TPCP) { + return CP_ACCESS_TRAP_EL2; + } + break; + } + return CP_ACCESS_OK; +} + +static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env, + const ARMCPRegInfo *ri, + bool isread) +{ + /* Cache invalidate/clean to Point of Unification... */ + switch (arm_current_el(env)) { + case 0: + /* ... EL0 must UNDEF unless SCTLR_EL1.UCI is set. */ + if (!(arm_sctlr(env, 0) & SCTLR_UCI)) { + return CP_ACCESS_TRAP; + } + /* fall through */ + case 1: + /* ... EL1 must trap to EL2 if HCR_EL2.TPU is set. */ + if (arm_hcr_el2_eff(env) & HCR_TPU) { + return CP_ACCESS_TRAP_EL2; + } + break; + } + return CP_ACCESS_OK; +} + +/* + * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions + * Page D4-1736 (DDI0487A.b) + */ + +static int vae1_tlbmask(CPUARMState *env) +{ + uint64_t hcr = arm_hcr_el2_eff(env); + uint16_t mask; + + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + mask = ARMMMUIdxBit_E20_2 | + ARMMMUIdxBit_E20_2_PAN | + ARMMMUIdxBit_E20_0; + } else { + mask = ARMMMUIdxBit_E10_1 | + ARMMMUIdxBit_E10_1_PAN | + ARMMMUIdxBit_E10_0; + } + + if (arm_is_secure_below_el3(env)) { + mask >>= ARM_MMU_IDX_A_NS; + } + + return mask; +} + +/* Return 56 if TBI is enabled, 64 otherwise. */ +static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx, + uint64_t addr) +{ + uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr; + int tbi = aa64_va_parameter_tbi(tcr, mmu_idx); + int select = extract64(addr, 55, 1); + + return (tbi >> select) & 1 ? 56 : 64; +} + +static int vae1_tlbbits(CPUARMState *env, uint64_t addr) +{ + uint64_t hcr = arm_hcr_el2_eff(env); + ARMMMUIdx mmu_idx; + + /* Only the regime of the mmu_idx below is significant. 
*/ + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + mmu_idx = ARMMMUIdx_E20_0; + } else { + mmu_idx = ARMMMUIdx_E10_0; + } + + if (arm_is_secure_below_el3(env)) { + mmu_idx &= ~ARM_MMU_IDX_A_NS; + } + + return tlbbits_for_regime(env, mmu_idx, addr); +} + +static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + int mask = vae1_tlbmask(env); + + tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); +} + +static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + int mask = vae1_tlbmask(env); + + if (tlb_force_broadcast(env)) { + tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); + } else { + tlb_flush_by_mmuidx(cs, mask); + } +} + +static int alle1_tlbmask(CPUARMState *env) +{ + /* + * Note that the 'ALL' scope must invalidate both stage 1 and + * stage 2 translations, whereas most other scopes only invalidate + * stage 1 translations. + */ + if (arm_is_secure_below_el3(env)) { + return ARMMMUIdxBit_SE10_1 | + ARMMMUIdxBit_SE10_1_PAN | + ARMMMUIdxBit_SE10_0; + } else { + return ARMMMUIdxBit_E10_1 | + ARMMMUIdxBit_E10_1_PAN | + ARMMMUIdxBit_E10_0; + } +} + +static int e2_tlbmask(CPUARMState *env) +{ + if (arm_is_secure_below_el3(env)) { + return ARMMMUIdxBit_SE20_0 | + ARMMMUIdxBit_SE20_2 | + ARMMMUIdxBit_SE20_2_PAN | + ARMMMUIdxBit_SE2; + } else { + return ARMMMUIdxBit_E20_0 | + ARMMMUIdxBit_E20_2 | + ARMMMUIdxBit_E20_2_PAN | + ARMMMUIdxBit_E2; + } +} + +static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + int mask = alle1_tlbmask(env); + + tlb_flush_by_mmuidx(cs, mask); +} + +static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + int mask = e2_tlbmask(env); + + tlb_flush_by_mmuidx(cs, mask); +} + +static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + CPUState *cs = CPU(cpu); + + tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3); +} + +static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + int mask = alle1_tlbmask(env); + + tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); +} + +static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + int mask = e2_tlbmask(env); + + tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); +} + +static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + + tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3); +} + +static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* + * Invalidate by VA, EL2 + * Currently handles both VAE2 and VALE2, since we don't support + * flush-last-level-only. + */ + CPUState *cs = env_cpu(env); + int mask = e2_tlbmask(env); + uint64_t pageaddr = sextract64(value << 12, 0, 56); + + tlb_flush_page_by_mmuidx(cs, pageaddr, mask); +} + +static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* + * Invalidate by VA, EL3 + * Currently handles both VAE3 and VALE3, since we don't support + * flush-last-level-only. 
+ */ + ARMCPU *cpu = env_archcpu(env); + CPUState *cs = CPU(cpu); + uint64_t pageaddr = sextract64(value << 12, 0, 56); + + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3); +} + +static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + int mask = vae1_tlbmask(env); + uint64_t pageaddr = sextract64(value << 12, 0, 56); + int bits = vae1_tlbbits(env, pageaddr); + + tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); +} + +static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* + * Invalidate by VA, EL1&0 (AArch64 version). + * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1, + * since we don't support flush-for-specific-ASID-only or + * flush-last-level-only. + */ + CPUState *cs = env_cpu(env); + int mask = vae1_tlbmask(env); + uint64_t pageaddr = sextract64(value << 12, 0, 56); + int bits = vae1_tlbbits(env, pageaddr); + + if (tlb_force_broadcast(env)) { + tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); + } else { + tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits); + } +} + +static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + uint64_t pageaddr = sextract64(value << 12, 0, 56); + bool secure = arm_is_secure_below_el3(env); + int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2; + int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2, + pageaddr); + + tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); +} + +static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPUState *cs = env_cpu(env); + uint64_t pageaddr = sextract64(value << 12, 0, 56); + int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr); + + tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, + ARMMMUIdxBit_SE3, bits); +} + +static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int cur_el = arm_current_el(env); + + if (cur_el < 2) { + uint64_t hcr = arm_hcr_el2_eff(env); + + if (cur_el == 0) { + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + if (!(env->cp15.sctlr_el[2] & SCTLR_DZE)) { + return CP_ACCESS_TRAP_EL2; + } + } else { + if (!(env->cp15.sctlr_el[1] & SCTLR_DZE)) { + return CP_ACCESS_TRAP; + } + if (hcr & HCR_TDZ) { + return CP_ACCESS_TRAP_EL2; + } + } + } else if (hcr & HCR_TDZ) { + return CP_ACCESS_TRAP_EL2; + } + } + return CP_ACCESS_OK; +} + +static uint64_t aa64_dczid_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + ARMCPU *cpu = env_archcpu(env); + int dzp_bit = 1 << 4; + + /* DZP indicates whether DC ZVA access is allowed */ + if (aa64_zva_access(env, NULL, false) == CP_ACCESS_OK) { + dzp_bit = 0; + } + return cpu->dcz_blocksize | dzp_bit; +} + +static CPAccessResult sp_el0_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (!(env->pstate & PSTATE_SP)) { + /* + * Access to SP_EL0 is undefined if it's being used as + * the stack pointer. 
+ */ + return CP_ACCESS_TRAP_UNCATEGORIZED; + } + return CP_ACCESS_OK; +} + +static uint64_t spsel_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return env->pstate & PSTATE_SP; +} + +static void spsel_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val) +{ + update_spsel(env, val); +} + +static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + + if (arm_feature(env, ARM_FEATURE_PMSA) && !cpu->has_mpu) { + /* M bit is RAZ/WI for PMSA with no MPU implemented */ + value &= ~SCTLR_M; + } + + /* ??? Lots of these bits are not implemented. */ + + if (ri->state == ARM_CP_STATE_AA64 && !cpu_isar_feature(aa64_mte, cpu)) { + if (ri->opc1 == 6) { /* SCTLR_EL3 */ + value &= ~(SCTLR_ITFSB | SCTLR_TCF | SCTLR_ATA); + } else { + value &= ~(SCTLR_ITFSB | SCTLR_TCF0 | SCTLR_TCF | + SCTLR_ATA0 | SCTLR_ATA); + } + } + + if (raw_read(env, ri) == value) { + /* + * Skip the TLB flush if nothing actually changed; Linux likes + * to do a lot of pointless SCTLR writes. + */ + return; + } + + raw_write(env, ri, value); + + /* This may enable/disable the MMU, so do a TLB flush. */ + tlb_flush(CPU(cpu)); + + if (ri->type & ARM_CP_SUPPRESS_TB_END) { + /* + * Normally we would always end the TB on an SCTLR write; see the + * comment in ARMCPRegInfo sctlr initialization below for why Xscale + * is special. Setting ARM_CP_SUPPRESS_TB_END also stops the rebuild + * of hflags from the translator, so do it here. + */ + arm_rebuild_hflags(env); + } +} + +static CPAccessResult fpexc32_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if ((env->cp15.cptr_el[2] & CPTR_TFP) && arm_current_el(env) == 2) { + return CP_ACCESS_TRAP_FP_EL2; + } + if (env->cp15.cptr_el[3] & CPTR_TFP) { + return CP_ACCESS_TRAP_FP_EL3; + } + return CP_ACCESS_OK; +} + +static void sdcr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + env->cp15.mdcr_el3 = value & SDCR_VALID_MASK; +} + +static const ARMCPRegInfo v8_cp_reginfo[] = { + /* + * Minimal set of EL0-visible registers. This will need to be expanded + * significantly for system emulation of AArch64 CPUs. 
+ */ + { .name = "NZCV", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 2, + .access = PL0_RW, .type = ARM_CP_NZCV }, + { .name = "DAIF", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 4, .crm = 2, + .type = ARM_CP_NO_RAW, + .access = PL0_RW, .accessfn = aa64_daif_access, + .fieldoffset = offsetof(CPUARMState, daif), + .writefn = aa64_daif_write, .resetfn = arm_cp_reset_ignore }, + { .name = "FPCR", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 4, + .access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END, + .readfn = aa64_fpcr_read, .writefn = aa64_fpcr_write }, + { .name = "FPSR", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 4, .crm = 4, + .access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END, + .readfn = aa64_fpsr_read, .writefn = aa64_fpsr_write }, + { .name = "DCZID_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 7, .crn = 0, .crm = 0, + .access = PL0_R, .type = ARM_CP_NO_RAW, + .readfn = aa64_dczid_read }, + { .name = "DC_ZVA", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 1, + .access = PL0_W, .type = ARM_CP_DC_ZVA, +#ifndef CONFIG_USER_ONLY + /* Avoid overhead of an access check that always passes in user-mode */ + .accessfn = aa64_zva_access, +#endif + }, + { .name = "CURRENTEL", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .opc2 = 2, .crn = 4, .crm = 2, + .access = PL1_R, .type = ARM_CP_CURRENTEL }, + /* Cache ops: all NOPs since we don't emulate caches */ + { .name = "IC_IALLUIS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0, + .access = PL1_W, .type = ARM_CP_NOP, + .accessfn = aa64_cacheop_pou_access }, + { .name = "IC_IALLU", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0, + .access = PL1_W, .type = ARM_CP_NOP, + .accessfn = aa64_cacheop_pou_access }, + { .name = "IC_IVAU", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 5, .opc2 = 1, + .access = PL0_W, .type = ARM_CP_NOP, + .accessfn = aa64_cacheop_pou_access }, + { .name = "DC_IVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1, + .access = PL1_W, .accessfn = aa64_cacheop_poc_access, + .type = ARM_CP_NOP }, + { .name = "DC_ISW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2, + .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP }, + { .name = "DC_CVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1, + .access = PL0_W, .type = ARM_CP_NOP, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CSW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2, + .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP }, + { .name = "DC_CVAU", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 11, .opc2 = 1, + .access = PL0_W, .type = ARM_CP_NOP, + .accessfn = aa64_cacheop_pou_access }, + { .name = "DC_CIVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 1, + .access = PL0_W, .type = ARM_CP_NOP, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CISW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2, + .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP }, + /* TLBI operations */ + { .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0, + .access = PL1_W, 
.accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vmalle1is_write }, + { .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1is_write }, + { .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vmalle1is_write }, + { .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1is_write }, + { .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1is_write }, + { .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1is_write }, + { .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vmalle1_write }, + { .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1_write }, + { .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vmalle1_write }, + { .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1_write }, + { .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1_write }, + { .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7, + .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae1_write }, + { .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1, + .access = PL2_W, .type = ARM_CP_NOP }, + { .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5, + .access = PL2_W, .type = ARM_CP_NOP }, + { .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4, + .access = PL2_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_alle1is_write }, + { .name = "TLBI_VMALLS12E1IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 6, + .access = PL2_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_alle1is_write }, + { .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1, + .access = PL2_W, .type = ARM_CP_NOP }, + { .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5, + .access = PL2_W, .type = ARM_CP_NOP }, + { .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn 
= 8, .crm = 7, .opc2 = 4, + .access = PL2_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_alle1_write }, + { .name = "TLBI_VMALLS12E1", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 6, + .access = PL2_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_alle1is_write }, +#ifndef CONFIG_USER_ONLY + /* 64 bit address translation operations */ + { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 0, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 1, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S1E0R", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 2, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S1E0W", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 3, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S12E1R", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 4, + .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S12E1W", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 5, + .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S12E0R", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 6, + .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S12E0W", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 7, + .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + /* AT S1E2* are elsewhere as they UNDEF from EL3 if EL2 is not present */ + { .name = "AT_S1E3R", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 7, .crm = 8, .opc2 = 0, + .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S1E3W", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 7, .crm = 8, .opc2 = 1, + .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "PAR_EL1", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 0, .crn = 7, .crm = 4, .opc2 = 0, + .access = PL1_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.par_el[1]), + .writefn = par_write }, +#endif + /* TLB invalidate last level of translation table walk */ + { .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimva_is_write }, + { .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimvaa_is_write }, + { .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimva_write }, + { .name = "TLBIMVAAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7, + .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, + .writefn = tlbimvaa_write }, + { .name = "TLBIMVALH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5, + .type = ARM_CP_NO_RAW, 
.access = PL2_W, + .writefn = tlbimva_hyp_write }, + { .name = "TLBIMVALHIS", + .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbimva_hyp_is_write }, + { .name = "TLBIIPAS2", + .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1, + .type = ARM_CP_NOP, .access = PL2_W }, + { .name = "TLBIIPAS2IS", + .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1, + .type = ARM_CP_NOP, .access = PL2_W }, + { .name = "TLBIIPAS2L", + .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL2_W }, + { .name = "TLBIIPAS2LIS", + .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL2_W }, + /* 32 bit cache operations */ + { .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, + { .name = "BPIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 6, + .type = ARM_CP_NOP, .access = PL1_W }, + { .name = "ICIALLU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, + { .name = "ICIMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 1, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, + { .name = "BPIALL", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 6, + .type = ARM_CP_NOP, .access = PL1_W }, + { .name = "BPIMVA", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 7, + .type = ARM_CP_NOP, .access = PL1_W }, + { .name = "DCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access }, + { .name = "DCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + { .name = "DCCMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 1, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access }, + { .name = "DCCSW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + { .name = "DCCMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 11, .opc2 = 1, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, + { .name = "DCCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 1, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access }, + { .name = "DCCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + /* MMU Domain access control / MPU write buffer control */ + { .name = "DACR", .cp = 15, .opc1 = 0, .crn = 3, .crm = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, + .writefn = dacr_write, .raw_writefn = raw_write, + .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s), + offsetoflow32(CPUARMState, cp15.dacr_ns) } }, + { .name = "ELR_EL1", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 1, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, elr_el[1]) }, + { .name = "SPSR_EL1", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 0, + .access = PL1_RW, + .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_SVC]) }, + /* + * We rely on the access checks not allowing the guest to write to the + * state field when SPSel indicates that it's being used as the stack + * pointer. 
+ */ + { .name = "SP_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 1, .opc2 = 0, + .access = PL1_RW, .accessfn = sp_el0_access, + .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, sp_el[0]) }, + { .name = "SP_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 1, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, sp_el[1]) }, + { .name = "SPSel", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 0, + .type = ARM_CP_NO_RAW, + .access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write }, + { .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0, + .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]), + .access = PL2_RW, .accessfn = fpexc32_access }, + { .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0, + .access = PL2_RW, .resetvalue = 0, + .writefn = dacr_write, .raw_writefn = raw_write, + .fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) }, + { .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1, + .access = PL2_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) }, + { .name = "SPSR_IRQ", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 0, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_IRQ]) }, + { .name = "SPSR_ABT", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 1, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_ABT]) }, + { .name = "SPSR_UND", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 2, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_UND]) }, + { .name = "SPSR_FIQ", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 3, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_FIQ]) }, + { .name = "MDCR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 3, .opc2 = 1, + .resetvalue = 0, + .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el3) }, + { .name = "SDCR", .type = ARM_CP_ALIAS, + .cp = 15, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 1, + .access = PL1_RW, .accessfn = access_trap_aa32s_el1, + .writefn = sdcr_write, + .fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) }, + REGINFO_SENTINEL +}; + +/* Used to describe the behaviour of EL2 regs when EL2 does not exist. 
*/ +static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = { + { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0, + .access = PL2_RW, + .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore }, + { .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0, + .access = PL2_RW, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0, + .access = PL2_RW, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "HMAIR1", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "VTCR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "VTTBR", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 6, .crm = 2, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 }, + { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2, + .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, 
.opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14, + .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14, + .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1, + .access = PL2_RW, .accessfn = access_tda, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "HPFAR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "HIFAR", .state = ARM_CP_STATE_AA32, + .type = ARM_CP_CONST, + .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2, + .access = PL2_RW, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +/* Ditto, but for registers which exist in ARMv8 but not v7 */ +static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = { + { .name = "HCR2", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4, + .access = PL2_RW, + .type = ARM_CP_CONST, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask) +{ + ARMCPU *cpu = env_archcpu(env); + + if (arm_feature(env, ARM_FEATURE_V8)) { + valid_mask |= MAKE_64BIT_MASK(0, 34); /* ARMv8.0 */ + } else { + valid_mask |= MAKE_64BIT_MASK(0, 28); /* ARMv7VE */ + } + + if (arm_feature(env, ARM_FEATURE_EL3)) { + valid_mask &= ~HCR_HCD; + } else if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) { + /* + * Architecturally HCR.TSC is RES0 if EL3 is not implemented. + * However, if we're using the SMC PSCI conduit then QEMU is + * effectively acting like EL3 firmware and so the guest at + * EL2 should retain the ability to prevent EL1 from being + * able to make SMC calls into the ersatz firmware, so in + * that case HCR.TSC should be read/write. + */ + valid_mask &= ~HCR_TSC; + } + + if (arm_feature(env, ARM_FEATURE_AARCH64)) { + if (cpu_isar_feature(aa64_vh, cpu)) { + valid_mask |= HCR_E2H; + } + if (cpu_isar_feature(aa64_lor, cpu)) { + valid_mask |= HCR_TLOR; + } + if (cpu_isar_feature(aa64_pauth, cpu)) { + valid_mask |= HCR_API | HCR_APK; + } + if (cpu_isar_feature(aa64_mte, cpu)) { + valid_mask |= HCR_ATA | HCR_DCT | HCR_TID5; + } + } + + /* Clear RES0 bits. 
 */
+    value &= valid_mask;
+
+    /*
+     * These bits change the MMU setup:
+     * HCR_VM enables stage 2 translation
+     * HCR_PTW forbids certain page-table setups
+     * HCR_DC disables stage1 and enables stage2 translation
+     * HCR_DCT enables tagging on (disabled) stage1 translation
+     */
+    if ((env->cp15.hcr_el2 ^ value) & (HCR_VM | HCR_PTW | HCR_DC | HCR_DCT)) {
+        tlb_flush(CPU(cpu));
+    }
+    env->cp15.hcr_el2 = value;
+
+    /*
+     * Updates to VI and VF require us to update the status of
+     * virtual interrupts, which are the logical OR of these bits
+     * and the state of the input lines from the GIC. (This requires
+     * that we have the iothread lock, which is done by marking the
+     * reginfo structs as ARM_CP_IO.)
+     * Note that if a write to HCR pends a VIRQ or VFIQ it is never
+     * possible for it to be taken immediately, because VIRQ and
+     * VFIQ are masked unless running at EL0 or EL1, and HCR
+     * can only be written at EL2.
+     */
+    g_assert(qemu_mutex_iothread_locked());
+    arm_cpu_update_virq(cpu);
+    arm_cpu_update_vfiq(cpu);
+}
+
+static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
+{
+    do_hcr_write(env, value, 0);
+}
+
+static void hcr_writehigh(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Handle HCR2 write, i.e. write to high half of HCR_EL2 */
+    value = deposit64(env->cp15.hcr_el2, 32, 32, value);
+    do_hcr_write(env, value, MAKE_64BIT_MASK(0, 32));
+}
+
+static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
+                         uint64_t value)
+{
+    /* Handle HCR write, i.e. write to low half of HCR_EL2 */
+    value = deposit64(env->cp15.hcr_el2, 0, 32, value);
+    do_hcr_write(env, value, MAKE_64BIT_MASK(32, 32));
+}
+
+static void cptr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /*
+     * For A-profile AArch32 EL3, if NSACR.CP10
+     * is 0 then HCPTR.{TCP11,TCP10} ignore writes and read as 1.
+     */
+    if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
+        !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
+        value &= ~(0x3 << 10);
+        value |= env->cp15.cptr_el[2] & (0x3 << 10);
+    }
+    env->cp15.cptr_el[2] = value;
+}
+
+static uint64_t cptr_el2_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /*
+     * For A-profile AArch32 EL3, if NSACR.CP10
+     * is 0 then HCPTR.{TCP11,TCP10} ignore writes and read as 1.
+ */ + uint64_t value = env->cp15.cptr_el[2]; + + if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && + !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) { + value |= 0x3 << 10; + } + return value; +} + +static const ARMCPRegInfo el2_cp_reginfo[] = { + { .name = "HCR_EL2", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_IO, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0, + .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2), + .writefn = hcr_write }, + { .name = "HCR", .state = ARM_CP_STATE_AA32, + .type = ARM_CP_ALIAS | ARM_CP_IO, + .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0, + .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2), + .writefn = hcr_writelow }, + { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7, + .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "ELR_EL2", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, elr_el[2]) }, + { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0, + .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.esr_el[2]) }, + { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0, + .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.far_el[2]) }, + { .name = "HIFAR", .state = ARM_CP_STATE_AA32, + .type = ARM_CP_ALIAS, + .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2, + .access = PL2_RW, + .fieldoffset = offsetofhigh32(CPUARMState, cp15.far_el[2]) }, + { .name = "SPSR_EL2", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 0, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_HYP]) }, + { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0, + .access = PL2_RW, .writefn = vbar_write, + .fieldoffset = offsetof(CPUARMState, cp15.vbar_el[2]), + .resetvalue = 0 }, + { .name = "SP_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 4, .crm = 1, .opc2 = 0, + .access = PL3_RW, .type = ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, sp_el[2]) }, + { .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2, + .access = PL2_RW, .accessfn = cptr_access, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.cptr_el[2]), + .readfn = cptr_el2_read, .writefn = cptr_el2_write }, + { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0, + .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[2]), + .resetvalue = 0 }, + { .name = "HMAIR1", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_ALIAS, + .fieldoffset = offsetofhigh32(CPUARMState, cp15.mair_el[2]) }, + { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + /* HAMAIR1 is mapped to AMAIR_EL2[63:32] */ + { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0, + .access = PL2_RW, .type = 
ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2, + .access = PL2_RW, .writefn = vmsa_tcr_el12_write, + /* no .raw_writefn or .resetfn needed as we never use mask/base_mask */ + .fieldoffset = offsetof(CPUARMState, cp15.tcr_el[2]) }, + { .name = "VTCR", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2, + .type = ARM_CP_ALIAS, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .fieldoffset = offsetof(CPUARMState, cp15.vtcr_el2) }, + { .name = "VTCR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2, + .access = PL2_RW, + /* + * no .writefn needed as this can't cause an ASID change; + * no .raw_writefn or .resetfn needed as we never use mask/base_mask + */ + .fieldoffset = offsetof(CPUARMState, cp15.vtcr_el2) }, + { .name = "VTTBR", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 6, .crm = 2, + .type = ARM_CP_64BIT | ARM_CP_ALIAS, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .fieldoffset = offsetof(CPUARMState, cp15.vttbr_el2), + .writefn = vttbr_write }, + { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0, + .access = PL2_RW, .writefn = vttbr_write, + .fieldoffset = offsetof(CPUARMState, cp15.vttbr_el2) }, + { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0, + .access = PL2_RW, .raw_writefn = raw_write, .writefn = sctlr_write, + .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[2]) }, + { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2, + .access = PL2_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[2]) }, + { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0, + .access = PL2_RW, .resetvalue = 0, .writefn = vmsa_tcr_ttbr_el2_write, + .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) }, + { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2, + .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) }, + { .name = "TLBIALLNSNH", + .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbiall_nsnh_write }, + { .name = "TLBIALLNSNHIS", + .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbiall_nsnh_is_write }, + { .name = "TLBIALLH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbiall_hyp_write }, + { .name = "TLBIALLHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbiall_hyp_is_write }, + { .name = "TLBIMVAH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbimva_hyp_write }, + { .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbimva_hyp_is_write }, + { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbi_aa64_alle2_write }, + { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64, 
+ .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbi_aa64_vae2_write }, + { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5, + .access = PL2_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae2_write }, + { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0, + .access = PL2_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_alle2is_write }, + { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1, + .type = ARM_CP_NO_RAW, .access = PL2_W, + .writefn = tlbi_aa64_vae2is_write }, + { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5, + .access = PL2_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae2is_write }, +#ifndef CONFIG_USER_ONLY + /* + * Unlike the other EL2-related AT operations, these must + * UNDEF from EL3 if EL2 is not implemented, which is why we + * define them here rather than with the rest of the AT ops. + */ + { .name = "AT_S1E2R", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0, + .access = PL2_W, .accessfn = at_s1e2_access, + .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 }, + { .name = "AT_S1E2W", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1, + .access = PL2_W, .accessfn = at_s1e2_access, + .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 }, + /* + * The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE + * if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3 + * with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose + * to behave as if SCR.NS was 1. + */ + { .name = "ATS1HR", .cp = 15, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0, + .access = PL2_W, + .writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC }, + { .name = "ATS1HW", .cp = 15, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1, + .access = PL2_W, + .writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC }, + { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0, + /* + * ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the + * reset values as IMPDEF. We choose to reset to 3 to comply with + * both ARMv7 and ARMv8. 
+ */ + .access = PL2_RW, .resetvalue = 3, + .fieldoffset = offsetof(CPUARMState, cp15.cnthctl_el2) }, + { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3, + .access = PL2_RW, .type = ARM_CP_IO, .resetvalue = 0, + .writefn = gt_cntvoff_write, + .fieldoffset = offsetof(CPUARMState, cp15.cntvoff_el2) }, + { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14, + .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS | ARM_CP_IO, + .writefn = gt_cntvoff_write, + .fieldoffset = offsetof(CPUARMState, cp15.cntvoff_el2) }, + { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYP].cval), + .type = ARM_CP_IO, .access = PL2_RW, + .writefn = gt_hyp_cval_write, .raw_writefn = raw_write }, + { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYP].cval), + .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_IO, + .writefn = gt_hyp_cval_write, .raw_writefn = raw_write }, + { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL2_RW, + .resetfn = gt_hyp_timer_reset, + .readfn = gt_hyp_tval_read, .writefn = gt_hyp_tval_write }, + { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH, + .type = ARM_CP_IO, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYP].ctl), + .resetvalue = 0, + .writefn = gt_hyp_ctl_write, .raw_writefn = raw_write }, +#endif + /* The only field of MDCR_EL2 that has a defined architectural reset value + * is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N. 
+ */ + { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1, + .access = PL2_RW, .resetvalue = PMCR_NUM_COUNTERS, + .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2), }, + { .name = "HPFAR", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .fieldoffset = offsetof(CPUARMState, cp15.hpfar_el2) }, + { .name = "HPFAR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, cp15.hpfar_el2) }, + { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH, + .cp = 15, .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, cp15.hstr_el2) }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo el2_v8_cp_reginfo[] = { + { .name = "HCR2", .state = ARM_CP_STATE_AA32, + .type = ARM_CP_ALIAS | ARM_CP_IO, + .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4, + .access = PL2_RW, + .fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2), + .writefn = hcr_writehigh }, + REGINFO_SENTINEL +}; + +static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 3 || arm_is_secure_below_el3(env)) { + return CP_ACCESS_OK; + } + return CP_ACCESS_TRAP_UNCATEGORIZED; +} + +static const ARMCPRegInfo el2_sec_cp_reginfo[] = { + { .name = "VSTTBR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 0, + .access = PL2_RW, .accessfn = sel2_access, + .fieldoffset = offsetof(CPUARMState, cp15.vsttbr_el2) }, + { .name = "VSTCR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2, + .access = PL2_RW, .accessfn = sel2_access, + .fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) }, + REGINFO_SENTINEL +}; + +static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + /* + * The NSACR is RW at EL3, and RO for NS EL1 and NS EL2. + * At Secure EL1 it traps to EL3 or EL2. + */ + if (arm_current_el(env) == 3) { + return CP_ACCESS_OK; + } + if (arm_is_secure_below_el3(env)) { + if (env->cp15.scr_el3 & SCR_EEL2) { + return CP_ACCESS_TRAP_EL2; + } + return CP_ACCESS_TRAP_EL3; + } + /* Accesses from EL1 NS and EL2 NS are UNDEF for write but allow reads. 
*/ + if (isread) { + return CP_ACCESS_OK; + } + return CP_ACCESS_TRAP_UNCATEGORIZED; +} + +static const ARMCPRegInfo el3_cp_reginfo[] = { + { .name = "SCR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 0, + .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.scr_el3), + .resetfn = scr_reset, .writefn = scr_write }, + { .name = "SCR", .type = ARM_CP_ALIAS | ARM_CP_NEWEL, + .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 0, + .access = PL1_RW, .accessfn = access_trap_aa32s_el1, + .fieldoffset = offsetoflow32(CPUARMState, cp15.scr_el3), + .writefn = scr_write }, + { .name = "SDER32_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 1, + .access = PL3_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.sder) }, + { .name = "SDER", + .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 1, + .access = PL3_RW, .resetvalue = 0, + .fieldoffset = offsetoflow32(CPUARMState, cp15.sder) }, + { .name = "MVBAR", .cp = 15, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_trap_aa32s_el1, + .writefn = vbar_write, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.mvbar) }, + { .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0, + .access = PL3_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) }, + { .name = "TCR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2, + .access = PL3_RW, + /* + * no .writefn needed as this can't cause an ASID change; + * we must provide a .raw_writefn and .resetfn because we handle + * reset and migration for the AArch32 TTBCR(S), which might be + * using mask and base_mask. + */ + .resetfn = vmsa_ttbcr_reset, .raw_writefn = vmsa_ttbcr_raw_write, + .fieldoffset = offsetof(CPUARMState, cp15.tcr_el[3]) }, + { .name = "ELR_EL3", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 6, .crn = 4, .crm = 0, .opc2 = 1, + .access = PL3_RW, + .fieldoffset = offsetof(CPUARMState, elr_el[3]) }, + { .name = "ESR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 2, .opc2 = 0, + .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.esr_el[3]) }, + { .name = "FAR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 6, .crm = 0, .opc2 = 0, + .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.far_el[3]) }, + { .name = "SPSR_EL3", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_ALIAS, + .opc0 = 3, .opc1 = 6, .crn = 4, .crm = 0, .opc2 = 0, + .access = PL3_RW, + .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_MON]) }, + { .name = "VBAR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 12, .crm = 0, .opc2 = 0, + .access = PL3_RW, .writefn = vbar_write, + .fieldoffset = offsetof(CPUARMState, cp15.vbar_el[3]), + .resetvalue = 0 }, + { .name = "CPTR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 2, + .access = PL3_RW, .accessfn = cptr_access, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.cptr_el[3]) }, + { .name = "TPIDR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 13, .crm = 0, .opc2 = 2, + .access = PL3_RW, .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[3]) }, + { .name = "AMAIR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 10, .crm = 3, .opc2 = 0, + .access = PL3_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "AFSR0_EL3", .state = 
ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 1, .opc2 = 0, + .access = PL3_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "AFSR1_EL3", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 1, .opc2 = 1, + .access = PL3_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "TLBI_ALLE3IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 0, + .access = PL3_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_alle3is_write }, + { .name = "TLBI_VAE3IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 1, + .access = PL3_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae3is_write }, + { .name = "TLBI_VALE3IS", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 5, + .access = PL3_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae3is_write }, + { .name = "TLBI_ALLE3", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 0, + .access = PL3_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_alle3_write }, + { .name = "TLBI_VAE3", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 1, + .access = PL3_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae3_write }, + { .name = "TLBI_VALE3", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5, + .access = PL3_W, .type = ARM_CP_NO_RAW, + .writefn = tlbi_aa64_vae3_write }, + REGINFO_SENTINEL +}; + +#ifndef CONFIG_USER_ONLY +/* Test if system register redirection is to occur in the current state. */ +static bool redirect_for_e2h(CPUARMState *env) +{ + return arm_current_el(env) == 2 && (arm_hcr_el2_eff(env) & HCR_E2H); +} + +static uint64_t el2_e2h_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + CPReadFn *readfn; + + if (redirect_for_e2h(env)) { + /* Switch to the saved EL2 version of the register. */ + ri = ri->opaque; + readfn = ri->readfn; + } else { + readfn = ri->orig_readfn; + } + if (readfn == NULL) { + readfn = raw_read; + } + return readfn(env, ri); +} + +static void el2_e2h_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + CPWriteFn *writefn; + + if (redirect_for_e2h(env)) { + /* Switch to the saved EL2 version of the register. 
*/ + ri = ri->opaque; + writefn = ri->writefn; + } else { + writefn = ri->orig_writefn; + } + if (writefn == NULL) { + writefn = raw_write; + } + writefn(env, ri, value); +} + +static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu) +{ + struct E2HAlias { + uint32_t src_key, dst_key, new_key; + const char *src_name, *dst_name, *new_name; + bool (*feature)(const ARMISARegisters *id); + }; + +#define K(op0, op1, crn, crm, op2) \ + ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2) + + static const struct E2HAlias aliases[] = { + { K(3, 0, 1, 0, 0), K(3, 4, 1, 0, 0), K(3, 5, 1, 0, 0), + "SCTLR", "SCTLR_EL2", "SCTLR_EL12" }, + { K(3, 0, 1, 0, 2), K(3, 4, 1, 1, 2), K(3, 5, 1, 0, 2), + "CPACR", "CPTR_EL2", "CPACR_EL12" }, + { K(3, 0, 2, 0, 0), K(3, 4, 2, 0, 0), K(3, 5, 2, 0, 0), + "TTBR0_EL1", "TTBR0_EL2", "TTBR0_EL12" }, + { K(3, 0, 2, 0, 1), K(3, 4, 2, 0, 1), K(3, 5, 2, 0, 1), + "TTBR1_EL1", "TTBR1_EL2", "TTBR1_EL12" }, + { K(3, 0, 2, 0, 2), K(3, 4, 2, 0, 2), K(3, 5, 2, 0, 2), + "TCR_EL1", "TCR_EL2", "TCR_EL12" }, + { K(3, 0, 4, 0, 0), K(3, 4, 4, 0, 0), K(3, 5, 4, 0, 0), + "SPSR_EL1", "SPSR_EL2", "SPSR_EL12" }, + { K(3, 0, 4, 0, 1), K(3, 4, 4, 0, 1), K(3, 5, 4, 0, 1), + "ELR_EL1", "ELR_EL2", "ELR_EL12" }, + { K(3, 0, 5, 1, 0), K(3, 4, 5, 1, 0), K(3, 5, 5, 1, 0), + "AFSR0_EL1", "AFSR0_EL2", "AFSR0_EL12" }, + { K(3, 0, 5, 1, 1), K(3, 4, 5, 1, 1), K(3, 5, 5, 1, 1), + "AFSR1_EL1", "AFSR1_EL2", "AFSR1_EL12" }, + { K(3, 0, 5, 2, 0), K(3, 4, 5, 2, 0), K(3, 5, 5, 2, 0), + "ESR_EL1", "ESR_EL2", "ESR_EL12" }, + { K(3, 0, 6, 0, 0), K(3, 4, 6, 0, 0), K(3, 5, 6, 0, 0), + "FAR_EL1", "FAR_EL2", "FAR_EL12" }, + { K(3, 0, 10, 2, 0), K(3, 4, 10, 2, 0), K(3, 5, 10, 2, 0), + "MAIR_EL1", "MAIR_EL2", "MAIR_EL12" }, + { K(3, 0, 10, 3, 0), K(3, 4, 10, 3, 0), K(3, 5, 10, 3, 0), + "AMAIR0", "AMAIR_EL2", "AMAIR_EL12" }, + { K(3, 0, 12, 0, 0), K(3, 4, 12, 0, 0), K(3, 5, 12, 0, 0), + "VBAR", "VBAR_EL2", "VBAR_EL12" }, + { K(3, 0, 13, 0, 1), K(3, 4, 13, 0, 1), K(3, 5, 13, 0, 1), + "CONTEXTIDR_EL1", "CONTEXTIDR_EL2", "CONTEXTIDR_EL12" }, + { K(3, 0, 14, 1, 0), K(3, 4, 14, 1, 0), K(3, 5, 14, 1, 0), + "CNTKCTL", "CNTHCTL_EL2", "CNTKCTL_EL12" }, + + /* + * Note that redirection of ZCR is mentioned in the description + * of ZCR_EL2, and aliasing in the description of ZCR_EL1, but + * not in the summary table. + */ + { K(3, 0, 1, 2, 0), K(3, 4, 1, 2, 0), K(3, 5, 1, 2, 0), + "ZCR_EL1", "ZCR_EL2", "ZCR_EL12", isar_feature_aa64_sve }, + + { K(3, 0, 5, 6, 0), K(3, 4, 5, 6, 0), K(3, 5, 5, 6, 0), + "TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte }, + + /* TODO: ARMv8.2-SPE -- PMSCR_EL2 */ + /* TODO: ARMv8.4-Trace -- TRFCR_EL2 */ + }; +#undef K + + size_t i; + + for (i = 0; i < ARRAY_SIZE(aliases); i++) { + const struct E2HAlias *a = &aliases[i]; + ARMCPRegInfo *src_reg, *dst_reg; + + if (a->feature && !a->feature(&cpu->isar)) { + continue; + } + + src_reg = g_hash_table_lookup(cpu->cp_regs, &a->src_key); + dst_reg = g_hash_table_lookup(cpu->cp_regs, &a->dst_key); + g_assert(src_reg != NULL); + g_assert(dst_reg != NULL); + + /* Cross-compare names to detect typos in the keys. */ + g_assert(strcmp(src_reg->name, a->src_name) == 0); + g_assert(strcmp(dst_reg->name, a->dst_name) == 0); + + /* None of the core system registers use opaque; we will. */ + g_assert(src_reg->opaque == NULL); + + /* Create alias before redirection so we dup the right data. 
*/ + if (a->new_key) { + ARMCPRegInfo *new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo)); + uint32_t *new_key = g_memdup(&a->new_key, sizeof(uint32_t)); + bool ok; + + new_reg->name = a->new_name; + new_reg->type |= ARM_CP_ALIAS; + /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */ + new_reg->access &= PL2_RW | PL3_RW; + + ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg); + g_assert(ok); + } + + src_reg->opaque = dst_reg; + src_reg->orig_readfn = src_reg->readfn ?: raw_read; + src_reg->orig_writefn = src_reg->writefn ?: raw_write; + if (!src_reg->raw_readfn) { + src_reg->raw_readfn = raw_read; + } + if (!src_reg->raw_writefn) { + src_reg->raw_writefn = raw_write; + } + src_reg->readfn = el2_e2h_read; + src_reg->writefn = el2_e2h_write; + } +} +#endif + +static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int cur_el = arm_current_el(env); + + if (cur_el < 2) { + uint64_t hcr = arm_hcr_el2_eff(env); + + if (cur_el == 0) { + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + if (!(env->cp15.sctlr_el[2] & SCTLR_UCT)) { + return CP_ACCESS_TRAP_EL2; + } + } else { + if (!(env->cp15.sctlr_el[1] & SCTLR_UCT)) { + return CP_ACCESS_TRAP; + } + if (hcr & HCR_TID2) { + return CP_ACCESS_TRAP_EL2; + } + } + } else if (hcr & HCR_TID2) { + return CP_ACCESS_TRAP_EL2; + } + } + + if (arm_current_el(env) < 2 && arm_hcr_el2_eff(env) & HCR_TID2) { + return CP_ACCESS_TRAP_EL2; + } + + return CP_ACCESS_OK; +} + +static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + /* + * Writes to OSLAR_EL1 may update the OS lock status, which can be + * read via a bit in OSLSR_EL1. + */ + int oslock; + + if (ri->state == ARM_CP_STATE_AA32) { + oslock = (value == 0xC5ACCE55); + } else { + oslock = value & 1; + } + + env->cp15.oslsr_el1 = deposit32(env->cp15.oslsr_el1, 1, 1, oslock); +} + +static const ARMCPRegInfo debug_cp_reginfo[] = { + /* + * DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped + * debug components. The AArch64 version of DBGDRAR is named MDRAR_EL1; + * unlike DBGDRAR it is never accessible from EL0. + * DBGDSAR is deprecated and must RAZ from v8 anyway, so it has no AArch64 + * accessor. + */ + { .name = "DBGDRAR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL0_R, .accessfn = access_tdra, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "MDRAR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0, + .access = PL1_R, .accessfn = access_tdra, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "DBGDSAR", .cp = 14, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL0_R, .accessfn = access_tdra, + .type = ARM_CP_CONST, .resetvalue = 0 }, + /* Monitor debug system control register; the 32-bit alias is DBGDSCRext. */ + { .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2, + .access = PL1_RW, .accessfn = access_tda, + .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1), + .resetvalue = 0 }, + /* + * MDCCSR_EL0, aka DBGDSCRint. This is a read-only mirror of MDSCR_EL1. + * We don't implement the configurable EL0 access. 
+ */ + { .name = "MDCCSR_EL0", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 0, + .type = ARM_CP_ALIAS, + .access = PL1_R, .accessfn = access_tda, + .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1), }, + { .name = "OSLAR_EL1", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 4, + .access = PL1_W, .type = ARM_CP_NO_RAW, + .accessfn = access_tdosa, + .writefn = oslar_write }, + { .name = "OSLSR_EL1", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 4, + .access = PL1_R, .resetvalue = 10, + .accessfn = access_tdosa, + .fieldoffset = offsetof(CPUARMState, cp15.oslsr_el1) }, + /* Dummy OSDLR_EL1: 32-bit Linux will read this */ + { .name = "OSDLR_EL1", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4, + .access = PL1_RW, .accessfn = access_tdosa, + .type = ARM_CP_NOP }, + /* + * Dummy DBGVCR: Linux wants to clear this on startup, but we don't + * implement vector catch debug events yet. + */ + { .name = "DBGVCR", + .cp = 14, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tda, + .type = ARM_CP_NOP }, + /* + * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor + * to save and restore a 32-bit guest's DBGVCR) + */ + { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0, + .access = PL2_RW, .accessfn = access_tda, + .type = ARM_CP_NOP }, + /* + * Dummy MDCCINT_EL1, since we don't implement the Debug Communications + * Channel but Linux may try to access this register. The 32-bit + * alias is DBGDCCINT. + */ + { .name = "MDCCINT_EL1", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tda, + .type = ARM_CP_NOP }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo debug_lpae_cp_reginfo[] = { + /* 64 bit access versions of the (dummy) debug registers */ + { .name = "DBGDRAR", .cp = 14, .crm = 1, .opc1 = 0, + .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 }, + { .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0, + .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + int cur_el = arm_current_el(env); + int old_len = sve_zcr_len_for_el(env, cur_el); + int new_len; + + /* Bits other than [3:0] are RAZ/WI. */ + QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16); + raw_write(env, ri, value & 0xf); + + /* + * Because we arrived here, we know both FP and SVE are enabled; + * otherwise we would have trapped access to the ZCR_ELn register. 
+ */ + new_len = sve_zcr_len_for_el(env, cur_el); + if (new_len < old_len) { + aarch64_sve_narrow_vq(env, new_len + 1); + } +} + +static const ARMCPRegInfo zcr_el1_reginfo = { + .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0, + .access = PL1_RW, .type = ARM_CP_SVE, + .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]), + .writefn = zcr_write, .raw_writefn = raw_write +}; + +static const ARMCPRegInfo zcr_el2_reginfo = { + .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_SVE, + .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]), + .writefn = zcr_write, .raw_writefn = raw_write +}; + +static const ARMCPRegInfo zcr_no_el2_reginfo = { + .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0, + .access = PL2_RW, .type = ARM_CP_SVE, + .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore +}; + +static const ARMCPRegInfo zcr_el3_reginfo = { + .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0, + .access = PL3_RW, .type = ARM_CP_SVE, + .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]), + .writefn = zcr_write, .raw_writefn = raw_write +}; + +static void dbgwvr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + int i = ri->crm; + + /* + * Bits [63:49] are hardwired to the value of bit [48]; that is, the + * register reads and behaves as if values written are sign extended. + * Bits [1:0] are RES0. + */ + value = sextract64(value, 0, 49) & ~3ULL; + + raw_write(env, ri, value); + hw_watchpoint_update(cpu, i); +} + +static void dbgwcr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + int i = ri->crm; + + raw_write(env, ri, value); + hw_watchpoint_update(cpu, i); +} + +static void dbgbvr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + int i = ri->crm; + + raw_write(env, ri, value); + hw_breakpoint_update(cpu, i); +} + +static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + int i = ri->crm; + + /* + * BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only + * copy of BAS[0]. + */ + value = deposit64(value, 6, 1, extract64(value, 5, 1)); + value = deposit64(value, 8, 1, extract64(value, 7, 1)); + + raw_write(env, ri, value); + hw_breakpoint_update(cpu, i); +} + +static void define_debug_regs(ARMCPU *cpu) +{ + /* + * Define v7 and v8 architectural debug registers. + * These are just dummy implementations for now. + */ + int i; + int wrps, brps, ctx_cmps; + + /* + * The Arm ARM says DBGDIDR is optional and deprecated if EL1 cannot + * use AArch32. Given that bit 15 is RES1, if the value is 0 then + * the register must not exist for this cpu. + */ + if (cpu->isar.dbgdidr != 0) { + ARMCPRegInfo dbgdidr = { + .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, + .opc1 = 0, .opc2 = 0, + .access = PL0_R, .accessfn = access_tda, + .type = ARM_CP_CONST, .resetvalue = cpu->isar.dbgdidr, + }; + define_one_arm_cp_reg(cpu, &dbgdidr); + } + + /* Note that all these register fields hold "number of Xs minus 1". 
*/ + brps = arm_num_brps(cpu); + wrps = arm_num_wrps(cpu); + ctx_cmps = arm_num_ctx_cmps(cpu); + + assert(ctx_cmps <= brps); + + define_arm_cp_regs(cpu, debug_cp_reginfo); + + if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) { + define_arm_cp_regs(cpu, debug_lpae_cp_reginfo); + } + + for (i = 0; i < brps; i++) { + ARMCPRegInfo dbgregs[] = { + { .name = "DBGBVR", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4, + .access = PL1_RW, .accessfn = access_tda, + .fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]), + .writefn = dbgbvr_write, .raw_writefn = raw_write + }, + { .name = "DBGBCR", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5, + .access = PL1_RW, .accessfn = access_tda, + .fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]), + .writefn = dbgbcr_write, .raw_writefn = raw_write + }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, dbgregs); + } + + for (i = 0; i < wrps; i++) { + ARMCPRegInfo dbgregs[] = { + { .name = "DBGWVR", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6, + .access = PL1_RW, .accessfn = access_tda, + .fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]), + .writefn = dbgwvr_write, .raw_writefn = raw_write + }, + { .name = "DBGWCR", .state = ARM_CP_STATE_BOTH, + .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7, + .access = PL1_RW, .accessfn = access_tda, + .fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]), + .writefn = dbgwcr_write, .raw_writefn = raw_write + }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, dbgregs); + } +} + +static void define_pmu_regs(ARMCPU *cpu) +{ + /* + * v7 performance monitor control register: same implementor + * field as main ID register, and we implement four counters in + * addition to the cycle count register. 
+ */ + unsigned int i, pmcrn = PMCR_NUM_COUNTERS; + ARMCPRegInfo pmcr = { + .name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0, + .access = PL0_RW, + .type = ARM_CP_IO | ARM_CP_ALIAS, + .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcr), + .accessfn = pmreg_access, .writefn = pmcr_write, + .raw_writefn = raw_write, + }; + ARMCPRegInfo pmcr64 = { + .name = "PMCR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 0, + .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_IO, + .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr), + .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT) | + PMCRLC, + .writefn = pmcr_write, .raw_writefn = raw_write, + }; + define_one_arm_cp_reg(cpu, &pmcr); + define_one_arm_cp_reg(cpu, &pmcr64); + for (i = 0; i < pmcrn; i++) { + char *pmevcntr_name = g_strdup_printf("PMEVCNTR%d", i); + char *pmevcntr_el0_name = g_strdup_printf("PMEVCNTR%d_EL0", i); + char *pmevtyper_name = g_strdup_printf("PMEVTYPER%d", i); + char *pmevtyper_el0_name = g_strdup_printf("PMEVTYPER%d_EL0", i); + ARMCPRegInfo pmev_regs[] = { + { .name = pmevcntr_name, .cp = 15, .crn = 14, + .crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7, + .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS, + .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn, + .accessfn = pmreg_access }, + { .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)), + .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_IO, + .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn, + .raw_readfn = pmevcntr_rawread, + .raw_writefn = pmevcntr_rawwrite }, + { .name = pmevtyper_name, .cp = 15, .crn = 14, + .crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7, + .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS, + .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn, + .accessfn = pmreg_access }, + { .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 12 | (3 & (i >> 3)), + .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access, + .type = ARM_CP_IO, + .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn, + .raw_writefn = pmevtyper_rawwrite }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, pmev_regs); + g_free(pmevcntr_name); + g_free(pmevcntr_el0_name); + g_free(pmevtyper_name); + g_free(pmevtyper_el0_name); + } + if (cpu_isar_feature(aa32_pmu_8_1, cpu)) { + ARMCPRegInfo v81_pmu_regs[] = { + { .name = "PMCEID2", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4, + .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, + .resetvalue = extract64(cpu->pmceid0, 32, 32) }, + { .name = "PMCEID3", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5, + .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, + .resetvalue = extract64(cpu->pmceid1, 32, 32) }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, v81_pmu_regs); + } + if (cpu_isar_feature(any_pmu_8_4, cpu)) { + static const ARMCPRegInfo v84_pmmir = { + .name = "PMMIR_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 6, + .access = PL1_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, + .resetvalue = 0 + }; + define_one_arm_cp_reg(cpu, &v84_pmmir); + } +} + +/* + * We don't know until after realize whether there's a GICv3 + * attached, and that is what registers the gicv3 sysregs. 
+ * So we have to fill in the GIC fields in ID_PFR/ID_PFR1_EL1/ID_AA64PFR0_EL1 + * at runtime. + */ +static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + ARMCPU *cpu = env_archcpu(env); + uint64_t pfr1 = cpu->isar.id_pfr1; + + if (env->gicv3state) { + pfr1 |= 1 << 28; + } + return pfr1; +} + +#ifndef CONFIG_USER_ONLY +static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + ARMCPU *cpu = env_archcpu(env); + uint64_t pfr0 = cpu->isar.id_aa64pfr0; + + if (env->gicv3state) { + pfr0 |= 1 << 24; + } + return pfr0; +} +#endif + +/* + * Shared logic between LORID and the rest of the LOR* registers. + * Secure state exclusion has already been dealt with. + */ +static CPAccessResult access_lor_ns(CPUARMState *env, + const ARMCPRegInfo *ri, bool isread) +{ + int el = arm_current_el(env); + + if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TLOR)) { + return CP_ACCESS_TRAP_EL2; + } + if (el < 3 && (env->cp15.scr_el3 & SCR_TLOR)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; +} + +static CPAccessResult access_lor_other(CPUARMState *env, + const ARMCPRegInfo *ri, bool isread) +{ + if (arm_is_secure_below_el3(env)) { + /* Access denied in secure mode. */ + return CP_ACCESS_TRAP; + } + return access_lor_ns(env, ri, isread); +} + +/* + * A trivial implementation of ARMv8.1-LOR leaves all of these + * registers fixed at 0, which indicates that there are zero + * supported Limited Ordering regions. + */ +static const ARMCPRegInfo lor_reginfo[] = { + { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0, + .access = PL1_RW, .accessfn = access_lor_other, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1, + .access = PL1_RW, .accessfn = access_lor_other, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2, + .access = PL1_RW, .accessfn = access_lor_other, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3, + .access = PL1_RW, .accessfn = access_lor_other, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7, + .access = PL1_R, .accessfn = access_lor_ns, + .type = ARM_CP_CONST, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +#ifdef TARGET_AARCH64 +static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int el = arm_current_el(env); + + if (el < 2 && + arm_feature(env, ARM_FEATURE_EL2) && + !(arm_hcr_el2_eff(env) & HCR_APK)) { + return CP_ACCESS_TRAP_EL2; + } + if (el < 3 && + arm_feature(env, ARM_FEATURE_EL3) && + !(env->cp15.scr_el3 & SCR_APK)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; +} + +static const ARMCPRegInfo pauth_reginfo[] = { + { .name = "APDAKEYLO_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 0, + .access = PL1_RW, .accessfn = access_pauth, + .fieldoffset = offsetof(CPUARMState, keys.apda.lo) }, + { .name = "APDAKEYHI_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 1, + .access = PL1_RW, .accessfn = access_pauth, + .fieldoffset = offsetof(CPUARMState, keys.apda.hi) }, + { .name = "APDBKEYLO_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 2, + 
      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apdb.lo) },
+    { .name = "APDBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 3,
+      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apdb.hi) },
+    { .name = "APGAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 0,
+      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apga.lo) },
+    { .name = "APGAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 1,
+      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apga.hi) },
+    { .name = "APIAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 0,
+      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apia.lo) },
+    { .name = "APIAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 1,
+      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apia.hi) },
+    { .name = "APIBKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 2,
+      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apib.lo) },
+    { .name = "APIBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
+      .access = PL1_RW, .accessfn = access_pauth,
+      .fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
+    REGINFO_SENTINEL
+};
+
+static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    Error *err = NULL;
+    uint64_t ret;
+
+    /* Success sets NZCV = 0000. */
+    env->NF = env->CF = env->VF = 0, env->ZF = 1;
+
+    if (qemu_guest_getrandom(&ret, sizeof(ret), &err) < 0) {
+        /*
+         * ??? Failed, for unknown reasons in the crypto subsystem.
+         * The best we can do is log the reason and return the
+         * timed-out indication to the guest. There is no reason
+         * we know to expect this failure to be transitory, so the
+         * guest may well hang retrying the operation.
+         */
+        qemu_log_mask(LOG_UNIMP, "%s: Crypto failure: %s",
+                      ri->name, error_get_pretty(err));
+        error_free(err);
+
+        env->ZF = 0; /* NZCV = 0100 */
+        return 0;
+    }
+    return ret;
+}
+
+/* We do not support re-seeding, so the two registers operate the same.
*/ +static const ARMCPRegInfo rndr_reginfo[] = { + { .name = "RNDR", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO, + .opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 0, + .access = PL0_R, .readfn = rndr_readfn }, + { .name = "RNDRRS", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO, + .opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 1, + .access = PL0_R, .readfn = rndr_readfn }, + REGINFO_SENTINEL +}; + +#ifndef CONFIG_USER_ONLY +static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque, + uint64_t value) +{ + ARMCPU *cpu = env_archcpu(env); + /* CTR_EL0 System register -> DminLine, bits [19:16] */ + uint64_t dline_size = 4 << ((cpu->ctr >> 16) & 0xF); + uint64_t vaddr_in = (uint64_t) value; + uint64_t vaddr = vaddr_in & ~(dline_size - 1); + void *haddr; + int mem_idx = cpu_mmu_index(env, false); + + /* This won't be crossing page boundaries */ + haddr = probe_read(env, vaddr, dline_size, mem_idx, GETPC()); + if (haddr) { + + ram_addr_t offset; + MemoryRegion *mr; + + /* RCU lock is already being held */ + mr = memory_region_from_host(haddr, &offset); + + if (mr) { + memory_region_writeback(mr, offset, dline_size); + } + } +} + +static const ARMCPRegInfo dcpop_reg[] = { + { .name = "DC_CVAP", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1, + .access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END, + .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo dcpodp_reg[] = { + { .name = "DC_CVADP", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1, + .access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END, + .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn }, + REGINFO_SENTINEL +}; +#endif /*CONFIG_USER_ONLY*/ + +static CPAccessResult access_aa64_tid5(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if ((arm_current_el(env) < 2) && (arm_hcr_el2_eff(env) & HCR_TID5)) { + return CP_ACCESS_TRAP_EL2; + } + + return CP_ACCESS_OK; +} + +static CPAccessResult access_mte(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int el = arm_current_el(env); + + if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) { + uint64_t hcr = arm_hcr_el2_eff(env); + if (!(hcr & HCR_ATA) && (!(hcr & HCR_E2H) || !(hcr & HCR_TGE))) { + return CP_ACCESS_TRAP_EL2; + } + } + if (el < 3 && + arm_feature(env, ARM_FEATURE_EL3) && + !(env->cp15.scr_el3 & SCR_ATA)) { + return CP_ACCESS_TRAP_EL3; + } + return CP_ACCESS_OK; +} + +static uint64_t tco_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + return env->pstate & PSTATE_TCO; +} + +static void tco_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val) +{ + env->pstate = (env->pstate & ~PSTATE_TCO) | (val & PSTATE_TCO); +} + +static const ARMCPRegInfo mte_reginfo[] = { + { .name = "TFSRE0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 6, .opc2 = 1, + .access = PL1_RW, .accessfn = access_mte, + .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[0]) }, + { .name = "TFSR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 6, .opc2 = 0, + .access = PL1_RW, .accessfn = access_mte, + .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[1]) }, + { .name = "TFSR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 6, .opc2 = 0, + .access = PL2_RW, .accessfn = access_mte, + .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[2]) }, + { 
.name = "TFSR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 6, .opc2 = 0, + .access = PL3_RW, + .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[3]) }, + { .name = "RGSR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 5, + .access = PL1_RW, .accessfn = access_mte, + .fieldoffset = offsetof(CPUARMState, cp15.rgsr_el1) }, + { .name = "GCR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 6, + .access = PL1_RW, .accessfn = access_mte, + .fieldoffset = offsetof(CPUARMState, cp15.gcr_el1) }, + { .name = "GMID_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 4, + .access = PL1_R, .accessfn = access_aa64_tid5, + .type = ARM_CP_CONST, .resetvalue = GMID_EL1_BS }, + { .name = "TCO", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7, + .type = ARM_CP_NO_RAW, + .access = PL0_RW, .readfn = tco_read, .writefn = tco_write }, + { .name = "DC_IGVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 3, + .type = ARM_CP_NOP, .access = PL1_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_IGSW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 4, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + { .name = "DC_IGDVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL1_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_IGDSW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 6, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + { .name = "DC_CGSW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 4, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + { .name = "DC_CGDSW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 6, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + { .name = "DC_CIGSW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 4, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + { .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6, + .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo mte_tco_ro_reginfo[] = { + { .name = "TCO", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7, + .type = ARM_CP_CONST, .access = PL0_RW, }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = { + { .name = "DC_CGVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 3, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CGDVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CGVAP", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 3, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CGDVAP", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CGVADP", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, 
.crm = 13, .opc2 = 3, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CGDVADP", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CIGVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 3, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_CIGDVAC", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL0_W, + .accessfn = aa64_cacheop_poc_access }, + { .name = "DC_GVA", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 3, + .access = PL0_W, .type = ARM_CP_DC_GVA, +#ifndef CONFIG_USER_ONLY + /* Avoid overhead of an access check that always passes in user-mode */ + .accessfn = aa64_zva_access, +#endif + }, + { .name = "DC_GZVA", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 4, + .access = PL0_W, .type = ARM_CP_DC_GZVA, +#ifndef CONFIG_USER_ONLY + /* Avoid overhead of an access check that always passes in user-mode */ + .accessfn = aa64_zva_access, +#endif + }, + REGINFO_SENTINEL +}; + +#endif + +static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + int el = arm_current_el(env); + + if (el == 0) { + uint64_t sctlr = arm_sctlr(env, el); + if (!(sctlr & SCTLR_EnRCTX)) { + return CP_ACCESS_TRAP; + } + } else if (el == 1) { + uint64_t hcr = arm_hcr_el2_eff(env); + if (hcr & HCR_NV) { + return CP_ACCESS_TRAP_EL2; + } + } + return CP_ACCESS_OK; +} + +static const ARMCPRegInfo predinv_reginfo[] = { + { .name = "CFP_RCTX", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 4, + .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, + { .name = "DVP_RCTX", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, + { .name = "CPP_RCTX", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 7, + .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, + /* + * Note the AArch32 opcodes have a different OPC1. 
+ */ + { .name = "CFPRCTX", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 4, + .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, + { .name = "DVPRCTX", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 5, + .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, + { .name = "CPPRCTX", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7, + .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, + REGINFO_SENTINEL +}; + +static uint64_t ccsidr2_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + /* Read the high 32 bits of the current CCSIDR */ + return extract64(ccsidr_read(env, ri), 32, 32); +} + +static const ARMCPRegInfo ccsidr2_reginfo[] = { + { .name = "CCSIDR2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 2, + .access = PL1_R, + .accessfn = access_aa64_tid2, + .readfn = ccsidr2_read, .type = ARM_CP_NO_RAW }, + REGINFO_SENTINEL +}; + +static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if ((arm_current_el(env) < 2) && (arm_hcr_el2_eff(env) & HCR_TID3)) { + return CP_ACCESS_TRAP_EL2; + } + + return CP_ACCESS_OK; +} + +static CPAccessResult access_aa32_tid3(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_feature(env, ARM_FEATURE_V8)) { + return access_aa64_tid3(env, ri, isread); + } + + return CP_ACCESS_OK; +} + +static CPAccessResult access_jazelle(CPUARMState *env, const ARMCPRegInfo *ri, + bool isread) +{ + if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID0)) { + return CP_ACCESS_TRAP_EL2; + } + + return CP_ACCESS_OK; +} + +static const ARMCPRegInfo jazelle_regs[] = { + { .name = "JIDR", + .cp = 14, .crn = 0, .crm = 0, .opc1 = 7, .opc2 = 0, + .access = PL1_R, .accessfn = access_jazelle, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "JOSCR", + .cp = 14, .crn = 1, .crm = 0, .opc1 = 7, .opc2 = 0, + .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "JMCR", + .cp = 14, .crn = 2, .crm = 0, .opc1 = 7, .opc2 = 0, + .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo vhe_reginfo[] = { + { .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) }, + { .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1, + .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write, + .fieldoffset = offsetof(CPUARMState, cp15.ttbr1_el[2]) }, +#ifndef CONFIG_USER_ONLY + { .name = "CNTHV_CVAL_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 2, + .fieldoffset = + offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].cval), + .type = ARM_CP_IO, .access = PL2_RW, + .writefn = gt_hv_cval_write, .raw_writefn = raw_write }, + { .name = "CNTHV_TVAL_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL2_RW, + .resetfn = gt_hv_timer_reset, + .readfn = gt_hv_tval_read, .writefn = gt_hv_tval_write }, + { .name = "CNTHV_CTL_EL2", .state = ARM_CP_STATE_BOTH, + .type = ARM_CP_IO, + .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 1, + .access = PL2_RW, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].ctl), + .writefn = gt_hv_ctl_write, .raw_writefn = raw_write }, + { .name = 
"CNTP_CTL_EL02", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 1, + .type = ARM_CP_IO | ARM_CP_ALIAS, + .access = PL2_RW, .accessfn = e2h_access, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].ctl), + .writefn = gt_phys_ctl_write, .raw_writefn = raw_write }, + { .name = "CNTV_CTL_EL02", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 1, + .type = ARM_CP_IO | ARM_CP_ALIAS, + .access = PL2_RW, .accessfn = e2h_access, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].ctl), + .writefn = gt_virt_ctl_write, .raw_writefn = raw_write }, + { .name = "CNTP_TVAL_EL02", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO | ARM_CP_ALIAS, + .access = PL2_RW, .accessfn = e2h_access, + .readfn = gt_phys_tval_read, .writefn = gt_phys_tval_write }, + { .name = "CNTV_TVAL_EL02", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 0, + .type = ARM_CP_NO_RAW | ARM_CP_IO | ARM_CP_ALIAS, + .access = PL2_RW, .accessfn = e2h_access, + .readfn = gt_virt_tval_read, .writefn = gt_virt_tval_write }, + { .name = "CNTP_CVAL_EL02", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 2, + .type = ARM_CP_IO | ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval), + .access = PL2_RW, .accessfn = e2h_access, + .writefn = gt_phys_cval_write, .raw_writefn = raw_write }, + { .name = "CNTV_CVAL_EL02", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 2, + .type = ARM_CP_IO | ARM_CP_ALIAS, + .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval), + .access = PL2_RW, .accessfn = e2h_access, + .writefn = gt_virt_cval_write, .raw_writefn = raw_write }, +#endif + REGINFO_SENTINEL +}; + +#ifndef CONFIG_USER_ONLY +static const ARMCPRegInfo ats1e1_reginfo[] = { + { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64, + .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write64 }, + REGINFO_SENTINEL +}; + +static const ARMCPRegInfo ats1cp_reginfo[] = { + { .name = "ATS1CPRP", + .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write }, + { .name = "ATS1CPWP", + .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1, + .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, + .writefn = ats_write }, + REGINFO_SENTINEL +}; +#endif + +/* + * ACTLR2 and HACTLR2 map to ACTLR_EL1[63:32] and + * ACTLR_EL2[63:32]. They exist only if the ID_MMFR4.AC2 field + * is non-zero, which is never for ARMv7, optionally in ARMv8 + * and mandatorily for ARMv8.2 and up. + * ACTLR2 is banked for S and NS if EL3 is AArch32. Since QEMU's + * implementation is RAZ/WI we can ignore this detail, as we + * do for ACTLR. 
+ */ +static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = { + { .name = "ACTLR2", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 3, + .access = PL1_RW, .accessfn = access_tacr, + .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "HACTLR2", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +void register_cp_regs_for_features(ARMCPU *cpu) +{ + /* Register all the coprocessor registers based on feature bits */ + CPUARMState *env = &cpu->env; + if (arm_feature(env, ARM_FEATURE_M)) { + /* M profile has no coprocessor registers */ + return; + } + + define_arm_cp_regs(cpu, cp_reginfo); + if (!arm_feature(env, ARM_FEATURE_V8)) { + /* + * Must go early as it is full of wildcards that may be + * overridden by later definitions. + */ + define_arm_cp_regs(cpu, not_v8_cp_reginfo); + } + + if (arm_feature(env, ARM_FEATURE_V6)) { + /* The ID registers all have impdef reset values */ + ARMCPRegInfo v6_idregs[] = { + { .name = "ID_PFR0", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 0, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_pfr0 }, + /* + * ID_PFR1 is not a plain ARM_CP_CONST because we don't know + * the value of the GIC field until after we define these regs. + */ + { .name = "ID_PFR1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_NO_RAW, + .accessfn = access_aa32_tid3, + .readfn = id_pfr1_read, + .writefn = arm_cp_write_ignore }, + { .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_dfr0 }, + { .name = "ID_AFR0", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 3, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->id_afr0 }, + { .name = "ID_MMFR0", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_mmfr0 }, + { .name = "ID_MMFR1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 5, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_mmfr1 }, + { .name = "ID_MMFR2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_mmfr2 }, + { .name = "ID_MMFR3", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_mmfr3 }, + { .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_isar0 }, + { .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_isar1 }, + { .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2, + .access = PL1_R, .type = ARM_CP_CONST, + 
.accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_isar2 }, + { .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_isar3 }, + { .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_isar4 }, + { .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_isar5 }, + { .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_mmfr4 }, + { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa32_tid3, + .resetvalue = cpu->isar.id_isar6 }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, v6_idregs); + define_arm_cp_regs(cpu, v6_cp_reginfo); + } else { + define_arm_cp_regs(cpu, not_v6_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_V6K)) { + define_arm_cp_regs(cpu, v6k_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_V7MP) && + !arm_feature(env, ARM_FEATURE_PMSA)) { + define_arm_cp_regs(cpu, v7mp_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_V7VE)) { + define_arm_cp_regs(cpu, pmovsset_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_V7)) { + ARMCPRegInfo clidr = { + .name = "CLIDR", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid2, + .resetvalue = cpu->clidr + }; + define_one_arm_cp_reg(cpu, &clidr); + define_arm_cp_regs(cpu, v7_cp_reginfo); + define_debug_regs(cpu); + define_pmu_regs(cpu); + } else { + define_arm_cp_regs(cpu, not_v7_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_V8)) { + /* + * AArch64 ID registers, which all have impdef reset values. + * Note that within the ID register ranges the unused slots + * must all RAZ, not UNDEF; future architecture versions may + * define new registers here. + */ + ARMCPRegInfo v8_idregs[] = { + /* + * ID_AA64PFR0_EL1 is not a plain ARM_CP_CONST in system + * emulation because we don't know the right value for the + * GIC field until after we define these regs. 
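The register entry just below therefore points .readfn at id_aa64pfr0_read rather than using a constant. A minimal sketch of the idea behind such a readfn, assuming the helper simply checks whether a GICv3 state pointer has been wired up (the body here is illustrative, not a verbatim copy of the existing helper):

    static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        ARMCPU *cpu = env_archcpu(env);
        uint64_t pfr0 = cpu->isar.id_aa64pfr0;

        /* Advertise the GIC system register interface (GIC field, bits
         * [27:24], value 1) only when a GICv3 has actually been connected;
         * this is only known at runtime, hence no ARM_CP_CONST.
         */
        if (env->gicv3state) {
            pfr0 = deposit64(pfr0, 24, 4, 1);
        }
        return pfr0;
    }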
+ */ + { .name = "ID_AA64PFR0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 0, + .access = PL1_R, +#ifdef CONFIG_USER_ONLY + .type = ARM_CP_CONST, + .resetvalue = cpu->isar.id_aa64pfr0 +#else + .type = ARM_CP_NO_RAW, + .accessfn = access_aa64_tid3, + .readfn = id_aa64pfr0_read, + .writefn = arm_cp_write_ignore +#endif + }, + { .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64pfr1}, + { .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64PFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 3, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64ZFR0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + /* At present, only SVEver == 0 is defined anyway. */ + .resetvalue = 0 }, + { .name = "ID_AA64PFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 5, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64PFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64PFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64DFR0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 0, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64dfr0 }, + { .name = "ID_AA64DFR1_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64dfr1 }, + { .name = "ID_AA64DFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 2, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64DFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 3, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64AFR0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->id_aa64afr0 }, + { .name = "ID_AA64AFR1_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 5, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->id_aa64afr1 }, + { .name = "ID_AA64AFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64AFR3_EL1_RESERVED", .state = 
ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64isar0 }, + { .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64isar1 }, + { .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64ISAR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 3, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64ISAR4_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64ISAR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 5, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64ISAR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64ISAR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64MMFR0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64mmfr0 }, + { .name = "ID_AA64MMFR1_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64mmfr1 }, + { .name = "ID_AA64MMFR2_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_aa64mmfr2 }, + { .name = "ID_AA64MMFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 3, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64MMFR4_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64MMFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 5, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64MMFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_AA64MMFR7_EL1_RESERVED", .state = 
ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.mvfr0 }, + { .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.mvfr1 }, + { .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.mvfr2 }, + { .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "ID_PFR2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = cpu->isar.id_pfr2 }, + { .name = "MVFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 5, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "MVFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 6, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "MVFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 7, + .access = PL1_R, .type = ARM_CP_CONST, + .accessfn = access_aa64_tid3, + .resetvalue = 0 }, + { .name = "PMCEID0", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 6, + .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, + .resetvalue = extract64(cpu->pmceid0, 0, 32) }, + { .name = "PMCEID0_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 6, + .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, + .resetvalue = cpu->pmceid0 }, + { .name = "PMCEID1", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 7, + .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, + .resetvalue = extract64(cpu->pmceid1, 0, 32) }, + { .name = "PMCEID1_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7, + .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, + .resetvalue = cpu->pmceid1 }, + REGINFO_SENTINEL + }; +#ifdef CONFIG_USER_ONLY + ARMCPRegUserSpaceInfo v8_user_idregs[] = { + { .name = "ID_AA64PFR0_EL1", + .exported_bits = 0x000f000f00ff0000, + .fixed_bits = 0x0000000000000011 }, + { .name = "ID_AA64PFR1_EL1", + .exported_bits = 0x00000000000000f0 }, + { .name = "ID_AA64PFR*_EL1_RESERVED", + .is_glob = true }, + { .name = "ID_AA64ZFR0_EL1" }, + { .name = "ID_AA64MMFR0_EL1", + .fixed_bits = 0x00000000ff000000 }, + { .name = "ID_AA64MMFR1_EL1" }, + { .name = "ID_AA64MMFR*_EL1_RESERVED", + .is_glob = true }, + { .name = "ID_AA64DFR0_EL1", + .fixed_bits = 0x0000000000000006 }, + { .name = "ID_AA64DFR1_EL1" }, + { .name = "ID_AA64DFR*_EL1_RESERVED", + .is_glob = true }, + { .name = "ID_AA64AFR*", + .is_glob = true }, + { .name = "ID_AA64ISAR0_EL1", + .exported_bits = 
0x00fffffff0fffff0 }, + { .name = "ID_AA64ISAR1_EL1", + .exported_bits = 0x000000f0ffffffff }, + { .name = "ID_AA64ISAR*_EL1_RESERVED", + .is_glob = true }, + REGUSERINFO_SENTINEL + }; + modify_arm_cp_regs(v8_idregs, v8_user_idregs); +#endif + /* RVBAR_EL1 is only implemented if EL1 is the highest EL */ + if (!arm_feature(env, ARM_FEATURE_EL3) && + !arm_feature(env, ARM_FEATURE_EL2)) { + ARMCPRegInfo rvbar = { + .name = "RVBAR_EL1", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1, + .type = ARM_CP_CONST, .access = PL1_R, .resetvalue = cpu->rvbar + }; + define_one_arm_cp_reg(cpu, &rvbar); + } + define_arm_cp_regs(cpu, v8_idregs); + define_arm_cp_regs(cpu, v8_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_EL2)) { + uint64_t vmpidr_def = mpidr_read_val(env); + ARMCPRegInfo vpidr_regs[] = { + { .name = "VPIDR", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .resetvalue = cpu->midr, .type = ARM_CP_ALIAS, + .fieldoffset = offsetoflow32(CPUARMState, cp15.vpidr_el2) }, + { .name = "VPIDR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0, + .access = PL2_RW, .resetvalue = cpu->midr, + .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) }, + { .name = "VMPIDR", .state = ARM_CP_STATE_AA32, + .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .resetvalue = vmpidr_def, .type = ARM_CP_ALIAS, + .fieldoffset = offsetoflow32(CPUARMState, cp15.vmpidr_el2) }, + { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5, + .access = PL2_RW, + .resetvalue = vmpidr_def, + .fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, vpidr_regs); + define_arm_cp_regs(cpu, el2_cp_reginfo); + if (arm_feature(env, ARM_FEATURE_V8)) { + define_arm_cp_regs(cpu, el2_v8_cp_reginfo); + } + if (cpu_isar_feature(aa64_sel2, cpu)) { + define_arm_cp_regs(cpu, el2_sec_cp_reginfo); + } + /* RVBAR_EL2 is only implemented if EL2 is the highest EL */ + if (!arm_feature(env, ARM_FEATURE_EL3)) { + ARMCPRegInfo rvbar = { + .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1, + .type = ARM_CP_CONST, .access = PL2_R, .resetvalue = cpu->rvbar + }; + define_one_arm_cp_reg(cpu, &rvbar); + } + } else { + /* + * If EL2 is missing but higher ELs are enabled, we need to + * register the no_el2 reginfos. + */ + if (arm_feature(env, ARM_FEATURE_EL3)) { + /* + * When EL3 exists but not EL2, VPIDR and VMPIDR take the value + * of MIDR_EL1 and MPIDR_EL1. 
+ */ + ARMCPRegInfo vpidr_regs[] = { + { .name = "VPIDR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .type = ARM_CP_CONST, .resetvalue = cpu->midr, + .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) }, + { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5, + .access = PL2_RW, .accessfn = access_el3_aa32ns, + .type = ARM_CP_NO_RAW, + .writefn = arm_cp_write_ignore, .readfn = mpidr_read }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, vpidr_regs); + define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo); + if (arm_feature(env, ARM_FEATURE_V8)) { + define_arm_cp_regs(cpu, el3_no_el2_v8_cp_reginfo); + } + } + } + if (arm_feature(env, ARM_FEATURE_EL3)) { + define_arm_cp_regs(cpu, el3_cp_reginfo); + ARMCPRegInfo el3_regs[] = { + { .name = "RVBAR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 12, .crm = 0, .opc2 = 1, + .type = ARM_CP_CONST, .access = PL3_R, .resetvalue = cpu->rvbar }, + { .name = "SCTLR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 0, + .access = PL3_RW, + .raw_writefn = raw_write, .writefn = sctlr_write, + .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]), + .resetvalue = cpu->reset_sctlr }, + REGINFO_SENTINEL + }; + + define_arm_cp_regs(cpu, el3_regs); + } + /* + * The behaviour of NSACR is sufficiently various that we don't + * try to describe it in a single reginfo: + * if EL3 is 64 bit, then trap to EL3 from S EL1, + * reads as constant 0xc00 from NS EL1 and NS EL2 + * if EL3 is 32 bit, then RW at EL3, RO at NS EL1 and NS EL2 + * if v7 without EL3, register doesn't exist + * if v8 without EL3, reads as constant 0xc00 from NS EL1 and NS EL2 + */ + if (arm_feature(env, ARM_FEATURE_EL3)) { + if (arm_feature(env, ARM_FEATURE_AARCH64)) { + ARMCPRegInfo nsacr = { + .name = "NSACR", .type = ARM_CP_CONST, + .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2, + .access = PL1_RW, .accessfn = nsacr_access, + .resetvalue = 0xc00 + }; + define_one_arm_cp_reg(cpu, &nsacr); + } else { + ARMCPRegInfo nsacr = { + .name = "NSACR", + .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2, + .access = PL3_RW | PL1_R, + .resetvalue = 0, + .fieldoffset = offsetof(CPUARMState, cp15.nsacr) + }; + define_one_arm_cp_reg(cpu, &nsacr); + } + } else { + if (arm_feature(env, ARM_FEATURE_V8)) { + ARMCPRegInfo nsacr = { + .name = "NSACR", .type = ARM_CP_CONST, + .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2, + .access = PL1_R, + .resetvalue = 0xc00 + }; + define_one_arm_cp_reg(cpu, &nsacr); + } + } + + if (arm_feature(env, ARM_FEATURE_PMSA)) { + if (arm_feature(env, ARM_FEATURE_V6)) { + /* PMSAv6 not implemented */ + assert(arm_feature(env, ARM_FEATURE_V7)); + define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo); + define_arm_cp_regs(cpu, pmsav7_cp_reginfo); + } else { + define_arm_cp_regs(cpu, pmsav5_cp_reginfo); + } + } else { + define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo); + define_arm_cp_regs(cpu, vmsa_cp_reginfo); + /* TTCBR2 is introduced with ARMv8.2-AA32HPD. 
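The cpu_isar_feature(aa32_hpd, cpu) check just below gates TTBCR2 (defined by ttbcr2_reginfo) on the ARMv8.2-AA32HPD ID fields. As a rough standalone sketch of what that presence test boils down to, assuming the ID_MMFR4.HPDS field at bits [19:16] per the Arm ARM (the helper name is made up for illustration; the real test lives with the other isar_feature helpers):

    #include <stdbool.h>
    #include <stdint.h>

    /* ID_MMFR4.HPDS != 0 means AArch32 hierarchical permission disables,
     * and with them TTBCR2, are implemented.
     */
    static bool mmfr4_has_hpd(uint32_t id_mmfr4)
    {
        return ((id_mmfr4 >> 16) & 0xf) != 0;
    }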
*/ + if (cpu_isar_feature(aa32_hpd, cpu)) { + define_one_arm_cp_reg(cpu, &ttbcr2_reginfo); + } + } + if (arm_feature(env, ARM_FEATURE_THUMB2EE)) { + define_arm_cp_regs(cpu, t2ee_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) { + define_arm_cp_regs(cpu, generic_timer_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_VAPA)) { + define_arm_cp_regs(cpu, vapa_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_CACHE_TEST_CLEAN)) { + define_arm_cp_regs(cpu, cache_test_clean_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_CACHE_DIRTY_REG)) { + define_arm_cp_regs(cpu, cache_dirty_status_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_CACHE_BLOCK_OPS)) { + define_arm_cp_regs(cpu, cache_block_ops_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_OMAPCP)) { + define_arm_cp_regs(cpu, omap_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_STRONGARM)) { + define_arm_cp_regs(cpu, strongarm_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_XSCALE)) { + define_arm_cp_regs(cpu, xscale_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_DUMMY_C15_REGS)) { + define_arm_cp_regs(cpu, dummy_c15_cp_reginfo); + } + if (arm_feature(env, ARM_FEATURE_LPAE)) { + define_arm_cp_regs(cpu, lpae_cp_reginfo); + } + if (cpu_isar_feature(aa32_jazelle, cpu)) { + define_arm_cp_regs(cpu, jazelle_regs); + } + /* + * Slightly awkwardly, the OMAP and StrongARM cores need all of + * cp15 crn=0 to be writes-ignored, whereas for other cores they should + * be read-only (ie write causes UNDEF exception). + */ + { + ARMCPRegInfo id_pre_v8_midr_cp_reginfo[] = { + /* + * Pre-v8 MIDR space. + * Note that the MIDR isn't a simple constant register because + * of the TI925 behaviour where writes to another register can + * cause the MIDR value to change. + * + * Unimplemented registers in the c15 0 0 0 space default to + * MIDR. Define MIDR first as this entire space, then CTR, TCMTR + * and friends override accordingly. + */ + { .name = "MIDR", + .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = CP_ANY, + .access = PL1_R, .resetvalue = cpu->midr, + .writefn = arm_cp_write_ignore, .raw_writefn = raw_write, + .readfn = midr_read, + .fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid), + .type = ARM_CP_OVERRIDE }, + /* crn = 0 op1 = 0 crm = 3..7 : currently unassigned; we RAZ. 
*/ + { .name = "DUMMY", + .cp = 15, .crn = 0, .crm = 3, .opc1 = 0, .opc2 = CP_ANY, + .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "DUMMY", + .cp = 15, .crn = 0, .crm = 4, .opc1 = 0, .opc2 = CP_ANY, + .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "DUMMY", + .cp = 15, .crn = 0, .crm = 5, .opc1 = 0, .opc2 = CP_ANY, + .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "DUMMY", + .cp = 15, .crn = 0, .crm = 6, .opc1 = 0, .opc2 = CP_ANY, + .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, + { .name = "DUMMY", + .cp = 15, .crn = 0, .crm = 7, .opc1 = 0, .opc2 = CP_ANY, + .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, + REGINFO_SENTINEL + }; + ARMCPRegInfo id_v8_midr_cp_reginfo[] = { + { .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 0, + .access = PL1_R, .type = ARM_CP_NO_RAW, .resetvalue = cpu->midr, + .fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid), + .readfn = midr_read }, + /* crn = 0 op1 = 0 crm = 0 op2 = 4,7 : AArch32 aliases of MIDR */ + { .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST, + .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4, + .access = PL1_R, .resetvalue = cpu->midr }, + { .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST, + .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 7, + .access = PL1_R, .resetvalue = cpu->midr }, + { .name = "REVIDR_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 6, + .access = PL1_R, + .accessfn = access_aa64_tid1, + .type = ARM_CP_CONST, .resetvalue = cpu->revidr }, + REGINFO_SENTINEL + }; + ARMCPRegInfo id_cp_reginfo[] = { + /* These are common to v8 and pre-v8 */ + { .name = "CTR", + .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 1, + .access = PL1_R, .accessfn = ctr_el0_access, + .type = ARM_CP_CONST, .resetvalue = cpu->ctr }, + { .name = "CTR_EL0", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 0, .crm = 0, + .access = PL0_R, .accessfn = ctr_el0_access, + .type = ARM_CP_CONST, .resetvalue = cpu->ctr }, + /* TCMTR and TLBTR exist in v8 but have no 64-bit versions */ + { .name = "TCMTR", + .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 2, + .access = PL1_R, + .accessfn = access_aa32_tid1, + .type = ARM_CP_CONST, .resetvalue = 0 }, + REGINFO_SENTINEL + }; + /* TLBTR is specific to VMSA */ + ARMCPRegInfo id_tlbtr_reginfo = { + .name = "TLBTR", + .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 3, + .access = PL1_R, + .accessfn = access_aa32_tid1, + .type = ARM_CP_CONST, .resetvalue = 0, + }; + /* MPUIR is specific to PMSA V6+ */ + ARMCPRegInfo id_mpuir_reginfo = { + .name = "MPUIR", + .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4, + .access = PL1_R, .type = ARM_CP_CONST, + .resetvalue = cpu->pmsav7_dregion << 8 + }; + ARMCPRegInfo crn0_wi_reginfo = { + .name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY, + .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W, + .type = ARM_CP_NOP | ARM_CP_OVERRIDE + }; +#ifdef CONFIG_USER_ONLY + ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = { + { .name = "MIDR_EL1", + .exported_bits = 0x00000000ffffffff }, + { .name = "REVIDR_EL1" }, + REGUSERINFO_SENTINEL + }; + modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo); +#endif + if (arm_feature(env, ARM_FEATURE_OMAPCP) || + arm_feature(env, ARM_FEATURE_STRONGARM)) { + ARMCPRegInfo *r; + /* + * Register the blanket "writes ignored" value first to cover the + * whole space. 
Then update the specific ID registers to allow write + * access, so that they ignore writes rather than causing them to + * UNDEF. + */ + define_one_arm_cp_reg(cpu, &crn0_wi_reginfo); + for (r = id_pre_v8_midr_cp_reginfo; + r->type != ARM_CP_SENTINEL; r++) { + r->access = PL1_RW; + } + for (r = id_cp_reginfo; r->type != ARM_CP_SENTINEL; r++) { + r->access = PL1_RW; + } + id_mpuir_reginfo.access = PL1_RW; + id_tlbtr_reginfo.access = PL1_RW; + } + if (arm_feature(env, ARM_FEATURE_V8)) { + define_arm_cp_regs(cpu, id_v8_midr_cp_reginfo); + } else { + define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo); + } + define_arm_cp_regs(cpu, id_cp_reginfo); + if (!arm_feature(env, ARM_FEATURE_PMSA)) { + define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo); + } else if (arm_feature(env, ARM_FEATURE_V7)) { + define_one_arm_cp_reg(cpu, &id_mpuir_reginfo); + } + } + + if (arm_feature(env, ARM_FEATURE_MPIDR)) { + ARMCPRegInfo mpidr_cp_reginfo[] = { + { .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5, + .access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW }, + REGINFO_SENTINEL + }; +#ifdef CONFIG_USER_ONLY + ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = { + { .name = "MPIDR_EL1", + .fixed_bits = 0x0000000080000000 }, + REGUSERINFO_SENTINEL + }; + modify_arm_cp_regs(mpidr_cp_reginfo, mpidr_user_cp_reginfo); +#endif + define_arm_cp_regs(cpu, mpidr_cp_reginfo); + } + + if (arm_feature(env, ARM_FEATURE_AUXCR)) { + ARMCPRegInfo auxcr_reginfo[] = { + { .name = "ACTLR_EL1", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 1, + .access = PL1_RW, .accessfn = access_tacr, + .type = ARM_CP_CONST, .resetvalue = cpu->reset_auxcr }, + { .name = "ACTLR_EL2", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 1, + .access = PL2_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + { .name = "ACTLR_EL3", .state = ARM_CP_STATE_AA64, + .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 1, + .access = PL3_RW, .type = ARM_CP_CONST, + .resetvalue = 0 }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, auxcr_reginfo); + if (cpu_isar_feature(aa32_ac2, cpu)) { + define_arm_cp_regs(cpu, actlr2_hactlr2_reginfo); + } + } + + if (arm_feature(env, ARM_FEATURE_CBAR)) { + /* + * CBAR is IMPDEF, but common on Arm Cortex-A implementations. + * There are two flavours: + * (1) older 32-bit only cores have a simple 32-bit CBAR + * (2) 64-bit cores have a 64-bit CBAR visible to AArch64, plus a + * 32-bit register visible to AArch32 at a different encoding + * to the "flavour 1" register and with the bits rearranged to + * be able to squash a 64-bit address into the 32-bit view. + * We distinguish the two via the ARM_FEATURE_AARCH64 flag, but + * in future if we support AArch32-only configs of some of the + * AArch64 cores we might need to add a specific feature flag + * to indicate cores with "flavour 2" CBAR. + */ + if (arm_feature(env, ARM_FEATURE_AARCH64)) { + /* 32 bit view is [31:18] 0...0 [43:32]. 
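To make the "[31:18] 0...0 [43:32]" packing concrete before the computation below, here is a self-contained sketch of the same arithmetic with an assumed reset value (0x8_2c000000 is purely illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed 44-bit CBAR reset value, e.g. a peripheral base at 0x8_2c000000 */
        uint64_t reset_cbar = 0x82c000000ull;

        /* 32-bit view: CBAR[31:18] stays in [31:18], CBAR[43:32] lands in [11:0] */
        uint32_t cbar32 = (uint32_t)(((reset_cbar >> 18) & 0x3fff) << 18)
                        | (uint32_t)((reset_cbar >> 32) & 0xfff);

        printf("cbar32 = 0x%08x\n", (unsigned int)cbar32); /* prints 0x2c000008 */
        return 0;
    }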
*/ + uint32_t cbar32 = (extract64(cpu->reset_cbar, 18, 14) << 18) + | extract64(cpu->reset_cbar, 32, 12); + ARMCPRegInfo cbar_reginfo[] = { + { .name = "CBAR", + .type = ARM_CP_CONST, + .cp = 15, .crn = 15, .crm = 3, .opc1 = 1, .opc2 = 0, + .access = PL1_R, .resetvalue = cbar32 }, + { .name = "CBAR_EL1", .state = ARM_CP_STATE_AA64, + .type = ARM_CP_CONST, + .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 3, .opc2 = 0, + .access = PL1_R, .resetvalue = cpu->reset_cbar }, + REGINFO_SENTINEL + }; + /* We don't implement a r/w 64 bit CBAR currently */ + assert(arm_feature(env, ARM_FEATURE_CBAR_RO)); + define_arm_cp_regs(cpu, cbar_reginfo); + } else { + ARMCPRegInfo cbar = { + .name = "CBAR", + .cp = 15, .crn = 15, .crm = 0, .opc1 = 4, .opc2 = 0, + .access = PL1_R|PL3_W, .resetvalue = cpu->reset_cbar, + .fieldoffset = offsetof(CPUARMState, + cp15.c15_config_base_address) + }; + if (arm_feature(env, ARM_FEATURE_CBAR_RO)) { + cbar.access = PL1_R; + cbar.fieldoffset = 0; + cbar.type = ARM_CP_CONST; + } + define_one_arm_cp_reg(cpu, &cbar); + } + } + + if (arm_feature(env, ARM_FEATURE_VBAR)) { + ARMCPRegInfo vbar_cp_reginfo[] = { + { .name = "VBAR", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .crn = 12, .crm = 0, .opc1 = 0, .opc2 = 0, + .access = PL1_RW, .writefn = vbar_write, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s), + offsetof(CPUARMState, cp15.vbar_ns) }, + .resetvalue = 0 }, + REGINFO_SENTINEL + }; + define_arm_cp_regs(cpu, vbar_cp_reginfo); + } + + /* Generic registers whose values depend on the implementation */ + { + ARMCPRegInfo sctlr = { + .name = "SCTLR", .state = ARM_CP_STATE_BOTH, + .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0, + .access = PL1_RW, .accessfn = access_tvm_trvm, + .bank_fieldoffsets = { offsetof(CPUARMState, cp15.sctlr_s), + offsetof(CPUARMState, cp15.sctlr_ns) }, + .writefn = sctlr_write, .resetvalue = cpu->reset_sctlr, + .raw_writefn = raw_write, + }; + if (arm_feature(env, ARM_FEATURE_XSCALE)) { + /* Normally we would always end the TB on an SCTLR write, but Linux + * arch/arm/mach-pxa/sleep.S expects two instructions following + * an MMU enable to execute from cache. Imitate this behaviour. 
+ */ + sctlr.type |= ARM_CP_SUPPRESS_TB_END; + } + define_one_arm_cp_reg(cpu, &sctlr); + } + + if (cpu_isar_feature(aa64_lor, cpu)) { + define_arm_cp_regs(cpu, lor_reginfo); + } + if (cpu_isar_feature(aa64_pan, cpu)) { + define_one_arm_cp_reg(cpu, &pan_reginfo); + } +#ifndef CONFIG_USER_ONLY + if (cpu_isar_feature(aa64_ats1e1, cpu)) { + define_arm_cp_regs(cpu, ats1e1_reginfo); + } + if (cpu_isar_feature(aa32_ats1e1, cpu)) { + define_arm_cp_regs(cpu, ats1cp_reginfo); + } +#endif + if (cpu_isar_feature(aa64_uao, cpu)) { + define_one_arm_cp_reg(cpu, &uao_reginfo); + } + + if (cpu_isar_feature(aa64_dit, cpu)) { + define_one_arm_cp_reg(cpu, &dit_reginfo); + } + if (cpu_isar_feature(aa64_ssbs, cpu)) { + define_one_arm_cp_reg(cpu, &ssbs_reginfo); + } + + if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) { + define_arm_cp_regs(cpu, vhe_reginfo); + } + + if (cpu_isar_feature(aa64_sve, cpu)) { + define_one_arm_cp_reg(cpu, &zcr_el1_reginfo); + if (arm_feature(env, ARM_FEATURE_EL2)) { + define_one_arm_cp_reg(cpu, &zcr_el2_reginfo); + } else { + define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo); + } + if (arm_feature(env, ARM_FEATURE_EL3)) { + define_one_arm_cp_reg(cpu, &zcr_el3_reginfo); + } + } + +#ifdef TARGET_AARCH64 + if (cpu_isar_feature(aa64_pauth, cpu)) { + define_arm_cp_regs(cpu, pauth_reginfo); + } + if (cpu_isar_feature(aa64_rndr, cpu)) { + define_arm_cp_regs(cpu, rndr_reginfo); + } +#ifndef CONFIG_USER_ONLY + /* Data Cache clean instructions up to PoP */ + if (cpu_isar_feature(aa64_dcpop, cpu)) { + define_one_arm_cp_reg(cpu, dcpop_reg); + + if (cpu_isar_feature(aa64_dcpodp, cpu)) { + define_one_arm_cp_reg(cpu, dcpodp_reg); + } + } +#endif /*CONFIG_USER_ONLY*/ + + /* + * If full MTE is enabled, add all of the system registers. + * If only "instructions available at EL0" are enabled, + * then define only a RAZ/WI version of PSTATE.TCO. + */ + if (cpu_isar_feature(aa64_mte, cpu)) { + define_arm_cp_regs(cpu, mte_reginfo); + define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo); + } else if (cpu_isar_feature(aa64_mte_insn_reg, cpu)) { + define_arm_cp_regs(cpu, mte_tco_ro_reginfo); + define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo); + } +#endif + + if (cpu_isar_feature(any_predinv, cpu)) { + define_arm_cp_regs(cpu, predinv_reginfo); + } + + if (cpu_isar_feature(any_ccidx, cpu)) { + define_arm_cp_regs(cpu, ccsidr2_reginfo); + } + +#ifndef CONFIG_USER_ONLY + /* + * Register redirections and aliases must be done last, + * after the registers from the other extensions have been defined. + */ + if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) { + define_arm_vh_e2h_redirects_aliases(cpu); + } +#endif +} + +/* + * Modify ARMCPRegInfo for access from userspace. + * + * This is a data driven modification directed by + * ARMCPRegUserSpaceInfo. All registers become ARM_CP_CONST as + * user-space cannot alter any values and dynamic values pertaining to + * execution state are hidden from user space view anyway. 
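In practice, for an exact name match the function defined just below downgrades the register to a read-only constant whose value becomes (resetvalue & exported_bits) | fixed_bits. A tiny self-contained illustration of that masking, reusing the exported_bits value from the MIDR_EL1 user-space entry earlier in this patch (the MIDR value itself is an assumed example):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t resetvalue    = 0x411fd070;             /* assumed MIDR, for illustration */
        uint64_t exported_bits = 0x00000000ffffffffull;  /* MIDR_EL1 user-space entry */
        uint64_t fixed_bits    = 0;                      /* none for MIDR_EL1 */

        /* What a CONFIG_USER_ONLY guest would read back */
        uint64_t user_view = (resetvalue & exported_bits) | fixed_bits;
        printf("user-space MIDR_EL1 = 0x%" PRIx64 "\n", user_view);
        return 0;
    }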
+ */ +void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods) +{ + const ARMCPRegUserSpaceInfo *m; + ARMCPRegInfo *r; + + for (m = mods; m->name; m++) { + GPatternSpec *pat = NULL; + if (m->is_glob) { + pat = g_pattern_spec_new(m->name); + } + for (r = regs; r->type != ARM_CP_SENTINEL; r++) { + if (pat && g_pattern_match_string(pat, r->name)) { + r->type = ARM_CP_CONST; + r->access = PL0U_R; + r->resetvalue = 0; + /* continue */ + } else if (strcmp(r->name, m->name) == 0) { + r->type = ARM_CP_CONST; + r->access = PL0U_R; + r->resetvalue &= m->exported_bits; + r->resetvalue |= m->fixed_bits; + break; + } + } + if (pat) { + g_pattern_spec_free(pat); + } + } +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 0f4ebcc46f..09503db37b 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -37,9 +37,7 @@ #include "semihosting/common-semi.h" #endif #include "cpu-mmu.h" - -#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */ -#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */ +#include "cpregs.h" static void switch_mode(CPUARMState *env, int mode); @@ -138,65 +136,6 @@ static int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg) } } -static uint64_t raw_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - assert(ri->fieldoffset); - if (cpreg_field_is_64bit(ri)) { - return CPREG_FIELD64(env, ri); - } else { - return CPREG_FIELD32(env, ri); - } -} - -static void raw_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - assert(ri->fieldoffset); - if (cpreg_field_is_64bit(ri)) { - CPREG_FIELD64(env, ri) = value; - } else { - CPREG_FIELD32(env, ri) = value; - } -} - -static void *raw_ptr(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return (char *)env + ri->fieldoffset; -} - -uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* Raw read of a coprocessor register (as needed for migration, etc). */ - if (ri->type & ARM_CP_CONST) { - return ri->resetvalue; - } else if (ri->raw_readfn) { - return ri->raw_readfn(env, ri); - } else if (ri->readfn) { - return ri->readfn(env, ri); - } else { - return raw_read(env, ri); - } -} - -static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t v) -{ - /* Raw write of a coprocessor register (as needed for migration, etc). - * Note that constant registers are treated as write-ignored; the - * caller should check for success by whether a readback gives the - * value written. - */ - if (ri->type & ARM_CP_CONST) { - return; - } else if (ri->raw_writefn) { - ri->raw_writefn(env, ri, v); - } else if (ri->writefn) { - ri->writefn(env, ri, v); - } else { - raw_write(env, ri, v); - } -} - /** * arm_get/set_gdb_*: get/set a gdb register * @env: the CPU state @@ -325,8407 +264,358 @@ static int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg) } #endif /* TARGET_AARCH64 */ -static bool raw_accessors_invalid(const ARMCPRegInfo *ri) -{ - /* - * Return true if the regdef would cause an assertion if you called - * read_raw_cp_reg() or write_raw_cp_reg() on it (ie if it is a - * program bug for it not to have the NO_RAW flag). - * NB that returning false here doesn't necessarily mean that calling - * read/write_raw_cp_reg() is safe, because we can't distinguish "has - * read/write access functions which are safe for raw use" from "has - * read/write access functions which have side effects but has forgotten - * to provide raw access functions". 
- * The tests here line up with the conditions in read/write_raw_cp_reg() - * and assertions in raw_read()/raw_write(). - */ - if ((ri->type & ARM_CP_CONST) || - ri->fieldoffset || - ((ri->raw_writefn || ri->writefn) && (ri->raw_readfn || ri->readfn))) { - return false; - } - return true; -} - -bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync) -{ - /* Write the coprocessor state from cpu->env to the (index,value) list. */ - int i; - bool ok = true; - - for (i = 0; i < cpu->cpreg_array_len; i++) { - uint32_t regidx = kvm_to_cpreg_id(cpu->cpreg_indexes[i]); - const ARMCPRegInfo *ri; - uint64_t newval; - - ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); - if (!ri) { - ok = false; - continue; - } - if (ri->type & ARM_CP_NO_RAW) { - continue; - } - - newval = read_raw_cp_reg(&cpu->env, ri); - if (kvm_sync) { - /* - * Only sync if the previous list->cpustate sync succeeded. - * Rather than tracking the success/failure state for every - * item in the list, we just recheck "does the raw write we must - * have made in write_list_to_cpustate() read back OK" here. - */ - uint64_t oldval = cpu->cpreg_values[i]; - - if (oldval == newval) { - continue; - } - - write_raw_cp_reg(&cpu->env, ri, oldval); - if (read_raw_cp_reg(&cpu->env, ri) != oldval) { - continue; - } - - write_raw_cp_reg(&cpu->env, ri, newval); - } - cpu->cpreg_values[i] = newval; - } - return ok; -} - -bool write_list_to_cpustate(ARMCPU *cpu) +/* + * Return the effective value of HCR_EL2. + * Bits that are not included here: + * RW (read from SCR_EL3.RW as needed) + */ +uint64_t arm_hcr_el2_eff(CPUARMState *env) { - int i; - bool ok = true; - - for (i = 0; i < cpu->cpreg_array_len; i++) { - uint32_t regidx = kvm_to_cpreg_id(cpu->cpreg_indexes[i]); - uint64_t v = cpu->cpreg_values[i]; - const ARMCPRegInfo *ri; + uint64_t ret = env->cp15.hcr_el2; - ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); - if (!ri) { - ok = false; - continue; - } - if (ri->type & ARM_CP_NO_RAW) { - continue; - } - /* Write value and confirm it reads back as written - * (to catch read-only registers and partially read-only - * registers where the incoming migration value doesn't match) + if (!arm_is_el2_enabled(env)) { + /* + * "This register has no effect if EL2 is not enabled in the + * current Security state". This is ARMv8.4-SecEL2 speak for + * !(SCR_EL3.NS==1 || SCR_EL3.EEL2==1). + * + * Prior to that, the language was "In an implementation that + * includes EL3, when the value of SCR_EL3.NS is 0 the PE behaves + * as if this field is 0 for all purposes other than a direct + * read or write access of HCR_EL2". With lots of enumeration + * on a per-field basis. In current QEMU, this is condition + * is arm_is_secure_below_el3. + * + * Since the v8.4 language applies to the entire register, and + * appears to be backward compatible, use that. */ - write_raw_cp_reg(&cpu->env, ri, v); - if (read_raw_cp_reg(&cpu->env, ri) != v) { - ok = false; - } + return 0; } - return ok; -} -static void add_cpreg_to_list(gpointer key, gpointer opaque) -{ - ARMCPU *cpu = opaque; - uint64_t regidx; - const ARMCPRegInfo *ri; - - regidx = *(uint32_t *)key; - ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); + /* + * For a cpu that supports both aarch64 and aarch32, we can set bits + * in HCR_EL2 (e.g. via EL3) that are RES0 when we enter EL2 as aa32. + * Ignore all of the bits in HCR+HCR2 that are not valid for aarch32. 
+ */ + if (!arm_el_is_aa64(env, 2)) { + uint64_t aa32_valid; - if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) { - cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx); - /* The value array need not be initialized at this point */ - cpu->cpreg_array_len++; + /* + * These bits are up-to-date as of ARMv8.6. + * For HCR, it's easiest to list just the 2 bits that are invalid. + * For HCR2, list those that are valid. + */ + aa32_valid = MAKE_64BIT_MASK(0, 32) & ~(HCR_RW | HCR_TDZ); + aa32_valid |= (HCR_CD | HCR_ID | HCR_TERR | HCR_TEA | HCR_MIOCNCE | + HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_TTLBIS); + ret &= aa32_valid; } -} -static void count_cpreg(gpointer key, gpointer opaque) -{ - ARMCPU *cpu = opaque; - uint64_t regidx; - const ARMCPRegInfo *ri; - - regidx = *(uint32_t *)key; - ri = get_arm_cp_reginfo(cpu->cp_regs, regidx); - - if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) { - cpu->cpreg_array_len++; + if (ret & HCR_TGE) { + /* These bits are up-to-date as of ARMv8.6. */ + if (ret & HCR_E2H) { + ret &= ~(HCR_VM | HCR_FMO | HCR_IMO | HCR_AMO | + HCR_BSU_MASK | HCR_DC | HCR_TWI | HCR_TWE | + HCR_TID0 | HCR_TID2 | HCR_TPCP | HCR_TPU | + HCR_TDZ | HCR_CD | HCR_ID | HCR_MIOCNCE | + HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_ENSCXT | + HCR_TTLBIS | HCR_TTLBOS | HCR_TID5); + } else { + ret |= HCR_FMO | HCR_IMO | HCR_AMO; + } + ret &= ~(HCR_SWIO | HCR_PTW | HCR_VF | HCR_VI | HCR_VSE | + HCR_FB | HCR_TID1 | HCR_TID3 | HCR_TSC | HCR_TACR | + HCR_TSW | HCR_TTLB | HCR_TVM | HCR_HCD | HCR_TRVM | + HCR_TLOR); } -} - -static gint cpreg_key_compare(gconstpointer a, gconstpointer b) -{ - uint64_t aidx = cpreg_to_kvm_id(*(uint32_t *)a); - uint64_t bidx = cpreg_to_kvm_id(*(uint32_t *)b); - if (aidx > bidx) { - return 1; - } - if (aidx < bidx) { - return -1; - } - return 0; + return ret; } -void init_cpreg_list(ARMCPU *cpu) +/* Return the exception level to which exceptions should be taken + * via SVEAccessTrap. If an exception should be routed through + * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should + * take care of raising that exception. + * C.f. the ARM pseudocode function CheckSVEEnabled. + */ +int sve_exception_el(CPUARMState *env, int el) { - /* - * Initialise the cpreg_tuples[] array based on the cp_regs hash. - * Note that we require cpreg_tuples[] to be sorted by key ID. - */ - GList *keys; - int arraylen; - - keys = g_hash_table_get_keys(cpu->cp_regs); - keys = g_list_sort(keys, cpreg_key_compare); - - cpu->cpreg_array_len = 0; - - g_list_foreach(keys, count_cpreg, cpu); - - arraylen = cpu->cpreg_array_len; - cpu->cpreg_indexes = g_new(uint64_t, arraylen); - cpu->cpreg_values = g_new(uint64_t, arraylen); - cpu->cpreg_vmstate_indexes = g_new(uint64_t, arraylen); - cpu->cpreg_vmstate_values = g_new(uint64_t, arraylen); - cpu->cpreg_vmstate_array_len = cpu->cpreg_array_len; - cpu->cpreg_array_len = 0; - - g_list_foreach(keys, add_cpreg_to_list, cpu); +#ifndef CONFIG_USER_ONLY + uint64_t hcr_el2 = arm_hcr_el2_eff(env); - assert(cpu->cpreg_array_len == arraylen); + if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { + bool disabled = false; - g_list_free(keys); -} + /* The CPACR.ZEN controls traps to EL1: + * 0, 2 : trap EL0 and EL1 accesses + * 1 : trap only EL0 accesses + * 3 : trap no accesses + */ + if (!extract32(env->cp15.cpacr_el1, 16, 1)) { + disabled = true; + } else if (!extract32(env->cp15.cpacr_el1, 17, 1)) { + disabled = el == 0; + } + if (disabled) { + /* route_to_el2 */ + return hcr_el2 & HCR_TGE ? 
2 : 1; + } -/* - * Some registers are not accessible from AArch32 EL3 if SCR.NS == 0. - */ -static CPAccessResult access_el3_aa32ns(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - if (!is_a64(env) && arm_current_el(env) == 3 && - arm_is_secure_below_el3(env)) { - return CP_ACCESS_TRAP_UNCATEGORIZED; + /* Check CPACR.FPEN. */ + if (!extract32(env->cp15.cpacr_el1, 20, 1)) { + disabled = true; + } else if (!extract32(env->cp15.cpacr_el1, 21, 1)) { + disabled = el == 0; + } + if (disabled) { + return 0; + } } - return CP_ACCESS_OK; -} -/* - * Some secure-only AArch32 registers trap to EL3 if used from - * Secure EL1 (but are just ordinary UNDEF in other non-EL3 contexts). - * Note that an access from Secure EL1 can only happen if EL3 is AArch64. - * We assume that the .access field is set to PL1_RW. - */ -static CPAccessResult access_trap_aa32s_el1(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 3) { - return CP_ACCESS_OK; - } - if (arm_is_secure_below_el3(env)) { - if (env->cp15.scr_el3 & SCR_EEL2) { - return CP_ACCESS_TRAP_EL2; + /* CPTR_EL2. Since TZ and TFP are positive, + * they will be zero when EL2 is not present. + */ + if (el <= 2 && arm_is_el2_enabled(env)) { + if (env->cp15.cptr_el[2] & CPTR_TZ) { + return 2; + } + if (env->cp15.cptr_el[2] & CPTR_TFP) { + return 0; } - return CP_ACCESS_TRAP_EL3; } - /* This will be EL1 NS and EL2 NS, which just UNDEF */ - return CP_ACCESS_TRAP_UNCATEGORIZED; -} - -static uint64_t arm_mdcr_el2_eff(CPUARMState *env) -{ - return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0; -} - -/* - * Check for traps to "powerdown debug" registers, which are controlled - * by MDCR.TDOSA - */ -static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - int el = arm_current_el(env); - uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); - bool mdcr_el2_tdosa = (mdcr_el2 & MDCR_TDOSA) || (mdcr_el2 & MDCR_TDE) || - (arm_hcr_el2_eff(env) & HCR_TGE); - if (el < 2 && mdcr_el2_tdosa) { - return CP_ACCESS_TRAP_EL2; - } - if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) { - return CP_ACCESS_TRAP_EL3; + /* CPTR_EL3. Since EZ is negative we must check for EL3. */ + if (arm_feature(env, ARM_FEATURE_EL3) + && !(env->cp15.cptr_el[3] & CPTR_EZ)) { + return 3; } - return CP_ACCESS_OK; +#endif + return 0; } -/* - * Check for traps to "debug ROM" registers, which are controlled - * by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3. - */ -static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) +static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) { - int el = arm_current_el(env); - uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); - bool mdcr_el2_tdra = (mdcr_el2 & MDCR_TDRA) || (mdcr_el2 & MDCR_TDE) || - (arm_hcr_el2_eff(env) & HCR_TGE); + uint32_t end_len; - if (el < 2 && mdcr_el2_tdra) { - return CP_ACCESS_TRAP_EL2; - } - if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) { - return CP_ACCESS_TRAP_EL3; + end_len = start_len &= 0xf; + if (!test_bit(start_len, cpu->sve_vq_map)) { + end_len = find_last_bit(cpu->sve_vq_map, start_len); + assert(end_len < start_len); } - return CP_ACCESS_OK; + return end_len; } /* - * Check for traps to general debug registers, which are controlled - * by MDCR_EL2.TDA for EL2 and MDCR_EL3.TDA for EL3. + * Given that SVE is enabled, return the vector length for EL. 
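As a worked illustration of the clamping done by sve_zcr_len_for_el() below: the length index starts at sve_max_vq - 1 and is reduced by the low four bits of each ZCR_ELx that constrains the current EL, then rounded down to a supported vector quantum. A self-contained sketch with assumed values (sve_max_vq = 4, i.e. a 512-bit implementation, ZCR_EL1 = 1 and ZCR_EL3 = 2; all values are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

    int main(void)
    {
        uint32_t sve_max_vq = 4;            /* assumed 512-bit implementation maximum */
        uint32_t zcr_el1 = 1, zcr_el3 = 2;  /* assumed register values */

        uint32_t len = sve_max_vq - 1;      /* start at the implementation maximum */
        len = min_u32(len, zcr_el1 & 0xf);  /* constrained by EL1 */
        len = min_u32(len, zcr_el3 & 0xf);  /* constrained by EL3, if implemented */

        /* Assuming every vector quantum up to sve_max_vq is in sve_vq_map,
         * no further rounding down is needed.
         */
        printf("effective SVE vector length = %u bits\n", (len + 1) * 128);
        return 0;
    }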
*/ -static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) +uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) { - int el = arm_current_el(env); - uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); - bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) || - (arm_hcr_el2_eff(env) & HCR_TGE); + ARMCPU *cpu = env_archcpu(env); + uint32_t zcr_len = cpu->sve_max_vq - 1; - if (el < 2 && mdcr_el2_tda) { - return CP_ACCESS_TRAP_EL2; - } - if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) { - return CP_ACCESS_TRAP_EL3; + if (el <= 1) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]); } - return CP_ACCESS_OK; -} - -/* - * Check for traps to performance monitor registers, which are controlled - * by MDCR_EL2.TPM for EL2 and MDCR_EL3.TPM for EL3. - */ -static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - int el = arm_current_el(env); - uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); - - if (el < 2 && (mdcr_el2 & MDCR_TPM)) { - return CP_ACCESS_TRAP_EL2; + if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]); } - if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) { - return CP_ACCESS_TRAP_EL3; + if (arm_feature(env, ARM_FEATURE_EL3)) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]); } - return CP_ACCESS_OK; -} -/* Check for traps from EL1 due to HCR_EL2.TVM and HCR_EL2.TRVM. */ -static CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 1) { - uint64_t trap = isread ? HCR_TRVM : HCR_TVM; - if (arm_hcr_el2_eff(env) & trap) { - return CP_ACCESS_TRAP_EL2; - } - } - return CP_ACCESS_OK; + return sve_zcr_get_valid_len(cpu, zcr_len); } -/* Check for traps from EL1 due to HCR_EL2.TSW. */ -static CPAccessResult access_tsw(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) +void hw_watchpoint_update(ARMCPU *cpu, int n) { - if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TSW)) { - return CP_ACCESS_TRAP_EL2; - } - return CP_ACCESS_OK; -} + CPUARMState *env = &cpu->env; + vaddr len = 0; + vaddr wvr = env->cp15.dbgwvr[n]; + uint64_t wcr = env->cp15.dbgwcr[n]; + int mask; + int flags = BP_CPU | BP_STOP_BEFORE_ACCESS; -/* Check for traps from EL1 due to HCR_EL2.TACR. */ -static CPAccessResult access_tacr(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TACR)) { - return CP_ACCESS_TRAP_EL2; + if (env->cpu_watchpoint[n]) { + cpu_watchpoint_remove_by_ref(CPU(cpu), env->cpu_watchpoint[n]); + env->cpu_watchpoint[n] = NULL; } - return CP_ACCESS_OK; -} -/* Check for traps from EL1 due to HCR_EL2.TTLB. */ -static CPAccessResult access_ttlb(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TTLB)) { - return CP_ACCESS_TRAP_EL2; + if (!extract64(wcr, 0, 1)) { + /* E bit clear : watchpoint disabled */ + return; } - return CP_ACCESS_OK; -} -static void dacr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - - raw_write(env, ri, value); - tlb_flush(CPU(cpu)); /* Flush TLB as domain not tracked in TLB */ -} - -static void fcse_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - - if (raw_read(env, ri) != value) { - /* - * Unlike real hardware the qemu TLB uses virtual addresses, - * not modified virtual addresses, so this causes a TLB flush. 
- */ - tlb_flush(CPU(cpu)); - raw_write(env, ri, value); - } -} - -static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - - if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA) - && !extended_addresses_enabled(env)) { - /* - * For VMSA (when not using the LPAE long descriptor page table - * format) this register includes the ASID, so do a TLB flush. - * For PMSA it is purely a process ID and no action is needed. - */ - tlb_flush(CPU(cpu)); - } - raw_write(env, ri, value); -} - -/* IS variants of TLB operations must affect all cores */ -static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_all_cpus_synced(cs); -} - -static void tlbiasid_is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_all_cpus_synced(cs); -} - -static void tlbimva_is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK); -} - -static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK); -} - -/* - * Non-IS variants of TLB operations are upgraded to - * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to - * force broadcast of these operations. - */ -static bool tlb_force_broadcast(CPUARMState *env) -{ - return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB); -} - -static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Invalidate all (TLBIALL) */ - CPUState *cs = env_cpu(env); - - if (tlb_force_broadcast(env)) { - tlb_flush_all_cpus_synced(cs); - } else { - tlb_flush(cs); - } -} - -static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */ - CPUState *cs = env_cpu(env); - - value &= TARGET_PAGE_MASK; - if (tlb_force_broadcast(env)) { - tlb_flush_page_all_cpus_synced(cs, value); - } else { - tlb_flush_page(cs, value); - } -} - -static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Invalidate by ASID (TLBIASID) */ - CPUState *cs = env_cpu(env); - - if (tlb_force_broadcast(env)) { - tlb_flush_all_cpus_synced(cs); - } else { - tlb_flush(cs); - } -} - -static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */ - CPUState *cs = env_cpu(env); - - value &= TARGET_PAGE_MASK; - if (tlb_force_broadcast(env)) { - tlb_flush_page_all_cpus_synced(cs, value); - } else { - tlb_flush_page(cs, value); - } -} - -static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_by_mmuidx(cs, - ARMMMUIdxBit_E10_1 | - ARMMMUIdxBit_E10_1_PAN | - ARMMMUIdxBit_E10_0); -} - -static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_by_mmuidx_all_cpus_synced(cs, - ARMMMUIdxBit_E10_1 | - ARMMMUIdxBit_E10_1_PAN | - ARMMMUIdxBit_E10_0); -} - - -static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2); -} - -static void 
tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2); -} - -static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12); - - tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2); -} - -static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12); - - tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, - ARMMMUIdxBit_E2); -} - -static const ARMCPRegInfo cp_reginfo[] = { - /* - * Define the secure and non-secure FCSE identifier CP registers - * separately because there is no secure bank in V8 (no _EL3). This allows - * the secure register to be properly reset and migrated. There is also no - * v8 EL1 version of the register so the non-secure instance stands alone. - */ - { .name = "FCSEIDR", - .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0, - .access = PL1_RW, .secure = ARM_CP_SECSTATE_NS, - .fieldoffset = offsetof(CPUARMState, cp15.fcseidr_ns), - .resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, }, - { .name = "FCSEIDR_S", - .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0, - .access = PL1_RW, .secure = ARM_CP_SECSTATE_S, - .fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s), - .resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, }, - /* - * Define the secure and non-secure context identifier CP registers - * separately because there is no secure bank in V8 (no _EL3). This allows - * the secure register to be properly reset and migrated. In the - * non-secure case, the 32-bit register will have reset and migration - * disabled during registration as it is handled by the 64-bit instance. - */ - { .name = "CONTEXTIDR_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .secure = ARM_CP_SECSTATE_NS, - .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[1]), - .resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, }, - { .name = "CONTEXTIDR_S", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .secure = ARM_CP_SECSTATE_S, - .fieldoffset = offsetof(CPUARMState, cp15.contextidr_s), - .resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo not_v8_cp_reginfo[] = { - /* - * NB: Some of these registers exist in v8 but with more precise - * definitions that don't use CP_ANY wildcards (mostly in v8_cp_reginfo[]). - */ - /* MMU Domain access control / MPU write buffer control */ - { .name = "DACR", - .cp = 15, .opc1 = CP_ANY, .crn = 3, .crm = CP_ANY, .opc2 = CP_ANY, - .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, - .writefn = dacr_write, .raw_writefn = raw_write, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s), - offsetoflow32(CPUARMState, cp15.dacr_ns) } }, - /* - * ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs. - * For v6 and v5, these mappings are overly broad. 
- */ - { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 0, - .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, - { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 1, - .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, - { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 4, - .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, - { .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 8, - .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_NOP }, - /* Cache maintenance ops; some of this space may be overridden later. */ - { .name = "CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY, - .opc1 = 0, .opc2 = CP_ANY, .access = PL1_W, - .type = ARM_CP_NOP | ARM_CP_OVERRIDE }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo not_v6_cp_reginfo[] = { - /* - * Not all pre-v6 cores implemented this WFI, so this is slightly - * over-broad. - */ - { .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2, - .access = PL1_W, .type = ARM_CP_WFI }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo not_v7_cp_reginfo[] = { - /* - * Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which - * is UNPREDICTABLE; we choose to NOP as most implementations do). - */ - { .name = "WFI_v6", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4, - .access = PL1_W, .type = ARM_CP_WFI }, - /* - * L1 cache lockdown. Not architectural in v6 and earlier but in practice - * implemented in 926, 946, 1026, 1136, 1176 and 11MPCore. StrongARM and - * OMAPCP will override this space. - */ - { .name = "DLOCKDOWN", .cp = 15, .crn = 9, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.c9_data), - .resetvalue = 0 }, - { .name = "ILOCKDOWN", .cp = 15, .crn = 9, .crm = 0, .opc1 = 0, .opc2 = 1, - .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.c9_insn), - .resetvalue = 0 }, - /* v6 doesn't have the cache ID registers but Linux reads them anyway */ - { .name = "DUMMY", .cp = 15, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = CP_ANY, - .access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, - .resetvalue = 0 }, - /* - * We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR; - * implementing it as RAZ means the "debug architecture version" bits - * will read as a reserved value, which should cause Linux to not try - * to use the debug hardware. - */ - { .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 }, - /* - * MMU TLB control. Note that the wildcarding means we cover not just - * the unified TLB ops but also the dside/iside/inner-shareable variants. 
- */ - { .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY, - .opc1 = CP_ANY, .opc2 = 0, .access = PL1_W, .writefn = tlbiall_write, - .type = ARM_CP_NO_RAW }, - { .name = "TLBIMVA", .cp = 15, .crn = 8, .crm = CP_ANY, - .opc1 = CP_ANY, .opc2 = 1, .access = PL1_W, .writefn = tlbimva_write, - .type = ARM_CP_NO_RAW }, - { .name = "TLBIASID", .cp = 15, .crn = 8, .crm = CP_ANY, - .opc1 = CP_ANY, .opc2 = 2, .access = PL1_W, .writefn = tlbiasid_write, - .type = ARM_CP_NO_RAW }, - { .name = "TLBIMVAA", .cp = 15, .crn = 8, .crm = CP_ANY, - .opc1 = CP_ANY, .opc2 = 3, .access = PL1_W, .writefn = tlbimvaa_write, - .type = ARM_CP_NO_RAW }, - { .name = "PRRR", .cp = 15, .crn = 10, .crm = 2, - .opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP }, - { .name = "NMRR", .cp = 15, .crn = 10, .crm = 2, - .opc1 = 0, .opc2 = 1, .access = PL1_RW, .type = ARM_CP_NOP }, - REGINFO_SENTINEL -}; - -static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - uint32_t mask = 0; - - /* In ARMv8 most bits of CPACR_EL1 are RES0. */ - if (!arm_feature(env, ARM_FEATURE_V8)) { - /* - * ARMv7 defines bits for unimplemented coprocessors as RAZ/WI. - * ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP. - * TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell. - */ - if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) { - /* VFP coprocessor: cp10 & cp11 [23:20] */ - mask |= (1 << 31) | (1 << 30) | (0xf << 20); - - if (!arm_feature(env, ARM_FEATURE_NEON)) { - /* ASEDIS [31] bit is RAO/WI */ - value |= (1 << 31); - } - - /* - * VFPv3 and upwards with NEON implement 32 double precision - * registers (D0-D31). - */ - if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) { - /* D32DIS [30] is RAO/WI if D16-31 are not implemented. */ - value |= (1 << 30); - } - } - value &= mask; - } - - /* - * For A-profile AArch32 EL3 (but not M-profile secure mode), if NSACR.CP10 - * is 0 then CPACR.{CP11,CP10} ignore writes and read as 0b00. - */ - if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && - !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) { - value &= ~(0xf << 20); - value |= env->cp15.cpacr_el1 & (0xf << 20); - } - - env->cp15.cpacr_el1 = value; -} - -static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* - * For A-profile AArch32 EL3 (but not M-profile secure mode), if NSACR.CP10 - * is 0 then CPACR.{CP11,CP10} ignore writes and read as 0b00. - */ - uint64_t value = env->cp15.cpacr_el1; - - if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && - !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) { - value &= ~(0xf << 20); - } - return value; -} - - -static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* - * Call cpacr_write() so that we reset with the correct RAO bits set - * for our CPU features. 
- */ - cpacr_write(env, ri, 0); -} - -static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_feature(env, ARM_FEATURE_V8)) { - /* Check if CPACR accesses are to be trapped to EL2 */ - if (arm_current_el(env) == 1 && arm_is_el2_enabled(env) && - (env->cp15.cptr_el[2] & CPTR_TCPAC)) { - return CP_ACCESS_TRAP_EL2; - /* Check if CPACR accesses are to be trapped to EL3 */ - } else if (arm_current_el(env) < 3 && - (env->cp15.cptr_el[3] & CPTR_TCPAC)) { - return CP_ACCESS_TRAP_EL3; - } - } - - return CP_ACCESS_OK; -} - -static CPAccessResult cptr_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - /* Check if CPTR accesses are set to trap to EL3 */ - if (arm_current_el(env) == 2 && (env->cp15.cptr_el[3] & CPTR_TCPAC)) { - return CP_ACCESS_TRAP_EL3; - } - - return CP_ACCESS_OK; -} - -static const ARMCPRegInfo v6_cp_reginfo[] = { - /* prefetch by MVA in v6, NOP in v7 */ - { .name = "MVA_prefetch", - .cp = 15, .crn = 7, .crm = 13, .opc1 = 0, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NOP }, - /* - * We need to break the TB after ISB to execute self-modifying code - * correctly and also to take any pending interrupts immediately. - * So use arm_cp_write_ignore() function instead of ARM_CP_NOP flag. - */ - { .name = "ISB", .cp = 15, .crn = 7, .crm = 5, .opc1 = 0, .opc2 = 4, - .access = PL0_W, .type = ARM_CP_NO_RAW, .writefn = arm_cp_write_ignore }, - { .name = "DSB", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 4, - .access = PL0_W, .type = ARM_CP_NOP }, - { .name = "DMB", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 5, - .access = PL0_W, .type = ARM_CP_NOP }, - { .name = "IFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 2, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ifar_s), - offsetof(CPUARMState, cp15.ifar_ns) }, - .resetvalue = 0, }, - /* - * Watchpoint Fault Address Register : should actually only be present - * for 1136, 1176, 11MPCore. - */ - { .name = "WFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 1, - .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0, }, - { .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3, - .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access, - .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1), - .resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read }, - REGINFO_SENTINEL -}; - -/* Definitions for the PMU registers */ -#define PMCRN_MASK 0xf800 -#define PMCRN_SHIFT 11 -#define PMCRLC 0x40 -#define PMCRDP 0x20 -#define PMCRX 0x10 -#define PMCRD 0x8 -#define PMCRC 0x4 -#define PMCRP 0x2 -#define PMCRE 0x1 -/* - * Mask of PMCR bits writeable by guest (not including WO bits like C, P, - * which can be written as 1 to trigger behaviour but which stay RAZ). 
- */ -#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE) - -#define PMXEVTYPER_P 0x80000000 -#define PMXEVTYPER_U 0x40000000 -#define PMXEVTYPER_NSK 0x20000000 -#define PMXEVTYPER_NSU 0x10000000 -#define PMXEVTYPER_NSH 0x08000000 -#define PMXEVTYPER_M 0x04000000 -#define PMXEVTYPER_MT 0x02000000 -#define PMXEVTYPER_EVTCOUNT 0x0000ffff -#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \ - PMXEVTYPER_NSU | PMXEVTYPER_NSH | \ - PMXEVTYPER_M | PMXEVTYPER_MT | \ - PMXEVTYPER_EVTCOUNT) - -#define PMCCFILTR 0xf8000000 -#define PMCCFILTR_M PMXEVTYPER_M -#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M) - -static inline uint32_t pmu_num_counters(CPUARMState *env) -{ - return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT; -} - -/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */ -static inline uint64_t pmu_counter_mask(CPUARMState *env) -{ - return (1 << 31) | ((1 << pmu_num_counters(env)) - 1); -} - -typedef struct pm_event { - uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */ - /* If the event is supported on this CPU (used to generate PMCEID[01]) */ - bool (*supported)(CPUARMState *); - /* - * Retrieve the current count of the underlying event. The programmed - * counters hold a difference from the return value from this function - */ - uint64_t (*get_count)(CPUARMState *); - /* - * Return how many nanoseconds it will take (at a minimum) for count events - * to occur. A negative value indicates the counter will never overflow, or - * that the counter has otherwise arranged for the overflow bit to be set - * and the PMU interrupt to be raised on overflow. - */ - int64_t (*ns_per_count)(uint64_t); -} pm_event; - -static bool event_always_supported(CPUARMState *env) -{ - return true; -} - -static uint64_t swinc_get_count(CPUARMState *env) -{ - /* - * SW_INCR events are written directly to the pmevcntr's by writes to - * PMSWINC, so there is no underlying count maintained by the PMU itself - */ - return 0; -} - -static int64_t swinc_ns_per(uint64_t ignored) -{ - return -1; -} - -/* - * Return the underlying cycle count for the PMU cycle counters. If we're in - * usermode, simply return 0. 
- */ -static uint64_t cycles_get_count(CPUARMState *env) -{ -#ifndef CONFIG_USER_ONLY - return muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), - ARM_CPU_FREQ, NANOSECONDS_PER_SECOND); -#else - return cpu_get_host_ticks(); -#endif -} - -#ifndef CONFIG_USER_ONLY -static int64_t cycles_ns_per(uint64_t cycles) -{ - return (ARM_CPU_FREQ / NANOSECONDS_PER_SECOND) * cycles; -} - -static bool instructions_supported(CPUARMState *env) -{ - return icount_enabled() == 1; /* Precise instruction counting */ -} - -static uint64_t instructions_get_count(CPUARMState *env) -{ - return (uint64_t)icount_get_raw(); -} - -static int64_t instructions_ns_per(uint64_t icount) -{ - return icount_to_ns((int64_t)icount); -} -#endif - -static bool pmu_8_1_events_supported(CPUARMState *env) -{ - /* For events which are supported in any v8.1 PMU */ - return cpu_isar_feature(any_pmu_8_1, env_archcpu(env)); -} - -static bool pmu_8_4_events_supported(CPUARMState *env) -{ - /* For events which are supported in any v8.1 PMU */ - return cpu_isar_feature(any_pmu_8_4, env_archcpu(env)); -} - -static uint64_t zero_event_get_count(CPUARMState *env) -{ - /* For events which on QEMU never fire, so their count is always zero */ - return 0; -} - -static int64_t zero_event_ns_per(uint64_t cycles) -{ - /* An event which never fires can never overflow */ - return -1; -} - -static const pm_event pm_events[] = { - { .number = 0x000, /* SW_INCR */ - .supported = event_always_supported, - .get_count = swinc_get_count, - .ns_per_count = swinc_ns_per, - }, -#ifndef CONFIG_USER_ONLY - { .number = 0x008, /* INST_RETIRED, Instruction architecturally executed */ - .supported = instructions_supported, - .get_count = instructions_get_count, - .ns_per_count = instructions_ns_per, - }, - { .number = 0x011, /* CPU_CYCLES, Cycle */ - .supported = event_always_supported, - .get_count = cycles_get_count, - .ns_per_count = cycles_ns_per, - }, -#endif - { .number = 0x023, /* STALL_FRONTEND */ - .supported = pmu_8_1_events_supported, - .get_count = zero_event_get_count, - .ns_per_count = zero_event_ns_per, - }, - { .number = 0x024, /* STALL_BACKEND */ - .supported = pmu_8_1_events_supported, - .get_count = zero_event_get_count, - .ns_per_count = zero_event_ns_per, - }, - { .number = 0x03c, /* STALL */ - .supported = pmu_8_4_events_supported, - .get_count = zero_event_get_count, - .ns_per_count = zero_event_ns_per, - }, -}; - -/* - * Note: Before increasing MAX_EVENT_ID beyond 0x3f into the 0x40xx range of - * events (i.e. the statistical profiling extension), this implementation - * should first be updated to something sparse instead of the current - * supported_event_map[] array. - */ -#define MAX_EVENT_ID 0x3c -#define UNSUPPORTED_EVENT UINT16_MAX -static uint16_t supported_event_map[MAX_EVENT_ID + 1]; - -/* - * Called upon CPU initialization to initialize PMCEID[01]_EL0 and build a map - * of ARM event numbers to indices in our pm_events array. - * - * Note: Events in the 0x40XX range are not currently supported. 
- */ -void pmu_init(ARMCPU *cpu) -{ - unsigned int i; - - /* - * Empty supported_event_map and cpu->pmceid[01] before adding supported - * events to them - */ - for (i = 0; i < ARRAY_SIZE(supported_event_map); i++) { - supported_event_map[i] = UNSUPPORTED_EVENT; - } - cpu->pmceid0 = 0; - cpu->pmceid1 = 0; - - for (i = 0; i < ARRAY_SIZE(pm_events); i++) { - const pm_event *cnt = &pm_events[i]; - assert(cnt->number <= MAX_EVENT_ID); - /* We do not currently support events in the 0x40xx range */ - assert(cnt->number <= 0x3f); - - if (cnt->supported(&cpu->env)) { - supported_event_map[cnt->number] = i; - uint64_t event_mask = 1ULL << (cnt->number & 0x1f); - if (cnt->number & 0x20) { - cpu->pmceid1 |= event_mask; - } else { - cpu->pmceid0 |= event_mask; - } - } - } -} - -/* - * Check at runtime whether a PMU event is supported for the current machine - */ -static bool event_supported(uint16_t number) -{ - if (number > MAX_EVENT_ID) { - return false; - } - return supported_event_map[number] != UNSUPPORTED_EVENT; -} - -static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - /* Performance monitor registers user accessibility is controlled - * by PMUSERENR. MDCR_EL2.TPM and MDCR_EL3.TPM allow configurable - * trapping to EL2 or EL3 for other accesses. - */ - int el = arm_current_el(env); - uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); - - if (el == 0 && !(env->cp15.c9_pmuserenr & 1)) { - return CP_ACCESS_TRAP; - } - if (el < 2 && (mdcr_el2 & MDCR_TPM)) { - return CP_ACCESS_TRAP_EL2; - } - if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) { - return CP_ACCESS_TRAP_EL3; - } - - return CP_ACCESS_OK; -} - -static CPAccessResult pmreg_access_xevcntr(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - /* ER: event counter read trap control */ - if (arm_feature(env, ARM_FEATURE_V8) - && arm_current_el(env) == 0 - && (env->cp15.c9_pmuserenr & (1 << 3)) != 0 - && isread) { - return CP_ACCESS_OK; - } - - return pmreg_access(env, ri, isread); -} - -static CPAccessResult pmreg_access_swinc(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - /* SW: software increment write trap control */ - if (arm_feature(env, ARM_FEATURE_V8) - && arm_current_el(env) == 0 - && (env->cp15.c9_pmuserenr & (1 << 1)) != 0 - && !isread) { - return CP_ACCESS_OK; - } - - return pmreg_access(env, ri, isread); -} - -static CPAccessResult pmreg_access_selr(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - /* ER: event counter read trap control */ - if (arm_feature(env, ARM_FEATURE_V8) - && arm_current_el(env) == 0 - && (env->cp15.c9_pmuserenr & (1 << 3)) != 0) { - return CP_ACCESS_OK; - } - - return pmreg_access(env, ri, isread); -} - -static CPAccessResult pmreg_access_ccntr(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - /* CR: cycle counter read trap control */ - if (arm_feature(env, ARM_FEATURE_V8) - && arm_current_el(env) == 0 - && (env->cp15.c9_pmuserenr & (1 << 2)) != 0 - && isread) { - return CP_ACCESS_OK; - } - - return pmreg_access(env, ri, isread); -} - -/* Returns true if the counter (pass 31 for PMCCNTR) should count events using - * the current EL, security state, and register configuration. 
- */ -static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter) -{ - uint64_t filter; - bool e, p, u, nsk, nsu, nsh, m; - bool enabled, prohibited, filtered; - bool secure = arm_is_secure(env); - int el = arm_current_el(env); - uint64_t mdcr_el2 = arm_mdcr_el2_eff(env); - uint8_t hpmn = mdcr_el2 & MDCR_HPMN; - - if (!arm_feature(env, ARM_FEATURE_PMU)) { - return false; - } - - if (!arm_feature(env, ARM_FEATURE_EL2) || - (counter < hpmn || counter == 31)) { - e = env->cp15.c9_pmcr & PMCRE; - } else { - e = mdcr_el2 & MDCR_HPME; - } - enabled = e && (env->cp15.c9_pmcnten & (1 << counter)); - - if (!secure) { - if (el == 2 && (counter < hpmn || counter == 31)) { - prohibited = mdcr_el2 & MDCR_HPMD; - } else { - prohibited = false; - } - } else { - prohibited = arm_feature(env, ARM_FEATURE_EL3) && - !(env->cp15.mdcr_el3 & MDCR_SPME); - } - - if (prohibited && counter == 31) { - prohibited = env->cp15.c9_pmcr & PMCRDP; - } - - if (counter == 31) { - filter = env->cp15.pmccfiltr_el0; - } else { - filter = env->cp15.c14_pmevtyper[counter]; - } - - p = filter & PMXEVTYPER_P; - u = filter & PMXEVTYPER_U; - nsk = arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_NSK); - nsu = arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_NSU); - nsh = arm_feature(env, ARM_FEATURE_EL2) && (filter & PMXEVTYPER_NSH); - m = arm_el_is_aa64(env, 1) && - arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_M); - - if (el == 0) { - filtered = secure ? u : u != nsu; - } else if (el == 1) { - filtered = secure ? p : p != nsk; - } else if (el == 2) { - filtered = !nsh; - } else { /* EL3 */ - filtered = m != p; - } - - if (counter != 31) { - /* - * If not checking PMCCNTR, ensure the counter is setup to an event we - * support - */ - uint16_t event = filter & PMXEVTYPER_EVTCOUNT; - if (!event_supported(event)) { - return false; - } - } - - return enabled && !prohibited && !filtered; -} - -static void pmu_update_irq(CPUARMState *env) -{ - ARMCPU *cpu = env_archcpu(env); - qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) && - (env->cp15.c9_pminten & env->cp15.c9_pmovsr)); -} - -/* - * Ensure c15_ccnt is the guest-visible count so that operations such as - * enabling/disabling the counter or filtering, modifying the count itself, - * etc. can be done logically. This is essentially a no-op if the counter is - * not enabled at the time of the call. - */ -static void pmccntr_op_start(CPUARMState *env) -{ - uint64_t cycles = cycles_get_count(env); - - if (pmu_counter_enabled(env, 31)) { - uint64_t eff_cycles = cycles; - if (env->cp15.c9_pmcr & PMCRD) { - /* Increment once every 64 processor clock cycles */ - eff_cycles /= 64; - } - - uint64_t new_pmccntr = eff_cycles - env->cp15.c15_ccnt_delta; - - uint64_t overflow_mask = env->cp15.c9_pmcr & PMCRLC ? \ - 1ull << 63 : 1ull << 31; - if (env->cp15.c15_ccnt & ~new_pmccntr & overflow_mask) { - env->cp15.c9_pmovsr |= (1 << 31); - pmu_update_irq(env); - } - - env->cp15.c15_ccnt = new_pmccntr; - } - env->cp15.c15_ccnt_delta = cycles; -} - -/* - * If PMCCNTR is enabled, recalculate the delta between the clock and the - * guest-visible count. A call to pmccntr_op_finish should follow every call to - * pmccntr_op_start. 
- */ -static void pmccntr_op_finish(CPUARMState *env) -{ - if (pmu_counter_enabled(env, 31)) { -#ifndef CONFIG_USER_ONLY - /* Calculate when the counter will next overflow */ - uint64_t remaining_cycles = -env->cp15.c15_ccnt; - if (!(env->cp15.c9_pmcr & PMCRLC)) { - remaining_cycles = (uint32_t)remaining_cycles; - } - int64_t overflow_in = cycles_ns_per(remaining_cycles); - - if (overflow_in > 0) { - int64_t overflow_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + - overflow_in; - ARMCPU *cpu = env_archcpu(env); - timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at); - } -#endif - - uint64_t prev_cycles = env->cp15.c15_ccnt_delta; - if (env->cp15.c9_pmcr & PMCRD) { - /* Increment once every 64 processor clock cycles */ - prev_cycles /= 64; - } - env->cp15.c15_ccnt_delta = prev_cycles - env->cp15.c15_ccnt; - } -} - -static void pmevcntr_op_start(CPUARMState *env, uint8_t counter) -{ - - uint16_t event = env->cp15.c14_pmevtyper[counter] & PMXEVTYPER_EVTCOUNT; - uint64_t count = 0; - if (event_supported(event)) { - uint16_t event_idx = supported_event_map[event]; - count = pm_events[event_idx].get_count(env); - } - - if (pmu_counter_enabled(env, counter)) { - uint32_t new_pmevcntr = count - env->cp15.c14_pmevcntr_delta[counter]; - - if (env->cp15.c14_pmevcntr[counter] & ~new_pmevcntr & INT32_MIN) { - env->cp15.c9_pmovsr |= (1 << counter); - pmu_update_irq(env); - } - env->cp15.c14_pmevcntr[counter] = new_pmevcntr; - } - env->cp15.c14_pmevcntr_delta[counter] = count; -} - -static void pmevcntr_op_finish(CPUARMState *env, uint8_t counter) -{ - if (pmu_counter_enabled(env, counter)) { -#ifndef CONFIG_USER_ONLY - uint16_t event = env->cp15.c14_pmevtyper[counter] & PMXEVTYPER_EVTCOUNT; - uint16_t event_idx = supported_event_map[event]; - uint64_t delta = UINT32_MAX - - (uint32_t)env->cp15.c14_pmevcntr[counter] + 1; - int64_t overflow_in = pm_events[event_idx].ns_per_count(delta); - - if (overflow_in > 0) { - int64_t overflow_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + - overflow_in; - ARMCPU *cpu = env_archcpu(env); - timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at); - } -#endif - - env->cp15.c14_pmevcntr_delta[counter] -= - env->cp15.c14_pmevcntr[counter]; - } -} - -void pmu_op_start(CPUARMState *env) -{ - unsigned int i; - pmccntr_op_start(env); - for (i = 0; i < pmu_num_counters(env); i++) { - pmevcntr_op_start(env, i); - } -} - -void pmu_op_finish(CPUARMState *env) -{ - unsigned int i; - pmccntr_op_finish(env); - for (i = 0; i < pmu_num_counters(env); i++) { - pmevcntr_op_finish(env, i); - } -} - -void pmu_pre_el_change(ARMCPU *cpu, void *ignored) -{ - pmu_op_start(&cpu->env); -} - -void pmu_post_el_change(ARMCPU *cpu, void *ignored) -{ - pmu_op_finish(&cpu->env); -} - -void arm_pmu_timer_cb(void *opaque) -{ - ARMCPU *cpu = opaque; - - /* - * Update all the counter values based on the current underlying counts, - * triggering interrupts to be raised, if necessary. pmu_op_finish() also - * has the effect of setting the cpu->pmu_timer to the next earliest time a - * counter may expire. 
- */ - pmu_op_start(&cpu->env); - pmu_op_finish(&cpu->env); -} - -static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - pmu_op_start(env); - - if (value & PMCRC) { - /* The counter has been reset */ - env->cp15.c15_ccnt = 0; - } - - if (value & PMCRP) { - unsigned int i; - for (i = 0; i < pmu_num_counters(env); i++) { - env->cp15.c14_pmevcntr[i] = 0; - } - } - - env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK; - env->cp15.c9_pmcr |= (value & PMCR_WRITEABLE_MASK); - - pmu_op_finish(env); -} - -static void pmswinc_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - unsigned int i; - for (i = 0; i < pmu_num_counters(env); i++) { - /* Increment a counter's count iff: */ - if ((value & (1 << i)) && /* counter's bit is set */ - /* counter is enabled and not filtered */ - pmu_counter_enabled(env, i) && - /* counter is SW_INCR */ - (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) { - pmevcntr_op_start(env, i); - - /* - * Detect if this write causes an overflow since we can't predict - * PMSWINC overflows like we can for other events - */ - uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1; - - if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) { - env->cp15.c9_pmovsr |= (1 << i); - pmu_update_irq(env); - } - - env->cp15.c14_pmevcntr[i] = new_pmswinc; - - pmevcntr_op_finish(env, i); - } - } -} - -static uint64_t pmccntr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - uint64_t ret; - pmccntr_op_start(env); - ret = env->cp15.c15_ccnt; - pmccntr_op_finish(env); - return ret; -} - -static void pmselr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* The value of PMSELR.SEL affects the behavior of PMXEVTYPER and - * PMXEVCNTR. We allow [0..31] to be written to PMSELR here; in the - * meanwhile, we check PMSELR.SEL when PMXEVTYPER and PMXEVCNTR are - * accessed. 
- */ - env->cp15.c9_pmselr = value & 0x1f; -} - -static void pmccntr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - pmccntr_op_start(env); - env->cp15.c15_ccnt = value; - pmccntr_op_finish(env); -} - -static void pmccntr_write32(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - uint64_t cur_val = pmccntr_read(env, NULL); - - pmccntr_write(env, ri, deposit64(cur_val, 0, 32, value)); -} - -static void pmccfiltr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - pmccntr_op_start(env); - env->cp15.pmccfiltr_el0 = value & PMCCFILTR_EL0; - pmccntr_op_finish(env); -} - -static void pmccfiltr_write_a32(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - pmccntr_op_start(env); - /* M is not accessible from AArch32 */ - env->cp15.pmccfiltr_el0 = (env->cp15.pmccfiltr_el0 & PMCCFILTR_M) | - (value & PMCCFILTR); - pmccntr_op_finish(env); -} - -static uint64_t pmccfiltr_read_a32(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* M is not visible in AArch32 */ - return env->cp15.pmccfiltr_el0 & PMCCFILTR; -} - -static void pmcntenset_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - value &= pmu_counter_mask(env); - env->cp15.c9_pmcnten |= value; -} - -static void pmcntenclr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - value &= pmu_counter_mask(env); - env->cp15.c9_pmcnten &= ~value; -} - -static void pmovsr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - value &= pmu_counter_mask(env); - env->cp15.c9_pmovsr &= ~value; - pmu_update_irq(env); -} - -static void pmovsset_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - value &= pmu_counter_mask(env); - env->cp15.c9_pmovsr |= value; - pmu_update_irq(env); -} - -static void pmevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value, const uint8_t counter) -{ - if (counter == 31) { - pmccfiltr_write(env, ri, value); - } else if (counter < pmu_num_counters(env)) { - pmevcntr_op_start(env, counter); - - /* - * If this counter's event type is changing, store the current - * underlying count for the new type in c14_pmevcntr_delta[counter] so - * pmevcntr_op_finish has the correct baseline when it converts back to - * a delta. - */ - uint16_t old_event = env->cp15.c14_pmevtyper[counter] & - PMXEVTYPER_EVTCOUNT; - uint16_t new_event = value & PMXEVTYPER_EVTCOUNT; - if (old_event != new_event) { - uint64_t count = 0; - if (event_supported(new_event)) { - uint16_t event_idx = supported_event_map[new_event]; - count = pm_events[event_idx].get_count(env); - } - env->cp15.c14_pmevcntr_delta[counter] = count; - } - - env->cp15.c14_pmevtyper[counter] = value & PMXEVTYPER_MASK; - pmevcntr_op_finish(env, counter); - } - /* Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when - * PMSELR value is equal to or greater than the number of implemented - * counters, but not equal to 0x1f. We opt to behave as a RAZ/WI. - */ -} - -static uint64_t pmevtyper_read(CPUARMState *env, const ARMCPRegInfo *ri, - const uint8_t counter) -{ - if (counter == 31) { - return env->cp15.pmccfiltr_el0; - } else if (counter < pmu_num_counters(env)) { - return env->cp15.c14_pmevtyper[counter]; - } else { - /* - * We opt to behave as a RAZ/WI when attempts to access PMXEVTYPER - * are CONSTRAINED UNPREDICTABLE. See comments in pmevtyper_write(). 
- */ - return 0; - } -} - -static void pmevtyper_writefn(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); - pmevtyper_write(env, ri, value, counter); -} - -static void pmevtyper_rawwrite(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); - env->cp15.c14_pmevtyper[counter] = value; - - /* - * pmevtyper_rawwrite is called between a pair of pmu_op_start and - * pmu_op_finish calls when loading saved state for a migration. Because - * we're potentially updating the type of event here, the value written to - * c14_pmevcntr_delta by the preceeding pmu_op_start call may be for a - * different counter type. Therefore, we need to set this value to the - * current count for the counter type we're writing so that pmu_op_finish - * has the correct count for its calculation. - */ - uint16_t event = value & PMXEVTYPER_EVTCOUNT; - if (event_supported(event)) { - uint16_t event_idx = supported_event_map[event]; - env->cp15.c14_pmevcntr_delta[counter] = - pm_events[event_idx].get_count(env); - } -} - -static uint64_t pmevtyper_readfn(CPUARMState *env, const ARMCPRegInfo *ri) -{ - uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); - return pmevtyper_read(env, ri, counter); -} - -static void pmxevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - pmevtyper_write(env, ri, value, env->cp15.c9_pmselr & 31); -} - -static uint64_t pmxevtyper_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return pmevtyper_read(env, ri, env->cp15.c9_pmselr & 31); -} - -static void pmevcntr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value, uint8_t counter) -{ - if (counter < pmu_num_counters(env)) { - pmevcntr_op_start(env, counter); - env->cp15.c14_pmevcntr[counter] = value; - pmevcntr_op_finish(env, counter); - } - /* - * We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR - * are CONSTRAINED UNPREDICTABLE. - */ -} - -static uint64_t pmevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri, - uint8_t counter) -{ - if (counter < pmu_num_counters(env)) { - uint64_t ret; - pmevcntr_op_start(env, counter); - ret = env->cp15.c14_pmevcntr[counter]; - pmevcntr_op_finish(env, counter); - return ret; - } else { - /* We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR - * are CONSTRAINED UNPREDICTABLE. 
*/ - return 0; - } -} - -static void pmevcntr_writefn(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); - pmevcntr_write(env, ri, value, counter); -} - -static uint64_t pmevcntr_readfn(CPUARMState *env, const ARMCPRegInfo *ri) -{ - uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); - return pmevcntr_read(env, ri, counter); -} - -static void pmevcntr_rawwrite(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); - assert(counter < pmu_num_counters(env)); - env->cp15.c14_pmevcntr[counter] = value; - pmevcntr_write(env, ri, value, counter); -} - -static uint64_t pmevcntr_rawread(CPUARMState *env, const ARMCPRegInfo *ri) -{ - uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7); - assert(counter < pmu_num_counters(env)); - return env->cp15.c14_pmevcntr[counter]; -} - -static void pmxevcntr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - pmevcntr_write(env, ri, value, env->cp15.c9_pmselr & 31); -} - -static uint64_t pmxevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return pmevcntr_read(env, ri, env->cp15.c9_pmselr & 31); -} - -static void pmuserenr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - if (arm_feature(env, ARM_FEATURE_V8)) { - env->cp15.c9_pmuserenr = value & 0xf; - } else { - env->cp15.c9_pmuserenr = value & 1; - } -} - -static void pmintenset_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* We have no event counters so only the C bit can be changed */ - value &= pmu_counter_mask(env); - env->cp15.c9_pminten |= value; - pmu_update_irq(env); -} - -static void pmintenclr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - value &= pmu_counter_mask(env); - env->cp15.c9_pminten &= ~value; - pmu_update_irq(env); -} - -static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Note that even though the AArch64 view of this register has bits - * [10:0] all RES0 we can only mask the bottom 5, to comply with the - * architectural requirements for bits which are RES0 only in some - * contexts. (ARMv8 would permit us to do no masking at all, but ARMv7 - * requires the bottom five bits to be RAZ/WI because they're UNK/SBZP.) - */ - raw_write(env, ri, value & ~0x1FULL); -} - -static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) -{ - /* Begin with base v8.0 state. */ - uint32_t valid_mask = 0x3fff; - ARMCPU *cpu = env_archcpu(env); - - if (ri->state == ARM_CP_STATE_AA64) { - if (arm_feature(env, ARM_FEATURE_AARCH64) && - !cpu_isar_feature(aa64_aa32_el1, cpu)) { - value |= SCR_FW | SCR_AW; /* these two bits are RES1. */ - } - valid_mask &= ~SCR_NET; - - if (cpu_isar_feature(aa64_lor, cpu)) { - valid_mask |= SCR_TLOR; - } - if (cpu_isar_feature(aa64_pauth, cpu)) { - valid_mask |= SCR_API | SCR_APK; - } - if (cpu_isar_feature(aa64_sel2, cpu)) { - valid_mask |= SCR_EEL2; - } - if (cpu_isar_feature(aa64_mte, cpu)) { - valid_mask |= SCR_ATA; - } - } else { - valid_mask &= ~(SCR_RW | SCR_ST); - } - - if (!arm_feature(env, ARM_FEATURE_EL2)) { - valid_mask &= ~SCR_HCE; - - /* On ARMv7, SMD (or SCD as it is called in v7) is only - * supported if EL2 exists. The bit is UNK/SBZP when - * EL2 is unavailable. In QEMU ARMv7, we force it to always zero - * when EL2 is unavailable. - * On ARMv8, this bit is always available. 
- */ - if (arm_feature(env, ARM_FEATURE_V7) && - !arm_feature(env, ARM_FEATURE_V8)) { - valid_mask &= ~SCR_SMD; - } - } - - /* Clear all-context RES0 bits. */ - value &= valid_mask; - raw_write(env, ri, value); -} - -static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* - * scr_write will set the RES1 bits on an AArch64-only CPU. - * The reset value will be 0x30 on an AArch64-only CPU and 0 otherwise. - */ - scr_write(env, ri, 0); -} - -static CPAccessResult access_aa64_tid2(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID2)) { - return CP_ACCESS_TRAP_EL2; - } - - return CP_ACCESS_OK; -} - -static uint64_t ccsidr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - ARMCPU *cpu = env_archcpu(env); - - /* Acquire the CSSELR index from the bank corresponding to the CCSIDR - * bank - */ - uint32_t index = A32_BANKED_REG_GET(env, csselr, - ri->secure & ARM_CP_SECSTATE_S); - - return cpu->ccsidr[index]; -} - -static void csselr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - raw_write(env, ri, value & 0xf); -} - -static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - CPUState *cs = env_cpu(env); - bool el1 = arm_current_el(env) == 1; - uint64_t hcr_el2 = el1 ? arm_hcr_el2_eff(env) : 0; - uint64_t ret = 0; - - if (hcr_el2 & HCR_IMO) { - if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) { - ret |= CPSR_I; - } - } else { - if (cs->interrupt_request & CPU_INTERRUPT_HARD) { - ret |= CPSR_I; - } - } - - if (hcr_el2 & HCR_FMO) { - if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) { - ret |= CPSR_F; - } - } else { - if (cs->interrupt_request & CPU_INTERRUPT_FIQ) { - ret |= CPSR_F; - } - } - - /* External aborts are not possible in QEMU so A bit is always clear */ - return ret; -} - -static CPAccessResult access_aa64_tid1(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID1)) { - return CP_ACCESS_TRAP_EL2; - } - - return CP_ACCESS_OK; -} - -static CPAccessResult access_aa32_tid1(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_feature(env, ARM_FEATURE_V8)) { - return access_aa64_tid1(env, ri, isread); - } - - return CP_ACCESS_OK; -} - -static const ARMCPRegInfo v7_cp_reginfo[] = { - /* the old v6 WFI, UNPREDICTABLE in v7 but we choose to NOP */ - { .name = "NOP", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4, - .access = PL1_W, .type = ARM_CP_NOP }, - /* Performance monitors are implementation defined in v7, - * but with an ARM recommended set of registers, which we - * follow. - * - * Performance registers fall into three categories: - * (a) always UNDEF in PL0, RW in PL1 (PMINTENSET, PMINTENCLR) - * (b) RO in PL0 (ie UNDEF on write), RW in PL1 (PMUSERENR) - * (c) UNDEF in PL0 if PMUSERENR.EN==0, otherwise accessible (all others) - * For the cases controlled by PMUSERENR we must set .access to PL0_RW - * or PL0_RO as appropriate and then check PMUSERENR in the helper fn. 
- */ - { .name = "PMCNTENSET", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 1, - .access = PL0_RW, .type = ARM_CP_ALIAS, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten), - .writefn = pmcntenset_write, - .accessfn = pmreg_access, - .raw_writefn = raw_write }, - { .name = "PMCNTENSET_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 1, - .access = PL0_RW, .accessfn = pmreg_access, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten), .resetvalue = 0, - .writefn = pmcntenset_write, .raw_writefn = raw_write }, - { .name = "PMCNTENCLR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 2, - .access = PL0_RW, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten), - .accessfn = pmreg_access, - .writefn = pmcntenclr_write, - .type = ARM_CP_ALIAS }, - { .name = "PMCNTENCLR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 2, - .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten), - .writefn = pmcntenclr_write }, - { .name = "PMOVSR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 3, - .access = PL0_RW, .type = ARM_CP_IO, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr), - .accessfn = pmreg_access, - .writefn = pmovsr_write, - .raw_writefn = raw_write }, - { .name = "PMOVSCLR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 3, - .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_ALIAS | ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr), - .writefn = pmovsr_write, - .raw_writefn = raw_write }, - { .name = "PMSWINC", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 4, - .access = PL0_W, .accessfn = pmreg_access_swinc, - .type = ARM_CP_NO_RAW | ARM_CP_IO, - .writefn = pmswinc_write }, - { .name = "PMSWINC_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 4, - .access = PL0_W, .accessfn = pmreg_access_swinc, - .type = ARM_CP_NO_RAW | ARM_CP_IO, - .writefn = pmswinc_write }, - { .name = "PMSELR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 5, - .access = PL0_RW, .type = ARM_CP_ALIAS, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmselr), - .accessfn = pmreg_access_selr, .writefn = pmselr_write, - .raw_writefn = raw_write}, - { .name = "PMSELR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 5, - .access = PL0_RW, .accessfn = pmreg_access_selr, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pmselr), - .writefn = pmselr_write, .raw_writefn = raw_write, }, - { .name = "PMCCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 0, - .access = PL0_RW, .resetvalue = 0, .type = ARM_CP_ALIAS | ARM_CP_IO, - .readfn = pmccntr_read, .writefn = pmccntr_write32, - .accessfn = pmreg_access_ccntr }, - { .name = "PMCCNTR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 0, - .access = PL0_RW, .accessfn = pmreg_access_ccntr, - .type = ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.c15_ccnt), - .readfn = pmccntr_read, .writefn = pmccntr_write, - .raw_readfn = raw_read, .raw_writefn = raw_write, }, - { .name = "PMCCFILTR", .cp = 15, .opc1 = 0, .crn = 14, .crm = 15, .opc2 = 7, - .writefn = pmccfiltr_write_a32, .readfn = pmccfiltr_read_a32, - .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_ALIAS | ARM_CP_IO, - .resetvalue = 0, }, - { .name = "PMCCFILTR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 15, 
.opc2 = 7, - .writefn = pmccfiltr_write, .raw_writefn = raw_write, - .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.pmccfiltr_el0), - .resetvalue = 0, }, - { .name = "PMXEVTYPER", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 1, - .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = pmreg_access, - .writefn = pmxevtyper_write, .readfn = pmxevtyper_read }, - { .name = "PMXEVTYPER_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 1, - .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = pmreg_access, - .writefn = pmxevtyper_write, .readfn = pmxevtyper_read }, - { .name = "PMXEVCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 2, - .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = pmreg_access_xevcntr, - .writefn = pmxevcntr_write, .readfn = pmxevcntr_read }, - { .name = "PMXEVCNTR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 2, - .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = pmreg_access_xevcntr, - .writefn = pmxevcntr_write, .readfn = pmxevcntr_read }, - { .name = "PMUSERENR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 0, - .access = PL0_R | PL1_RW, .accessfn = access_tpm, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmuserenr), - .resetvalue = 0, - .writefn = pmuserenr_write, .raw_writefn = raw_write }, - { .name = "PMUSERENR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 14, .opc2 = 0, - .access = PL0_R | PL1_RW, .accessfn = access_tpm, .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pmuserenr), - .resetvalue = 0, - .writefn = pmuserenr_write, .raw_writefn = raw_write }, - { .name = "PMINTENSET", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tpm, - .type = ARM_CP_ALIAS | ARM_CP_IO, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pminten), - .resetvalue = 0, - .writefn = pmintenset_write, .raw_writefn = raw_write }, - { .name = "PMINTENSET_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tpm, - .type = ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pminten), - .writefn = pmintenset_write, .raw_writefn = raw_write, - .resetvalue = 0x0 }, - { .name = "PMINTENCLR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 2, - .access = PL1_RW, .accessfn = access_tpm, - .type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pminten), - .writefn = pmintenclr_write, }, - { .name = "PMINTENCLR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 2, - .access = PL1_RW, .accessfn = access_tpm, - .type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pminten), - .writefn = pmintenclr_write }, - { .name = "CCSIDR", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 0, - .access = PL1_R, - .accessfn = access_aa64_tid2, - .readfn = ccsidr_read, .type = ARM_CP_NO_RAW }, - { .name = "CSSELR", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 2, .opc2 = 0, - .access = PL1_RW, - .accessfn = access_aa64_tid2, - .writefn = csselr_write, .resetvalue = 0, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s), - offsetof(CPUARMState, cp15.csselr_ns) } }, - /* Auxiliary ID register: this actually has an IMPDEF value but for now - * 
just RAZ for all cores: - */ - { .name = "AIDR", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid1, - .resetvalue = 0 }, - /* Auxiliary fault status registers: these also are IMPDEF, and we - * choose to RAZ/WI for all cores. - */ - { .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "AFSR1_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_CONST, .resetvalue = 0 }, - /* MAIR can just read-as-written because we don't implement caches - * and so don't need to care about memory attributes. - */ - { .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .fieldoffset = offsetof(CPUARMState, cp15.mair_el[1]), - .resetvalue = 0 }, - { .name = "MAIR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 10, .crm = 2, .opc2 = 0, - .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[3]), - .resetvalue = 0 }, - /* For non-long-descriptor page tables these are PRRR and NMRR; - * regardless they still act as reads-as-written for QEMU. - */ - /* MAIR0/1 are defined separately from their 64-bit counterpart which - * allows them to assign the correct fieldoffset based on the endianness - * handled in the field definitions. - */ - { .name = "MAIR0", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.mair0_s), - offsetof(CPUARMState, cp15.mair0_ns) }, - .resetfn = arm_cp_reset_ignore }, - { .name = "MAIR1", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.mair1_s), - offsetof(CPUARMState, cp15.mair1_ns) }, - .resetfn = arm_cp_reset_ignore }, - { .name = "ISR_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL1_R, .readfn = isr_read }, - /* 32 bit ITLB invalidates */ - { .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiall_write }, - { .name = "ITLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimva_write }, - { .name = "ITLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 2, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiasid_write }, - /* 32 bit DTLB invalidates */ - { .name = "DTLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiall_write }, - { .name = "DTLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimva_write }, - { .name = "DTLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 2, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiasid_write }, - /* 32 bit TLB invalidates */ - { .name = "TLBIALL", .cp = 15, 
.opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiall_write }, - { .name = "TLBIMVA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimva_write }, - { .name = "TLBIASID", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiasid_write }, - { .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimvaa_write }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo v7mp_cp_reginfo[] = { - /* 32 bit TLB invalidates, Inner Shareable */ - { .name = "TLBIALLIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiall_is_write }, - { .name = "TLBIMVAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimva_is_write }, - { .name = "TLBIASIDIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbiasid_is_write }, - { .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimvaa_is_write }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo pmovsset_cp_reginfo[] = { - /* PMOVSSET is not implemented in v7 before v7ve */ - { .name = "PMOVSSET", .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 3, - .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_ALIAS | ARM_CP_IO, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr), - .writefn = pmovsset_write, - .raw_writefn = raw_write }, - { .name = "PMOVSSET_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 14, .opc2 = 3, - .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_ALIAS | ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr), - .writefn = pmovsset_write, - .raw_writefn = raw_write }, - REGINFO_SENTINEL -}; - -static void teecr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - value &= 1; - env->teecr = value; -} - -static CPAccessResult teehbr_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 0 && (env->teecr & 1)) { - return CP_ACCESS_TRAP; - } - return CP_ACCESS_OK; -} - -static const ARMCPRegInfo t2ee_cp_reginfo[] = { - { .name = "TEECR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 6, .opc2 = 0, - .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, teecr), - .resetvalue = 0, - .writefn = teecr_write }, - { .name = "TEEHBR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 6, .opc2 = 0, - .access = PL0_RW, .fieldoffset = offsetof(CPUARMState, teehbr), - .accessfn = teehbr_access, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo v6k_cp_reginfo[] = { - { .name = "TPIDR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 2, .crn = 13, .crm = 0, - .access = PL0_RW, - .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[0]), .resetvalue = 0 }, - { .name = "TPIDRURW", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 2, - .access = PL0_RW, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrurw_s), - offsetoflow32(CPUARMState, cp15.tpidrurw_ns) }, - .resetfn = arm_cp_reset_ignore }, - { .name = 
"TPIDRRO_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 3, .crn = 13, .crm = 0, - .access = PL0_R|PL1_W, - .fieldoffset = offsetof(CPUARMState, cp15.tpidrro_el[0]), - .resetvalue = 0}, - { .name = "TPIDRURO", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 3, - .access = PL0_R|PL1_W, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidruro_s), - offsetoflow32(CPUARMState, cp15.tpidruro_ns) }, - .resetfn = arm_cp_reset_ignore }, - { .name = "TPIDR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .opc2 = 4, .crn = 13, .crm = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[1]), .resetvalue = 0 }, - { .name = "TPIDRPRW", .opc1 = 0, .cp = 15, .crn = 13, .crm = 0, .opc2 = 4, - .access = PL1_RW, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrprw_s), - offsetoflow32(CPUARMState, cp15.tpidrprw_ns) }, - .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -#ifndef CONFIG_USER_ONLY - -static CPAccessResult gt_cntfrq_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - /* - * CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero. - * Writable only at the highest implemented exception level. - */ - int el = arm_current_el(env); - uint64_t hcr; - uint32_t cntkctl; - - switch (el) { - case 0: - hcr = arm_hcr_el2_eff(env); - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - cntkctl = env->cp15.cnthctl_el2; - } else { - cntkctl = env->cp15.c14_cntkctl; - } - if (!extract32(cntkctl, 0, 2)) { - return CP_ACCESS_TRAP; - } - break; - case 1: - if (!isread && ri->state == ARM_CP_STATE_AA32 && - arm_is_secure_below_el3(env)) { - /* Accesses from 32-bit Secure EL1 UNDEF (*not* trap to EL3!) */ - return CP_ACCESS_TRAP_UNCATEGORIZED; - } - break; - case 2: - case 3: - break; - } - - if (!isread && el < arm_highest_el(env)) { - return CP_ACCESS_TRAP_UNCATEGORIZED; - } - - return CP_ACCESS_OK; -} - -static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx, - bool isread) -{ - unsigned int cur_el = arm_current_el(env); - bool has_el2 = arm_is_el2_enabled(env); - uint64_t hcr = arm_hcr_el2_eff(env); - - switch (cur_el) { - case 0: - /* If HCR_EL2. == '11': check CNTHCTL_EL2.EL0[PV]CTEN. */ - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - return (extract32(env->cp15.cnthctl_el2, timeridx, 1) - ? CP_ACCESS_OK : CP_ACCESS_TRAP_EL2); - } - - /* CNT[PV]CT: not visible from PL0 if EL0[PV]CTEN is zero */ - if (!extract32(env->cp15.c14_cntkctl, timeridx, 1)) { - return CP_ACCESS_TRAP; - } - - /* If HCR_EL2. == '10': check CNTHCTL_EL2.EL1PCTEN. */ - if (hcr & HCR_E2H) { - if (timeridx == GTIMER_PHYS && - !extract32(env->cp15.cnthctl_el2, 10, 1)) { - return CP_ACCESS_TRAP_EL2; - } - } else { - /* If HCR_EL2. == 0: check CNTHCTL_EL2.EL1PCEN. */ - if (has_el2 && timeridx == GTIMER_PHYS && - !extract32(env->cp15.cnthctl_el2, 1, 1)) { - return CP_ACCESS_TRAP_EL2; - } - } - break; - - case 1: - /* Check CNTHCTL_EL2.EL1PCTEN, which changes location based on E2H. */ - if (has_el2 && timeridx == GTIMER_PHYS && - (hcr & HCR_E2H - ? 
!extract32(env->cp15.cnthctl_el2, 10, 1)
-             : !extract32(env->cp15.cnthctl_el2, 0, 1))) {
-            return CP_ACCESS_TRAP_EL2;
-        }
-        break;
-    }
-    return CP_ACCESS_OK;
-}
-
-static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx,
-                                      bool isread)
-{
-    unsigned int cur_el = arm_current_el(env);
-    bool has_el2 = arm_is_el2_enabled(env);
-    uint64_t hcr = arm_hcr_el2_eff(env);
-
-    switch (cur_el) {
-    case 0:
-        if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
-            /* If HCR_EL2.<E2H,TGE> == '11': check CNTHCTL_EL2.EL0[PV]TEN. */
-            return (extract32(env->cp15.cnthctl_el2, 9 - timeridx, 1)
-                    ? CP_ACCESS_OK : CP_ACCESS_TRAP_EL2);
-        }
-
-        /*
-         * CNT[PV]_CVAL, CNT[PV]_CTL, CNT[PV]_TVAL: not visible from
-         * EL0 if EL0[PV]TEN is zero.
-         */
-        if (!extract32(env->cp15.c14_cntkctl, 9 - timeridx, 1)) {
-            return CP_ACCESS_TRAP;
-        }
-        /* fall through */
-
-    case 1:
-        if (has_el2 && timeridx == GTIMER_PHYS) {
-            if (hcr & HCR_E2H) {
-                /* If HCR_EL2.<E2H,TGE> == '10': check CNTHCTL_EL2.EL1PTEN. */
-                if (!extract32(env->cp15.cnthctl_el2, 11, 1)) {
-                    return CP_ACCESS_TRAP_EL2;
-                }
-            } else {
-                /* If HCR_EL2.<E2H,TGE> == 0: check CNTHCTL_EL2.EL1PCEN. */
-                if (!extract32(env->cp15.cnthctl_el2, 1, 1)) {
-                    return CP_ACCESS_TRAP_EL2;
-                }
-            }
-        }
-        break;
-    }
-    return CP_ACCESS_OK;
-}
-
-static CPAccessResult gt_pct_access(CPUARMState *env,
-                                    const ARMCPRegInfo *ri,
-                                    bool isread)
-{
-    return gt_counter_access(env, GTIMER_PHYS, isread);
-}
-
-static CPAccessResult gt_vct_access(CPUARMState *env,
-                                    const ARMCPRegInfo *ri,
-                                    bool isread)
-{
-    return gt_counter_access(env, GTIMER_VIRT, isread);
-}
-
-static CPAccessResult gt_ptimer_access(CPUARMState *env, const ARMCPRegInfo *ri,
-                                       bool isread)
-{
-    return gt_timer_access(env, GTIMER_PHYS, isread);
-}
-
-static CPAccessResult gt_vtimer_access(CPUARMState *env, const ARMCPRegInfo *ri,
-                                       bool isread)
-{
-    return gt_timer_access(env, GTIMER_VIRT, isread);
-}
-
-static CPAccessResult gt_stimer_access(CPUARMState *env,
-                                       const ARMCPRegInfo *ri,
-                                       bool isread)
-{
-    /*
-     * The AArch64 register view of the secure physical timer is
-     * always accessible from EL3, and configurably accessible from
-     * Secure EL1.
-     */
-    switch (arm_current_el(env)) {
-    case 1:
-        if (!arm_is_secure(env)) {
-            return CP_ACCESS_TRAP;
-        }
-        if (!(env->cp15.scr_el3 & SCR_ST)) {
-            return CP_ACCESS_TRAP_EL3;
-        }
-        return CP_ACCESS_OK;
-    case 0:
-    case 2:
-        return CP_ACCESS_TRAP;
-    case 3:
-        return CP_ACCESS_OK;
-    default:
-        g_assert_not_reached();
-    }
-}
-
-static uint64_t gt_get_countervalue(CPUARMState *env)
-{
-    ARMCPU *cpu = env_archcpu(env);
-
-    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / gt_cntfrq_period_ns(cpu);
-}
-
-static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
-{
-    ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx];
-
-    if (gt->ctl & 1) {
-        /*
-         * Timer enabled: calculate and set current ISTATUS, irq, and
-         * reset timer to when ISTATUS next has to change
-         */
-        uint64_t offset = timeridx == GTIMER_VIRT ?
- cpu->env.cp15.cntvoff_el2 : 0; - uint64_t count = gt_get_countervalue(&cpu->env); - /* Note that this must be unsigned 64 bit arithmetic: */ - int istatus = count - offset >= gt->cval; - uint64_t nexttick; - int irqstate; - - gt->ctl = deposit32(gt->ctl, 2, 1, istatus); - - irqstate = (istatus && !(gt->ctl & 2)); - qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate); - - if (istatus) { - /* Next transition is when count rolls back over to zero */ - nexttick = UINT64_MAX; - } else { - /* Next transition is when we hit cval */ - nexttick = gt->cval + offset; - } - /* - * Note that the desired next expiry time might be beyond the - * signed-64-bit range of a QEMUTimer -- in this case we just - * set the timer for as far in the future as possible. When the - * timer expires we will reset the timer for any remaining period. - */ - if (nexttick > INT64_MAX / gt_cntfrq_period_ns(cpu)) { - timer_mod_ns(cpu->gt_timer[timeridx], INT64_MAX); - } else { - timer_mod(cpu->gt_timer[timeridx], nexttick); - } - trace_arm_gt_recalc(timeridx, irqstate, nexttick); - } else { - /* Timer disabled: ISTATUS and timer output always clear */ - gt->ctl &= ~4; - qemu_set_irq(cpu->gt_timer_outputs[timeridx], 0); - timer_del(cpu->gt_timer[timeridx]); - trace_arm_gt_recalc_disabled(timeridx); - } -} - -static void gt_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri, - int timeridx) -{ - ARMCPU *cpu = env_archcpu(env); - - timer_del(cpu->gt_timer[timeridx]); -} - -static uint64_t gt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return gt_get_countervalue(env); -} - -static uint64_t gt_virt_cnt_offset(CPUARMState *env) -{ - uint64_t hcr; - - switch (arm_current_el(env)) { - case 2: - hcr = arm_hcr_el2_eff(env); - if (hcr & HCR_E2H) { - return 0; - } - break; - case 0: - hcr = arm_hcr_el2_eff(env); - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - return 0; - } - break; - } - - return env->cp15.cntvoff_el2; -} - -static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return gt_get_countervalue(env) - gt_virt_cnt_offset(env); -} - -static void gt_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - int timeridx, - uint64_t value) -{ - trace_arm_gt_cval_write(timeridx, value); - env->cp15.c14_timer[timeridx].cval = value; - gt_recalc_timer(env_archcpu(env), timeridx); -} - -static uint64_t gt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri, - int timeridx) -{ - uint64_t offset = 0; - - switch (timeridx) { - case GTIMER_VIRT: - case GTIMER_HYPVIRT: - offset = gt_virt_cnt_offset(env); - break; - } - - return (uint32_t)(env->cp15.c14_timer[timeridx].cval - - (gt_get_countervalue(env) - offset)); -} - -static void gt_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - int timeridx, - uint64_t value) -{ - uint64_t offset = 0; - - switch (timeridx) { - case GTIMER_VIRT: - case GTIMER_HYPVIRT: - offset = gt_virt_cnt_offset(env); - break; - } - - trace_arm_gt_tval_write(timeridx, value); - env->cp15.c14_timer[timeridx].cval = gt_get_countervalue(env) - offset + - sextract64(value, 0, 32); - gt_recalc_timer(env_archcpu(env), timeridx); -} - -static void gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - int timeridx, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - uint32_t oldval = env->cp15.c14_timer[timeridx].ctl; - - trace_arm_gt_ctl_write(timeridx, value); - env->cp15.c14_timer[timeridx].ctl = deposit64(oldval, 0, 2, value); - if ((oldval ^ value) & 1) { - /* Enable toggled */ - gt_recalc_timer(cpu, timeridx); - } else if ((oldval ^ value) & 2) { - /* 
- * IMASK toggled: don't need to recalculate, - * just set the interrupt line based on ISTATUS - */ - int irqstate = (oldval & 4) && !(value & 2); - - trace_arm_gt_imask_toggle(timeridx, irqstate); - qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate); - } -} - -static void gt_phys_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) -{ - gt_timer_reset(env, ri, GTIMER_PHYS); -} - -static void gt_phys_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_cval_write(env, ri, GTIMER_PHYS, value); -} - -static uint64_t gt_phys_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return gt_tval_read(env, ri, GTIMER_PHYS); -} - -static void gt_phys_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_tval_write(env, ri, GTIMER_PHYS, value); -} - -static void gt_phys_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_ctl_write(env, ri, GTIMER_PHYS, value); -} - -static int gt_phys_redir_timeridx(CPUARMState *env) -{ - switch (arm_mmu_idx(env)) { - case ARMMMUIdx_E20_0: - case ARMMMUIdx_E20_2: - case ARMMMUIdx_E20_2_PAN: - case ARMMMUIdx_SE20_0: - case ARMMMUIdx_SE20_2: - case ARMMMUIdx_SE20_2_PAN: - return GTIMER_HYP; - default: - return GTIMER_PHYS; - } -} - -static int gt_virt_redir_timeridx(CPUARMState *env) -{ - switch (arm_mmu_idx(env)) { - case ARMMMUIdx_E20_0: - case ARMMMUIdx_E20_2: - case ARMMMUIdx_E20_2_PAN: - case ARMMMUIdx_SE20_0: - case ARMMMUIdx_SE20_2: - case ARMMMUIdx_SE20_2_PAN: - return GTIMER_HYPVIRT; - default: - return GTIMER_VIRT; - } -} - -static uint64_t gt_phys_redir_cval_read(CPUARMState *env, - const ARMCPRegInfo *ri) -{ - int timeridx = gt_phys_redir_timeridx(env); - return env->cp15.c14_timer[timeridx].cval; -} - -static void gt_phys_redir_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - int timeridx = gt_phys_redir_timeridx(env); - gt_cval_write(env, ri, timeridx, value); -} - -static uint64_t gt_phys_redir_tval_read(CPUARMState *env, - const ARMCPRegInfo *ri) -{ - int timeridx = gt_phys_redir_timeridx(env); - return gt_tval_read(env, ri, timeridx); -} - -static void gt_phys_redir_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - int timeridx = gt_phys_redir_timeridx(env); - gt_tval_write(env, ri, timeridx, value); -} - -static uint64_t gt_phys_redir_ctl_read(CPUARMState *env, - const ARMCPRegInfo *ri) -{ - int timeridx = gt_phys_redir_timeridx(env); - return env->cp15.c14_timer[timeridx].ctl; -} - -static void gt_phys_redir_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - int timeridx = gt_phys_redir_timeridx(env); - gt_ctl_write(env, ri, timeridx, value); -} - -static void gt_virt_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) -{ - gt_timer_reset(env, ri, GTIMER_VIRT); -} - -static void gt_virt_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_cval_write(env, ri, GTIMER_VIRT, value); -} - -static uint64_t gt_virt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return gt_tval_read(env, ri, GTIMER_VIRT); -} - -static void gt_virt_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_tval_write(env, ri, GTIMER_VIRT, value); -} - -static void gt_virt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_ctl_write(env, ri, GTIMER_VIRT, value); -} - -static void gt_cntvoff_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - - trace_arm_gt_cntvoff_write(value); - 
raw_write(env, ri, value); - gt_recalc_timer(cpu, GTIMER_VIRT); -} - -static uint64_t gt_virt_redir_cval_read(CPUARMState *env, - const ARMCPRegInfo *ri) -{ - int timeridx = gt_virt_redir_timeridx(env); - return env->cp15.c14_timer[timeridx].cval; -} - -static void gt_virt_redir_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - int timeridx = gt_virt_redir_timeridx(env); - gt_cval_write(env, ri, timeridx, value); -} - -static uint64_t gt_virt_redir_tval_read(CPUARMState *env, - const ARMCPRegInfo *ri) -{ - int timeridx = gt_virt_redir_timeridx(env); - return gt_tval_read(env, ri, timeridx); -} - -static void gt_virt_redir_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - int timeridx = gt_virt_redir_timeridx(env); - gt_tval_write(env, ri, timeridx, value); -} - -static uint64_t gt_virt_redir_ctl_read(CPUARMState *env, - const ARMCPRegInfo *ri) -{ - int timeridx = gt_virt_redir_timeridx(env); - return env->cp15.c14_timer[timeridx].ctl; -} - -static void gt_virt_redir_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - int timeridx = gt_virt_redir_timeridx(env); - gt_ctl_write(env, ri, timeridx, value); -} - -static void gt_hyp_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) -{ - gt_timer_reset(env, ri, GTIMER_HYP); -} - -static void gt_hyp_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_cval_write(env, ri, GTIMER_HYP, value); -} - -static uint64_t gt_hyp_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return gt_tval_read(env, ri, GTIMER_HYP); -} - -static void gt_hyp_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_tval_write(env, ri, GTIMER_HYP, value); -} - -static void gt_hyp_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_ctl_write(env, ri, GTIMER_HYP, value); -} - -static void gt_sec_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) -{ - gt_timer_reset(env, ri, GTIMER_SEC); -} - -static void gt_sec_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_cval_write(env, ri, GTIMER_SEC, value); -} - -static uint64_t gt_sec_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return gt_tval_read(env, ri, GTIMER_SEC); -} - -static void gt_sec_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_tval_write(env, ri, GTIMER_SEC, value); -} - -static void gt_sec_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_ctl_write(env, ri, GTIMER_SEC, value); -} - -static void gt_hv_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri) -{ - gt_timer_reset(env, ri, GTIMER_HYPVIRT); -} - -static void gt_hv_cval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_cval_write(env, ri, GTIMER_HYPVIRT, value); -} - -static uint64_t gt_hv_tval_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return gt_tval_read(env, ri, GTIMER_HYPVIRT); -} - -static void gt_hv_tval_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_tval_write(env, ri, GTIMER_HYPVIRT, value); -} - -static void gt_hv_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - gt_ctl_write(env, ri, GTIMER_HYPVIRT, value); -} - -void arm_gt_ptimer_cb(void *opaque) -{ - ARMCPU *cpu = opaque; - - gt_recalc_timer(cpu, GTIMER_PHYS); -} - -void arm_gt_vtimer_cb(void *opaque) -{ - ARMCPU *cpu = opaque; - - gt_recalc_timer(cpu, GTIMER_VIRT); -} - -void arm_gt_htimer_cb(void *opaque) -{ - ARMCPU *cpu = opaque; - - 
gt_recalc_timer(cpu, GTIMER_HYP); -} - -void arm_gt_stimer_cb(void *opaque) -{ - ARMCPU *cpu = opaque; - - gt_recalc_timer(cpu, GTIMER_SEC); -} - -void arm_gt_hvtimer_cb(void *opaque) -{ - ARMCPU *cpu = opaque; - - gt_recalc_timer(cpu, GTIMER_HYPVIRT); -} - -static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque) -{ - ARMCPU *cpu = env_archcpu(env); - - cpu->env.cp15.c14_cntfrq = cpu->gt_cntfrq_hz; -} - -static const ARMCPRegInfo generic_timer_cp_reginfo[] = { - /* - * Note that CNTFRQ is purely reads-as-written for the benefit - * of software; writing it doesn't actually change the timer frequency. - * Our reset value matches the fixed frequency we implement the timer at. - */ - { .name = "CNTFRQ", .cp = 15, .crn = 14, .crm = 0, .opc1 = 0, .opc2 = 0, - .type = ARM_CP_ALIAS, - .access = PL1_RW | PL0_R, .accessfn = gt_cntfrq_access, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c14_cntfrq), - }, - { .name = "CNTFRQ_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 0, - .access = PL1_RW | PL0_R, .accessfn = gt_cntfrq_access, - .fieldoffset = offsetof(CPUARMState, cp15.c14_cntfrq), - .resetfn = arm_gt_cntfrq_reset, - }, - /* overall control: mostly access permissions */ - { .name = "CNTKCTL", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 14, .crm = 1, .opc2 = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c14_cntkctl), - .resetvalue = 0, - }, - /* per-timer control */ - { .name = "CNTP_CTL", .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 1, - .secure = ARM_CP_SECSTATE_NS, - .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL0_RW, - .accessfn = gt_ptimer_access, - .fieldoffset = offsetoflow32(CPUARMState, - cp15.c14_timer[GTIMER_PHYS].ctl), - .readfn = gt_phys_redir_ctl_read, .raw_readfn = raw_read, - .writefn = gt_phys_redir_ctl_write, .raw_writefn = raw_write, - }, - { .name = "CNTP_CTL_S", - .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 1, - .secure = ARM_CP_SECSTATE_S, - .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL0_RW, - .accessfn = gt_ptimer_access, - .fieldoffset = offsetoflow32(CPUARMState, - cp15.c14_timer[GTIMER_SEC].ctl), - .writefn = gt_sec_ctl_write, .raw_writefn = raw_write, - }, - { .name = "CNTP_CTL_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 2, .opc2 = 1, - .type = ARM_CP_IO, .access = PL0_RW, - .accessfn = gt_ptimer_access, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].ctl), - .resetvalue = 0, - .readfn = gt_phys_redir_ctl_read, .raw_readfn = raw_read, - .writefn = gt_phys_redir_ctl_write, .raw_writefn = raw_write, - }, - { .name = "CNTV_CTL", .cp = 15, .crn = 14, .crm = 3, .opc1 = 0, .opc2 = 1, - .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL0_RW, - .accessfn = gt_vtimer_access, - .fieldoffset = offsetoflow32(CPUARMState, - cp15.c14_timer[GTIMER_VIRT].ctl), - .readfn = gt_virt_redir_ctl_read, .raw_readfn = raw_read, - .writefn = gt_virt_redir_ctl_write, .raw_writefn = raw_write, - }, - { .name = "CNTV_CTL_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 1, - .type = ARM_CP_IO, .access = PL0_RW, - .accessfn = gt_vtimer_access, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].ctl), - .resetvalue = 0, - .readfn = gt_virt_redir_ctl_read, .raw_readfn = raw_read, - .writefn = gt_virt_redir_ctl_write, .raw_writefn = raw_write, - }, - /* TimerValue views: a 32 bit downcounting view of the underlying state */ - { .name = "CNTP_TVAL", .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, 
.opc2 = 0, - .secure = ARM_CP_SECSTATE_NS, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, - .accessfn = gt_ptimer_access, - .readfn = gt_phys_redir_tval_read, .writefn = gt_phys_redir_tval_write, - }, - { .name = "CNTP_TVAL_S", - .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 0, - .secure = ARM_CP_SECSTATE_S, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, - .accessfn = gt_ptimer_access, - .readfn = gt_sec_tval_read, .writefn = gt_sec_tval_write, - }, - { .name = "CNTP_TVAL_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 2, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, - .accessfn = gt_ptimer_access, .resetfn = gt_phys_timer_reset, - .readfn = gt_phys_redir_tval_read, .writefn = gt_phys_redir_tval_write, - }, - { .name = "CNTV_TVAL", .cp = 15, .crn = 14, .crm = 3, .opc1 = 0, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, - .accessfn = gt_vtimer_access, - .readfn = gt_virt_redir_tval_read, .writefn = gt_virt_redir_tval_write, - }, - { .name = "CNTV_TVAL_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW, - .accessfn = gt_vtimer_access, .resetfn = gt_virt_timer_reset, - .readfn = gt_virt_redir_tval_read, .writefn = gt_virt_redir_tval_write, - }, - /* The counter itself */ - { .name = "CNTPCT", .cp = 15, .crm = 14, .opc1 = 0, - .access = PL0_R, .type = ARM_CP_64BIT | ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = gt_pct_access, - .readfn = gt_cnt_read, .resetfn = arm_cp_reset_ignore, - }, - { .name = "CNTPCT_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 1, - .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = gt_pct_access, .readfn = gt_cnt_read, - }, - { .name = "CNTVCT", .cp = 15, .crm = 14, .opc1 = 1, - .access = PL0_R, .type = ARM_CP_64BIT | ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = gt_vct_access, - .readfn = gt_virt_cnt_read, .resetfn = arm_cp_reset_ignore, - }, - { .name = "CNTVCT_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 2, - .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO, - .accessfn = gt_vct_access, .readfn = gt_virt_cnt_read, - }, - /* Comparison value, indicating when the timer goes off */ - { .name = "CNTP_CVAL", .cp = 15, .crm = 14, .opc1 = 2, - .secure = ARM_CP_SECSTATE_NS, - .access = PL0_RW, - .type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval), - .accessfn = gt_ptimer_access, - .readfn = gt_phys_redir_cval_read, .raw_readfn = raw_read, - .writefn = gt_phys_redir_cval_write, .raw_writefn = raw_write, - }, - { .name = "CNTP_CVAL_S", .cp = 15, .crm = 14, .opc1 = 2, - .secure = ARM_CP_SECSTATE_S, - .access = PL0_RW, - .type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval), - .accessfn = gt_ptimer_access, - .writefn = gt_sec_cval_write, .raw_writefn = raw_write, - }, - { .name = "CNTP_CVAL_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 2, .opc2 = 2, - .access = PL0_RW, - .type = ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval), - .resetvalue = 0, .accessfn = gt_ptimer_access, - .readfn = gt_phys_redir_cval_read, .raw_readfn = raw_read, - .writefn = gt_phys_redir_cval_write, .raw_writefn = raw_write, - }, - { .name = "CNTV_CVAL", .cp = 15, .crm = 14, .opc1 = 3, - .access = PL0_RW, - .type = ARM_CP_64BIT | ARM_CP_IO 
| ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval), - .accessfn = gt_vtimer_access, - .readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read, - .writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write, - }, - { .name = "CNTV_CVAL_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 2, - .access = PL0_RW, - .type = ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval), - .resetvalue = 0, .accessfn = gt_vtimer_access, - .readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read, - .writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write, - }, - /* - * Secure timer -- this is actually restricted to only EL3 - * and configurably Secure-EL1 via the accessfn. - */ - { .name = "CNTPS_TVAL_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 7, .crn = 14, .crm = 2, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL1_RW, - .accessfn = gt_stimer_access, - .readfn = gt_sec_tval_read, - .writefn = gt_sec_tval_write, - .resetfn = gt_sec_timer_reset, - }, - { .name = "CNTPS_CTL_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 7, .crn = 14, .crm = 2, .opc2 = 1, - .type = ARM_CP_IO, .access = PL1_RW, - .accessfn = gt_stimer_access, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].ctl), - .resetvalue = 0, - .writefn = gt_sec_ctl_write, .raw_writefn = raw_write, - }, - { .name = "CNTPS_CVAL_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 7, .crn = 14, .crm = 2, .opc2 = 2, - .type = ARM_CP_IO, .access = PL1_RW, - .accessfn = gt_stimer_access, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval), - .writefn = gt_sec_cval_write, .raw_writefn = raw_write, - }, - REGINFO_SENTINEL -}; - -static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (!(arm_hcr_el2_eff(env) & HCR_E2H)) { - return CP_ACCESS_TRAP; - } - return CP_ACCESS_OK; -} - -#else - -/* - * In user-mode most of the generic timer registers are inaccessible - * however modern kernels (4.12+) allow access to cntvct_el0 - */ - -static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - ARMCPU *cpu = env_archcpu(env); - - /* - * Currently we have no support for QEMUTimer in linux-user so we - * can't call gt_get_countervalue(env), instead we directly - * call the lower level functions. 
-     */
-    return cpu_get_clock() / gt_cntfrq_period_ns(cpu);
-}
-
-static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
-    { .name = "CNTFRQ_EL0", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 0,
-      .type = ARM_CP_CONST, .access = PL0_R /* no PL1_RW in linux-user */,
-      .fieldoffset = offsetof(CPUARMState, cp15.c14_cntfrq),
-      .resetvalue = NANOSECONDS_PER_SECOND / GTIMER_SCALE,
-    },
-    { .name = "CNTVCT_EL0", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 0, .opc2 = 2,
-      .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
-      .readfn = gt_virt_cnt_read,
-    },
-    REGINFO_SENTINEL
-};
-
-#endif
-
-static void par_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
-{
-    if (arm_feature(env, ARM_FEATURE_LPAE)) {
-        raw_write(env, ri, value);
-    } else if (arm_feature(env, ARM_FEATURE_V7)) {
-        raw_write(env, ri, value & 0xfffff6ff);
-    } else {
-        raw_write(env, ri, value & 0xfffff1ff);
-    }
-}
-
-#ifndef CONFIG_USER_ONLY
-/* get_phys_addr() isn't present for user-mode-only targets */
-
-static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
-                                 bool isread)
-{
-    if (ri->opc2 & 4) {
-        /*
-         * The ATS12NSO* operations must trap to EL3 or EL2 if executed in
-         * Secure EL1 (which can only happen if EL3 is AArch64).
-         * They are simply UNDEF if executed from NS EL1.
-         * They function normally from EL2 or EL3.
-         */
-        if (arm_current_el(env) == 1) {
-            if (arm_is_secure_below_el3(env)) {
-                if (env->cp15.scr_el3 & SCR_EEL2) {
-                    return CP_ACCESS_TRAP_UNCATEGORIZED_EL2;
-                }
-                return CP_ACCESS_TRAP_UNCATEGORIZED_EL3;
-            }
-            return CP_ACCESS_TRAP_UNCATEGORIZED;
-        }
-    }
-    return CP_ACCESS_OK;
-}
-
-#ifdef CONFIG_TCG
-static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx)
-{
-    hwaddr phys_addr;
-    target_ulong page_size;
-    int prot;
-    bool ret;
-    uint64_t par64;
-    bool format64 = false;
-    MemTxAttrs attrs = {};
-    ARMMMUFaultInfo fi = {};
-    ARMCacheAttrs cacheattrs = {};
-
-    ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
-                        &prot, &page_size, &fi, &cacheattrs);
-
-    if (ret) {
-        /*
-         * Some kinds of translation fault must cause exceptions rather
-         * than being reported in the PAR.
-         */
-        int current_el = arm_current_el(env);
-        int target_el;
-        uint32_t syn, fsr, fsc;
-        bool take_exc = false;
-
-        if (fi.s1ptw && current_el == 1
-            && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
-            /*
-             * Synchronous stage 2 fault on an access made as part of the
-             * translation table walk for AT S1E0* or AT S1E1* insn
-             * executed from NS EL1. If this is a synchronous external abort
-             * and SCR_EL3.EA == 1, then we take a synchronous external abort
-             * to EL3. Otherwise the fault is taken as an exception to EL2,
-             * and HPFAR_EL2 holds the faulting IPA.
-             */
-            if (fi.type == ARMFault_SyncExternalOnWalk &&
-                (env->cp15.scr_el3 & SCR_EA)) {
-                target_el = 3;
-            } else {
-                env->cp15.hpfar_el2 = extract64(fi.s2addr, 12, 47) << 4;
-                if (arm_is_secure_below_el3(env) && fi.s1ns) {
-                    env->cp15.hpfar_el2 |= HPFAR_NS;
-                }
-                target_el = 2;
-            }
-            take_exc = true;
-        } else if (fi.type == ARMFault_SyncExternalOnWalk) {
-            /*
-             * Synchronous external aborts during a translation table walk
-             * are taken as Data Abort exceptions.
- */ - if (fi.stage2) { - if (current_el == 3) { - target_el = 3; - } else { - target_el = 2; - } - } else { - target_el = exception_target_el(env); - } - take_exc = true; - } - - if (take_exc) { - /* Construct FSR and FSC using same logic as arm_deliver_fault() */ - if (target_el == 2 || arm_el_is_aa64(env, target_el) || - arm_s1_regime_using_lpae_format(env, mmu_idx)) { - fsr = arm_fi_to_lfsc(&fi); - fsc = extract32(fsr, 0, 6); - } else { - fsr = arm_fi_to_sfsc(&fi); - fsc = 0x3f; - } - /* - * Report exception with ESR indicating a fault due to a - * translation table walk for a cache maintenance instruction. - */ - syn = syn_data_abort_no_iss(current_el == target_el, 0, - fi.ea, 1, fi.s1ptw, 1, fsc); - env->exception.vaddress = value; - env->exception.fsr = fsr; - raise_exception(env, EXCP_DATA_ABORT, syn, target_el); - } - } - - if (is_a64(env)) { - format64 = true; - } else if (arm_feature(env, ARM_FEATURE_LPAE)) { - /* - * ATS1Cxx: - * * TTBCR.EAE determines whether the result is returned using the - * 32-bit or the 64-bit PAR format - * * Instructions executed in Hyp mode always use the 64bit format - * - * ATS1S2NSOxx uses the 64bit format if any of the following is true: - * * The Non-secure TTBCR.EAE bit is set to 1 - * * The implementation includes EL2, and the value of HCR.VM is 1 - * - * (Note that HCR.DC makes HCR.VM behave as if it is 1.) - * - * ATS1Hx always uses the 64bit format. - */ - format64 = arm_s1_regime_using_lpae_format(env, mmu_idx); - - if (arm_feature(env, ARM_FEATURE_EL2)) { - if (mmu_idx == ARMMMUIdx_E10_0 || - mmu_idx == ARMMMUIdx_E10_1 || - mmu_idx == ARMMMUIdx_E10_1_PAN) { - format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC); - } else { - format64 |= arm_current_el(env) == 2; - } - } - } - - if (format64) { - /* Create a 64-bit PAR */ - par64 = (1 << 11); /* LPAE bit always set */ - if (!ret) { - par64 |= phys_addr & ~0xfffULL; - if (!attrs.secure) { - par64 |= (1 << 9); /* NS */ - } - par64 |= (uint64_t)cacheattrs.attrs << 56; /* ATTR */ - par64 |= cacheattrs.shareability << 7; /* SH */ - } else { - uint32_t fsr = arm_fi_to_lfsc(&fi); - - par64 |= 1; /* F */ - par64 |= (fsr & 0x3f) << 1; /* FS */ - if (fi.stage2) { - par64 |= (1 << 9); /* S */ - } - if (fi.s1ptw) { - par64 |= (1 << 8); /* PTW */ - } - } - } else { - /* - * fsr is a DFSR/IFSR value for the short descriptor - * translation table format (with WnR always clear). - * Convert it to a 32-bit PAR. - */ - if (!ret) { - /* We do not set any attribute bits in the PAR */ - if (page_size == (1 << 24) - && arm_feature(env, ARM_FEATURE_V7)) { - par64 = (phys_addr & 0xff000000) | (1 << 1); - } else { - par64 = phys_addr & 0xfffff000; - } - if (!attrs.secure) { - par64 |= (1 << 9); /* NS */ - } - } else { - uint32_t fsr = arm_fi_to_sfsc(&fi); - - par64 = ((fsr & (1 << 10)) >> 5) | ((fsr & (1 << 12)) >> 6) | - ((fsr & 0xf) << 1) | 1; - } - } - return par64; -} -#endif /* CONFIG_TCG */ - -static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) -{ -#ifdef CONFIG_TCG - MMUAccessType access_type = ri->opc2 & 1 ? 
MMU_DATA_STORE : MMU_DATA_LOAD; - uint64_t par64; - ARMMMUIdx mmu_idx; - int el = arm_current_el(env); - bool secure = arm_is_secure_below_el3(env); - - switch (ri->opc2 & 6) { - case 0: - /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */ - switch (el) { - case 3: - mmu_idx = ARMMMUIdx_SE3; - break; - case 2: - g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ - /* fall through */ - case 1: - if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) { - mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN - : ARMMMUIdx_Stage1_E1_PAN); - } else { - mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; - } - break; - default: - g_assert_not_reached(); - } - break; - case 2: - /* stage 1 current state PL0: ATS1CUR, ATS1CUW */ - switch (el) { - case 3: - mmu_idx = ARMMMUIdx_SE10_0; - break; - case 2: - g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ - mmu_idx = ARMMMUIdx_Stage1_E0; - break; - case 1: - mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; - break; - default: - g_assert_not_reached(); - } - break; - case 4: - /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */ - mmu_idx = ARMMMUIdx_E10_1; - break; - case 6: - /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */ - mmu_idx = ARMMMUIdx_E10_0; - break; - default: - g_assert_not_reached(); - } - - par64 = do_ats_write(env, value, access_type, mmu_idx); - - A32_BANKED_CURRENT_REG_SET(env, par, par64); -#else - /* Handled by hardware accelerator. */ - g_assert_not_reached(); -#endif /* CONFIG_TCG */ -} - -static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ -#ifdef CONFIG_TCG - MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD; - uint64_t par64; - - par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2); - - A32_BANKED_CURRENT_REG_SET(env, par, par64); -#else - /* Handled by hardware accelerator. */ - g_assert_not_reached(); -#endif /* CONFIG_TCG */ -} - -static CPAccessResult at_s1e2_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 3 && - !(env->cp15.scr_el3 & (SCR_NS | SCR_EEL2))) { - return CP_ACCESS_TRAP; - } - return CP_ACCESS_OK; -} - -static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ -#ifdef CONFIG_TCG - MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD; - ARMMMUIdx mmu_idx; - int secure = arm_is_secure_below_el3(env); - - switch (ri->opc2 & 6) { - case 0: - switch (ri->opc1) { - case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */ - if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) { - mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN - : ARMMMUIdx_Stage1_E1_PAN); - } else { - mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; - } - break; - case 4: /* AT S1E2R, AT S1E2W */ - mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2; - break; - case 6: /* AT S1E3R, AT S1E3W */ - mmu_idx = ARMMMUIdx_SE3; - break; - default: - g_assert_not_reached(); - } - break; - case 2: /* AT S1E0R, AT S1E0W */ - mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; - break; - case 4: /* AT S12E1R, AT S12E1W */ - mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1; - break; - case 6: /* AT S12E0R, AT S12E0W */ - mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0; - break; - default: - g_assert_not_reached(); - } - - env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx); -#else - /* Handled by hardware accelerator. 
*/ - g_assert_not_reached(); -#endif /* CONFIG_TCG */ -} -#endif - -static const ARMCPRegInfo vapa_cp_reginfo[] = { - { .name = "PAR", .cp = 15, .crn = 7, .crm = 4, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .resetvalue = 0, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.par_s), - offsetoflow32(CPUARMState, cp15.par_ns) }, - .writefn = par_write }, -#ifndef CONFIG_USER_ONLY - /* This underdecoding is safe because the reginfo is NO_RAW. */ - { .name = "ATS", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = CP_ANY, - .access = PL1_W, .accessfn = ats_access, - .writefn = ats_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC }, -#endif - REGINFO_SENTINEL -}; - -/* Return basic MPU access permission bits. */ -static uint32_t simple_mpu_ap_bits(uint32_t val) -{ - uint32_t ret; - uint32_t mask; - int i; - ret = 0; - mask = 3; - for (i = 0; i < 16; i += 2) { - ret |= (val >> i) & mask; - mask <<= 2; - } - return ret; -} - -/* Pad basic MPU access permission bits to extended format. */ -static uint32_t extended_mpu_ap_bits(uint32_t val) -{ - uint32_t ret; - uint32_t mask; - int i; - ret = 0; - mask = 3; - for (i = 0; i < 16; i += 2) { - ret |= (val & mask) << i; - mask <<= 2; - } - return ret; -} - -static void pmsav5_data_ap_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->cp15.pmsav5_data_ap = extended_mpu_ap_bits(value); -} - -static uint64_t pmsav5_data_ap_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return simple_mpu_ap_bits(env->cp15.pmsav5_data_ap); -} - -static void pmsav5_insn_ap_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->cp15.pmsav5_insn_ap = extended_mpu_ap_bits(value); -} - -static uint64_t pmsav5_insn_ap_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return simple_mpu_ap_bits(env->cp15.pmsav5_insn_ap); -} - -static uint64_t pmsav7_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - uint32_t *u32p = *(uint32_t **)raw_ptr(env, ri); - - if (!u32p) { - return 0; - } - - u32p += env->pmsav7.rnr[M_REG_NS]; - return *u32p; -} - -static void pmsav7_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - uint32_t *u32p = *(uint32_t **)raw_ptr(env, ri); - - if (!u32p) { - return; - } - - u32p += env->pmsav7.rnr[M_REG_NS]; - tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */ - *u32p = value; -} - -static void pmsav7_rgnr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - uint32_t nrgs = cpu->pmsav7_dregion; - - if (value >= nrgs) { - qemu_log_mask(LOG_GUEST_ERROR, - "PMSAv7 RGNR write >= # supported regions, %" PRIu32 - " > %" PRIu32 "\n", (uint32_t)value, nrgs); - return; - } - - raw_write(env, ri, value); -} - -static const ARMCPRegInfo pmsav7_cp_reginfo[] = { - /* - * Reset for all these registers is handled in arm_cpu_reset(), - * because the PMSAv7 is also used by M-profile CPUs, which do - * not register cpregs but still need the state to be reset. 
- */ - { .name = "DRBAR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 1, .opc2 = 0, - .access = PL1_RW, .type = ARM_CP_NO_RAW, - .fieldoffset = offsetof(CPUARMState, pmsav7.drbar), - .readfn = pmsav7_read, .writefn = pmsav7_write, - .resetfn = arm_cp_reset_ignore }, - { .name = "DRSR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 1, .opc2 = 2, - .access = PL1_RW, .type = ARM_CP_NO_RAW, - .fieldoffset = offsetof(CPUARMState, pmsav7.drsr), - .readfn = pmsav7_read, .writefn = pmsav7_write, - .resetfn = arm_cp_reset_ignore }, - { .name = "DRACR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 1, .opc2 = 4, - .access = PL1_RW, .type = ARM_CP_NO_RAW, - .fieldoffset = offsetof(CPUARMState, pmsav7.dracr), - .readfn = pmsav7_read, .writefn = pmsav7_write, - .resetfn = arm_cp_reset_ignore }, - { .name = "RGNR", .cp = 15, .crn = 6, .opc1 = 0, .crm = 2, .opc2 = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]), - .writefn = pmsav7_rgnr_write, - .resetfn = arm_cp_reset_ignore }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo pmsav5_cp_reginfo[] = { - { .name = "DATA_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_data_ap), - .readfn = pmsav5_data_ap_read, .writefn = pmsav5_data_ap_write, }, - { .name = "INSN_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 1, - .access = PL1_RW, .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_insn_ap), - .readfn = pmsav5_insn_ap_read, .writefn = pmsav5_insn_ap_write, }, - { .name = "DATA_EXT_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 2, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_data_ap), - .resetvalue = 0, }, - { .name = "INSN_EXT_AP", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 3, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.pmsav5_insn_ap), - .resetvalue = 0, }, - { .name = "DCACHE_CFG", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c2_data), .resetvalue = 0, }, - { .name = "ICACHE_CFG", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 1, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c2_insn), .resetvalue = 0, }, - /* Protection region base and size registers */ - { .name = "946_PRBS0", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, - .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.c6_region[0]) }, - { .name = "946_PRBS1", .cp = 15, .crn = 6, .crm = 1, .opc1 = 0, - .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.c6_region[1]) }, - { .name = "946_PRBS2", .cp = 15, .crn = 6, .crm = 2, .opc1 = 0, - .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.c6_region[2]) }, - { .name = "946_PRBS3", .cp = 15, .crn = 6, .crm = 3, .opc1 = 0, - .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.c6_region[3]) }, - { .name = "946_PRBS4", .cp = 15, .crn = 6, .crm = 4, .opc1 = 0, - .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.c6_region[4]) }, - { .name = "946_PRBS5", .cp = 15, .crn = 6, .crm = 5, .opc1 = 0, - .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.c6_region[5]) }, - { .name = "946_PRBS6", .cp = 15, .crn = 6, .crm = 6, .opc1 = 0, - .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0, - .fieldoffset = 
offsetof(CPUARMState, cp15.c6_region[6]) },
-    { .name = "946_PRBS7", .cp = 15, .crn = 6, .crm = 7, .opc1 = 0,
-      .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0,
-      .fieldoffset = offsetof(CPUARMState, cp15.c6_region[7]) },
-    REGINFO_SENTINEL
-};
-
-static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                 uint64_t value)
-{
-    TCR *tcr = raw_ptr(env, ri);
-    int maskshift = extract32(value, 0, 3);
-
-    if (!arm_feature(env, ARM_FEATURE_V8)) {
-        if (arm_feature(env, ARM_FEATURE_LPAE) && (value & TTBCR_EAE)) {
-            /*
-             * Pre ARMv8 bits [21:19], [15:14] and [6:3] are UNK/SBZP when
-             * using Long-desciptor translation table format
-             */
-            value &= ~((7 << 19) | (3 << 14) | (0xf << 3));
-        } else if (arm_feature(env, ARM_FEATURE_EL3)) {
-            /*
-             * In an implementation that includes the Security Extensions
-             * TTBCR has additional fields PD0 [4] and PD1 [5] for
-             * Short-descriptor translation table format.
-             */
-            value &= TTBCR_PD1 | TTBCR_PD0 | TTBCR_N;
-        } else {
-            value &= TTBCR_N;
-        }
-    }
-
-    /*
-     * Update the masks corresponding to the TCR bank being written
-     * Note that we always calculate mask and base_mask, but
-     * they are only used for short-descriptor tables (ie if EAE is 0);
-     * for long-descriptor tables the TCR fields are used differently
-     * and the mask and base_mask values are meaningless.
-     */
-    tcr->raw_tcr = value;
-    tcr->mask = ~(((uint32_t)0xffffffffu) >> maskshift);
-    tcr->base_mask = ~((uint32_t)0x3fffu >> maskshift);
-}
-
-static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                             uint64_t value)
-{
-    ARMCPU *cpu = env_archcpu(env);
-    TCR *tcr = raw_ptr(env, ri);
-
-    if (arm_feature(env, ARM_FEATURE_LPAE)) {
-        /*
-         * With LPAE the TTBCR could result in a change of ASID
-         * via the TTBCR.A1 bit, so do a TLB flush.
-         */
-        tlb_flush(CPU(cpu));
-    }
-    /* Preserve the high half of TCR_EL1, set via TTBCR2. */
-    value = deposit64(tcr->raw_tcr, 0, 32, value);
-    vmsa_ttbcr_raw_write(env, ri, value);
-}
-
-static void vmsa_ttbcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
-{
-    TCR *tcr = raw_ptr(env, ri);
-
-    /*
-     * Reset both the TCR as well as the masks corresponding to the bank of
-     * the TCR being reset.
-     */
-    tcr->raw_tcr = 0;
-    tcr->mask = 0;
-    tcr->base_mask = 0xffffc000u;
-}
-
-static void vmsa_tcr_el12_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                uint64_t value)
-{
-    ARMCPU *cpu = env_archcpu(env);
-    TCR *tcr = raw_ptr(env, ri);
-
-    /* For AArch64 the A1 bit could result in a change of ASID, so TLB flush. */
-    tlb_flush(CPU(cpu));
-    tcr->raw_tcr = value;
-}
-
-static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                            uint64_t value)
-{
-    /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
-    if (cpreg_field_is_64bit(ri) &&
-        extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
-        ARMCPU *cpu = env_archcpu(env);
-        tlb_flush(CPU(cpu));
-    }
-    raw_write(env, ri, value);
-}
-
-static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                    uint64_t value)
-{
-    /*
-     * If we are running with E2&0 regime, then an ASID is active.
-     * Flush if that might be changing. Note we're not checking
-     * TCR_EL2.A1 to know if this is really the TTBRx_EL2 that
-     * holds the active ASID, only checking the field that might.
- */ - if (extract64(raw_read(env, ri) ^ value, 48, 16) && - (arm_hcr_el2_eff(env) & HCR_E2H)) { - uint16_t mask = ARMMMUIdxBit_E20_2 | - ARMMMUIdxBit_E20_2_PAN | - ARMMMUIdxBit_E20_0; - - if (arm_is_secure_below_el3(env)) { - mask >>= ARM_MMU_IDX_A_NS; - } - - tlb_flush_by_mmuidx(env_cpu(env), mask); - } - raw_write(env, ri, value); -} - -static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - CPUState *cs = CPU(cpu); - - /* - * A change in VMID to the stage2 page table (Stage2) invalidates - * the combined stage 1&2 tlbs (EL10_1 and EL10_0). - */ - if (raw_read(env, ri) != value) { - uint16_t mask = ARMMMUIdxBit_E10_1 | - ARMMMUIdxBit_E10_1_PAN | - ARMMMUIdxBit_E10_0; - - if (arm_is_secure_below_el3(env)) { - mask >>= ARM_MMU_IDX_A_NS; - } - - tlb_flush_by_mmuidx(cs, mask); - raw_write(env, ri, value); - } -} - -static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = { - { .name = "DFSR", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, .type = ARM_CP_ALIAS, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dfsr_s), - offsetoflow32(CPUARMState, cp15.dfsr_ns) }, }, - { .name = "IFSR", .cp = 15, .crn = 5, .crm = 0, .opc1 = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.ifsr_s), - offsetoflow32(CPUARMState, cp15.ifsr_ns) } }, - { .name = "DFAR", .cp = 15, .opc1 = 0, .crn = 6, .crm = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.dfar_s), - offsetof(CPUARMState, cp15.dfar_ns) } }, - { .name = "FAR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .fieldoffset = offsetof(CPUARMState, cp15.far_el[1]), - .resetvalue = 0, }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo vmsa_cp_reginfo[] = { - { .name = "ESR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .crn = 5, .crm = 2, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .fieldoffset = offsetof(CPUARMState, cp15.esr_el[1]), .resetvalue = 0, }, - { .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .writefn = vmsa_ttbr_write, .resetvalue = 0, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s), - offsetof(CPUARMState, cp15.ttbr0_ns) } }, - { .name = "TTBR1_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .writefn = vmsa_ttbr_write, .resetvalue = 0, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s), - offsetof(CPUARMState, cp15.ttbr1_ns) } }, - { .name = "TCR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .writefn = vmsa_tcr_el12_write, - .resetfn = vmsa_ttbcr_reset, .raw_writefn = raw_write, - .fieldoffset = offsetof(CPUARMState, cp15.tcr_el[1]) }, - { .name = "TTBCR", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_ALIAS, .writefn = vmsa_ttbcr_write, - .raw_writefn = vmsa_ttbcr_raw_write, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tcr_el[3]), - offsetoflow32(CPUARMState, cp15.tcr_el[1])} }, - REGINFO_SENTINEL -}; - -/* - * Note that unlike TTBCR, 
writing to TTBCR2 does not require flushing - * qemu tlbs nor adjusting cached masks. - */ -static const ARMCPRegInfo ttbcr2_reginfo = { - .name = "TTBCR2", .cp = 15, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 3, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_ALIAS, - .bank_fieldoffsets = { offsetofhigh32(CPUARMState, cp15.tcr_el[3]), - offsetofhigh32(CPUARMState, cp15.tcr_el[1]) }, -}; - -static void omap_ticonfig_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->cp15.c15_ticonfig = value & 0xe7; - /* The OS_TYPE bit in this register changes the reported CPUID! */ - env->cp15.c0_cpuid = (value & (1 << 5)) ? - ARM_CPUID_TI915T : ARM_CPUID_TI925T; -} - -static void omap_threadid_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->cp15.c15_threadid = value & 0xffff; -} - -static void omap_wfi_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Wait-for-interrupt (deprecated) */ - cpu_interrupt(env_cpu(env), CPU_INTERRUPT_HALT); -} - -static void omap_cachemaint_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * On OMAP there are registers indicating the max/min index of dcache lines - * containing a dirty line; cache flush operations have to reset these. - */ - env->cp15.c15_i_max = 0x000; - env->cp15.c15_i_min = 0xff0; -} - -static const ARMCPRegInfo omap_cp_reginfo[] = { - { .name = "DFSR", .cp = 15, .crn = 5, .crm = CP_ANY, - .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, .type = ARM_CP_OVERRIDE, - .fieldoffset = offsetoflow32(CPUARMState, cp15.esr_el[1]), - .resetvalue = 0, }, - { .name = "", .cp = 15, .crn = 15, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .type = ARM_CP_NOP }, - { .name = "TICONFIG", .cp = 15, .crn = 15, .crm = 1, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c15_ticonfig), .resetvalue = 0, - .writefn = omap_ticonfig_write }, - { .name = "IMAX", .cp = 15, .crn = 15, .crm = 2, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c15_i_max), .resetvalue = 0, }, - { .name = "IMIN", .cp = 15, .crn = 15, .crm = 3, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .resetvalue = 0xff0, - .fieldoffset = offsetof(CPUARMState, cp15.c15_i_min) }, - { .name = "THREADID", .cp = 15, .crn = 15, .crm = 4, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c15_threadid), .resetvalue = 0, - .writefn = omap_threadid_write }, - { .name = "TI925T_STATUS", .cp = 15, .crn = 15, - .crm = 8, .opc1 = 0, .opc2 = 0, .access = PL1_RW, - .type = ARM_CP_NO_RAW, - .readfn = arm_cp_read_zero, .writefn = omap_wfi_write, }, - /* - * TODO: Peripheral port remap register: - * On OMAP2 mcr p15, 0, rn, c15, c2, 4 sets up the interrupt controller - * base address at $rn & ~0xfff and map size of 0x200 << ($rn & 0xfff), - * when MMU is off. 
- */ - { .name = "OMAP_CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY, - .opc1 = 0, .opc2 = CP_ANY, .access = PL1_W, - .type = ARM_CP_OVERRIDE | ARM_CP_NO_RAW, - .writefn = omap_cachemaint_write }, - { .name = "C9", .cp = 15, .crn = 9, - .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW, - .type = ARM_CP_CONST | ARM_CP_OVERRIDE, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -static void xscale_cpar_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->cp15.c15_cpar = value & 0x3fff; -} - -static const ARMCPRegInfo xscale_cp_reginfo[] = { - { .name = "XSCALE_CPAR", - .cp = 15, .crn = 15, .crm = 1, .opc1 = 0, .opc2 = 0, .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c15_cpar), .resetvalue = 0, - .writefn = xscale_cpar_write, }, - { .name = "XSCALE_AUXCR", - .cp = 15, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 1, .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c1_xscaleauxcr), - .resetvalue = 0, }, - /* - * XScale specific cache-lockdown: since we have no cache we NOP these - * and hope the guest does not really rely on cache behaviour. - */ - { .name = "XSCALE_LOCK_ICACHE_LINE", - .cp = 15, .opc1 = 0, .crn = 9, .crm = 1, .opc2 = 0, - .access = PL1_W, .type = ARM_CP_NOP }, - { .name = "XSCALE_UNLOCK_ICACHE", - .cp = 15, .opc1 = 0, .crn = 9, .crm = 1, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NOP }, - { .name = "XSCALE_DCACHE_LOCK", - .cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 0, - .access = PL1_RW, .type = ARM_CP_NOP }, - { .name = "XSCALE_UNLOCK_DCACHE", - .cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NOP }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo dummy_c15_cp_reginfo[] = { - /* - * RAZ/WI the whole crn=15 space, when we don't have a more specific - * implementation of this implementation-defined space. - * Ideally this should eventually disappear in favour of actually - * implementing the correct behaviour for all cores. 
- */ - { .name = "C15_IMPDEF", .cp = 15, .crn = 15, - .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, - .access = PL1_RW, - .type = ARM_CP_CONST | ARM_CP_NO_RAW | ARM_CP_OVERRIDE, - .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = { - /* Cache status: RAZ because we have no cache so it's always clean */ - { .name = "CDSR", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, - .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = { - /* We never have a a block transfer operation in progress */ - { .name = "BXSR", .cp = 15, .crn = 7, .crm = 12, .opc1 = 0, .opc2 = 4, - .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, - .resetvalue = 0 }, - /* The cache ops themselves: these all NOP for QEMU */ - { .name = "IICR", .cp = 15, .crm = 5, .opc1 = 0, - .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, - { .name = "IDCR", .cp = 15, .crm = 6, .opc1 = 0, - .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, - { .name = "CDCR", .cp = 15, .crm = 12, .opc1 = 0, - .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, - { .name = "PIR", .cp = 15, .crm = 12, .opc1 = 1, - .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, - { .name = "PDR", .cp = 15, .crm = 12, .opc1 = 2, - .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, - { .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0, - .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = { - /* - * The cache test-and-clean instructions always return (1 << 30) - * to indicate that there are no dirty cache lines. - */ - { .name = "TC_DCACHE", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 3, - .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, - .resetvalue = (1 << 30) }, - { .name = "TCI_DCACHE", .cp = 15, .crn = 7, .crm = 14, .opc1 = 0, .opc2 = 3, - .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW, - .resetvalue = (1 << 30) }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo strongarm_cp_reginfo[] = { - /* Ignore ReadBuffer accesses */ - { .name = "C9_READBUFFER", .cp = 15, .crn = 9, - .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, - .access = PL1_RW, .resetvalue = 0, - .type = ARM_CP_CONST | ARM_CP_OVERRIDE | ARM_CP_NO_RAW }, - REGINFO_SENTINEL -}; - -static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - unsigned int cur_el = arm_current_el(env); - - if (arm_is_el2_enabled(env) && cur_el == 1) { - return env->cp15.vpidr_el2; - } - return raw_read(env, ri); -} - -static uint64_t mpidr_read_val(CPUARMState *env) -{ - ARMCPU *cpu = env_archcpu(env); - uint64_t mpidr = cpu->mp_affinity; - - if (arm_feature(env, ARM_FEATURE_V7MP)) { - mpidr |= (1U << 31); - /* - * Cores which are uniprocessor (non-coherent) - * but still implement the MP extensions set - * bit 30. (For instance, Cortex-R5). 
- */ - if (cpu->mp_is_up) { - mpidr |= (1u << 30); - } - } - return mpidr; -} - -static uint64_t mpidr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - unsigned int cur_el = arm_current_el(env); - - if (arm_is_el2_enabled(env) && cur_el == 1) { - return env->cp15.vmpidr_el2; - } - return mpidr_read_val(env); -} - -static const ARMCPRegInfo lpae_cp_reginfo[] = { - /* NOP AMAIR0/1 */ - { .name = "AMAIR0", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_CONST, .resetvalue = 0 }, - /* AMAIR1 is mapped to AMAIR_EL1[63:32] */ - { .name = "AMAIR1", .cp = 15, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "PAR", .cp = 15, .crm = 7, .opc1 = 0, - .access = PL1_RW, .type = ARM_CP_64BIT, .resetvalue = 0, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.par_s), - offsetof(CPUARMState, cp15.par_ns)} }, - { .name = "TTBR0", .cp = 15, .crm = 2, .opc1 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_64BIT | ARM_CP_ALIAS, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s), - offsetof(CPUARMState, cp15.ttbr0_ns) }, - .writefn = vmsa_ttbr_write, }, - { .name = "TTBR1", .cp = 15, .crm = 2, .opc1 = 1, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .type = ARM_CP_64BIT | ARM_CP_ALIAS, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s), - offsetof(CPUARMState, cp15.ttbr1_ns) }, - .writefn = vmsa_ttbr_write, }, - REGINFO_SENTINEL -}; - -static uint64_t aa64_fpcr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return vfp_get_fpcr(env); -} - -static void aa64_fpcr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - vfp_set_fpcr(env, value); -} - -static uint64_t aa64_fpsr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return vfp_get_fpsr(env); -} - -static void aa64_fpsr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - vfp_set_fpsr(env, value); -} - -static CPAccessResult aa64_daif_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UMA)) { - return CP_ACCESS_TRAP; - } - return CP_ACCESS_OK; -} - -static void aa64_daif_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->daif = value & PSTATE_DAIF; -} - -static uint64_t aa64_pan_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return env->pstate & PSTATE_PAN; -} - -static void aa64_pan_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->pstate = (env->pstate & ~PSTATE_PAN) | (value & PSTATE_PAN); -} - -static const ARMCPRegInfo pan_reginfo = { - .name = "PAN", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 3, - .type = ARM_CP_NO_RAW, .access = PL1_RW, - .readfn = aa64_pan_read, .writefn = aa64_pan_write -}; - -static uint64_t aa64_uao_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return env->pstate & PSTATE_UAO; -} - -static void aa64_uao_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->pstate = (env->pstate & ~PSTATE_UAO) | (value & PSTATE_UAO); -} - -static const ARMCPRegInfo uao_reginfo = { - .name = "UAO", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 4, - .type = ARM_CP_NO_RAW, .access = PL1_RW, - .readfn = aa64_uao_read, .writefn = aa64_uao_write -}; - -static uint64_t aa64_dit_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return 
env->pstate & PSTATE_DIT; -} - -static void aa64_dit_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->pstate = (env->pstate & ~PSTATE_DIT) | (value & PSTATE_DIT); -} - -static const ARMCPRegInfo dit_reginfo = { - .name = "DIT", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 5, - .type = ARM_CP_NO_RAW, .access = PL0_RW, - .readfn = aa64_dit_read, .writefn = aa64_dit_write -}; - -static uint64_t aa64_ssbs_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return env->pstate & PSTATE_SSBS; -} - -static void aa64_ssbs_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->pstate = (env->pstate & ~PSTATE_SSBS) | (value & PSTATE_SSBS); -} - -static const ARMCPRegInfo ssbs_reginfo = { - .name = "SSBS", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 6, - .type = ARM_CP_NO_RAW, .access = PL0_RW, - .readfn = aa64_ssbs_read, .writefn = aa64_ssbs_write -}; - -static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - /* Cache invalidate/clean to Point of Coherency or Persistence... */ - switch (arm_current_el(env)) { - case 0: - /* ... EL0 must UNDEF unless SCTLR_EL1.UCI is set. */ - if (!(arm_sctlr(env, 0) & SCTLR_UCI)) { - return CP_ACCESS_TRAP; - } - /* fall through */ - case 1: - /* ... EL1 must trap to EL2 if HCR_EL2.TPCP is set. */ - if (arm_hcr_el2_eff(env) & HCR_TPCP) { - return CP_ACCESS_TRAP_EL2; - } - break; - } - return CP_ACCESS_OK; -} - -static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env, - const ARMCPRegInfo *ri, - bool isread) -{ - /* Cache invalidate/clean to Point of Unification... */ - switch (arm_current_el(env)) { - case 0: - /* ... EL0 must UNDEF unless SCTLR_EL1.UCI is set. */ - if (!(arm_sctlr(env, 0) & SCTLR_UCI)) { - return CP_ACCESS_TRAP; - } - /* fall through */ - case 1: - /* ... EL1 must trap to EL2 if HCR_EL2.TPU is set. */ - if (arm_hcr_el2_eff(env) & HCR_TPU) { - return CP_ACCESS_TRAP_EL2; - } - break; - } - return CP_ACCESS_OK; -} - -/* - * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions - * Page D4-1736 (DDI0487A.b) - */ - -static int vae1_tlbmask(CPUARMState *env) -{ - uint64_t hcr = arm_hcr_el2_eff(env); - uint16_t mask; - - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - mask = ARMMMUIdxBit_E20_2 | - ARMMMUIdxBit_E20_2_PAN | - ARMMMUIdxBit_E20_0; - } else { - mask = ARMMMUIdxBit_E10_1 | - ARMMMUIdxBit_E10_1_PAN | - ARMMMUIdxBit_E10_0; - } - - if (arm_is_secure_below_el3(env)) { - mask >>= ARM_MMU_IDX_A_NS; - } - - return mask; -} - -/* Return 56 if TBI is enabled, 64 otherwise. */ -static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx, - uint64_t addr) -{ - uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr; - int tbi = aa64_va_parameter_tbi(tcr, mmu_idx); - int select = extract64(addr, 55, 1); - - return (tbi >> select) & 1 ? 56 : 64; -} - -static int vae1_tlbbits(CPUARMState *env, uint64_t addr) -{ - uint64_t hcr = arm_hcr_el2_eff(env); - ARMMMUIdx mmu_idx; - - /* Only the regime of the mmu_idx below is significant. 
*/ - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - mmu_idx = ARMMMUIdx_E20_0; - } else { - mmu_idx = ARMMMUIdx_E10_0; - } - - if (arm_is_secure_below_el3(env)) { - mmu_idx &= ~ARM_MMU_IDX_A_NS; - } - - return tlbbits_for_regime(env, mmu_idx, addr); -} - -static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - int mask = vae1_tlbmask(env); - - tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); -} - -static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - int mask = vae1_tlbmask(env); - - if (tlb_force_broadcast(env)) { - tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); - } else { - tlb_flush_by_mmuidx(cs, mask); - } -} - -static int alle1_tlbmask(CPUARMState *env) -{ - /* - * Note that the 'ALL' scope must invalidate both stage 1 and - * stage 2 translations, whereas most other scopes only invalidate - * stage 1 translations. - */ - if (arm_is_secure_below_el3(env)) { - return ARMMMUIdxBit_SE10_1 | - ARMMMUIdxBit_SE10_1_PAN | - ARMMMUIdxBit_SE10_0; - } else { - return ARMMMUIdxBit_E10_1 | - ARMMMUIdxBit_E10_1_PAN | - ARMMMUIdxBit_E10_0; - } -} - -static int e2_tlbmask(CPUARMState *env) -{ - if (arm_is_secure_below_el3(env)) { - return ARMMMUIdxBit_SE20_0 | - ARMMMUIdxBit_SE20_2 | - ARMMMUIdxBit_SE20_2_PAN | - ARMMMUIdxBit_SE2; - } else { - return ARMMMUIdxBit_E20_0 | - ARMMMUIdxBit_E20_2 | - ARMMMUIdxBit_E20_2_PAN | - ARMMMUIdxBit_E2; - } -} - -static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - int mask = alle1_tlbmask(env); - - tlb_flush_by_mmuidx(cs, mask); -} - -static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - int mask = e2_tlbmask(env); - - tlb_flush_by_mmuidx(cs, mask); -} - -static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - CPUState *cs = CPU(cpu); - - tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3); -} - -static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - int mask = alle1_tlbmask(env); - - tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); -} - -static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - int mask = e2_tlbmask(env); - - tlb_flush_by_mmuidx_all_cpus_synced(cs, mask); -} - -static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - - tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3); -} - -static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA, EL2 - * Currently handles both VAE2 and VALE2, since we don't support - * flush-last-level-only. - */ - CPUState *cs = env_cpu(env); - int mask = e2_tlbmask(env); - uint64_t pageaddr = sextract64(value << 12, 0, 56); - - tlb_flush_page_by_mmuidx(cs, pageaddr, mask); -} - -static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA, EL3 - * Currently handles both VAE3 and VALE3, since we don't support - * flush-last-level-only. 
- */ - ARMCPU *cpu = env_archcpu(env); - CPUState *cs = CPU(cpu); - uint64_t pageaddr = sextract64(value << 12, 0, 56); - - tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3); -} - -static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - int mask = vae1_tlbmask(env); - uint64_t pageaddr = sextract64(value << 12, 0, 56); - int bits = vae1_tlbbits(env, pageaddr); - - tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); -} - -static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA, EL1&0 (AArch64 version). - * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1, - * since we don't support flush-for-specific-ASID-only or - * flush-last-level-only. - */ - CPUState *cs = env_cpu(env); - int mask = vae1_tlbmask(env); - uint64_t pageaddr = sextract64(value << 12, 0, 56); - int bits = vae1_tlbbits(env, pageaddr); - - if (tlb_force_broadcast(env)) { - tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); - } else { - tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits); - } -} - -static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - uint64_t pageaddr = sextract64(value << 12, 0, 56); - bool secure = arm_is_secure_below_el3(env); - int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2; - int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2, - pageaddr); - - tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); -} - -static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPUState *cs = env_cpu(env); - uint64_t pageaddr = sextract64(value << 12, 0, 56); - int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr); - - tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, - ARMMMUIdxBit_SE3, bits); -} - -#ifdef TARGET_AARCH64 -static uint64_t tlbi_aa64_range_get_length(CPUARMState *env, - uint64_t value) -{ - unsigned int page_shift; - unsigned int page_size_granule; - uint64_t num; - uint64_t scale; - uint64_t exponent; - uint64_t length; - - num = extract64(value, 39, 4); - scale = extract64(value, 44, 2); - page_size_granule = extract64(value, 46, 2); - - page_shift = page_size_granule * 2 + 12; - - if (page_size_granule == 0) { - qemu_log_mask(LOG_GUEST_ERROR, "Invalid page size granule %d\n", - page_size_granule); - return 0; - } - - exponent = (5 * scale) + 1; - length = (num + 1) << (exponent + page_shift); - - return length; -} - -static uint64_t tlbi_aa64_range_get_base(CPUARMState *env, uint64_t value, - bool two_ranges) -{ - /* TODO: ARMv8.7 FEAT_LPA2 */ - uint64_t pageaddr; - - if (two_ranges) { - pageaddr = sextract64(value, 0, 37) << TARGET_PAGE_BITS; - } else { - pageaddr = extract64(value, 0, 37) << TARGET_PAGE_BITS; - } - - return pageaddr; -} - -static void do_rvae_write(CPUARMState *env, uint64_t value, - int idxmap, bool synced) -{ - ARMMMUIdx one_idx = ARM_MMU_IDX_A | ctz32(idxmap); - bool two_ranges = regime_has_2_ranges(one_idx); - uint64_t baseaddr, length; - int bits; - - baseaddr = tlbi_aa64_range_get_base(env, value, two_ranges); - length = tlbi_aa64_range_get_length(env, value); - bits = tlbbits_for_regime(env, one_idx, baseaddr); - - if (synced) { - tlb_flush_range_by_mmuidx_all_cpus_synced(env_cpu(env), - baseaddr, - length, - idxmap, - bits); - } else { - tlb_flush_range_by_mmuidx(env_cpu(env), baseaddr, - length, 
idxmap, bits); - } -} - -static void tlbi_aa64_rvae1_write(CPUARMState *env, - const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA range, EL1&0. - * Currently handles all of RVAE1, RVAAE1, RVAALE1 and RVALE1, - * since we don't support flush-for-specific-ASID-only or - * flush-last-level-only. - */ - - do_rvae_write(env, value, vae1_tlbmask(env), - tlb_force_broadcast(env)); -} - -static void tlbi_aa64_rvae1is_write(CPUARMState *env, - const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA range, Inner/Outer Shareable EL1&0. - * Currently handles all of RVAE1IS, RVAE1OS, RVAAE1IS, RVAAE1OS, - * RVAALE1IS, RVAALE1OS, RVALE1IS and RVALE1OS, since we don't support - * flush-for-specific-ASID-only, flush-last-level-only or inner/outer - * shareable specific flushes. - */ - - do_rvae_write(env, value, vae1_tlbmask(env), true); -} - -static int vae2_tlbmask(CPUARMState *env) -{ - return (arm_is_secure_below_el3(env) - ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2); -} - -static void tlbi_aa64_rvae2_write(CPUARMState *env, - const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA range, EL2. - * Currently handles all of RVAE2 and RVALE2, - * since we don't support flush-for-specific-ASID-only or - * flush-last-level-only. - */ - - do_rvae_write(env, value, vae2_tlbmask(env), - tlb_force_broadcast(env)); - - -} - -static void tlbi_aa64_rvae2is_write(CPUARMState *env, - const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA range, Inner/Outer Shareable, EL2. - * Currently handles all of RVAE2IS, RVAE2OS, RVALE2IS and RVALE2OS, - * since we don't support flush-for-specific-ASID-only, - * flush-last-level-only or inner/outer shareable specific flushes. - */ - - do_rvae_write(env, value, vae2_tlbmask(env), true); - -} - -static void tlbi_aa64_rvae3_write(CPUARMState *env, - const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA range, EL3. - * Currently handles all of RVAE3 and RVALE3, - * since we don't support flush-for-specific-ASID-only or - * flush-last-level-only. - */ - - do_rvae_write(env, value, ARMMMUIdxBit_SE3, - tlb_force_broadcast(env)); -} - -static void tlbi_aa64_rvae3is_write(CPUARMState *env, - const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Invalidate by VA range, EL3, Inner/Outer Shareable. - * Currently handles all of RVAE3IS, RVAE3OS, RVALE3IS and RVALE3OS, - * since we don't support flush-for-specific-ASID-only, - * flush-last-level-only or inner/outer specific flushes. 
- */ - - do_rvae_write(env, value, ARMMMUIdxBit_SE3, true); -} -#endif - -static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - int cur_el = arm_current_el(env); - - if (cur_el < 2) { - uint64_t hcr = arm_hcr_el2_eff(env); - - if (cur_el == 0) { - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - if (!(env->cp15.sctlr_el[2] & SCTLR_DZE)) { - return CP_ACCESS_TRAP_EL2; - } - } else { - if (!(env->cp15.sctlr_el[1] & SCTLR_DZE)) { - return CP_ACCESS_TRAP; - } - if (hcr & HCR_TDZ) { - return CP_ACCESS_TRAP_EL2; - } - } - } else if (hcr & HCR_TDZ) { - return CP_ACCESS_TRAP_EL2; - } - } - return CP_ACCESS_OK; -} - -static uint64_t aa64_dczid_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - ARMCPU *cpu = env_archcpu(env); - int dzp_bit = 1 << 4; - - /* DZP indicates whether DC ZVA access is allowed */ - if (aa64_zva_access(env, NULL, false) == CP_ACCESS_OK) { - dzp_bit = 0; - } - return cpu->dcz_blocksize | dzp_bit; -} - -static CPAccessResult sp_el0_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (!(env->pstate & PSTATE_SP)) { - /* - * Access to SP_EL0 is undefined if it's being used as - * the stack pointer. - */ - return CP_ACCESS_TRAP_UNCATEGORIZED; - } - return CP_ACCESS_OK; -} - -static uint64_t spsel_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return env->pstate & PSTATE_SP; -} - -static void spsel_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val) -{ - update_spsel(env, val); -} - -static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - - if (arm_feature(env, ARM_FEATURE_PMSA) && !cpu->has_mpu) { - /* M bit is RAZ/WI for PMSA with no MPU implemented */ - value &= ~SCTLR_M; - } - - /* ??? Lots of these bits are not implemented. */ - - if (ri->state == ARM_CP_STATE_AA64 && !cpu_isar_feature(aa64_mte, cpu)) { - if (ri->opc1 == 6) { /* SCTLR_EL3 */ - value &= ~(SCTLR_ITFSB | SCTLR_TCF | SCTLR_ATA); - } else { - value &= ~(SCTLR_ITFSB | SCTLR_TCF0 | SCTLR_TCF | - SCTLR_ATA0 | SCTLR_ATA); - } - } - - if (raw_read(env, ri) == value) { - /* - * Skip the TLB flush if nothing actually changed; Linux likes - * to do a lot of pointless SCTLR writes. - */ - return; - } - - raw_write(env, ri, value); - - /* This may enable/disable the MMU, so do a TLB flush. */ - tlb_flush(CPU(cpu)); - - if (ri->type & ARM_CP_SUPPRESS_TB_END) { - /* - * Normally we would always end the TB on an SCTLR write; see the - * comment in ARMCPRegInfo sctlr initialization below for why Xscale - * is special. Setting ARM_CP_SUPPRESS_TB_END also stops the rebuild - * of hflags from the translator, so do it here. - */ - arm_rebuild_hflags(env); - } -} - -static CPAccessResult fpexc32_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if ((env->cp15.cptr_el[2] & CPTR_TFP) && arm_current_el(env) == 2) { - return CP_ACCESS_TRAP_FP_EL2; - } - if (env->cp15.cptr_el[3] & CPTR_TFP) { - return CP_ACCESS_TRAP_FP_EL3; - } - return CP_ACCESS_OK; -} - -static void sdcr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - env->cp15.mdcr_el3 = value & SDCR_VALID_MASK; -} - -static const ARMCPRegInfo v8_cp_reginfo[] = { - /* - * Minimal set of EL0-visible registers. This will need to be expanded - * significantly for system emulation of AArch64 CPUs. 
- */ - { .name = "NZCV", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 2, - .access = PL0_RW, .type = ARM_CP_NZCV }, - { .name = "DAIF", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 4, .crm = 2, - .type = ARM_CP_NO_RAW, - .access = PL0_RW, .accessfn = aa64_daif_access, - .fieldoffset = offsetof(CPUARMState, daif), - .writefn = aa64_daif_write, .resetfn = arm_cp_reset_ignore }, - { .name = "FPCR", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 4, - .access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END, - .readfn = aa64_fpcr_read, .writefn = aa64_fpcr_write }, - { .name = "FPSR", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 4, .crm = 4, - .access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END, - .readfn = aa64_fpsr_read, .writefn = aa64_fpsr_write }, - { .name = "DCZID_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 7, .crn = 0, .crm = 0, - .access = PL0_R, .type = ARM_CP_NO_RAW, - .readfn = aa64_dczid_read }, - { .name = "DC_ZVA", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 1, - .access = PL0_W, .type = ARM_CP_DC_ZVA, -#ifndef CONFIG_USER_ONLY - /* Avoid overhead of an access check that always passes in user-mode */ - .accessfn = aa64_zva_access, -#endif - }, - { .name = "CURRENTEL", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .opc2 = 2, .crn = 4, .crm = 2, - .access = PL1_R, .type = ARM_CP_CURRENTEL }, - /* Cache ops: all NOPs since we don't emulate caches */ - { .name = "IC_IALLUIS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0, - .access = PL1_W, .type = ARM_CP_NOP, - .accessfn = aa64_cacheop_pou_access }, - { .name = "IC_IALLU", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0, - .access = PL1_W, .type = ARM_CP_NOP, - .accessfn = aa64_cacheop_pou_access }, - { .name = "IC_IVAU", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 5, .opc2 = 1, - .access = PL0_W, .type = ARM_CP_NOP, - .accessfn = aa64_cacheop_pou_access }, - { .name = "DC_IVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1, - .access = PL1_W, .accessfn = aa64_cacheop_poc_access, - .type = ARM_CP_NOP }, - { .name = "DC_ISW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2, - .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP }, - { .name = "DC_CVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1, - .access = PL0_W, .type = ARM_CP_NOP, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CSW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2, - .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP }, - { .name = "DC_CVAU", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 11, .opc2 = 1, - .access = PL0_W, .type = ARM_CP_NOP, - .accessfn = aa64_cacheop_pou_access }, - { .name = "DC_CIVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 1, - .access = PL0_W, .type = ARM_CP_NOP, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CISW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2, - .access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP }, - /* TLBI operations */ - { .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0, - .access = PL1_W, 
.accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vmalle1is_write }, - { .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1is_write }, - { .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vmalle1is_write }, - { .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1is_write }, - { .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1is_write }, - { .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1is_write }, - { .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vmalle1_write }, - { .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1_write }, - { .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vmalle1_write }, - { .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1_write }, - { .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1_write }, - { .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7, - .access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae1_write }, - { .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle1is_write }, - { .name = "TLBI_VMALLS12E1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 6, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle1is_write }, - { .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn 
= 8, .crm = 7, .opc2 = 4, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle1_write }, - { .name = "TLBI_VMALLS12E1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 6, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle1is_write }, -#ifndef CONFIG_USER_ONLY - /* 64 bit address translation operations */ - { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 0, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S1E0R", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 2, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S1E0W", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 3, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S12E1R", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 4, - .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S12E1W", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S12E0R", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 6, - .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S12E0W", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 7, - .access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - /* AT S1E2* are elsewhere as they UNDEF from EL3 if EL2 is not present */ - { .name = "AT_S1E3R", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 7, .crm = 8, .opc2 = 0, - .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S1E3W", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 7, .crm = 8, .opc2 = 1, - .access = PL3_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "PAR_EL1", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 0, .crn = 7, .crm = 4, .opc2 = 0, - .access = PL1_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.par_el[1]), - .writefn = par_write }, -#endif - /* TLB invalidate last level of translation table walk */ - { .name = "TLBIMVALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimva_is_write }, - { .name = "TLBIMVAALIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimvaa_is_write }, - { .name = "TLBIMVAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimva_write }, - { .name = "TLBIMVAAL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7, - .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb, - .writefn = tlbimvaa_write }, - { .name = "TLBIMVALH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5, - .type = ARM_CP_NO_RAW, 
.access = PL2_W, - .writefn = tlbimva_hyp_write }, - { .name = "TLBIMVALHIS", - .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbimva_hyp_is_write }, - { .name = "TLBIIPAS2", - .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1, - .type = ARM_CP_NOP, .access = PL2_W }, - { .name = "TLBIIPAS2IS", - .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1, - .type = ARM_CP_NOP, .access = PL2_W }, - { .name = "TLBIIPAS2L", - .cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL2_W }, - { .name = "TLBIIPAS2LIS", - .cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL2_W }, - /* 32 bit cache operations */ - { .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, - { .name = "BPIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 6, - .type = ARM_CP_NOP, .access = PL1_W }, - { .name = "ICIALLU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, - { .name = "ICIMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 1, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, - { .name = "BPIALL", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 6, - .type = ARM_CP_NOP, .access = PL1_W }, - { .name = "BPIMVA", .cp = 15, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 7, - .type = ARM_CP_NOP, .access = PL1_W }, - { .name = "DCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access }, - { .name = "DCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - { .name = "DCCMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 1, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access }, - { .name = "DCCSW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - { .name = "DCCMVAU", .cp = 15, .opc1 = 0, .crn = 7, .crm = 11, .opc2 = 1, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access }, - { .name = "DCCIMVAC", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 1, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_poc_access }, - { .name = "DCCISW", .cp = 15, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - /* MMU Domain access control / MPU write buffer control */ - { .name = "DACR", .cp = 15, .opc1 = 0, .crn = 3, .crm = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, .resetvalue = 0, - .writefn = dacr_write, .raw_writefn = raw_write, - .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s), - offsetoflow32(CPUARMState, cp15.dacr_ns) } }, - { .name = "ELR_EL1", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 1, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, elr_el[1]) }, - { .name = "SPSR_EL1", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 0, - .access = PL1_RW, - .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_SVC]) }, - /* - * We rely on the access checks not allowing the guest to write to the - * state field when SPSel indicates that it's being used as the stack - * pointer. 
- */ - { .name = "SP_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 1, .opc2 = 0, - .access = PL1_RW, .accessfn = sp_el0_access, - .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, sp_el[0]) }, - { .name = "SP_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 1, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, sp_el[1]) }, - { .name = "SPSel", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 2, .opc2 = 0, - .type = ARM_CP_NO_RAW, - .access = PL1_RW, .readfn = spsel_read, .writefn = spsel_write }, - { .name = "FPEXC32_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 3, .opc2 = 0, - .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, vfp.xregs[ARM_VFP_FPEXC]), - .access = PL2_RW, .accessfn = fpexc32_access }, - { .name = "DACR32_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 0, .opc2 = 0, - .access = PL2_RW, .resetvalue = 0, - .writefn = dacr_write, .raw_writefn = raw_write, - .fieldoffset = offsetof(CPUARMState, cp15.dacr32_el2) }, - { .name = "IFSR32_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 0, .opc2 = 1, - .access = PL2_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.ifsr32_el2) }, - { .name = "SPSR_IRQ", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 0, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_IRQ]) }, - { .name = "SPSR_ABT", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 1, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_ABT]) }, - { .name = "SPSR_UND", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 2, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_UND]) }, - { .name = "SPSR_FIQ", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 3, .opc2 = 3, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_FIQ]) }, - { .name = "MDCR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 3, .opc2 = 1, - .resetvalue = 0, - .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el3) }, - { .name = "SDCR", .type = ARM_CP_ALIAS, - .cp = 15, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 1, - .access = PL1_RW, .accessfn = access_trap_aa32s_el1, - .writefn = sdcr_write, - .fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) }, - REGINFO_SENTINEL -}; - -/* Used to describe the behaviour of EL2 regs when EL2 does not exist. 
*/ -static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = { - { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0, - .access = PL2_RW, - .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore }, - { .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0, - .access = PL2_RW, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0, - .access = PL2_RW, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "HMAIR1", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "VTCR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "VTTBR", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 6, .crm = 2, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 }, - { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2, - .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, 
.opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14, - .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14, - .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1, - .access = PL2_RW, .accessfn = access_tda, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "HPFAR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "HIFAR", .state = ARM_CP_STATE_AA32, - .type = ARM_CP_CONST, - .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2, - .access = PL2_RW, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -/* Ditto, but for registers which exist in ARMv8 but not v7 */ -static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = { - { .name = "HCR2", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4, - .access = PL2_RW, - .type = ARM_CP_CONST, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask) -{ - ARMCPU *cpu = env_archcpu(env); - - if (arm_feature(env, ARM_FEATURE_V8)) { - valid_mask |= MAKE_64BIT_MASK(0, 34); /* ARMv8.0 */ - } else { - valid_mask |= MAKE_64BIT_MASK(0, 28); /* ARMv7VE */ - } - - if (arm_feature(env, ARM_FEATURE_EL3)) { - valid_mask &= ~HCR_HCD; - } else if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) { - /* - * Architecturally HCR.TSC is RES0 if EL3 is not implemented. - * However, if we're using the SMC PSCI conduit then QEMU is - * effectively acting like EL3 firmware and so the guest at - * EL2 should retain the ability to prevent EL1 from being - * able to make SMC calls into the ersatz firmware, so in - * that case HCR.TSC should be read/write. - */ - valid_mask &= ~HCR_TSC; - } - - if (arm_feature(env, ARM_FEATURE_AARCH64)) { - if (cpu_isar_feature(aa64_vh, cpu)) { - valid_mask |= HCR_E2H; - } - if (cpu_isar_feature(aa64_lor, cpu)) { - valid_mask |= HCR_TLOR; - } - if (cpu_isar_feature(aa64_pauth, cpu)) { - valid_mask |= HCR_API | HCR_APK; - } - if (cpu_isar_feature(aa64_mte, cpu)) { - valid_mask |= HCR_ATA | HCR_DCT | HCR_TID5; - } - } - - /* Clear RES0 bits. 
*/ - value &= valid_mask; - - /* - * These bits change the MMU setup: - * HCR_VM enables stage 2 translation - * HCR_PTW forbids certain page-table setups - * HCR_DC disables stage1 and enables stage2 translation - * HCR_DCT enables tagging on (disabled) stage1 translation - */ - if ((env->cp15.hcr_el2 ^ value) & (HCR_VM | HCR_PTW | HCR_DC | HCR_DCT)) { - tlb_flush(CPU(cpu)); - } - env->cp15.hcr_el2 = value; - - /* - * Updates to VI and VF require us to update the status of - * virtual interrupts, which are the logical OR of these bits - * and the state of the input lines from the GIC. (This requires - * that we have the iothread lock, which is done by marking the - * reginfo structs as ARM_CP_IO.) - * Note that if a write to HCR pends a VIRQ or VFIQ it is never - * possible for it to be taken immediately, because VIRQ and - * VFIQ are masked unless running at EL0 or EL1, and HCR - * can only be written at EL2. - */ - g_assert(qemu_mutex_iothread_locked()); - arm_cpu_update_virq(cpu); - arm_cpu_update_vfiq(cpu); -} - -static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) -{ - do_hcr_write(env, value, 0); -} - -static void hcr_writehigh(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Handle HCR2 write, i.e. write to high half of HCR_EL2 */ - value = deposit64(env->cp15.hcr_el2, 32, 32, value); - do_hcr_write(env, value, MAKE_64BIT_MASK(0, 32)); -} - -static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Handle HCR write, i.e. write to low half of HCR_EL2 */ - value = deposit64(env->cp15.hcr_el2, 0, 32, value); - do_hcr_write(env, value, MAKE_64BIT_MASK(32, 32)); -} - -/* - * Return the effective value of HCR_EL2. - * Bits that are not included here: - * RW (read from SCR_EL3.RW as needed) - */ -uint64_t arm_hcr_el2_eff(CPUARMState *env) -{ - uint64_t ret = env->cp15.hcr_el2; - - if (!arm_is_el2_enabled(env)) { - /* - * "This register has no effect if EL2 is not enabled in the - * current Security state". This is ARMv8.4-SecEL2 speak for - * !(SCR_EL3.NS==1 || SCR_EL3.EEL2==1). - * - * Prior to that, the language was "In an implementation that - * includes EL3, when the value of SCR_EL3.NS is 0 the PE behaves - * as if this field is 0 for all purposes other than a direct - * read or write access of HCR_EL2". With lots of enumeration - * on a per-field basis. In current QEMU, this condition - * is arm_is_secure_below_el3. - * - * Since the v8.4 language applies to the entire register, and - * appears to be backward compatible, use that. - */ - return 0; - } - - /* - * For a cpu that supports both aarch64 and aarch32, we can set bits - * in HCR_EL2 (e.g. via EL3) that are RES0 when we enter EL2 as aa32. - * Ignore all of the bits in HCR+HCR2 that are not valid for aarch32. - */ - if (!arm_el_is_aa64(env, 2)) { - uint64_t aa32_valid; - - /* - * These bits are up-to-date as of ARMv8.6. - * For HCR, it's easiest to list just the 2 bits that are invalid. - * For HCR2, list those that are valid. - */ - aa32_valid = MAKE_64BIT_MASK(0, 32) & ~(HCR_RW | HCR_TDZ); - aa32_valid |= (HCR_CD | HCR_ID | HCR_TERR | HCR_TEA | HCR_MIOCNCE | - HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_TTLBIS); - ret &= aa32_valid; - } - - if (ret & HCR_TGE) { - /* These bits are up-to-date as of ARMv8.6.
*/ - if (ret & HCR_E2H) { - ret &= ~(HCR_VM | HCR_FMO | HCR_IMO | HCR_AMO | - HCR_BSU_MASK | HCR_DC | HCR_TWI | HCR_TWE | - HCR_TID0 | HCR_TID2 | HCR_TPCP | HCR_TPU | - HCR_TDZ | HCR_CD | HCR_ID | HCR_MIOCNCE | - HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_ENSCXT | - HCR_TTLBIS | HCR_TTLBOS | HCR_TID5); - } else { - ret |= HCR_FMO | HCR_IMO | HCR_AMO; - } - ret &= ~(HCR_SWIO | HCR_PTW | HCR_VF | HCR_VI | HCR_VSE | - HCR_FB | HCR_TID1 | HCR_TID3 | HCR_TSC | HCR_TACR | - HCR_TSW | HCR_TTLB | HCR_TVM | HCR_HCD | HCR_TRVM | - HCR_TLOR); - } - - return ret; -} - -static void cptr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * For A-profile AArch32 EL3, if NSACR.CP10 - * is 0 then HCPTR.{TCP11,TCP10} ignore writes and read as 1. - */ - if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && - !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) { - value &= ~(0x3 << 10); - value |= env->cp15.cptr_el[2] & (0x3 << 10); - } - env->cp15.cptr_el[2] = value; -} - -static uint64_t cptr_el2_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* - * For A-profile AArch32 EL3, if NSACR.CP10 - * is 0 then HCPTR.{TCP11,TCP10} ignore writes and read as 1. - */ - uint64_t value = env->cp15.cptr_el[2]; - - if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && - !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) { - value |= 0x3 << 10; - } - return value; -} - -static const ARMCPRegInfo el2_cp_reginfo[] = { - { .name = "HCR_EL2", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_IO, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0, - .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2), - .writefn = hcr_write }, - { .name = "HCR", .state = ARM_CP_STATE_AA32, - .type = ARM_CP_ALIAS | ARM_CP_IO, - .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0, - .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2), - .writefn = hcr_writelow }, - { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7, - .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "ELR_EL2", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, elr_el[2]) }, - { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0, - .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.esr_el[2]) }, - { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0, - .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.far_el[2]) }, - { .name = "HIFAR", .state = ARM_CP_STATE_AA32, - .type = ARM_CP_ALIAS, - .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2, - .access = PL2_RW, - .fieldoffset = offsetofhigh32(CPUARMState, cp15.far_el[2]) }, - { .name = "SPSR_EL2", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 0, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_HYP]) }, - { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0, - .access = PL2_RW, .writefn = vbar_write, - .fieldoffset = offsetof(CPUARMState, cp15.vbar_el[2]), - .resetvalue = 0 }, - { .name = "SP_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 4, .crm = 1, .opc2 = 0, - .access = PL3_RW, .type = ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, sp_el[2]) }, - { .name = "CPTR_EL2", 
.state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2, - .access = PL2_RW, .accessfn = cptr_access, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.cptr_el[2]), - .readfn = cptr_el2_read, .writefn = cptr_el2_write }, - { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0, - .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[2]), - .resetvalue = 0 }, - { .name = "HMAIR1", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_ALIAS, - .fieldoffset = offsetofhigh32(CPUARMState, cp15.mair_el[2]) }, - { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - /* HAMAIR1 is mapped to AMAIR_EL2[63:32] */ - { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2, - .access = PL2_RW, .writefn = vmsa_tcr_el12_write, - /* no .raw_writefn or .resetfn needed as we never use mask/base_mask */ - .fieldoffset = offsetof(CPUARMState, cp15.tcr_el[2]) }, - { .name = "VTCR", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2, - .type = ARM_CP_ALIAS, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .fieldoffset = offsetof(CPUARMState, cp15.vtcr_el2) }, - { .name = "VTCR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2, - .access = PL2_RW, - /* - * no .writefn needed as this can't cause an ASID change; - * no .raw_writefn or .resetfn needed as we never use mask/base_mask - */ - .fieldoffset = offsetof(CPUARMState, cp15.vtcr_el2) }, - { .name = "VTTBR", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 6, .crm = 2, - .type = ARM_CP_64BIT | ARM_CP_ALIAS, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .fieldoffset = offsetof(CPUARMState, cp15.vttbr_el2), - .writefn = vttbr_write }, - { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0, - .access = PL2_RW, .writefn = vttbr_write, - .fieldoffset = offsetof(CPUARMState, cp15.vttbr_el2) }, - { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0, - .access = PL2_RW, .raw_writefn = raw_write, .writefn = sctlr_write, - .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[2]) }, - { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2, - .access = PL2_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[2]) }, - { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0, - .access = PL2_RW, .resetvalue = 0, .writefn = vmsa_tcr_ttbr_el2_write, - .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) }, - { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2, - .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, 
cp15.ttbr0_el[2]) }, - { .name = "TLBIALLNSNH", - .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbiall_nsnh_write }, - { .name = "TLBIALLNSNHIS", - .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbiall_nsnh_is_write }, - { .name = "TLBIALLH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbiall_hyp_write }, - { .name = "TLBIALLHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbiall_hyp_is_write }, - { .name = "TLBIMVAH", .cp = 15, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbimva_hyp_write }, - { .name = "TLBIMVAHIS", .cp = 15, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbimva_hyp_is_write }, - { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbi_aa64_alle2_write }, - { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbi_aa64_vae2_write }, - { .name = "TLBI_VALE2", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae2_write }, - { .name = "TLBI_ALLE2IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 0, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle2is_write }, - { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1, - .type = ARM_CP_NO_RAW, .access = PL2_W, - .writefn = tlbi_aa64_vae2is_write }, - { .name = "TLBI_VALE2IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae2is_write }, -#ifndef CONFIG_USER_ONLY - /* - * Unlike the other EL2-related AT operations, these must - * UNDEF from EL3 if EL2 is not implemented, which is why we - * define them here rather than with the rest of the AT ops. - */ - { .name = "AT_S1E2R", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0, - .access = PL2_W, .accessfn = at_s1e2_access, - .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 }, - { .name = "AT_S1E2W", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1, - .access = PL2_W, .accessfn = at_s1e2_access, - .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, .writefn = ats_write64 }, - /* - * The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE - * if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3 - * with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose - * to behave as if SCR.NS was 1. - */ - { .name = "ATS1HR", .cp = 15, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 0, - .access = PL2_W, - .writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC }, - { .name = "ATS1HW", .cp = 15, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 1, - .access = PL2_W, - .writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC }, - { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0, - /* - * ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the - * reset values as IMPDEF. 
We choose to reset to 3 to comply with - * both ARMv7 and ARMv8. - */ - .access = PL2_RW, .resetvalue = 3, - .fieldoffset = offsetof(CPUARMState, cp15.cnthctl_el2) }, - { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3, - .access = PL2_RW, .type = ARM_CP_IO, .resetvalue = 0, - .writefn = gt_cntvoff_write, - .fieldoffset = offsetof(CPUARMState, cp15.cntvoff_el2) }, - { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14, - .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS | ARM_CP_IO, - .writefn = gt_cntvoff_write, - .fieldoffset = offsetof(CPUARMState, cp15.cntvoff_el2) }, - { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYP].cval), - .type = ARM_CP_IO, .access = PL2_RW, - .writefn = gt_hyp_cval_write, .raw_writefn = raw_write }, - { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYP].cval), - .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_IO, - .writefn = gt_hyp_cval_write, .raw_writefn = raw_write }, - { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL2_RW, - .resetfn = gt_hyp_timer_reset, - .readfn = gt_hyp_tval_read, .writefn = gt_hyp_tval_write }, - { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH, - .type = ARM_CP_IO, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYP].ctl), - .resetvalue = 0, - .writefn = gt_hyp_ctl_write, .raw_writefn = raw_write }, -#endif - /* The only field of MDCR_EL2 that has a defined architectural reset value - * is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N. 
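/*
 * A minimal standalone sketch of the relationship CNTVOFF_EL2 expresses
 * (the helper name is illustrative, not a QEMU function): the virtual
 * counter is the physical counter minus the hypervisor-programmed offset,
 * which is why the CNTVOFF registers above are ARM_CP_IO and use
 * gt_cntvoff_write rather than a plain raw write.
 */
#include <stdint.h>

static inline uint64_t cntvct_from_cntpct(uint64_t cntpct, uint64_t cntvoff)
{
    return cntpct - cntvoff;    /* CNTVCT_EL0 = CNTPCT_EL0 - CNTVOFF_EL2 */
}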
- */ - { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1, - .access = PL2_RW, .resetvalue = PMCR_NUM_COUNTERS, - .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2), }, - { .name = "HPFAR", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .fieldoffset = offsetof(CPUARMState, cp15.hpfar_el2) }, - { .name = "HPFAR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, cp15.hpfar_el2) }, - { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH, - .cp = 15, .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, cp15.hstr_el2) }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo el2_v8_cp_reginfo[] = { - { .name = "HCR2", .state = ARM_CP_STATE_AA32, - .type = ARM_CP_ALIAS | ARM_CP_IO, - .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4, - .access = PL2_RW, - .fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2), - .writefn = hcr_writehigh }, - REGINFO_SENTINEL -}; - -static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 3 || arm_is_secure_below_el3(env)) { - return CP_ACCESS_OK; - } - return CP_ACCESS_TRAP_UNCATEGORIZED; -} - -static const ARMCPRegInfo el2_sec_cp_reginfo[] = { - { .name = "VSTTBR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 0, - .access = PL2_RW, .accessfn = sel2_access, - .fieldoffset = offsetof(CPUARMState, cp15.vsttbr_el2) }, - { .name = "VSTCR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2, - .access = PL2_RW, .accessfn = sel2_access, - .fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) }, - REGINFO_SENTINEL -}; - -static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - /* - * The NSACR is RW at EL3, and RO for NS EL1 and NS EL2. - * At Secure EL1 it traps to EL3 or EL2. - */ - if (arm_current_el(env) == 3) { - return CP_ACCESS_OK; - } - if (arm_is_secure_below_el3(env)) { - if (env->cp15.scr_el3 & SCR_EEL2) { - return CP_ACCESS_TRAP_EL2; - } - return CP_ACCESS_TRAP_EL3; - } - /* Accesses from EL1 NS and EL2 NS are UNDEF for write but allow reads. 
*/ - if (isread) { - return CP_ACCESS_OK; - } - return CP_ACCESS_TRAP_UNCATEGORIZED; -} - -static const ARMCPRegInfo el3_cp_reginfo[] = { - { .name = "SCR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 0, - .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.scr_el3), - .resetfn = scr_reset, .writefn = scr_write }, - { .name = "SCR", .type = ARM_CP_ALIAS | ARM_CP_NEWEL, - .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 0, - .access = PL1_RW, .accessfn = access_trap_aa32s_el1, - .fieldoffset = offsetoflow32(CPUARMState, cp15.scr_el3), - .writefn = scr_write }, - { .name = "SDER32_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 1, - .access = PL3_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.sder) }, - { .name = "SDER", - .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 1, - .access = PL3_RW, .resetvalue = 0, - .fieldoffset = offsetoflow32(CPUARMState, cp15.sder) }, - { .name = "MVBAR", .cp = 15, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_trap_aa32s_el1, - .writefn = vbar_write, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.mvbar) }, - { .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0, - .access = PL3_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) }, - { .name = "TCR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2, - .access = PL3_RW, - /* - * no .writefn needed as this can't cause an ASID change; - * we must provide a .raw_writefn and .resetfn because we handle - * reset and migration for the AArch32 TTBCR(S), which might be - * using mask and base_mask. - */ - .resetfn = vmsa_ttbcr_reset, .raw_writefn = vmsa_ttbcr_raw_write, - .fieldoffset = offsetof(CPUARMState, cp15.tcr_el[3]) }, - { .name = "ELR_EL3", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 6, .crn = 4, .crm = 0, .opc2 = 1, - .access = PL3_RW, - .fieldoffset = offsetof(CPUARMState, elr_el[3]) }, - { .name = "ESR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 2, .opc2 = 0, - .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.esr_el[3]) }, - { .name = "FAR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 6, .crm = 0, .opc2 = 0, - .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.far_el[3]) }, - { .name = "SPSR_EL3", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_ALIAS, - .opc0 = 3, .opc1 = 6, .crn = 4, .crm = 0, .opc2 = 0, - .access = PL3_RW, - .fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_MON]) }, - { .name = "VBAR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 12, .crm = 0, .opc2 = 0, - .access = PL3_RW, .writefn = vbar_write, - .fieldoffset = offsetof(CPUARMState, cp15.vbar_el[3]), - .resetvalue = 0 }, - { .name = "CPTR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 2, - .access = PL3_RW, .accessfn = cptr_access, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.cptr_el[3]) }, - { .name = "TPIDR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 13, .crm = 0, .opc2 = 2, - .access = PL3_RW, .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[3]) }, - { .name = "AMAIR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 10, .crm = 3, .opc2 = 0, - .access = PL3_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "AFSR0_EL3", .state = 
ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 1, .opc2 = 0, - .access = PL3_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "AFSR1_EL3", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 1, .opc2 = 1, - .access = PL3_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "TLBI_ALLE3IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 0, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle3is_write }, - { .name = "TLBI_VAE3IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 1, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae3is_write }, - { .name = "TLBI_VALE3IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 3, .opc2 = 5, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae3is_write }, - { .name = "TLBI_ALLE3", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 0, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle3_write }, - { .name = "TLBI_VAE3", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 1, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae3_write }, - { .name = "TLBI_VALE3", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vae3_write }, - REGINFO_SENTINEL -}; - -#ifndef CONFIG_USER_ONLY -/* Test if system register redirection is to occur in the current state. */ -static bool redirect_for_e2h(CPUARMState *env) -{ - return arm_current_el(env) == 2 && (arm_hcr_el2_eff(env) & HCR_E2H); -} - -static uint64_t el2_e2h_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - CPReadFn *readfn; - - if (redirect_for_e2h(env)) { - /* Switch to the saved EL2 version of the register. */ - ri = ri->opaque; - readfn = ri->readfn; - } else { - readfn = ri->orig_readfn; - } - if (readfn == NULL) { - readfn = raw_read; - } - return readfn(env, ri); -} - -static void el2_e2h_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - CPWriteFn *writefn; - - if (redirect_for_e2h(env)) { - /* Switch to the saved EL2 version of the register. 
*/ - ri = ri->opaque; - writefn = ri->writefn; - } else { - writefn = ri->orig_writefn; - } - if (writefn == NULL) { - writefn = raw_write; - } - writefn(env, ri, value); -} - -static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu) -{ - struct E2HAlias { - uint32_t src_key, dst_key, new_key; - const char *src_name, *dst_name, *new_name; - bool (*feature)(const ARMISARegisters *id); - }; - -#define K(op0, op1, crn, crm, op2) \ - ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2) - - static const struct E2HAlias aliases[] = { - { K(3, 0, 1, 0, 0), K(3, 4, 1, 0, 0), K(3, 5, 1, 0, 0), - "SCTLR", "SCTLR_EL2", "SCTLR_EL12" }, - { K(3, 0, 1, 0, 2), K(3, 4, 1, 1, 2), K(3, 5, 1, 0, 2), - "CPACR", "CPTR_EL2", "CPACR_EL12" }, - { K(3, 0, 2, 0, 0), K(3, 4, 2, 0, 0), K(3, 5, 2, 0, 0), - "TTBR0_EL1", "TTBR0_EL2", "TTBR0_EL12" }, - { K(3, 0, 2, 0, 1), K(3, 4, 2, 0, 1), K(3, 5, 2, 0, 1), - "TTBR1_EL1", "TTBR1_EL2", "TTBR1_EL12" }, - { K(3, 0, 2, 0, 2), K(3, 4, 2, 0, 2), K(3, 5, 2, 0, 2), - "TCR_EL1", "TCR_EL2", "TCR_EL12" }, - { K(3, 0, 4, 0, 0), K(3, 4, 4, 0, 0), K(3, 5, 4, 0, 0), - "SPSR_EL1", "SPSR_EL2", "SPSR_EL12" }, - { K(3, 0, 4, 0, 1), K(3, 4, 4, 0, 1), K(3, 5, 4, 0, 1), - "ELR_EL1", "ELR_EL2", "ELR_EL12" }, - { K(3, 0, 5, 1, 0), K(3, 4, 5, 1, 0), K(3, 5, 5, 1, 0), - "AFSR0_EL1", "AFSR0_EL2", "AFSR0_EL12" }, - { K(3, 0, 5, 1, 1), K(3, 4, 5, 1, 1), K(3, 5, 5, 1, 1), - "AFSR1_EL1", "AFSR1_EL2", "AFSR1_EL12" }, - { K(3, 0, 5, 2, 0), K(3, 4, 5, 2, 0), K(3, 5, 5, 2, 0), - "ESR_EL1", "ESR_EL2", "ESR_EL12" }, - { K(3, 0, 6, 0, 0), K(3, 4, 6, 0, 0), K(3, 5, 6, 0, 0), - "FAR_EL1", "FAR_EL2", "FAR_EL12" }, - { K(3, 0, 10, 2, 0), K(3, 4, 10, 2, 0), K(3, 5, 10, 2, 0), - "MAIR_EL1", "MAIR_EL2", "MAIR_EL12" }, - { K(3, 0, 10, 3, 0), K(3, 4, 10, 3, 0), K(3, 5, 10, 3, 0), - "AMAIR0", "AMAIR_EL2", "AMAIR_EL12" }, - { K(3, 0, 12, 0, 0), K(3, 4, 12, 0, 0), K(3, 5, 12, 0, 0), - "VBAR", "VBAR_EL2", "VBAR_EL12" }, - { K(3, 0, 13, 0, 1), K(3, 4, 13, 0, 1), K(3, 5, 13, 0, 1), - "CONTEXTIDR_EL1", "CONTEXTIDR_EL2", "CONTEXTIDR_EL12" }, - { K(3, 0, 14, 1, 0), K(3, 4, 14, 1, 0), K(3, 5, 14, 1, 0), - "CNTKCTL", "CNTHCTL_EL2", "CNTKCTL_EL12" }, - - /* - * Note that redirection of ZCR is mentioned in the description - * of ZCR_EL2, and aliasing in the description of ZCR_EL1, but - * not in the summary table. - */ - { K(3, 0, 1, 2, 0), K(3, 4, 1, 2, 0), K(3, 5, 1, 2, 0), - "ZCR_EL1", "ZCR_EL2", "ZCR_EL12", isar_feature_aa64_sve }, - - { K(3, 0, 5, 6, 0), K(3, 4, 5, 6, 0), K(3, 5, 5, 6, 0), - "TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte }, - - /* TODO: ARMv8.2-SPE -- PMSCR_EL2 */ - /* TODO: ARMv8.4-Trace -- TRFCR_EL2 */ - }; -#undef K - - size_t i; - - for (i = 0; i < ARRAY_SIZE(aliases); i++) { - const struct E2HAlias *a = &aliases[i]; - ARMCPRegInfo *src_reg, *dst_reg; - - if (a->feature && !a->feature(&cpu->isar)) { - continue; - } - - src_reg = g_hash_table_lookup(cpu->cp_regs, &a->src_key); - dst_reg = g_hash_table_lookup(cpu->cp_regs, &a->dst_key); - g_assert(src_reg != NULL); - g_assert(dst_reg != NULL); - - /* Cross-compare names to detect typos in the keys. */ - g_assert(strcmp(src_reg->name, a->src_name) == 0); - g_assert(strcmp(dst_reg->name, a->dst_name) == 0); - - /* None of the core system registers use opaque; we will. */ - g_assert(src_reg->opaque == NULL); - - /* Create alias before redirection so we dup the right data. 
*/ - if (a->new_key) { - ARMCPRegInfo *new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo)); - uint32_t *new_key = g_memdup(&a->new_key, sizeof(uint32_t)); - bool ok; - - new_reg->name = a->new_name; - new_reg->type |= ARM_CP_ALIAS; - /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */ - new_reg->access &= PL2_RW | PL3_RW; - - ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg); - g_assert(ok); - } - - src_reg->opaque = dst_reg; - src_reg->orig_readfn = src_reg->readfn ?: raw_read; - src_reg->orig_writefn = src_reg->writefn ?: raw_write; - if (!src_reg->raw_readfn) { - src_reg->raw_readfn = raw_read; - } - if (!src_reg->raw_writefn) { - src_reg->raw_writefn = raw_write; - } - src_reg->readfn = el2_e2h_read; - src_reg->writefn = el2_e2h_write; - } -} -#endif - -static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - int cur_el = arm_current_el(env); - - if (cur_el < 2) { - uint64_t hcr = arm_hcr_el2_eff(env); - - if (cur_el == 0) { - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - if (!(env->cp15.sctlr_el[2] & SCTLR_UCT)) { - return CP_ACCESS_TRAP_EL2; - } - } else { - if (!(env->cp15.sctlr_el[1] & SCTLR_UCT)) { - return CP_ACCESS_TRAP; - } - if (hcr & HCR_TID2) { - return CP_ACCESS_TRAP_EL2; - } - } - } else if (hcr & HCR_TID2) { - return CP_ACCESS_TRAP_EL2; - } - } - - if (arm_current_el(env) < 2 && arm_hcr_el2_eff(env) & HCR_TID2) { - return CP_ACCESS_TRAP_EL2; - } - - return CP_ACCESS_OK; -} - -static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* - * Writes to OSLAR_EL1 may update the OS lock status, which can be - * read via a bit in OSLSR_EL1. - */ - int oslock; - - if (ri->state == ARM_CP_STATE_AA32) { - oslock = (value == 0xC5ACCE55); - } else { - oslock = value & 1; - } - - env->cp15.oslsr_el1 = deposit32(env->cp15.oslsr_el1, 1, 1, oslock); -} - -static const ARMCPRegInfo debug_cp_reginfo[] = { - /* - * DBGDRAR, DBGDSAR: always RAZ since we don't implement memory mapped - * debug components. The AArch64 version of DBGDRAR is named MDRAR_EL1; - * unlike DBGDRAR it is never accessible from EL0. - * DBGDSAR is deprecated and must RAZ from v8 anyway, so it has no AArch64 - * accessor. - */ - { .name = "DBGDRAR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL0_R, .accessfn = access_tdra, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "MDRAR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0, - .access = PL1_R, .accessfn = access_tdra, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "DBGDSAR", .cp = 14, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL0_R, .accessfn = access_tdra, - .type = ARM_CP_CONST, .resetvalue = 0 }, - /* Monitor debug system control register; the 32-bit alias is DBGDSCRext. */ - { .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2, - .access = PL1_RW, .accessfn = access_tda, - .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1), - .resetvalue = 0 }, - /* - * MDCCSR_EL0, aka DBGDSCRint. This is a read-only mirror of MDSCR_EL1. - * We don't implement the configurable EL0 access. 
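/*
 * A minimal standalone sketch of the OS lock update performed by
 * oslar_write above (the helper name is illustrative, not QEMU API):
 * AArch32 software locks by writing the key value 0xC5ACCE55, AArch64
 * software writes bit 0, and the result lands in OSLSR_EL1.OSLK (bit 1).
 */
#include <stdbool.h>
#include <stdint.h>

static uint32_t oslsr_after_oslar_write(uint32_t oslsr, uint64_t value, bool aa32)
{
    uint32_t oslk = aa32 ? (value == 0xC5ACCE55) : (uint32_t)(value & 1);

    /* deposit the lock bit into OSLSR_EL1[1], leaving the other bits alone */
    return (oslsr & ~(1u << 1)) | (oslk << 1);
}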
- */ - { .name = "MDCCSR_EL0", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 0, - .type = ARM_CP_ALIAS, - .access = PL1_R, .accessfn = access_tda, - .fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1), }, - { .name = "OSLAR_EL1", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 4, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .accessfn = access_tdosa, - .writefn = oslar_write }, - { .name = "OSLSR_EL1", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 4, - .access = PL1_R, .resetvalue = 10, - .accessfn = access_tdosa, - .fieldoffset = offsetof(CPUARMState, cp15.oslsr_el1) }, - /* Dummy OSDLR_EL1: 32-bit Linux will read this */ - { .name = "OSDLR_EL1", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4, - .access = PL1_RW, .accessfn = access_tdosa, - .type = ARM_CP_NOP }, - /* - * Dummy DBGVCR: Linux wants to clear this on startup, but we don't - * implement vector catch debug events yet. - */ - { .name = "DBGVCR", - .cp = 14, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tda, - .type = ARM_CP_NOP }, - /* - * Dummy DBGVCR32_EL2 (which is only for a 64-bit hypervisor - * to save and restore a 32-bit guest's DBGVCR) - */ - { .name = "DBGVCR32_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 2, .opc1 = 4, .crn = 0, .crm = 7, .opc2 = 0, - .access = PL2_RW, .accessfn = access_tda, - .type = ARM_CP_NOP }, - /* - * Dummy MDCCINT_EL1, since we don't implement the Debug Communications - * Channel but Linux may try to access this register. The 32-bit - * alias is DBGDCCINT. - */ - { .name = "MDCCINT_EL1", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tda, - .type = ARM_CP_NOP }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo debug_lpae_cp_reginfo[] = { - /* 64 bit access versions of the (dummy) debug registers */ - { .name = "DBGDRAR", .cp = 14, .crm = 1, .opc1 = 0, - .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 }, - { .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0, - .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -/* Return the exception level to which exceptions should be taken - * via SVEAccessTrap. If an exception should be routed through - * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should - * take care of raising that exception. - * C.f. the ARM pseudocode function CheckSVEEnabled. - */ -int sve_exception_el(CPUARMState *env, int el) -{ -#ifndef CONFIG_USER_ONLY - uint64_t hcr_el2 = arm_hcr_el2_eff(env); - - if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { - bool disabled = false; - - /* The CPACR.ZEN controls traps to EL1: - * 0, 2 : trap EL0 and EL1 accesses - * 1 : trap only EL0 accesses - * 3 : trap no accesses - */ - if (!extract32(env->cp15.cpacr_el1, 16, 1)) { - disabled = true; - } else if (!extract32(env->cp15.cpacr_el1, 17, 1)) { - disabled = el == 0; - } - if (disabled) { - /* route_to_el2 */ - return hcr_el2 & HCR_TGE ? 2 : 1; - } - - /* Check CPACR.FPEN. */ - if (!extract32(env->cp15.cpacr_el1, 20, 1)) { - disabled = true; - } else if (!extract32(env->cp15.cpacr_el1, 21, 1)) { - disabled = el == 0; - } - if (disabled) { - return 0; - } - } - - /* CPTR_EL2. Since TZ and TFP are positive, - * they will be zero when EL2 is not present. 
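/*
 * A minimal standalone sketch of the CPACR_EL1.ZEN decode enumerated in
 * the comment above (illustrative only, not the QEMU helper itself):
 * ZEN is bits [17:16] and selects which exception levels have their SVE
 * accesses trapped to EL1.
 */
#include <stdbool.h>
#include <stdint.h>

static bool zen_traps_sve_access(uint64_t cpacr_el1, int el)
{
    switch ((cpacr_el1 >> 16) & 3) {
    case 0:
    case 2:
        return true;        /* trap EL0 and EL1 accesses */
    case 1:
        return el == 0;     /* trap only EL0 accesses */
    default:
        return false;       /* 3: trap no accesses */
    }
}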
- */ - if (el <= 2 && arm_is_el2_enabled(env)) { - if (env->cp15.cptr_el[2] & CPTR_TZ) { - return 2; - } - if (env->cp15.cptr_el[2] & CPTR_TFP) { - return 0; - } - } - - /* CPTR_EL3. Since EZ is negative we must check for EL3. */ - if (arm_feature(env, ARM_FEATURE_EL3) - && !(env->cp15.cptr_el[3] & CPTR_EZ)) { - return 3; - } -#endif - return 0; -} - -static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) -{ - uint32_t end_len; - - end_len = start_len &= 0xf; - if (!test_bit(start_len, cpu->sve_vq_map)) { - end_len = find_last_bit(cpu->sve_vq_map, start_len); - assert(end_len < start_len); - } - return end_len; -} - -/* - * Given that SVE is enabled, return the vector length for EL. - */ -uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) -{ - ARMCPU *cpu = env_archcpu(env); - uint32_t zcr_len = cpu->sve_max_vq - 1; - - if (el <= 1) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]); - } - if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]); - } - if (arm_feature(env, ARM_FEATURE_EL3)) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]); - } - - return sve_zcr_get_valid_len(cpu, zcr_len); -} - -static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - int cur_el = arm_current_el(env); - int old_len = sve_zcr_len_for_el(env, cur_el); - int new_len; - - /* Bits other than [3:0] are RAZ/WI. */ - QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16); - raw_write(env, ri, value & 0xf); - - /* - * Because we arrived here, we know both FP and SVE are enabled; - * otherwise we would have trapped access to the ZCR_ELn register. - */ - new_len = sve_zcr_len_for_el(env, cur_el); - if (new_len < old_len) { - aarch64_sve_narrow_vq(env, new_len + 1); - } -} - -static const ARMCPRegInfo zcr_el1_reginfo = { - .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0, - .access = PL1_RW, .type = ARM_CP_SVE, - .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]), - .writefn = zcr_write, .raw_writefn = raw_write -}; - -static const ARMCPRegInfo zcr_el2_reginfo = { - .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_SVE, - .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]), - .writefn = zcr_write, .raw_writefn = raw_write -}; - -static const ARMCPRegInfo zcr_no_el2_reginfo = { - .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0, - .access = PL2_RW, .type = ARM_CP_SVE, - .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore -}; - -static const ARMCPRegInfo zcr_el3_reginfo = { - .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0, - .access = PL3_RW, .type = ARM_CP_SVE, - .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]), - .writefn = zcr_write, .raw_writefn = raw_write -}; - -void hw_watchpoint_update(ARMCPU *cpu, int n) -{ - CPUARMState *env = &cpu->env; - vaddr len = 0; - vaddr wvr = env->cp15.dbgwvr[n]; - uint64_t wcr = env->cp15.dbgwcr[n]; - int mask; - int flags = BP_CPU | BP_STOP_BEFORE_ACCESS; - - if (env->cpu_watchpoint[n]) { - cpu_watchpoint_remove_by_ref(CPU(cpu), env->cpu_watchpoint[n]); - env->cpu_watchpoint[n] = NULL; - } - - if (!extract64(wcr, 0, 1)) { - /* E bit clear : watchpoint disabled */ - return; - } - - switch (extract64(wcr, 3, 2)) { - case 0: - /* LSC 00 is reserved and must behave as if the wp is disabled */ - return; - case 1: - flags |= 
BP_MEM_READ; - break; - case 2: - flags |= BP_MEM_WRITE; - break; - case 3: - flags |= BP_MEM_ACCESS; - break; - } - - /* Attempts to use both MASK and BAS fields simultaneously are - * CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case, - * thus generating a watchpoint for every byte in the masked region. - */ - mask = extract64(wcr, 24, 4); - if (mask == 1 || mask == 2) { - /* Reserved values of MASK; we must act as if the mask value was - * some non-reserved value, or as if the watchpoint were disabled. - * We choose the latter. - */ - return; - } else if (mask) { - /* Watchpoint covers an aligned area up to 2GB in size */ - len = 1ULL << mask; - /* If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE - * whether the watchpoint fires when the unmasked bits match; we opt - * to generate the exceptions. - */ - wvr &= ~(len - 1); - } else { - /* Watchpoint covers bytes defined by the byte address select bits */ - int bas = extract64(wcr, 5, 8); - int basstart; - - if (extract64(wvr, 2, 1)) { - /* Deprecated case of an only 4-aligned address. BAS[7:4] are - * ignored, and BAS[3:0] define which bytes to watch. - */ - bas &= 0xf; - } - - if (bas == 0) { - /* This must act as if the watchpoint is disabled */ - return; - } - - /* The BAS bits are supposed to be programmed to indicate a contiguous - * range of bytes. Otherwise it is CONSTRAINED UNPREDICTABLE whether - * we fire for each byte in the word/doubleword addressed by the WVR. - * We choose to ignore any non-zero bits after the first range of 1s. - */ - basstart = ctz32(bas); - len = cto32(bas >> basstart); - wvr += basstart; - } - - cpu_watchpoint_insert(CPU(cpu), wvr, len, flags, - &env->cpu_watchpoint[n]); -} - -void hw_watchpoint_update_all(ARMCPU *cpu) -{ - int i; - CPUARMState *env = &cpu->env; - - /* Completely clear out existing QEMU watchpoints and our array, to - * avoid possible stale entries following migration load. - */ - cpu_watchpoint_remove_all(CPU(cpu), BP_CPU); - memset(env->cpu_watchpoint, 0, sizeof(env->cpu_watchpoint)); - - for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_watchpoint); i++) { - hw_watchpoint_update(cpu, i); - } -} - -static void dbgwvr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - int i = ri->crm; - - /* - * Bits [63:49] are hardwired to the value of bit [48]; that is, the - * register reads and behaves as if values written are sign extended. - * Bits [1:0] are RES0. 
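/*
 * A minimal standalone sketch of the byte-address-select decode described
 * above, written with plain loops instead of QEMU's ctz32()/cto32()
 * helpers (name and signature are illustrative): the position of the
 * lowest set BAS bit is added to DBGWVR, and the length of the first
 * contiguous run of 1s gives the number of bytes watched; any set bits
 * after that run are ignored, matching the choice made above.
 */
static void bas_decode(unsigned bas, unsigned *offset, unsigned *len)
{
    unsigned start = 0, n = 0;

    while (start < 8 && !(bas & (1u << start))) {
        start++;                            /* count trailing zeros */
    }
    while (start + n < 8 && (bas & (1u << (start + n)))) {
        n++;                                /* length of the first run of 1s */
    }
    *offset = start;    /* byte offset added to the watchpoint address */
    *len = n;           /* zero means the watchpoint is effectively disabled */
}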
- */ - value = sextract64(value, 0, 49) & ~3ULL; - - raw_write(env, ri, value); - hw_watchpoint_update(cpu, i); -} - -static void dbgwcr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - int i = ri->crm; - - raw_write(env, ri, value); - hw_watchpoint_update(cpu, i); -} - -void hw_breakpoint_update(ARMCPU *cpu, int n) -{ - CPUARMState *env = &cpu->env; - uint64_t bvr = env->cp15.dbgbvr[n]; - uint64_t bcr = env->cp15.dbgbcr[n]; - vaddr addr; - int bt; - int flags = BP_CPU; - - if (env->cpu_breakpoint[n]) { - cpu_breakpoint_remove_by_ref(CPU(cpu), env->cpu_breakpoint[n]); - env->cpu_breakpoint[n] = NULL; - } - - if (!extract64(bcr, 0, 1)) { - /* E bit clear : watchpoint disabled */ - return; - } - - bt = extract64(bcr, 20, 4); - - switch (bt) { - case 4: /* unlinked address mismatch (reserved if AArch64) */ - case 5: /* linked address mismatch (reserved if AArch64) */ - qemu_log_mask(LOG_UNIMP, - "arm: address mismatch breakpoint types not implemented\n"); - return; - case 0: /* unlinked address match */ - case 1: /* linked address match */ - { - /* Bits [63:49] are hardwired to the value of bit [48]; that is, - * we behave as if the register was sign extended. Bits [1:0] are - * RES0. The BAS field is used to allow setting breakpoints on 16 - * bit wide instructions; it is CONSTRAINED UNPREDICTABLE whether - * a bp will fire if the addresses covered by the bp and the addresses - * covered by the insn overlap but the insn doesn't start at the - * start of the bp address range. We choose to require the insn and - * the bp to have the same address. The constraints on writing to - * BAS enforced in dbgbcr_write mean we have only four cases: - * 0b0000 => no breakpoint - * 0b0011 => breakpoint on addr - * 0b1100 => breakpoint on addr + 2 - * 0b1111 => breakpoint on addr - * See also figure D2-3 in the v8 ARM ARM (DDI0487A.c). - */ - int bas = extract64(bcr, 5, 4); - addr = sextract64(bvr, 0, 49) & ~3ULL; - if (bas == 0) { - return; - } - if (bas == 0xc) { - addr += 2; - } - break; - } - case 2: /* unlinked context ID match */ - case 8: /* unlinked VMID match (reserved if no EL2) */ - case 10: /* unlinked context ID and VMID match (reserved if no EL2) */ - qemu_log_mask(LOG_UNIMP, - "arm: unlinked context breakpoint types not implemented\n"); - return; - case 9: /* linked VMID match (reserved if no EL2) */ - case 11: /* linked context ID and VMID match (reserved if no EL2) */ - case 3: /* linked context ID match */ - default: - /* We must generate no events for Linked context matches (unless - * they are linked to by some other bp/wp, which is handled in - * updates for the linking bp/wp). We choose to also generate no events - * for reserved values. - */ - return; - } - - cpu_breakpoint_insert(CPU(cpu), addr, flags, &env->cpu_breakpoint[n]); -} - -void hw_breakpoint_update_all(ARMCPU *cpu) -{ - int i; - CPUARMState *env = &cpu->env; - - /* Completely clear out existing QEMU breakpoints and our array, to - * avoid possible stale entries following migration load. 
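/*
 * A minimal standalone sketch of the four DBGBCR.BAS encodings listed in
 * the comment above (illustrative, not QEMU API); dbgbcr_write below
 * mirrors BAS[0] into BAS[1] and BAS[2] into BAS[3], so only these four
 * values can be observed.  Returns the offset added to the breakpoint
 * address, or -1 for "no breakpoint".
 */
static int bas_to_bp_offset(unsigned bas)
{
    switch (bas & 0xf) {
    case 0x0:
        return -1;  /* 0b0000: no breakpoint */
    case 0x3:
        return 0;   /* 0b0011: breakpoint on addr */
    case 0xc:
        return 2;   /* 0b1100: breakpoint on addr + 2 */
    case 0xf:
        return 0;   /* 0b1111: breakpoint on addr */
    default:
        return -1;  /* should not occur after dbgbcr_write's mirroring */
    }
}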
- */ - cpu_breakpoint_remove_all(CPU(cpu), BP_CPU); - memset(env->cpu_breakpoint, 0, sizeof(env->cpu_breakpoint)); - - for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_breakpoint); i++) { - hw_breakpoint_update(cpu, i); - } -} - -static void dbgbvr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - int i = ri->crm; - - raw_write(env, ri, value); - hw_breakpoint_update(cpu, i); -} - -static void dbgbcr_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - int i = ri->crm; - - /* - * BAS[3] is a read-only copy of BAS[2], and BAS[1] a read-only - * copy of BAS[0]. - */ - value = deposit64(value, 6, 1, extract64(value, 5, 1)); - value = deposit64(value, 8, 1, extract64(value, 7, 1)); - - raw_write(env, ri, value); - hw_breakpoint_update(cpu, i); -} - -static void define_debug_regs(ARMCPU *cpu) -{ - /* - * Define v7 and v8 architectural debug registers. - * These are just dummy implementations for now. - */ - int i; - int wrps, brps, ctx_cmps; - - /* - * The Arm ARM says DBGDIDR is optional and deprecated if EL1 cannot - * use AArch32. Given that bit 15 is RES1, if the value is 0 then - * the register must not exist for this cpu. - */ - if (cpu->isar.dbgdidr != 0) { - ARMCPRegInfo dbgdidr = { - .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, - .opc1 = 0, .opc2 = 0, - .access = PL0_R, .accessfn = access_tda, - .type = ARM_CP_CONST, .resetvalue = cpu->isar.dbgdidr, - }; - define_one_arm_cp_reg(cpu, &dbgdidr); - } - - /* Note that all these register fields hold "number of Xs minus 1". */ - brps = arm_num_brps(cpu); - wrps = arm_num_wrps(cpu); - ctx_cmps = arm_num_ctx_cmps(cpu); - - assert(ctx_cmps <= brps); - - define_arm_cp_regs(cpu, debug_cp_reginfo); - - if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) { - define_arm_cp_regs(cpu, debug_lpae_cp_reginfo); - } - - for (i = 0; i < brps; i++) { - ARMCPRegInfo dbgregs[] = { - { .name = "DBGBVR", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4, - .access = PL1_RW, .accessfn = access_tda, - .fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]), - .writefn = dbgbvr_write, .raw_writefn = raw_write - }, - { .name = "DBGBCR", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5, - .access = PL1_RW, .accessfn = access_tda, - .fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]), - .writefn = dbgbcr_write, .raw_writefn = raw_write - }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, dbgregs); - } - - for (i = 0; i < wrps; i++) { - ARMCPRegInfo dbgregs[] = { - { .name = "DBGWVR", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6, - .access = PL1_RW, .accessfn = access_tda, - .fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]), - .writefn = dbgwvr_write, .raw_writefn = raw_write - }, - { .name = "DBGWCR", .state = ARM_CP_STATE_BOTH, - .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7, - .access = PL1_RW, .accessfn = access_tda, - .fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]), - .writefn = dbgwcr_write, .raw_writefn = raw_write - }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, dbgregs); - } -} - -static void define_pmu_regs(ARMCPU *cpu) -{ - /* - * v7 performance monitor control register: same implementor - * field as main ID register, and we implement four counters in - * addition to the cycle count register. 
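/*
 * A minimal standalone sketch of how the PMCR_EL0 reset value defined
 * below is put together, using the architectural field positions
 * directly instead of QEMU's PMCRN_SHIFT/PMCRLC macros (assumed here:
 * N at bits [15:11], LC at bit 6): the implementer code comes from
 * MIDR[31:24] and N advertises the four event counters mentioned above.
 */
#include <stdint.h>

static uint64_t pmcr_el0_reset_value(uint32_t midr, unsigned ncounters)
{
    return (midr & 0xff000000u)            /* IMP: implementer code */
         | ((uint64_t)ncounters << 11)     /* N: number of event counters */
         | (1u << 6);                      /* LC: 64-bit cycle counter */
}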
- */ - unsigned int i, pmcrn = PMCR_NUM_COUNTERS; - ARMCPRegInfo pmcr = { - .name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0, - .access = PL0_RW, - .type = ARM_CP_IO | ARM_CP_ALIAS, - .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcr), - .accessfn = pmreg_access, .writefn = pmcr_write, - .raw_writefn = raw_write, - }; - ARMCPRegInfo pmcr64 = { - .name = "PMCR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 0, - .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_IO, - .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr), - .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT) | - PMCRLC, - .writefn = pmcr_write, .raw_writefn = raw_write, - }; - define_one_arm_cp_reg(cpu, &pmcr); - define_one_arm_cp_reg(cpu, &pmcr64); - for (i = 0; i < pmcrn; i++) { - char *pmevcntr_name = g_strdup_printf("PMEVCNTR%d", i); - char *pmevcntr_el0_name = g_strdup_printf("PMEVCNTR%d_EL0", i); - char *pmevtyper_name = g_strdup_printf("PMEVTYPER%d", i); - char *pmevtyper_el0_name = g_strdup_printf("PMEVTYPER%d_EL0", i); - ARMCPRegInfo pmev_regs[] = { - { .name = pmevcntr_name, .cp = 15, .crn = 14, - .crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7, - .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS, - .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn, - .accessfn = pmreg_access }, - { .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)), - .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_IO, - .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn, - .raw_readfn = pmevcntr_rawread, - .raw_writefn = pmevcntr_rawwrite }, - { .name = pmevtyper_name, .cp = 15, .crn = 14, - .crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7, - .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS, - .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn, - .accessfn = pmreg_access }, - { .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 12 | (3 & (i >> 3)), - .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access, - .type = ARM_CP_IO, - .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn, - .raw_writefn = pmevtyper_rawwrite }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, pmev_regs); - g_free(pmevcntr_name); - g_free(pmevcntr_el0_name); - g_free(pmevtyper_name); - g_free(pmevtyper_el0_name); - } - if (cpu_isar_feature(aa32_pmu_8_1, cpu)) { - ARMCPRegInfo v81_pmu_regs[] = { - { .name = "PMCEID2", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4, - .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, - .resetvalue = extract64(cpu->pmceid0, 32, 32) }, - { .name = "PMCEID3", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5, - .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, - .resetvalue = extract64(cpu->pmceid1, 32, 32) }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, v81_pmu_regs); - } - if (cpu_isar_feature(any_pmu_8_4, cpu)) { - static const ARMCPRegInfo v84_pmmir = { - .name = "PMMIR_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 6, - .access = PL1_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, - .resetvalue = 0 - }; - define_one_arm_cp_reg(cpu, &v84_pmmir); - } -} - -/* - * We don't know until after realize whether there's a GICv3 - * attached, and that is what registers the gicv3 sysregs. 
- * So we have to fill in the GIC fields in ID_PFR/ID_PFR1_EL1/ID_AA64PFR0_EL1 - * at runtime. - */ -static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - ARMCPU *cpu = env_archcpu(env); - uint64_t pfr1 = cpu->isar.id_pfr1; - - if (env->gicv3state) { - pfr1 |= 1 << 28; - } - return pfr1; -} - -#ifndef CONFIG_USER_ONLY -static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - ARMCPU *cpu = env_archcpu(env); - uint64_t pfr0 = cpu->isar.id_aa64pfr0; - - if (env->gicv3state) { - pfr0 |= 1 << 24; - } - return pfr0; -} -#endif - -/* - * Shared logic between LORID and the rest of the LOR* registers. - * Secure state exclusion has already been dealt with. - */ -static CPAccessResult access_lor_ns(CPUARMState *env, - const ARMCPRegInfo *ri, bool isread) -{ - int el = arm_current_el(env); - - if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TLOR)) { - return CP_ACCESS_TRAP_EL2; - } - if (el < 3 && (env->cp15.scr_el3 & SCR_TLOR)) { - return CP_ACCESS_TRAP_EL3; - } - return CP_ACCESS_OK; -} - -static CPAccessResult access_lor_other(CPUARMState *env, - const ARMCPRegInfo *ri, bool isread) -{ - if (arm_is_secure_below_el3(env)) { - /* Access denied in secure mode. */ - return CP_ACCESS_TRAP; - } - return access_lor_ns(env, ri, isread); -} - -/* - * A trivial implementation of ARMv8.1-LOR leaves all of these - * registers fixed at 0, which indicates that there are zero - * supported Limited Ordering regions. - */ -static const ARMCPRegInfo lor_reginfo[] = { - { .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0, - .access = PL1_RW, .accessfn = access_lor_other, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1, - .access = PL1_RW, .accessfn = access_lor_other, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "LORN_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2, - .access = PL1_RW, .accessfn = access_lor_other, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "LORC_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3, - .access = PL1_RW, .accessfn = access_lor_other, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7, - .access = PL1_R, .accessfn = access_lor_ns, - .type = ARM_CP_CONST, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -#ifdef TARGET_AARCH64 -static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - int el = arm_current_el(env); - - if (el < 2 && - arm_feature(env, ARM_FEATURE_EL2) && - !(arm_hcr_el2_eff(env) & HCR_APK)) { - return CP_ACCESS_TRAP_EL2; - } - if (el < 3 && - arm_feature(env, ARM_FEATURE_EL3) && - !(env->cp15.scr_el3 & SCR_APK)) { - return CP_ACCESS_TRAP_EL3; - } - return CP_ACCESS_OK; -} - -static const ARMCPRegInfo pauth_reginfo[] = { - { .name = "APDAKEYLO_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 0, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apda.lo) }, - { .name = "APDAKEYHI_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 1, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apda.hi) }, - { .name = "APDBKEYLO_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 2, - 
.access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apdb.lo) }, - { .name = "APDBKEYHI_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 3, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apdb.hi) }, - { .name = "APGAKEYLO_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 0, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apga.lo) }, - { .name = "APGAKEYHI_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 1, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apga.hi) }, - { .name = "APIAKEYLO_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 0, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apia.lo) }, - { .name = "APIAKEYHI_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 1, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apia.hi) }, - { .name = "APIBKEYLO_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 2, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apib.lo) }, - { .name = "APIBKEYHI_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3, - .access = PL1_RW, .accessfn = access_pauth, - .fieldoffset = offsetof(CPUARMState, keys.apib.hi) }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo tlbirange_reginfo[] = { - { .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1is_write }, - { .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1_write }, - { .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3, - .access = PL1_W, .type = 
ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1_write }, - { .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1_write }, - { .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae1_write }, - { .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae2is_write }, - { .name = "TLBI_RVALE2IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae2is_write }, - { .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae2is_write }, - { .name = "TLBI_RVALE2OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae2is_write }, - { .name = "TLBI_RVAE2", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 1, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae2_write }, - { .name = "TLBI_RVALE2", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 6, .opc2 = 5, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae2_write }, - { .name = "TLBI_RVAE3IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 1, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae3is_write }, - { .name = "TLBI_RVALE3IS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 2, .opc2 = 5, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae3is_write }, - { .name = "TLBI_RVAE3OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 1, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae3is_write }, - { .name = "TLBI_RVALE3OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 5, .opc2 = 5, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae3is_write }, - { .name = "TLBI_RVAE3", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 1, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae3_write }, - { .name = "TLBI_RVALE3", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_rvae3_write }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo tlbios_reginfo[] = { - { .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0, - .access 
= PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vmalle1is_write }, - { .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2, - .access = PL1_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_vmalle1is_write }, - { .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle2is_write }, - { .name = "TLBI_ALLE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 4, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle1is_write }, - { .name = "TLBI_VMALLS12E1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 6, - .access = PL2_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle1is_write }, - { .name = "TLBI_IPAS2E1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 0, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_RIPAS2E1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 3, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_IPAS2LE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 4, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_RIPAS2LE1OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 7, - .access = PL2_W, .type = ARM_CP_NOP }, - { .name = "TLBI_ALLE3OS", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 0, - .access = PL3_W, .type = ARM_CP_NO_RAW, - .writefn = tlbi_aa64_alle3is_write }, - REGINFO_SENTINEL -}; - -static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri) -{ - Error *err = NULL; - uint64_t ret; - - /* Success sets NZCV = 0000. */ - env->NF = env->CF = env->VF = 0, env->ZF = 1; - - if (qemu_guest_getrandom(&ret, sizeof(ret), &err) < 0) { - /* - * ??? Failed, for unknown reasons in the crypto subsystem. - * The best we can do is log the reason and return the - * timed-out indication to the guest. There is no reason - * we know to expect this failure to be transitory, so the - * guest may well hang retrying the operation. - */ - qemu_log_mask(LOG_UNIMP, "%s: Crypto failure: %s", - ri->name, error_get_pretty(err)); - error_free(err); - - env->ZF = 0; /* NZCF = 0100 */ - return 0; - } - return ret; -} - -/* We do not support re-seeding, so the two registers operate the same. 
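/*
 * A minimal standalone sketch of how the cached flag fields written by
 * rndr_readfn above map back to architectural NZCV in QEMU's
 * representation (assumed: N and V live in bit 31 of NF/VF, C is 0 or 1
 * in CF, and Z is set precisely when ZF is zero).  So "ZF = 1" above is
 * NZCV = 0000 for success, and "ZF = 0" is NZCV = 0100 for the
 * timed-out/failure indication.
 */
#include <stdint.h>

static unsigned nzcv_from_cached_flags(uint32_t nf, uint32_t zf,
                                       uint32_t cf, uint32_t vf)
{
    return (((nf >> 31) & 1) << 3)      /* N */
         | ((unsigned)(zf == 0) << 2)   /* Z */
         | ((cf & 1) << 1)              /* C */
         | ((vf >> 31) & 1);            /* V */
}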
*/ -static const ARMCPRegInfo rndr_reginfo[] = { - { .name = "RNDR", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO, - .opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 0, - .access = PL0_R, .readfn = rndr_readfn }, - { .name = "RNDRRS", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO, - .opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 1, - .access = PL0_R, .readfn = rndr_readfn }, - REGINFO_SENTINEL -}; - -#ifndef CONFIG_USER_ONLY -static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque, - uint64_t value) -{ - ARMCPU *cpu = env_archcpu(env); - /* CTR_EL0 System register -> DminLine, bits [19:16] */ - uint64_t dline_size = 4 << ((cpu->ctr >> 16) & 0xF); - uint64_t vaddr_in = (uint64_t) value; - uint64_t vaddr = vaddr_in & ~(dline_size - 1); - void *haddr; - int mem_idx = cpu_mmu_index(env, false); - - /* This won't be crossing page boundaries */ - haddr = probe_read(env, vaddr, dline_size, mem_idx, GETPC()); - if (haddr) { - - ram_addr_t offset; - MemoryRegion *mr; - - /* RCU lock is already being held */ - mr = memory_region_from_host(haddr, &offset); - - if (mr) { - memory_region_writeback(mr, offset, dline_size); - } - } -} - -static const ARMCPRegInfo dcpop_reg[] = { - { .name = "DC_CVAP", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1, - .access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END, - .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo dcpodp_reg[] = { - { .name = "DC_CVADP", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1, - .access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END, - .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn }, - REGINFO_SENTINEL -}; -#endif /*CONFIG_USER_ONLY*/ - -static CPAccessResult access_aa64_tid5(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if ((arm_current_el(env) < 2) && (arm_hcr_el2_eff(env) & HCR_TID5)) { - return CP_ACCESS_TRAP_EL2; - } - - return CP_ACCESS_OK; -} - -static CPAccessResult access_mte(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - int el = arm_current_el(env); - - if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) { - uint64_t hcr = arm_hcr_el2_eff(env); - if (!(hcr & HCR_ATA) && (!(hcr & HCR_E2H) || !(hcr & HCR_TGE))) { - return CP_ACCESS_TRAP_EL2; - } - } - if (el < 3 && - arm_feature(env, ARM_FEATURE_EL3) && - !(env->cp15.scr_el3 & SCR_ATA)) { - return CP_ACCESS_TRAP_EL3; - } - return CP_ACCESS_OK; -} - -static uint64_t tco_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - return env->pstate & PSTATE_TCO; -} - -static void tco_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val) -{ - env->pstate = (env->pstate & ~PSTATE_TCO) | (val & PSTATE_TCO); -} - -static const ARMCPRegInfo mte_reginfo[] = { - { .name = "TFSRE0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 6, .opc2 = 1, - .access = PL1_RW, .accessfn = access_mte, - .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[0]) }, - { .name = "TFSR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 6, .opc2 = 0, - .access = PL1_RW, .accessfn = access_mte, - .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[1]) }, - { .name = "TFSR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 6, .opc2 = 0, - .access = PL2_RW, .accessfn = access_mte, - .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[2]) }, - { 
.name = "TFSR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 5, .crm = 6, .opc2 = 0, - .access = PL3_RW, - .fieldoffset = offsetof(CPUARMState, cp15.tfsr_el[3]) }, - { .name = "RGSR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 5, - .access = PL1_RW, .accessfn = access_mte, - .fieldoffset = offsetof(CPUARMState, cp15.rgsr_el1) }, - { .name = "GCR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 6, - .access = PL1_RW, .accessfn = access_mte, - .fieldoffset = offsetof(CPUARMState, cp15.gcr_el1) }, - { .name = "GMID_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 4, - .access = PL1_R, .accessfn = access_aa64_tid5, - .type = ARM_CP_CONST, .resetvalue = GMID_EL1_BS }, - { .name = "TCO", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7, - .type = ARM_CP_NO_RAW, - .access = PL0_RW, .readfn = tco_read, .writefn = tco_write }, - { .name = "DC_IGVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 3, - .type = ARM_CP_NOP, .access = PL1_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_IGSW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 4, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - { .name = "DC_IGDVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL1_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_IGDSW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 6, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - { .name = "DC_CGSW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 4, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - { .name = "DC_CGDSW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 6, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - { .name = "DC_CIGSW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 4, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - { .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6, - .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo mte_tco_ro_reginfo[] = { - { .name = "TCO", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7, - .type = ARM_CP_CONST, .access = PL0_RW, }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = { - { .name = "DC_CGVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 3, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CGDVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CGVAP", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 3, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CGDVAP", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CGVADP", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, 
.crm = 13, .opc2 = 3, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CGDVADP", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CIGVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 3, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_CIGDVAC", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL0_W, - .accessfn = aa64_cacheop_poc_access }, - { .name = "DC_GVA", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 3, - .access = PL0_W, .type = ARM_CP_DC_GVA, -#ifndef CONFIG_USER_ONLY - /* Avoid overhead of an access check that always passes in user-mode */ - .accessfn = aa64_zva_access, -#endif - }, - { .name = "DC_GZVA", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 4, - .access = PL0_W, .type = ARM_CP_DC_GZVA, -#ifndef CONFIG_USER_ONLY - /* Avoid overhead of an access check that always passes in user-mode */ - .accessfn = aa64_zva_access, -#endif - }, - REGINFO_SENTINEL -}; - -#endif - -static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - int el = arm_current_el(env); - - if (el == 0) { - uint64_t sctlr = arm_sctlr(env, el); - if (!(sctlr & SCTLR_EnRCTX)) { - return CP_ACCESS_TRAP; - } - } else if (el == 1) { - uint64_t hcr = arm_hcr_el2_eff(env); - if (hcr & HCR_NV) { - return CP_ACCESS_TRAP_EL2; - } - } - return CP_ACCESS_OK; -} - -static const ARMCPRegInfo predinv_reginfo[] = { - { .name = "CFP_RCTX", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 4, - .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, - { .name = "DVP_RCTX", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, - { .name = "CPP_RCTX", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 7, - .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, - /* - * Note the AArch32 opcodes have a different OPC1. 
- */ - { .name = "CFPRCTX", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 4, - .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, - { .name = "DVPRCTX", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 5, - .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, - { .name = "CPPRCTX", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7, - .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv }, - REGINFO_SENTINEL -}; - -static uint64_t ccsidr2_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* Read the high 32 bits of the current CCSIDR */ - return extract64(ccsidr_read(env, ri), 32, 32); -} - -static const ARMCPRegInfo ccsidr2_reginfo[] = { - { .name = "CCSIDR2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 2, - .access = PL1_R, - .accessfn = access_aa64_tid2, - .readfn = ccsidr2_read, .type = ARM_CP_NO_RAW }, - REGINFO_SENTINEL -}; - -static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if ((arm_current_el(env) < 2) && (arm_hcr_el2_eff(env) & HCR_TID3)) { - return CP_ACCESS_TRAP_EL2; - } - - return CP_ACCESS_OK; -} - -static CPAccessResult access_aa32_tid3(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_feature(env, ARM_FEATURE_V8)) { - return access_aa64_tid3(env, ri, isread); - } - - return CP_ACCESS_OK; -} - -static CPAccessResult access_jazelle(CPUARMState *env, const ARMCPRegInfo *ri, - bool isread) -{ - if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID0)) { - return CP_ACCESS_TRAP_EL2; - } - - return CP_ACCESS_OK; -} - -static const ARMCPRegInfo jazelle_regs[] = { - { .name = "JIDR", - .cp = 14, .crn = 0, .crm = 0, .opc1 = 7, .opc2 = 0, - .access = PL1_R, .accessfn = access_jazelle, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "JOSCR", - .cp = 14, .crn = 1, .crm = 0, .opc1 = 7, .opc2 = 0, - .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "JMCR", - .cp = 14, .crn = 2, .crm = 0, .opc1 = 7, .opc2 = 0, - .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo vhe_reginfo[] = { - { .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) }, - { .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1, - .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write, - .fieldoffset = offsetof(CPUARMState, cp15.ttbr1_el[2]) }, -#ifndef CONFIG_USER_ONLY - { .name = "CNTHV_CVAL_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 2, - .fieldoffset = - offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].cval), - .type = ARM_CP_IO, .access = PL2_RW, - .writefn = gt_hv_cval_write, .raw_writefn = raw_write }, - { .name = "CNTHV_TVAL_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL2_RW, - .resetfn = gt_hv_timer_reset, - .readfn = gt_hv_tval_read, .writefn = gt_hv_tval_write }, - { .name = "CNTHV_CTL_EL2", .state = ARM_CP_STATE_BOTH, - .type = ARM_CP_IO, - .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 1, - .access = PL2_RW, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].ctl), - .writefn = gt_hv_ctl_write, .raw_writefn = raw_write }, - { .name = 
"CNTP_CTL_EL02", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 1, - .type = ARM_CP_IO | ARM_CP_ALIAS, - .access = PL2_RW, .accessfn = e2h_access, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].ctl), - .writefn = gt_phys_ctl_write, .raw_writefn = raw_write }, - { .name = "CNTV_CTL_EL02", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 1, - .type = ARM_CP_IO | ARM_CP_ALIAS, - .access = PL2_RW, .accessfn = e2h_access, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].ctl), - .writefn = gt_virt_ctl_write, .raw_writefn = raw_write }, - { .name = "CNTP_TVAL_EL02", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO | ARM_CP_ALIAS, - .access = PL2_RW, .accessfn = e2h_access, - .readfn = gt_phys_tval_read, .writefn = gt_phys_tval_write }, - { .name = "CNTV_TVAL_EL02", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 0, - .type = ARM_CP_NO_RAW | ARM_CP_IO | ARM_CP_ALIAS, - .access = PL2_RW, .accessfn = e2h_access, - .readfn = gt_virt_tval_read, .writefn = gt_virt_tval_write }, - { .name = "CNTP_CVAL_EL02", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 2, - .type = ARM_CP_IO | ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval), - .access = PL2_RW, .accessfn = e2h_access, - .writefn = gt_phys_cval_write, .raw_writefn = raw_write }, - { .name = "CNTV_CVAL_EL02", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 2, - .type = ARM_CP_IO | ARM_CP_ALIAS, - .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval), - .access = PL2_RW, .accessfn = e2h_access, - .writefn = gt_virt_cval_write, .raw_writefn = raw_write }, -#endif - REGINFO_SENTINEL -}; - -#ifndef CONFIG_USER_ONLY -static const ARMCPRegInfo ats1e1_reginfo[] = { - { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64, - .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write64 }, - REGINFO_SENTINEL -}; - -static const ARMCPRegInfo ats1cp_reginfo[] = { - { .name = "ATS1CPRP", - .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write }, - { .name = "ATS1CPWP", - .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1, - .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC, - .writefn = ats_write }, - REGINFO_SENTINEL -}; -#endif - -/* - * ACTLR2 and HACTLR2 map to ACTLR_EL1[63:32] and - * ACTLR_EL2[63:32]. They exist only if the ID_MMFR4.AC2 field - * is non-zero, which is never for ARMv7, optionally in ARMv8 - * and mandatorily for ARMv8.2 and up. - * ACTLR2 is banked for S and NS if EL3 is AArch32. Since QEMU's - * implementation is RAZ/WI we can ignore this detail, as we - * do for ACTLR. 
- */ -static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = { - { .name = "ACTLR2", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 3, - .access = PL1_RW, .accessfn = access_tacr, - .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "HACTLR2", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - REGINFO_SENTINEL -}; - -void register_cp_regs_for_features(ARMCPU *cpu) -{ - /* Register all the coprocessor registers based on feature bits */ - CPUARMState *env = &cpu->env; - if (arm_feature(env, ARM_FEATURE_M)) { - /* M profile has no coprocessor registers */ + switch (extract64(wcr, 3, 2)) { + case 0: + /* LSC 00 is reserved and must behave as if the wp is disabled */ return; + case 1: + flags |= BP_MEM_READ; + break; + case 2: + flags |= BP_MEM_WRITE; + break; + case 3: + flags |= BP_MEM_ACCESS; + break; } - define_arm_cp_regs(cpu, cp_reginfo); - if (!arm_feature(env, ARM_FEATURE_V8)) { - /* - * Must go early as it is full of wildcards that may be - * overridden by later definitions. - */ - define_arm_cp_regs(cpu, not_v8_cp_reginfo); - } - - if (arm_feature(env, ARM_FEATURE_V6)) { - /* The ID registers all have impdef reset values */ - ARMCPRegInfo v6_idregs[] = { - { .name = "ID_PFR0", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 0, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_pfr0 }, - /* - * ID_PFR1 is not a plain ARM_CP_CONST because we don't know - * the value of the GIC field until after we define these regs. - */ - { .name = "ID_PFR1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1, - .access = PL1_R, .type = ARM_CP_NO_RAW, - .accessfn = access_aa32_tid3, - .readfn = id_pfr1_read, - .writefn = arm_cp_write_ignore }, - { .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_dfr0 }, - { .name = "ID_AFR0", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 3, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->id_afr0 }, - { .name = "ID_MMFR0", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_mmfr0 }, - { .name = "ID_MMFR1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 5, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_mmfr1 }, - { .name = "ID_MMFR2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_mmfr2 }, - { .name = "ID_MMFR3", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_mmfr3 }, - { .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_isar0 }, - { .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1, - .access = PL1_R, .type = 
ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_isar1 }, - { .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_isar2 }, - { .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_isar3 }, - { .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_isar4 }, - { .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_isar5 }, - { .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_mmfr4 }, - { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa32_tid3, - .resetvalue = cpu->isar.id_isar6 }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, v6_idregs); - define_arm_cp_regs(cpu, v6_cp_reginfo); - } else { - define_arm_cp_regs(cpu, not_v6_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_V6K)) { - define_arm_cp_regs(cpu, v6k_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_V7MP) && - !arm_feature(env, ARM_FEATURE_PMSA)) { - define_arm_cp_regs(cpu, v7mp_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_V7VE)) { - define_arm_cp_regs(cpu, pmovsset_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_V7)) { - ARMCPRegInfo clidr = { - .name = "CLIDR", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 1, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid2, - .resetvalue = cpu->clidr - }; - define_one_arm_cp_reg(cpu, &clidr); - define_arm_cp_regs(cpu, v7_cp_reginfo); - define_debug_regs(cpu); - define_pmu_regs(cpu); - } else { - define_arm_cp_regs(cpu, not_v7_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_V8)) { - /* - * AArch64 ID registers, which all have impdef reset values. - * Note that within the ID register ranges the unused slots - * must all RAZ, not UNDEF; future architecture versions may - * define new registers here. + /* Attempts to use both MASK and BAS fields simultaneously are + * CONSTRAINED UNPREDICTABLE; we opt to ignore BAS in this case, + * thus generating a watchpoint for every byte in the masked region. + */ + mask = extract64(wcr, 24, 4); + if (mask == 1 || mask == 2) { + /* Reserved values of MASK; we must act as if the mask value was + * some non-reserved value, or as if the watchpoint were disabled. + * We choose the latter. */ - ARMCPRegInfo v8_idregs[] = { - /* - * ID_AA64PFR0_EL1 is not a plain ARM_CP_CONST in system - * emulation because we don't know the right value for the - * GIC field until after we define these regs. 
- */ - { .name = "ID_AA64PFR0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 0, - .access = PL1_R, -#ifdef CONFIG_USER_ONLY - .type = ARM_CP_CONST, - .resetvalue = cpu->isar.id_aa64pfr0 -#else - .type = ARM_CP_NO_RAW, - .accessfn = access_aa64_tid3, - .readfn = id_aa64pfr0_read, - .writefn = arm_cp_write_ignore -#endif - }, - { .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64pfr1}, - { .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64PFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 3, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64ZFR0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64zfr0 }, - { .name = "ID_AA64PFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 5, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64PFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64PFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64DFR0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 0, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64dfr0 }, - { .name = "ID_AA64DFR1_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 1, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64dfr1 }, - { .name = "ID_AA64DFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 2, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64DFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 3, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64AFR0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->id_aa64afr0 }, - { .name = "ID_AA64AFR1_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 5, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->id_aa64afr1 }, - { .name = "ID_AA64AFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64AFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn 
= 0, .crm = 5, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64isar0 }, - { .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64isar1 }, - { .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64ISAR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 3, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64ISAR4_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64ISAR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 5, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64ISAR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64ISAR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64MMFR0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64mmfr0 }, - { .name = "ID_AA64MMFR1_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 1, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64mmfr1 }, - { .name = "ID_AA64MMFR2_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_aa64mmfr2 }, - { .name = "ID_AA64MMFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 3, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64MMFR4_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64MMFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 5, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64MMFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_AA64MMFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, 
.crm = 7, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.mvfr0 }, - { .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.mvfr1 }, - { .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.mvfr2 }, - { .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "ID_PFR2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = cpu->isar.id_pfr2 }, - { .name = "MVFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 5, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "MVFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 6, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "MVFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 7, - .access = PL1_R, .type = ARM_CP_CONST, - .accessfn = access_aa64_tid3, - .resetvalue = 0 }, - { .name = "PMCEID0", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 6, - .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, - .resetvalue = extract64(cpu->pmceid0, 0, 32) }, - { .name = "PMCEID0_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 6, - .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, - .resetvalue = cpu->pmceid0 }, - { .name = "PMCEID1", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 7, - .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, - .resetvalue = extract64(cpu->pmceid1, 0, 32) }, - { .name = "PMCEID1_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7, - .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST, - .resetvalue = cpu->pmceid1 }, - REGINFO_SENTINEL - }; -#ifdef CONFIG_USER_ONLY - ARMCPRegUserSpaceInfo v8_user_idregs[] = { - { .name = "ID_AA64PFR0_EL1", - .exported_bits = 0x000f000f00ff0000, - .fixed_bits = 0x0000000000000011 }, - { .name = "ID_AA64PFR1_EL1", - .exported_bits = 0x00000000000000f0 }, - { .name = "ID_AA64PFR*_EL1_RESERVED", - .is_glob = true }, - { .name = "ID_AA64ZFR0_EL1" }, - { .name = "ID_AA64MMFR0_EL1", - .fixed_bits = 0x00000000ff000000 }, - { .name = "ID_AA64MMFR1_EL1" }, - { .name = "ID_AA64MMFR*_EL1_RESERVED", - .is_glob = true }, - { .name = "ID_AA64DFR0_EL1", - .fixed_bits = 0x0000000000000006 }, - { .name = "ID_AA64DFR1_EL1" }, - { .name = "ID_AA64DFR*_EL1_RESERVED", - .is_glob = true }, - { .name = "ID_AA64AFR*", - .is_glob = true }, - { .name = "ID_AA64ISAR0_EL1", - .exported_bits = 0x00fffffff0fffff0 }, - { .name = "ID_AA64ISAR1_EL1", - 
.exported_bits = 0x000000f0ffffffff }, - { .name = "ID_AA64ISAR*_EL1_RESERVED", - .is_glob = true }, - REGUSERINFO_SENTINEL - }; - modify_arm_cp_regs(v8_idregs, v8_user_idregs); -#endif - /* RVBAR_EL1 is only implemented if EL1 is the highest EL */ - if (!arm_feature(env, ARM_FEATURE_EL3) && - !arm_feature(env, ARM_FEATURE_EL2)) { - ARMCPRegInfo rvbar = { - .name = "RVBAR_EL1", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1, - .type = ARM_CP_CONST, .access = PL1_R, .resetvalue = cpu->rvbar - }; - define_one_arm_cp_reg(cpu, &rvbar); - } - define_arm_cp_regs(cpu, v8_idregs); - define_arm_cp_regs(cpu, v8_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_EL2)) { - uint64_t vmpidr_def = mpidr_read_val(env); - ARMCPRegInfo vpidr_regs[] = { - { .name = "VPIDR", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .resetvalue = cpu->midr, .type = ARM_CP_ALIAS, - .fieldoffset = offsetoflow32(CPUARMState, cp15.vpidr_el2) }, - { .name = "VPIDR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0, - .access = PL2_RW, .resetvalue = cpu->midr, - .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) }, - { .name = "VMPIDR", .state = ARM_CP_STATE_AA32, - .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .resetvalue = vmpidr_def, .type = ARM_CP_ALIAS, - .fieldoffset = offsetoflow32(CPUARMState, cp15.vmpidr_el2) }, - { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5, - .access = PL2_RW, - .resetvalue = vmpidr_def, - .fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, vpidr_regs); - define_arm_cp_regs(cpu, el2_cp_reginfo); - if (arm_feature(env, ARM_FEATURE_V8)) { - define_arm_cp_regs(cpu, el2_v8_cp_reginfo); - } - if (cpu_isar_feature(aa64_sel2, cpu)) { - define_arm_cp_regs(cpu, el2_sec_cp_reginfo); - } - /* RVBAR_EL2 is only implemented if EL2 is the highest EL */ - if (!arm_feature(env, ARM_FEATURE_EL3)) { - ARMCPRegInfo rvbar = { - .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1, - .type = ARM_CP_CONST, .access = PL2_R, .resetvalue = cpu->rvbar - }; - define_one_arm_cp_reg(cpu, &rvbar); - } - } else { - /* - * If EL2 is missing but higher ELs are enabled, we need to - * register the no_el2 reginfos. + return; + } else if (mask) { + /* Watchpoint covers an aligned area up to 2GB in size */ + len = 1ULL << mask; + /* If masked bits in WVR are not zero it's CONSTRAINED UNPREDICTABLE + * whether the watchpoint fires when the unmasked bits match; we opt + * to generate the exceptions. */ - if (arm_feature(env, ARM_FEATURE_EL3)) { - /* - * When EL3 exists but not EL2, VPIDR and VMPIDR take the value - * of MIDR_EL1 and MPIDR_EL1. 
- */ - ARMCPRegInfo vpidr_regs[] = { - { .name = "VPIDR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .type = ARM_CP_CONST, .resetvalue = cpu->midr, - .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) }, - { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5, - .access = PL2_RW, .accessfn = access_el3_aa32ns, - .type = ARM_CP_NO_RAW, - .writefn = arm_cp_write_ignore, .readfn = mpidr_read }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, vpidr_regs); - define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo); - if (arm_feature(env, ARM_FEATURE_V8)) { - define_arm_cp_regs(cpu, el3_no_el2_v8_cp_reginfo); - } - } - } - if (arm_feature(env, ARM_FEATURE_EL3)) { - define_arm_cp_regs(cpu, el3_cp_reginfo); - ARMCPRegInfo el3_regs[] = { - { .name = "RVBAR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 12, .crm = 0, .opc2 = 1, - .type = ARM_CP_CONST, .access = PL3_R, .resetvalue = cpu->rvbar }, - { .name = "SCTLR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 0, - .access = PL3_RW, - .raw_writefn = raw_write, .writefn = sctlr_write, - .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]), - .resetvalue = cpu->reset_sctlr }, - REGINFO_SENTINEL - }; - - define_arm_cp_regs(cpu, el3_regs); - } - /* - * The behaviour of NSACR is sufficiently various that we don't - * try to describe it in a single reginfo: - * if EL3 is 64 bit, then trap to EL3 from S EL1, - * reads as constant 0xc00 from NS EL1 and NS EL2 - * if EL3 is 32 bit, then RW at EL3, RO at NS EL1 and NS EL2 - * if v7 without EL3, register doesn't exist - * if v8 without EL3, reads as constant 0xc00 from NS EL1 and NS EL2 - */ - if (arm_feature(env, ARM_FEATURE_EL3)) { - if (arm_feature(env, ARM_FEATURE_AARCH64)) { - ARMCPRegInfo nsacr = { - .name = "NSACR", .type = ARM_CP_CONST, - .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2, - .access = PL1_RW, .accessfn = nsacr_access, - .resetvalue = 0xc00 - }; - define_one_arm_cp_reg(cpu, &nsacr); - } else { - ARMCPRegInfo nsacr = { - .name = "NSACR", - .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2, - .access = PL3_RW | PL1_R, - .resetvalue = 0, - .fieldoffset = offsetof(CPUARMState, cp15.nsacr) - }; - define_one_arm_cp_reg(cpu, &nsacr); - } + wvr &= ~(len - 1); } else { - if (arm_feature(env, ARM_FEATURE_V8)) { - ARMCPRegInfo nsacr = { - .name = "NSACR", .type = ARM_CP_CONST, - .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2, - .access = PL1_R, - .resetvalue = 0xc00 - }; - define_one_arm_cp_reg(cpu, &nsacr); - } - } + /* Watchpoint covers bytes defined by the byte address select bits */ + int bas = extract64(wcr, 5, 8); + int basstart; - if (arm_feature(env, ARM_FEATURE_PMSA)) { - if (arm_feature(env, ARM_FEATURE_V6)) { - /* PMSAv6 not implemented */ - assert(arm_feature(env, ARM_FEATURE_V7)); - define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo); - define_arm_cp_regs(cpu, pmsav7_cp_reginfo); - } else { - define_arm_cp_regs(cpu, pmsav5_cp_reginfo); - } - } else { - define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo); - define_arm_cp_regs(cpu, vmsa_cp_reginfo); - /* TTCBR2 is introduced with ARMv8.2-AA32HPD. 
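(For readers following the moved code, here is a minimal sketch of the registration idiom used throughout register_cp_regs_for_features(): a plain ARMCPRegInfo array terminated by REGINFO_SENTINEL, registered only when the matching feature bit is set. "DEMO_REG" is an invented IMPDEF register used purely for illustration; it is not part of this patch.)

static const ARMCPRegInfo demo_reginfo[] = {
    /* A constant RAZ/WI implementation-defined register at S3_0_C15_C0_0 */
    { .name = "DEMO_REG", .state = ARM_CP_STATE_AA64,
      .opc0 = 3, .opc1 = 0, .crn = 15, .crm = 0, .opc2 = 0,
      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
    REGINFO_SENTINEL
};

/* ...and inside register_cp_regs_for_features(), e.g.: */
if (arm_feature(env, ARM_FEATURE_V8)) {
    define_arm_cp_regs(cpu, demo_reginfo);
}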
*/ - if (cpu_isar_feature(aa32_hpd, cpu)) { - define_one_arm_cp_reg(cpu, &ttbcr2_reginfo); - } - } - if (arm_feature(env, ARM_FEATURE_THUMB2EE)) { - define_arm_cp_regs(cpu, t2ee_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) { - define_arm_cp_regs(cpu, generic_timer_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_VAPA)) { - define_arm_cp_regs(cpu, vapa_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_CACHE_TEST_CLEAN)) { - define_arm_cp_regs(cpu, cache_test_clean_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_CACHE_DIRTY_REG)) { - define_arm_cp_regs(cpu, cache_dirty_status_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_CACHE_BLOCK_OPS)) { - define_arm_cp_regs(cpu, cache_block_ops_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_OMAPCP)) { - define_arm_cp_regs(cpu, omap_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_STRONGARM)) { - define_arm_cp_regs(cpu, strongarm_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_XSCALE)) { - define_arm_cp_regs(cpu, xscale_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_DUMMY_C15_REGS)) { - define_arm_cp_regs(cpu, dummy_c15_cp_reginfo); - } - if (arm_feature(env, ARM_FEATURE_LPAE)) { - define_arm_cp_regs(cpu, lpae_cp_reginfo); - } - if (cpu_isar_feature(aa32_jazelle, cpu)) { - define_arm_cp_regs(cpu, jazelle_regs); - } - /* - * Slightly awkwardly, the OMAP and StrongARM cores need all of - * cp15 crn=0 to be writes-ignored, whereas for other cores they should - * be read-only (ie write causes UNDEF exception). - */ - { - ARMCPRegInfo id_pre_v8_midr_cp_reginfo[] = { - /* - * Pre-v8 MIDR space. - * Note that the MIDR isn't a simple constant register because - * of the TI925 behaviour where writes to another register can - * cause the MIDR value to change. - * - * Unimplemented registers in the c15 0 0 0 space default to - * MIDR. Define MIDR first as this entire space, then CTR, TCMTR - * and friends override accordingly. - */ - { .name = "MIDR", - .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = CP_ANY, - .access = PL1_R, .resetvalue = cpu->midr, - .writefn = arm_cp_write_ignore, .raw_writefn = raw_write, - .readfn = midr_read, - .fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid), - .type = ARM_CP_OVERRIDE }, - /* crn = 0 op1 = 0 crm = 3..7 : currently unassigned; we RAZ. 
*/ - { .name = "DUMMY", - .cp = 15, .crn = 0, .crm = 3, .opc1 = 0, .opc2 = CP_ANY, - .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "DUMMY", - .cp = 15, .crn = 0, .crm = 4, .opc1 = 0, .opc2 = CP_ANY, - .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "DUMMY", - .cp = 15, .crn = 0, .crm = 5, .opc1 = 0, .opc2 = CP_ANY, - .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "DUMMY", - .cp = 15, .crn = 0, .crm = 6, .opc1 = 0, .opc2 = CP_ANY, - .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, - { .name = "DUMMY", - .cp = 15, .crn = 0, .crm = 7, .opc1 = 0, .opc2 = CP_ANY, - .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 }, - REGINFO_SENTINEL - }; - ARMCPRegInfo id_v8_midr_cp_reginfo[] = { - { .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 0, - .access = PL1_R, .type = ARM_CP_NO_RAW, .resetvalue = cpu->midr, - .fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid), - .readfn = midr_read }, - /* crn = 0 op1 = 0 crm = 0 op2 = 4,7 : AArch32 aliases of MIDR */ - { .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST, - .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4, - .access = PL1_R, .resetvalue = cpu->midr }, - { .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST, - .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 7, - .access = PL1_R, .resetvalue = cpu->midr }, - { .name = "REVIDR_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 6, - .access = PL1_R, - .accessfn = access_aa64_tid1, - .type = ARM_CP_CONST, .resetvalue = cpu->revidr }, - REGINFO_SENTINEL - }; - ARMCPRegInfo id_cp_reginfo[] = { - /* These are common to v8 and pre-v8 */ - { .name = "CTR", - .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 1, - .access = PL1_R, .accessfn = ctr_el0_access, - .type = ARM_CP_CONST, .resetvalue = cpu->ctr }, - { .name = "CTR_EL0", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 0, .crm = 0, - .access = PL0_R, .accessfn = ctr_el0_access, - .type = ARM_CP_CONST, .resetvalue = cpu->ctr }, - /* TCMTR and TLBTR exist in v8 but have no 64-bit versions */ - { .name = "TCMTR", - .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 2, - .access = PL1_R, - .accessfn = access_aa32_tid1, - .type = ARM_CP_CONST, .resetvalue = 0 }, - REGINFO_SENTINEL - }; - /* TLBTR is specific to VMSA */ - ARMCPRegInfo id_tlbtr_reginfo = { - .name = "TLBTR", - .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 3, - .access = PL1_R, - .accessfn = access_aa32_tid1, - .type = ARM_CP_CONST, .resetvalue = 0, - }; - /* MPUIR is specific to PMSA V6+ */ - ARMCPRegInfo id_mpuir_reginfo = { - .name = "MPUIR", - .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4, - .access = PL1_R, .type = ARM_CP_CONST, - .resetvalue = cpu->pmsav7_dregion << 8 - }; - ARMCPRegInfo crn0_wi_reginfo = { - .name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY, - .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W, - .type = ARM_CP_NOP | ARM_CP_OVERRIDE - }; -#ifdef CONFIG_USER_ONLY - ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = { - { .name = "MIDR_EL1", - .exported_bits = 0x00000000ffffffff }, - { .name = "REVIDR_EL1" }, - REGUSERINFO_SENTINEL - }; - modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo); -#endif - if (arm_feature(env, ARM_FEATURE_OMAPCP) || - arm_feature(env, ARM_FEATURE_STRONGARM)) { - ARMCPRegInfo *r; - /* - * Register the blanket "writes ignored" value first to cover the - * whole space. 
Then update the specific ID registers to allow write - * access, so that they ignore writes rather than causing them to - * UNDEF. + if (extract64(wvr, 2, 1)) { + /* Deprecated case of an only 4-aligned address. BAS[7:4] are + * ignored, and BAS[3:0] define which bytes to watch. */ - define_one_arm_cp_reg(cpu, &crn0_wi_reginfo); - for (r = id_pre_v8_midr_cp_reginfo; - r->type != ARM_CP_SENTINEL; r++) { - r->access = PL1_RW; - } - for (r = id_cp_reginfo; r->type != ARM_CP_SENTINEL; r++) { - r->access = PL1_RW; - } - id_mpuir_reginfo.access = PL1_RW; - id_tlbtr_reginfo.access = PL1_RW; - } - if (arm_feature(env, ARM_FEATURE_V8)) { - define_arm_cp_regs(cpu, id_v8_midr_cp_reginfo); - } else { - define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo); - } - define_arm_cp_regs(cpu, id_cp_reginfo); - if (!arm_feature(env, ARM_FEATURE_PMSA)) { - define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo); - } else if (arm_feature(env, ARM_FEATURE_V7)) { - define_one_arm_cp_reg(cpu, &id_mpuir_reginfo); + bas &= 0xf; } - } - if (arm_feature(env, ARM_FEATURE_MPIDR)) { - ARMCPRegInfo mpidr_cp_reginfo[] = { - { .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5, - .access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW }, - REGINFO_SENTINEL - }; -#ifdef CONFIG_USER_ONLY - ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = { - { .name = "MPIDR_EL1", - .fixed_bits = 0x0000000080000000 }, - REGUSERINFO_SENTINEL - }; - modify_arm_cp_regs(mpidr_cp_reginfo, mpidr_user_cp_reginfo); -#endif - define_arm_cp_regs(cpu, mpidr_cp_reginfo); - } - - if (arm_feature(env, ARM_FEATURE_AUXCR)) { - ARMCPRegInfo auxcr_reginfo[] = { - { .name = "ACTLR_EL1", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 1, - .access = PL1_RW, .accessfn = access_tacr, - .type = ARM_CP_CONST, .resetvalue = cpu->reset_auxcr }, - { .name = "ACTLR_EL2", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 1, - .access = PL2_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - { .name = "ACTLR_EL3", .state = ARM_CP_STATE_AA64, - .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 1, - .access = PL3_RW, .type = ARM_CP_CONST, - .resetvalue = 0 }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, auxcr_reginfo); - if (cpu_isar_feature(aa32_ac2, cpu)) { - define_arm_cp_regs(cpu, actlr2_hactlr2_reginfo); + if (bas == 0) { + /* This must act as if the watchpoint is disabled */ + return; } - } - if (arm_feature(env, ARM_FEATURE_CBAR)) { - /* - * CBAR is IMPDEF, but common on Arm Cortex-A implementations. - * There are two flavours: - * (1) older 32-bit only cores have a simple 32-bit CBAR - * (2) 64-bit cores have a 64-bit CBAR visible to AArch64, plus a - * 32-bit register visible to AArch32 at a different encoding - * to the "flavour 1" register and with the bits rearranged to - * be able to squash a 64-bit address into the 32-bit view. - * We distinguish the two via the ARM_FEATURE_AARCH64 flag, but - * in future if we support AArch32-only configs of some of the - * AArch64 cores we might need to add a specific feature flag - * to indicate cores with "flavour 2" CBAR. + /* The BAS bits are supposed to be programmed to indicate a contiguous + * range of bytes. Otherwise it is CONSTRAINED UNPREDICTABLE whether + * we fire for each byte in the word/doubleword addressed by the WVR. + * We choose to ignore any non-zero bits after the first range of 1s. */ - if (arm_feature(env, ARM_FEATURE_AARCH64)) { - /* 32 bit view is [31:18] 0...0 [43:32]. 
*/ - uint32_t cbar32 = (extract64(cpu->reset_cbar, 18, 14) << 18) - | extract64(cpu->reset_cbar, 32, 12); - ARMCPRegInfo cbar_reginfo[] = { - { .name = "CBAR", - .type = ARM_CP_CONST, - .cp = 15, .crn = 15, .crm = 3, .opc1 = 1, .opc2 = 0, - .access = PL1_R, .resetvalue = cbar32 }, - { .name = "CBAR_EL1", .state = ARM_CP_STATE_AA64, - .type = ARM_CP_CONST, - .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 3, .opc2 = 0, - .access = PL1_R, .resetvalue = cpu->reset_cbar }, - REGINFO_SENTINEL - }; - /* We don't implement a r/w 64 bit CBAR currently */ - assert(arm_feature(env, ARM_FEATURE_CBAR_RO)); - define_arm_cp_regs(cpu, cbar_reginfo); - } else { - ARMCPRegInfo cbar = { - .name = "CBAR", - .cp = 15, .crn = 15, .crm = 0, .opc1 = 4, .opc2 = 0, - .access = PL1_R|PL3_W, .resetvalue = cpu->reset_cbar, - .fieldoffset = offsetof(CPUARMState, - cp15.c15_config_base_address) - }; - if (arm_feature(env, ARM_FEATURE_CBAR_RO)) { - cbar.access = PL1_R; - cbar.fieldoffset = 0; - cbar.type = ARM_CP_CONST; - } - define_one_arm_cp_reg(cpu, &cbar); - } + basstart = ctz32(bas); + len = cto32(bas >> basstart); + wvr += basstart; } - if (arm_feature(env, ARM_FEATURE_VBAR)) { - ARMCPRegInfo vbar_cp_reginfo[] = { - { .name = "VBAR", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .crn = 12, .crm = 0, .opc1 = 0, .opc2 = 0, - .access = PL1_RW, .writefn = vbar_write, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s), - offsetof(CPUARMState, cp15.vbar_ns) }, - .resetvalue = 0 }, - REGINFO_SENTINEL - }; - define_arm_cp_regs(cpu, vbar_cp_reginfo); - } + cpu_watchpoint_insert(CPU(cpu), wvr, len, flags, + &env->cpu_watchpoint[n]); +} - /* Generic registers whose values depend on the implementation */ - { - ARMCPRegInfo sctlr = { - .name = "SCTLR", .state = ARM_CP_STATE_BOTH, - .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0, - .access = PL1_RW, .accessfn = access_tvm_trvm, - .bank_fieldoffsets = { offsetof(CPUARMState, cp15.sctlr_s), - offsetof(CPUARMState, cp15.sctlr_ns) }, - .writefn = sctlr_write, .resetvalue = cpu->reset_sctlr, - .raw_writefn = raw_write, - }; - if (arm_feature(env, ARM_FEATURE_XSCALE)) { - /* Normally we would always end the TB on an SCTLR write, but Linux - * arch/arm/mach-pxa/sleep.S expects two instructions following - * an MMU enable to execute from cache. Imitate this behaviour. - */ - sctlr.type |= ARM_CP_SUPPRESS_TB_END; - } - define_one_arm_cp_reg(cpu, &sctlr); - } +void hw_watchpoint_update_all(ARMCPU *cpu) +{ + int i; + CPUARMState *env = &cpu->env; - if (cpu_isar_feature(aa64_lor, cpu)) { - define_arm_cp_regs(cpu, lor_reginfo); - } - if (cpu_isar_feature(aa64_pan, cpu)) { - define_one_arm_cp_reg(cpu, &pan_reginfo); - } -#ifndef CONFIG_USER_ONLY - if (cpu_isar_feature(aa64_ats1e1, cpu)) { - define_arm_cp_regs(cpu, ats1e1_reginfo); - } - if (cpu_isar_feature(aa32_ats1e1, cpu)) { - define_arm_cp_regs(cpu, ats1cp_reginfo); - } -#endif - if (cpu_isar_feature(aa64_uao, cpu)) { - define_one_arm_cp_reg(cpu, &uao_reginfo); - } + /* Completely clear out existing QEMU watchpoints and our array, to + * avoid possible stale entries following migration load. 
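(A worked example of the byte-address-select path in hw_watchpoint_update(), using invented DBGWVR/DBGWCR values, may help here.)

/*
 * Suppose DBGWVR<n> = 0x1000 and DBGWCR<n> has E = 1, LSC = 0b11,
 * MASK = 0 and BAS = 0b00001100.  Then:
 *   LSC == 3                    -> flags |= BP_MEM_ACCESS (read and write)
 *   MASK == 0                   -> take the BAS path
 *   basstart = ctz32(0x0c)      = 2
 *   len      = cto32(0x0c >> 2) = 2
 *   wvr     += basstart         -> 0x1002
 * i.e. cpu_watchpoint_insert() installs a two-byte read/write
 * watchpoint covering 0x1002..0x1003.
 */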
+ */ + cpu_watchpoint_remove_all(CPU(cpu), BP_CPU); + memset(env->cpu_watchpoint, 0, sizeof(env->cpu_watchpoint)); - if (cpu_isar_feature(aa64_dit, cpu)) { - define_one_arm_cp_reg(cpu, &dit_reginfo); - } - if (cpu_isar_feature(aa64_ssbs, cpu)) { - define_one_arm_cp_reg(cpu, &ssbs_reginfo); + for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_watchpoint); i++) { + hw_watchpoint_update(cpu, i); } +} - if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) { - define_arm_cp_regs(cpu, vhe_reginfo); - } +void hw_breakpoint_update(ARMCPU *cpu, int n) +{ + CPUARMState *env = &cpu->env; + uint64_t bvr = env->cp15.dbgbvr[n]; + uint64_t bcr = env->cp15.dbgbcr[n]; + vaddr addr; + int bt; + int flags = BP_CPU; - if (cpu_isar_feature(aa64_sve, cpu)) { - define_one_arm_cp_reg(cpu, &zcr_el1_reginfo); - if (arm_feature(env, ARM_FEATURE_EL2)) { - define_one_arm_cp_reg(cpu, &zcr_el2_reginfo); - } else { - define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo); - } - if (arm_feature(env, ARM_FEATURE_EL3)) { - define_one_arm_cp_reg(cpu, &zcr_el3_reginfo); - } + if (env->cpu_breakpoint[n]) { + cpu_breakpoint_remove_by_ref(CPU(cpu), env->cpu_breakpoint[n]); + env->cpu_breakpoint[n] = NULL; } -#ifdef TARGET_AARCH64 - if (cpu_isar_feature(aa64_pauth, cpu)) { - define_arm_cp_regs(cpu, pauth_reginfo); - } - if (cpu_isar_feature(aa64_rndr, cpu)) { - define_arm_cp_regs(cpu, rndr_reginfo); - } - if (cpu_isar_feature(aa64_tlbirange, cpu)) { - define_arm_cp_regs(cpu, tlbirange_reginfo); - } - if (cpu_isar_feature(aa64_tlbios, cpu)) { - define_arm_cp_regs(cpu, tlbios_reginfo); + if (!extract64(bcr, 0, 1)) { + /* E bit clear : watchpoint disabled */ + return; } -#ifndef CONFIG_USER_ONLY - /* Data Cache clean instructions up to PoP */ - if (cpu_isar_feature(aa64_dcpop, cpu)) { - define_one_arm_cp_reg(cpu, dcpop_reg); - if (cpu_isar_feature(aa64_dcpodp, cpu)) { - define_one_arm_cp_reg(cpu, dcpodp_reg); + bt = extract64(bcr, 20, 4); + + switch (bt) { + case 4: /* unlinked address mismatch (reserved if AArch64) */ + case 5: /* linked address mismatch (reserved if AArch64) */ + qemu_log_mask(LOG_UNIMP, + "arm: address mismatch breakpoint types not implemented\n"); + return; + case 0: /* unlinked address match */ + case 1: /* linked address match */ + { + /* Bits [63:49] are hardwired to the value of bit [48]; that is, + * we behave as if the register was sign extended. Bits [1:0] are + * RES0. The BAS field is used to allow setting breakpoints on 16 + * bit wide instructions; it is CONSTRAINED UNPREDICTABLE whether + * a bp will fire if the addresses covered by the bp and the addresses + * covered by the insn overlap but the insn doesn't start at the + * start of the bp address range. We choose to require the insn and + * the bp to have the same address. The constraints on writing to + * BAS enforced in dbgbcr_write mean we have only four cases: + * 0b0000 => no breakpoint + * 0b0011 => breakpoint on addr + * 0b1100 => breakpoint on addr + 2 + * 0b1111 => breakpoint on addr + * See also figure D2-3 in the v8 ARM ARM (DDI0487A.c). + */ + int bas = extract64(bcr, 5, 4); + addr = sextract64(bvr, 0, 49) & ~3ULL; + if (bas == 0) { + return; + } + if (bas == 0xc) { + addr += 2; } + break; } -#endif /*CONFIG_USER_ONLY*/ - - /* - * If full MTE is enabled, add all of the system registers. - * If only "instructions available at EL0" are enabled, - * then define only a RAZ/WI version of PSTATE.TCO. 
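(Similarly, a worked example for the BAS handling in hw_breakpoint_update(), again with invented register values.)

/*
 * Suppose DBGBVR<n> = 0x8004 and DBGBCR<n> has E = 1, BT = 0b0000
 * (unlinked address match) and BAS = 0b1100.  Then:
 *   addr = sextract64(0x8004, 0, 49) & ~3ULL;   -> 0x8004
 *   bas == 0xc                                  -> addr += 2
 * giving a breakpoint on the 16-bit instruction slot at 0x8006.
 */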
- */ - if (cpu_isar_feature(aa64_mte, cpu)) { - define_arm_cp_regs(cpu, mte_reginfo); - define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo); - } else if (cpu_isar_feature(aa64_mte_insn_reg, cpu)) { - define_arm_cp_regs(cpu, mte_tco_ro_reginfo); - define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo); + case 2: /* unlinked context ID match */ + case 8: /* unlinked VMID match (reserved if no EL2) */ + case 10: /* unlinked context ID and VMID match (reserved if no EL2) */ + qemu_log_mask(LOG_UNIMP, + "arm: unlinked context breakpoint types not implemented\n"); + return; + case 9: /* linked VMID match (reserved if no EL2) */ + case 11: /* linked context ID and VMID match (reserved if no EL2) */ + case 3: /* linked context ID match */ + default: + /* We must generate no events for Linked context matches (unless + * they are linked to by some other bp/wp, which is handled in + * updates for the linking bp/wp). We choose to also generate no events + * for reserved values. + */ + return; } -#endif - if (cpu_isar_feature(any_predinv, cpu)) { - define_arm_cp_regs(cpu, predinv_reginfo); - } + cpu_breakpoint_insert(CPU(cpu), addr, flags, &env->cpu_breakpoint[n]); +} - if (cpu_isar_feature(any_ccidx, cpu)) { - define_arm_cp_regs(cpu, ccsidr2_reginfo); - } +void hw_breakpoint_update_all(ARMCPU *cpu) +{ + int i; + CPUARMState *env = &cpu->env; -#ifndef CONFIG_USER_ONLY - /* - * Register redirections and aliases must be done last, - * after the registers from the other extensions have been defined. + /* Completely clear out existing QEMU breakpoints and our array, to + * avoid possible stale entries following migration load. */ - if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) { - define_arm_vh_e2h_redirects_aliases(cpu); + cpu_breakpoint_remove_all(CPU(cpu), BP_CPU); + memset(env->cpu_breakpoint, 0, sizeof(env->cpu_breakpoint)); + + for (i = 0; i < ARRAY_SIZE(cpu->env.cpu_breakpoint); i++) { + hw_breakpoint_update(cpu, i); } -#endif } void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu) @@ -8835,397 +725,6 @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp) return cpu_list; } -static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r, - void *opaque, int state, int secstate, - int crm, int opc1, int opc2, - const char *name) -{ - /* - * Private utility function for define_one_arm_cp_reg_with_opaque(): - * add a single reginfo struct to the hash table. - */ - uint32_t *key = g_new(uint32_t, 1); - ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo)); - int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0; - int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0; - - r2->name = g_strdup(name); - /* - * Reset the secure state to the specific incoming state. This is - * necessary as the register may have been defined with both states. - */ - r2->secure = secstate; - - if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) { - /* - * Register is banked (using both entries in array). - * Overwriting fieldoffset as the array is only used to define - * banked registers but later only fieldoffset is used. - */ - r2->fieldoffset = r->bank_fieldoffsets[ns]; - } - - if (state == ARM_CP_STATE_AA32) { - if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) { - /* - * If the register is banked then we don't need to migrate or - * reset the 32-bit instance in certain cases: - * - * 1) If the register has both 32-bit and 64-bit instances then we - * can count on the 64-bit instance taking care of the - * non-secure bank. 
- * 2) If ARMv8 is enabled then we can count on a 64-bit version - * taking care of the secure bank. This requires that separate - * 32 and 64-bit definitions are provided. - */ - if ((r->state == ARM_CP_STATE_BOTH && ns) || - (arm_feature(&cpu->env, ARM_FEATURE_V8) && !ns)) { - r2->type |= ARM_CP_ALIAS; - } - } else if ((secstate != r->secure) && !ns) { - /* - * The register is not banked so we only want to allow migration of - * the non-secure instance. - */ - r2->type |= ARM_CP_ALIAS; - } - - if (r->state == ARM_CP_STATE_BOTH) { - /* We assume it is a cp15 register if the .cp field is left unset */ - if (r2->cp == 0) { - r2->cp = 15; - } - -#ifdef HOST_WORDS_BIGENDIAN - if (r2->fieldoffset) { - r2->fieldoffset += sizeof(uint32_t); - } -#endif - } - } - if (state == ARM_CP_STATE_AA64) { - /* - * To allow abbreviation of ARMCPRegInfo - * definitions, we treat cp == 0 as equivalent to - * the value for "standard guest-visible sysreg". - * STATE_BOTH definitions are also always "standard - * sysreg" in their AArch64 view (the .cp value may - * be non-zero for the benefit of the AArch32 view). - */ - if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) { - r2->cp = CP_REG_ARM64_SYSREG_CP; - } - *key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm, - r2->opc0, opc1, opc2); - } else { - *key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2); - } - if (opaque) { - r2->opaque = opaque; - } - /* - * reginfo passed to helpers is correct for the actual access, - * and is never ARM_CP_STATE_BOTH: - */ - r2->state = state; - /* - * Make sure reginfo passed to helpers for wildcarded regs - * has the correct crm/opc1/opc2 for this reg, not CP_ANY: - */ - r2->crm = crm; - r2->opc1 = opc1; - r2->opc2 = opc2; - /* - * By convention, for wildcarded registers only the first - * entry is used for migration; the others are marked as - * ALIAS so we don't try to transfer the register - * multiple times. Special registers (ie NOP/WFI) are - * never migratable and not even raw-accessible. - */ - if ((r->type & ARM_CP_SPECIAL)) { - r2->type |= ARM_CP_NO_RAW; - } - if (((r->crm == CP_ANY) && crm != 0) || - ((r->opc1 == CP_ANY) && opc1 != 0) || - ((r->opc2 == CP_ANY) && opc2 != 0)) { - r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB; - } - - /* - * Check that raw accesses are either forbidden or handled. Note that - * we can't assert this earlier because the setup of fieldoffset for - * banked registers has to be done first. - */ - if (!(r2->type & ARM_CP_NO_RAW)) { - assert(!raw_accessors_invalid(r2)); - } - - /* Overriding of an existing definition must be explicitly requested. */ - if (!(r->type & ARM_CP_OVERRIDE)) { - ARMCPRegInfo *oldreg; - oldreg = g_hash_table_lookup(cpu->cp_regs, key); - if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) { - fprintf(stderr, "Register redefined: cp=%d %d bit " - "crn=%d crm=%d opc1=%d opc2=%d, " - "was %s, now %s\n", r2->cp, 32 + 32 * is64, - r2->crn, r2->crm, r2->opc1, r2->opc2, - oldreg->name, r2->name); - g_assert_not_reached(); - } - } - g_hash_table_insert(cpu->cp_regs, key, r2); -} - - -void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, - const ARMCPRegInfo *r, void *opaque) -{ - /* - * Define implementations of coprocessor registers. - * We store these in a hashtable because typically - * there are less than 150 registers in a space which - * is 16*16*16*8*8 = 262144 in size. - * Wildcarding is supported for the crm, opc1 and opc2 fields. 
- * If a register is defined twice then the second definition is - * used, so this can be used to define some generic registers and - * then override them with implementation specific variations. - * At least one of the original and the second definition should - * include ARM_CP_OVERRIDE in its type bits -- this is just a guard - * against accidental use. - * - * The state field defines whether the register is to be - * visible in the AArch32 or AArch64 execution state. If the - * state is set to ARM_CP_STATE_BOTH then we synthesise a - * reginfo structure for the AArch32 view, which sees the lower - * 32 bits of the 64 bit register. - * - * Only registers visible in AArch64 may set r->opc0; opc0 cannot - * be wildcarded. AArch64 registers are always considered to be 64 - * bits; the ARM_CP_64BIT* flag applies only to the AArch32 view of - * the register, if any. - */ - int crm, opc1, opc2, state; - int crmmin = (r->crm == CP_ANY) ? 0 : r->crm; - int crmmax = (r->crm == CP_ANY) ? 15 : r->crm; - int opc1min = (r->opc1 == CP_ANY) ? 0 : r->opc1; - int opc1max = (r->opc1 == CP_ANY) ? 7 : r->opc1; - int opc2min = (r->opc2 == CP_ANY) ? 0 : r->opc2; - int opc2max = (r->opc2 == CP_ANY) ? 7 : r->opc2; - /* 64 bit registers have only CRm and Opc1 fields */ - assert(!((r->type & ARM_CP_64BIT) && (r->opc2 || r->crn))); - /* op0 only exists in the AArch64 encodings */ - assert((r->state != ARM_CP_STATE_AA32) || (r->opc0 == 0)); - /* AArch64 regs are all 64 bit so ARM_CP_64BIT is meaningless */ - assert((r->state != ARM_CP_STATE_AA64) || !(r->type & ARM_CP_64BIT)); - /* - * This API is only for Arm's system coprocessors (14 and 15) or - * (M-profile or v7A-and-earlier only) for implementation defined - * coprocessors in the range 0..7. Our decode assumes this, since - * 8..13 can be used for other insns including VFP and Neon. See - * valid_cp() in translate.c. Assert here that we haven't tried - * to use an invalid coprocessor number. - */ - switch (r->state) { - case ARM_CP_STATE_BOTH: - /* 0 has a special meaning, but otherwise the same rules as AA32. */ - if (r->cp == 0) { - break; - } - /* fall through */ - case ARM_CP_STATE_AA32: - if (arm_feature(&cpu->env, ARM_FEATURE_V8) && - !arm_feature(&cpu->env, ARM_FEATURE_M)) { - assert(r->cp >= 14 && r->cp <= 15); - } else { - assert(r->cp < 8 || (r->cp >= 14 && r->cp <= 15)); - } - break; - case ARM_CP_STATE_AA64: - assert(r->cp == 0 || r->cp == CP_REG_ARM64_SYSREG_CP); - break; - default: - g_assert_not_reached(); - } - /* - * The AArch64 pseudocode CheckSystemAccess() specifies that op1 - * encodes a minimum access level for the register. We roll this - * runtime check into our general permission check code, so check - * here that the reginfo's specified permissions are strict enough - * to encompass the generic architectural permission check. 
- */ - if (r->state != ARM_CP_STATE_AA32) { - int mask = 0; - switch (r->opc1) { - case 0: - /* min_EL EL1, but some accessible to EL0 via kernel ABI */ - mask = PL0U_R | PL1_RW; - break; - case 1: case 2: - /* min_EL EL1 */ - mask = PL1_RW; - break; - case 3: - /* min_EL EL0 */ - mask = PL0_RW; - break; - case 4: - case 5: - /* min_EL EL2 */ - mask = PL2_RW; - break; - case 6: - /* min_EL EL3 */ - mask = PL3_RW; - break; - case 7: - /* min_EL EL1, secure mode only (we don't check the latter) */ - mask = PL1_RW; - break; - default: - /* broken reginfo with out-of-range opc1 */ - assert(false); - break; - } - /* assert our permissions are not too lax (stricter is fine) */ - assert((r->access & ~mask) == 0); - } - - /* - * Check that the register definition has enough info to handle - * reads and writes if they are permitted. - */ - if (!(r->type & (ARM_CP_SPECIAL | ARM_CP_CONST))) { - if (r->access & PL3_R) { - assert((r->fieldoffset || - (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) || - r->readfn); - } - if (r->access & PL3_W) { - assert((r->fieldoffset || - (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) || - r->writefn); - } - } - /* Bad type field probably means missing sentinel at end of reg list */ - assert(cptype_valid(r->type)); - for (crm = crmmin; crm <= crmmax; crm++) { - for (opc1 = opc1min; opc1 <= opc1max; opc1++) { - for (opc2 = opc2min; opc2 <= opc2max; opc2++) { - for (state = ARM_CP_STATE_AA32; - state <= ARM_CP_STATE_AA64; state++) { - if (r->state != state && r->state != ARM_CP_STATE_BOTH) { - continue; - } - if (state == ARM_CP_STATE_AA32) { - /* - * Under AArch32 CP registers can be common - * (same for secure and non-secure world) or banked. - */ - char *name; - - switch (r->secure) { - case ARM_CP_SECSTATE_S: - case ARM_CP_SECSTATE_NS: - add_cpreg_to_hashtable(cpu, r, opaque, state, - r->secure, crm, opc1, opc2, - r->name); - break; - default: - name = g_strdup_printf("%s_S", r->name); - add_cpreg_to_hashtable(cpu, r, opaque, state, - ARM_CP_SECSTATE_S, - crm, opc1, opc2, name); - g_free(name); - add_cpreg_to_hashtable(cpu, r, opaque, state, - ARM_CP_SECSTATE_NS, - crm, opc1, opc2, r->name); - break; - } - } else { - /* - * AArch64 registers get mapped to non-secure - * instance of AArch32 - */ - add_cpreg_to_hashtable(cpu, r, opaque, state, - ARM_CP_SECSTATE_NS, - crm, opc1, opc2, r->name); - } - } - } - } - } -} - -void define_arm_cp_regs_with_opaque(ARMCPU *cpu, - const ARMCPRegInfo *regs, void *opaque) -{ - /* Define a whole list of registers */ - const ARMCPRegInfo *r; - for (r = regs; r->type != ARM_CP_SENTINEL; r++) { - define_one_arm_cp_reg_with_opaque(cpu, r, opaque); - } -} - -/* - * Modify ARMCPRegInfo for access from userspace. - * - * This is a data driven modification directed by - * ARMCPRegUserSpaceInfo. All registers become ARM_CP_CONST as - * user-space cannot alter any values and dynamic values pertaining to - * execution state are hidden from user space view anyway. 
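(One consequence of the wildcard handling in define_one_arm_cp_reg_with_opaque(), spelled out with an invented example.)

/*
 * A reginfo defined with .crm = CP_ANY and fixed opc1/opc2 is expanded
 * by the crm/opc1/opc2 loops into 16 hash table entries, one per crm
 * value 0..15 (and 128 entries if .opc2 is also CP_ANY).  Only the
 * crm == 0 instance stays migratable; the other copies are marked
 * ARM_CP_ALIAS | ARM_CP_NO_GDB, so a wildcarded register is neither
 * migrated nor exported to gdb more than once.
 */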
- */ -void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods) -{ - const ARMCPRegUserSpaceInfo *m; - ARMCPRegInfo *r; - - for (m = mods; m->name; m++) { - GPatternSpec *pat = NULL; - if (m->is_glob) { - pat = g_pattern_spec_new(m->name); - } - for (r = regs; r->type != ARM_CP_SENTINEL; r++) { - if (pat && g_pattern_match_string(pat, r->name)) { - r->type = ARM_CP_CONST; - r->access = PL0U_R; - r->resetvalue = 0; - /* continue */ - } else if (strcmp(r->name, m->name) == 0) { - r->type = ARM_CP_CONST; - r->access = PL0U_R; - r->resetvalue &= m->exported_bits; - r->resetvalue |= m->fixed_bits; - break; - } - } - if (pat) { - g_pattern_spec_free(pat); - } - } -} - -const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp) -{ - return g_hash_table_lookup(cpregs, &encoded_cp); -} - -void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) -{ - /* Helper coprocessor write function for write-ignore registers */ -} - -uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri) -{ - /* Helper coprocessor write function for read-as-zero registers */ - return 0; -} - -void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque) -{ - /* Helper coprocessor reset function for do-nothing-on-reset registers */ -} - static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type) { /* Return true if it is not valid for us to switch to diff --git a/target/arm/tcg/op_helper.c b/target/arm/tcg/op_helper.c index efcb600992..8c95d7773d 100644 --- a/target/arm/tcg/op_helper.c +++ b/target/arm/tcg/op_helper.c @@ -19,6 +19,7 @@ #include "qemu/osdep.h" #include "qemu/main-loop.h" #include "cpu.h" +#include "cpregs.h" #include "exec/helper-proto.h" #include "internals.h" #include "exec/exec-all.h" diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c index ceac0ee2bd..26ed1f2e02 100644 --- a/target/arm/tcg/translate-a64.c +++ b/target/arm/tcg/translate-a64.c @@ -19,6 +19,7 @@ #include "qemu/osdep.h" #include "cpu.h" +#include "cpregs.h" #include "exec/exec-all.h" #include "tcg/tcg-op.h" #include "tcg/tcg-op-gvec.h" diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c index 8e0e55c1e0..2e626a1a93 100644 --- a/target/arm/tcg/translate.c +++ b/target/arm/tcg/translate.c @@ -22,6 +22,7 @@ #include "cpu.h" #include "internals.h" +#include "cpregs.h" #include "disas/disas.h" #include "exec/exec-all.h" #include "tcg/tcg-op.h" diff --git a/target/arm/meson.build b/target/arm/meson.build index 3e7cea7604..5fb34c1af1 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -1,7 +1,9 @@ arm_ss = ss.source_set() arm_ss.add(files( + 'cpregs.c', 'cpu.c', 'cpu-mmu.c', + 'cpustate-list.c', 'gdbstub.c', 'cpu_tcg.c', )) diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index 3503ad96c8..3d34723eee 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -20,6 +20,7 @@ arm_ss.add(when: 'CONFIG_TCG', if_true: files( 'translate-neon.c', 'translate-vfp.c', 'helper.c', + 'cpregs.c', 'iwmmxt_helper.c', 'm_helper.c', 'neon_helper.c', From patchwork Fri Jun 4 15:52:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454076 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp561606jae; Fri, 4 Jun 2021 09:24:29 -0700 (PDT) X-Google-Smtp-Source: 
b=jFn7/emm2wEAbeqKtehC4ildRuKyLj4XMDN3iPwIHRYhGb3yWor9sGmXqXgRf88OwS zSPRkto9317bz6L5sy9Sai2C3ni6sHpcFf7Oz+rXsZSVSm/BqVTPnKMuTau7/1x80rni OJg02CCg932HUosPXBlSjnlqNBVejSJDB+WqUVkJkZfwam9FM4QNYMFwHRNngUkTNBma Eb2Yxi7cheizAMbEdH2Njd3uKLvNuscNEzw4V5rOrxq/iq+JUSolwn0Gk3wgwVgGX/o7 3TrQcxKOwuyVSlqHL9ZDvSINSaQq01MH+zE8tgdUJESTx1W1MguNvBSQwty4oi9s7FZg 4rvA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=zBvCfaGTlADJ93Rev0o5Ivgyc3KA6JcHQPgwZFaWAco=; b=Xh9inQWHFnV7XY7nGs3p+pOmle2XLJN1yIJbd44jkluuKDsTQgpF8DWsgwG0yXqbWN 5E0ObnXE+LsI5A9ajfp0hWiKt+zuISVxPwTHCA2P2BJ2/saUYFuw8Z3ygzbGqnTdPsc8 +pWxh85fJ99HYqdUhC+ll1EZarhaYgEPwMBQC42Zl8QyuEZP9Um3QpOQqcscZ8C6b9D7 E5fBhbgBTUGq8fdKj1qqbS3Lc2sRndw4DnGIGkgggGyZ1AZ5BWsTAdeDsInUI2+2Rz/D 9g1makDMXwftJfmILghT+WzpykoB+Vw5BscoLBQiMMbLwL+VZ254Fo5G0NRWPrY4OhfC ic7g== X-Gm-Message-State: AOAM531V4KggVsDlr+mSHHZPso5Fwtrkwh8uo3iWHlj1PMDe9tDoYc80 NJHjM97lYFPP89pvWLJ3n4KgiQ== X-Received: by 2002:adf:8bc9:: with SMTP id w9mr4578743wra.378.1622822558358; Fri, 04 Jun 2021 09:02:38 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id w23sm7610085wmi.0.2021.06.04.09.02.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:02:37 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 941A71FFB4; Fri, 4 Jun 2021 16:53:16 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 29/99] target/arm: move cpu definitions to common cpu module Date: Fri, 4 Jun 2021 16:52:02 +0100 Message-Id: <20210604155312.15902-30-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::433; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x433.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Reviewed-by: Alex Bennée Signed-off-by: Alex Bennée --- target/arm/cpu-common.c | 41 +++++++++++++++++++++++++++++++++++++++++ target/arm/tcg/helper.c | 29 ----------------------------- target/arm/meson.build | 1 + 3 files changed, 42 insertions(+), 29 deletions(-) create mode 100644 target/arm/cpu-common.c -- 2.20.1 diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c new file mode 100644 index 0000000000..0f8ca94815 --- /dev/null +++ b/target/arm/cpu-common.c @@ -0,0 +1,41 @@ +/* + * ARM CPU common definitions + * + * This code is licensed under the GNU GPL v2 or later. 
+ * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "qom/object.h" +#include "qapi/qapi-commands-machine-target.h" +#include "qapi/error.h" +#include "cpu.h" + +static void arm_cpu_add_definition(gpointer data, gpointer user_data) +{ + ObjectClass *oc = data; + CpuDefinitionInfoList **cpu_list = user_data; + CpuDefinitionInfo *info; + const char *typename; + + typename = object_class_get_name(oc); + info = g_malloc0(sizeof(*info)); + info->name = g_strndup(typename, + strlen(typename) - strlen("-" TYPE_ARM_CPU)); + info->q_typename = g_strdup(typename); + + QAPI_LIST_PREPEND(*cpu_list, info); +} + +CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp) +{ + CpuDefinitionInfoList *cpu_list = NULL; + GSList *list; + + list = object_class_get_list(TYPE_ARM_CPU, false); + g_slist_foreach(list, arm_cpu_add_definition, &cpu_list); + g_slist_free(list); + + return cpu_list; +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 09503db37b..f54ece9b42 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -28,7 +28,6 @@ #include "sysemu/kvm.h" #include "sysemu/tcg.h" #include "qemu/range.h" -#include "qapi/qapi-commands-machine-target.h" #include "qapi/error.h" #include "qemu/guest-random.h" #ifdef CONFIG_TCG @@ -697,34 +696,6 @@ void arm_cpu_list(void) g_slist_free(list); } -static void arm_cpu_add_definition(gpointer data, gpointer user_data) -{ - ObjectClass *oc = data; - CpuDefinitionInfoList **cpu_list = user_data; - CpuDefinitionInfo *info; - const char *typename; - - typename = object_class_get_name(oc); - info = g_malloc0(sizeof(*info)); - info->name = g_strndup(typename, - strlen(typename) - strlen("-" TYPE_ARM_CPU)); - info->q_typename = g_strdup(typename); - - QAPI_LIST_PREPEND(*cpu_list, info); -} - -CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp) -{ - CpuDefinitionInfoList *cpu_list = NULL; - GSList *list; - - list = object_class_get_list(TYPE_ARM_CPU, false); - g_slist_foreach(list, arm_cpu_add_definition, &cpu_list); - g_slist_free(list); - - return cpu_list; -} - static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type) { /* Return true if it is not valid for us to switch to diff --git a/target/arm/meson.build b/target/arm/meson.build index 5fb34c1af1..8d6177c1fb 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -2,6 +2,7 @@ arm_ss = ss.source_set() arm_ss.add(files( 'cpregs.c', 'cpu.c', + 'cpu-common.c', 'cpu-mmu.c', 'cpustate-list.c', 'gdbstub.c', From patchwork Fri Jun 4 15:52:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454119 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp601246jae; Fri, 4 Jun 2021 10:12:42 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzK8ekZnoypM+pDscnh4xbso/YZ4vT45i2eXmJjnEEFy76R9ZoaREfgz6CGYw03jcFpk1yj X-Received: by 2002:a9f:35a4:: with SMTP id t33mr4315532uad.43.1622826762398; Fri, 04 Jun 2021 10:12:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826762; cv=none; d=google.com; s=arc-20160816; b=XXiQWEI8E2X2Dtht4czny79T1K60MboIyw1LZl1f+ZJOjLo0/RP5iu29lFicElFASy eYw4dR8jZJD5QC1yXqFKwzwsqRL7NURKVpvwXQUBgv2vc9jjmJq/nFyRdke9fhfQznIR k0n/hBpDvmbHOAjPsodVZZsz9j2/N415aTLyHMhj9SfTJ1YBWMt75hxejBKAa0n37LCo 9tAXrc1cYGoo0s+DhXzuKLRd1xHKSA6Zb1CRv1Beu1oEtVeoIshOCmG490OTnzI+naNz Mjfh5ssiGPflfBi+NB0/i3CQ6x11323EE2AqGQwV651m61NQLHbDm00jmZKJWJKTUX4A 3nCQ== ARC-Message-Signature: i=1; 
GSoCGk9ktUiJbtyeE0I1/tQ5TS4p96lvm0l39RHEHZpnQoPN/ZR2IFVT4kwN66LCxqaw KpbFN2X8scRQbTu/q3MRIyou/leCWtTmNGcJnQf0jx1xkUtDKx20kbBT8MOUhm0BGkGw hhe2bcesLyz2hG08QVaI33yKos7+8StFEh3hOdOY8XKWzZpC0/dYy6ehoG60CJ/qwnAZ rucNRLC137/NqJLHHwWyh4rVh1pD0ADgUdeNie89J4W/AnTrXvnLGkVlR5GBx0RAhHDc APFA== X-Gm-Message-State: AOAM531klN4CSZRNKqwUqmytGXX7GRHFPEirCLi+AFZwdYryYC9cYPQ2 oUTt9p5CkiJY6H2W1Ebk9Daq6g== X-Received: by 2002:adf:f1c3:: with SMTP id z3mr4647776wro.375.1622824364849; Fri, 04 Jun 2021 09:32:44 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id k8sm4338274wrp.3.2021.06.04.09.32.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:32:43 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id AEA5F1FFB5; Fri, 4 Jun 2021 16:53:16 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 30/99] target/arm: only perform TCG cpu and machine inits if TCG enabled Date: Fri, 4 Jun 2021 16:52:03 +0100 Message-Id: <20210604155312.15902-31-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42d; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , "open list:Overall KVM CPUs" , Richard Henderson , qemu-arm@nongnu.org, Claudio Fontana , Paolo Bonzini , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana of note, cpreg lists were previously initialized by TCG first, and then thrown away and replaced with the data coming from KVM. Now we just initialize once, either for TCG or for KVM. 
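A simplified sketch of the realize-time flow after this change (illustrative only; the authoritative code is in the hunks below, which use the same function names):

    /* Sketch of arm_cpu_realizefn() after this patch: the cpreg list is
     * built exactly once, by whichever accelerator is in use.
     */
    if (tcg_enabled()) {
        register_cp_regs_for_features(cpu);
        arm_cpu_register_gdb_regs_for_features(cpu);
        init_cpreg_list(cpu);
    }
    /* Under KVM, kvm_arm_init_cpreg_list() now allocates the cpreg arrays
     * with g_new() instead of g_renew(), since there is no TCG-created
     * list left over to replace.
     */

The machine.c vmstate hooks follow the same pattern, replacing the !kvm_enabled() checks with tcg_enabled().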
Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu.c | 32 ++++++++++++++++++-------------- target/arm/kvm.c | 18 +++++++++--------- target/arm/machine.c | 20 +++++++++++++------- 3 files changed, 40 insertions(+), 30 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 9e616a15e1..7bb406efd2 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -435,9 +435,11 @@ static void arm_cpu_reset(DeviceState *dev) } #endif - hw_breakpoint_update_all(cpu); - hw_watchpoint_update_all(cpu); - arm_rebuild_hflags(env); + if (tcg_enabled()) { + hw_breakpoint_update_all(cpu); + hw_watchpoint_update_all(cpu); + arm_rebuild_hflags(env); + } } static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx, @@ -1318,6 +1320,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) } } +#ifdef CONFIG_TCG { uint64_t scale; @@ -1343,7 +1346,8 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) cpu->gt_timer[GTIMER_HYPVIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale, arm_gt_hvtimer_cb, cpu); } -#endif +#endif /* CONFIG_TCG */ +#endif /* !CONFIG_USER_ONLY */ cpu_exec_realizefn(cs, &local_err); if (local_err != NULL) { @@ -1646,17 +1650,16 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) unset_feature(env, ARM_FEATURE_PMU); } if (arm_feature(env, ARM_FEATURE_PMU)) { - pmu_init(cpu); - - if (!kvm_enabled()) { + if (tcg_enabled()) { + pmu_init(cpu); arm_register_pre_el_change_hook(cpu, &pmu_pre_el_change, 0); arm_register_el_change_hook(cpu, &pmu_post_el_change, 0); - } #ifndef CONFIG_USER_ONLY - cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, arm_pmu_timer_cb, - cpu); + cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, arm_pmu_timer_cb, + cpu); #endif + } } else { cpu->isar.id_aa64dfr0 = FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, PMUVER, 0); @@ -1739,10 +1742,11 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) set_feature(env, ARM_FEATURE_VBAR); } - register_cp_regs_for_features(cpu); - arm_cpu_register_gdb_regs_for_features(cpu); - - init_cpreg_list(cpu); + if (tcg_enabled()) { + register_cp_regs_for_features(cpu); + arm_cpu_register_gdb_regs_for_features(cpu); + init_cpreg_list(cpu); + } #ifndef CONFIG_USER_ONLY MachineState *ms = MACHINE(qdev_get_machine()); diff --git a/target/arm/kvm.c b/target/arm/kvm.c index d8381ba224..1b093cc52f 100644 --- a/target/arm/kvm.c +++ b/target/arm/kvm.c @@ -431,9 +431,11 @@ static uint64_t *kvm_arm_get_cpreg_ptr(ARMCPU *cpu, uint64_t regidx) return &cpu->cpreg_values[res - cpu->cpreg_indexes]; } -/* Initialize the ARMCPU cpreg list according to the kernel's - * definition of what CPU registers it knows about (and throw away - * the previous TCG-created cpreg list). +/* + * Initialize the ARMCPU cpreg list according to the kernel's + * definition of what CPU registers it knows about. 
+ * + * The parallel for TCG is init_cpreg_list() in tcg/ */ int kvm_arm_init_cpreg_list(ARMCPU *cpu) { @@ -475,12 +477,10 @@ int kvm_arm_init_cpreg_list(ARMCPU *cpu) arraylen++; } - cpu->cpreg_indexes = g_renew(uint64_t, cpu->cpreg_indexes, arraylen); - cpu->cpreg_values = g_renew(uint64_t, cpu->cpreg_values, arraylen); - cpu->cpreg_vmstate_indexes = g_renew(uint64_t, cpu->cpreg_vmstate_indexes, - arraylen); - cpu->cpreg_vmstate_values = g_renew(uint64_t, cpu->cpreg_vmstate_values, - arraylen); + cpu->cpreg_indexes = g_new(uint64_t, arraylen); + cpu->cpreg_values = g_new(uint64_t, arraylen); + cpu->cpreg_vmstate_indexes = g_new(uint64_t, arraylen); + cpu->cpreg_vmstate_values = g_new(uint64_t, arraylen); cpu->cpreg_array_len = arraylen; cpu->cpreg_vmstate_array_len = arraylen; diff --git a/target/arm/machine.c b/target/arm/machine.c index e568662cca..2982e8d7f4 100644 --- a/target/arm/machine.c +++ b/target/arm/machine.c @@ -2,6 +2,7 @@ #include "cpu.h" #include "qemu/error-report.h" #include "sysemu/kvm.h" +#include "sysemu/tcg.h" #include "kvm_arm.h" #include "internals.h" #include "migration/cpu.h" @@ -635,7 +636,7 @@ static int cpu_pre_save(void *opaque) { ARMCPU *cpu = opaque; - if (!kvm_enabled()) { + if (tcg_enabled()) { pmu_op_start(&cpu->env); } @@ -670,7 +671,7 @@ static int cpu_post_save(void *opaque) { ARMCPU *cpu = opaque; - if (!kvm_enabled()) { + if (tcg_enabled()) { pmu_op_finish(&cpu->env); } @@ -689,7 +690,7 @@ static int cpu_pre_load(void *opaque) */ env->irq_line_state = UINT32_MAX; - if (!kvm_enabled()) { + if (tcg_enabled()) { pmu_op_start(&cpu->env); } @@ -759,13 +760,13 @@ static int cpu_post_load(void *opaque, int version_id) } } - hw_breakpoint_update_all(cpu); - hw_watchpoint_update_all(cpu); + if (tcg_enabled()) { + hw_breakpoint_update_all(cpu); + hw_watchpoint_update_all(cpu); - if (!kvm_enabled()) { pmu_op_finish(&cpu->env); + arm_rebuild_hflags(&cpu->env); } - arm_rebuild_hflags(&cpu->env); return 0; } @@ -815,8 +816,13 @@ const VMStateDescription vmstate_arm_cpu = { VMSTATE_UINT32(env.exception.syndrome, ARMCPU), VMSTATE_UINT32(env.exception.fsr, ARMCPU), VMSTATE_UINT64(env.exception.vaddress, ARMCPU), +#ifdef CONFIG_TCG VMSTATE_TIMER_PTR(gt_timer[GTIMER_PHYS], ARMCPU), VMSTATE_TIMER_PTR(gt_timer[GTIMER_VIRT], ARMCPU), +#else + VMSTATE_UNUSED(sizeof(QEMUTimer *)), + VMSTATE_UNUSED(sizeof(QEMUTimer *)), +#endif /* CONFIG_TCG */ { .name = "power_state", .version_id = 0, From patchwork Fri Jun 4 15:52:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454107 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp585176jae; Fri, 4 Jun 2021 09:53:20 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxVR47l8tS4qV8MdAnqWF45J+LdrvTvRIPkN32eoJqmO1nBFm4wo+V6Gw/IuvGAluBgDkaJ X-Received: by 2002:a9f:3232:: with SMTP id x47mr4544125uad.80.1622825600252; Fri, 04 Jun 2021 09:53:20 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825600; cv=none; d=google.com; s=arc-20160816; b=AXgYq9m0zGJ2ztleBc1XDirSZLDvRFX4yjpobHVZyRHW7UbKadXHLO8NsJfqCSIqzS yG9VrAR2nl+KHhVkOEyqOJgwX30bej5/2qpElXnSfuzlMIE8K3ETUn8AhPcCcYlibkcD RLJvjlGHb8NETDlBovLPFB3yE5GgK3qJBsU2OQShxh9NO6iSg0o2uUvWi8uoAWZRoGBR HnfJMN2oKUR1F97sXDUESMSkTqeiZBSrGrEAHEEFNiML9LxPgezsSB6BJ6ojH5cXw0KW xWMqHTb6gzWG7tbXhfqg3Zw3Xngr7EszNrTWXXfjSvpm47cNoOql6I6xLCu9k460Dzs+ FejA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
aS+92hhWl1xU5f4Nr5G0f7ksPi0Oyqf9Pb6a4DpvkEcyokufeOBtzdZy4goqMJRqdIfo DAJngBJ9TtrCn4L9bbtCRbOYX0eJlZTCJ0jXjYRjmrjkxwASFnZreTiKCU/KO7hkn7Th X0zeX5tmhfTABqoAm8KRmFA0xxDPxpFc6jF7qBLxekainoTeEmCanBAMBv75QAHCesa1 CnDw== X-Gm-Message-State: AOAM531oBHSO/pOdryZGXpl+ZTAOEDjh5d3zcJfAMpmWRju4FXzriG96 TtT7brjv7TNUVyNKc1mJ3NfEtw== X-Received: by 2002:a5d:4fc6:: with SMTP id h6mr4820531wrw.1.1622823178645; Fri, 04 Jun 2021 09:12:58 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id c12sm8185315wrr.90.2021.06.04.09.12.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:12:53 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id C61851FFB7; Fri, 4 Jun 2021 16:53:16 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 31/99] target/arm: tcg: add stubs for some helpers for non-tcg builds Date: Fri, 4 Jun 2021 16:52:04 +0100 Message-Id: <20210604155312.15902-32-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::435; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x435.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana this first armv7m one should go away with proper configuration changes (only enabling possible boards for KVM). Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/tcg/tcg-stubs.c | 16 ++++++++++++++++ target/arm/tcg/meson.build | 3 +++ 2 files changed, 19 insertions(+) create mode 100644 target/arm/tcg/tcg-stubs.c -- 2.20.1 diff --git a/target/arm/tcg/tcg-stubs.c b/target/arm/tcg/tcg-stubs.c new file mode 100644 index 0000000000..14220d59a1 --- /dev/null +++ b/target/arm/tcg/tcg-stubs.c @@ -0,0 +1,16 @@ +/* + * QEMU ARM stubs for some TCG helper functions + * + * Copyright 2021 SUSE LLC + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ */ + +#include "qemu/osdep.h" +#include "cpu.h" + +void write_v7m_exception(CPUARMState *env, uint32_t new_exc) +{ + g_assert_not_reached(); +} diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index 3d34723eee..78c34742ec 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -30,6 +30,9 @@ arm_ss.add(when: 'CONFIG_TCG', if_true: files( 'vfp_helper.c', 'crypto_helper.c', 'debug_helper.c', + +), if_false: files( + 'tcg-stubs.c', )) arm_ss.add(when: ['TARGET_AARCH64','CONFIG_TCG'], if_true: files( From patchwork Fri Jun 4 15:52:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454100 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp580216jae; Fri, 4 Jun 2021 09:46:52 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyErF9GB3vzCHN9Nya6atGbo9lRggN1J5W/8mVGHY+eLGiWLglPJTaN2+lA0a2v9S8+FKdm X-Received: by 2002:ab0:2811:: with SMTP id w17mr4272491uap.34.1622825212410; Fri, 04 Jun 2021 09:46:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825212; cv=none; d=google.com; s=arc-20160816; b=RNSoi5wshfXB4yOmf+6Cyfk+S8pQZGnBB0zsv85DmB2aSwQPcVwEAjIOgk+XYmNk5r GulMvLik9U+W7FuAckQZq5fcvmLW0pUxrtrWnSN9WiGeI0BDxvIwzGxhdpg8iATBCaH2 toH8VgTtA8592AgspuY2SRWrJOJq7JJPPUe4aIGglFQ8fxqBIRJJQ2NeQVnLzzxeFZvw Ua6j/Z1VogfK5BupsFF1RnqGWusxY4nuhGFTdnmMijg7i4opE3GVDER/yF2QV/TTzKI1 fkrVO5/xYF7DzI0yQTrF++kbUWav+4Zd+/PK+q+FcwSHkBBEeW5w1zUzPu6f8sluzsnN LaqQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=cQ2ujkUUxMglVlDp0hN1+qrHfRzklbjMoNxNds9wJe8=; b=tsstp07icLjTDB8byruSruMTXuGHk3Gzq0MABWh5xYwyn3xz3AKHC2qI8D/XYDzQbz g8M0Cj2csVGgw1kd+INTtaizl9RZM3nORj3ax/0SPiZvsEDtQUynR1u7hVzGGfiFnouH 23TieVE6VVz/KxvB02pFUBaGK3JopGBzagKXajScxOmi5H7EQf1eZyl6QS+ZoJ0hqA31 rzq3HwAe1KLUJcppsk9ZNKXzXMXpU+ZNh+IMxp9z833fBhOMy8vyXejaBxjuYyBpfWf6 MBNNttqkpYVobdockGHrNPvqntDG6pBEEx4dNbdD3VBbwsUJ7aYoTNPuGtDc3C1BGIT8 vBzw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=MunX7wJy; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id s191si3139213vsc.101.2021.06.04.09.46.52 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:46:52 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=MunX7wJy; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:44424 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCyB-0003VX-O7 for patch@linaro.org; Fri, 04 Jun 2021 12:46:51 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:48700) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpCIC-00004y-Jd for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:03:29 -0400 Received: from mail-wm1-x32d.google.com ([2a00:1450:4864:20::32d]:43961) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpCHm-0005vb-TY for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:03:28 -0400 Received: by mail-wm1-x32d.google.com with SMTP id 3-20020a05600c0243b029019f2f9b2b8aso5901890wmj.2 for ; Fri, 04 Jun 2021 09:03:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cQ2ujkUUxMglVlDp0hN1+qrHfRzklbjMoNxNds9wJe8=; b=MunX7wJysLY09AHzntU+rLHcprV9jFfSAE39+dP0Eh5PnMCxtJTB93rmun2c5Gjesh /hGkWMqW70Mx0C2E8a7r6evcWfT5N3QuVCVpMrHpl1PmJYnNzt2EiaVN1fH0eySaoxIB 5CnamhDHDazJmamWDOEQmGXTpaHpvyg9METK1GlzRDwT/aUb/CzGKTKuB3b3O0UIQg7i 9nUgykAfYKwrygtAy2zgsthNdduBZyUwMXjVFSEW9K6JOpXwCw6JlV+Y3LmPh8Klmo+s 5oBcBR2uY4ZL+HD5rvu8M0F7typmXzUwd1e/ultDHwi6tiQX3/jpcyVNYalvf0r3pqBt mEcg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cQ2ujkUUxMglVlDp0hN1+qrHfRzklbjMoNxNds9wJe8=; b=AV2f6KC+MZPD528E+0H1HvHRalV/8Ic0peJ0uZ9Z5yu1tqPt7+GbVoCJOF6PQY19BF N5c02T/fwE21qiNymFm0ohVJwUK62joW1hRPPDC3febqyxlTSkyMHu1MaRrbcM6sEYio R7PJ/DwQicwFtySVBLAuwhiaohgmxcmDJQT12J7iGj76nc1jMc9mIAVvSOqpIZ3D3JDF YMQTLAQgkyicUVBLcwBQMEu/xGN0rsowVmX7EffQGGsfxQ9eIUrmTZWEqRdtycUfR6cF 1kuRvm34dKZyZ7aMFZ/tkGWJIQlysRQLKlCR9xx9imjBUcu1GtzgNIVvJfKInQRnWoWH 86pw== X-Gm-Message-State: AOAM531dHlKfAY/5wTDgVByc3bnV30vIx6y0ev9p6H7D07TJlLV/Ryzw J2VWXm994Wh/z88Md69u0E9PMTy2Wyx3SQ== X-Received: by 2002:a7b:c935:: with SMTP id h21mr4334812wml.183.1622822580448; Fri, 04 Jun 2021 09:03:00 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id z3sm7565593wrl.13.2021.06.04.09.02.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:02:56 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id E95011FFB8; Fri, 4 Jun 2021 16:53:16 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 32/99] target/arm: move cpsr_read, cpsr_write to cpu_common Date: Fri, 4 Jun 2021 16:52:05 +0100 Message-Id: 
<20210604155312.15902-33-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32d; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana we need as a result to move switch_mode too, so we put an implementation into cpu_user and cpu_sysemu. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu.h | 2 + target/arm/cpu-common.c | 192 +++++++++++++++++++++++++++++++++++ target/arm/cpu-sysemu.c | 30 ++++++ target/arm/cpu-user.c | 24 +++++ target/arm/tcg/helper.c | 220 ---------------------------------------- target/arm/meson.build | 3 + 6 files changed, 251 insertions(+), 220 deletions(-) create mode 100644 target/arm/cpu-user.c -- 2.20.1 diff --git a/target/arm/cpu.h b/target/arm/cpu.h index adb9d2828d..c5ead3365f 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -1390,6 +1390,8 @@ typedef enum CPSRWriteType { void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask, CPSRWriteType write_type); +void switch_mode(CPUARMState *env, int mode); + /* Return the current xPSR value. */ static inline uint32_t xpsr_read(CPUARMState *env) { diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c index 0f8ca94815..694e5d73f3 100644 --- a/target/arm/cpu-common.c +++ b/target/arm/cpu-common.c @@ -7,10 +7,12 @@ */ #include "qemu/osdep.h" +#include "qemu/log.h" #include "qom/object.h" #include "qapi/qapi-commands-machine-target.h" #include "qapi/error.h" #include "cpu.h" +#include "internals.h" static void arm_cpu_add_definition(gpointer data, gpointer user_data) { @@ -39,3 +41,193 @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp) return cpu_list; } + +uint32_t cpsr_read(CPUARMState *env) +{ + int ZF; + ZF = (env->ZF == 0); + return env->uncached_cpsr | (env->NF & 0x80000000) | (ZF << 30) | + (env->CF << 29) | ((env->VF & 0x80000000) >> 3) | (env->QF << 27) + | (env->thumb << 5) | ((env->condexec_bits & 3) << 25) + | ((env->condexec_bits & 0xfc) << 8) + | (env->GE << 16) | (env->daif & CPSR_AIF); +} + +static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type) +{ + /* + * Return true if it is not valid for us to switch to + * this CPU mode (ie all the UNPREDICTABLE cases in + * the ARM ARM CPSRWriteByInstr pseudocode). + */ + + /* Changes to or from Hyp via MSR and CPS are illegal. 
*/ + if (write_type == CPSRWriteByInstr && + ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_HYP || + mode == ARM_CPU_MODE_HYP)) { + return 1; + } + + switch (mode) { + case ARM_CPU_MODE_USR: + return 0; + case ARM_CPU_MODE_SYS: + case ARM_CPU_MODE_SVC: + case ARM_CPU_MODE_ABT: + case ARM_CPU_MODE_UND: + case ARM_CPU_MODE_IRQ: + case ARM_CPU_MODE_FIQ: + /* + * Note that we don't implement the IMPDEF NSACR.RFR which in v7 + * allows FIQ mode to be Secure-only. (In v8 this doesn't exist.) + * + * If HCR.TGE is set then changes from Monitor to NS PL1 via MSR + * and CPS are treated as illegal mode changes. + */ + if (write_type == CPSRWriteByInstr && + (env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON && + (arm_hcr_el2_eff(env) & HCR_TGE)) { + return 1; + } + return 0; + case ARM_CPU_MODE_HYP: + return !arm_is_el2_enabled(env) || arm_current_el(env) < 2; + case ARM_CPU_MODE_MON: + return arm_current_el(env) < 3; + default: + return 1; + } +} + +void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask, + CPSRWriteType write_type) +{ + uint32_t changed_daif; + + if (mask & CPSR_NZCV) { + env->ZF = (~val) & CPSR_Z; + env->NF = val; + env->CF = (val >> 29) & 1; + env->VF = (val << 3) & 0x80000000; + } + if (mask & CPSR_Q) { + env->QF = ((val & CPSR_Q) != 0); + } + if (mask & CPSR_T) { + env->thumb = ((val & CPSR_T) != 0); + } + if (mask & CPSR_IT_0_1) { + env->condexec_bits &= ~3; + env->condexec_bits |= (val >> 25) & 3; + } + if (mask & CPSR_IT_2_7) { + env->condexec_bits &= 3; + env->condexec_bits |= (val >> 8) & 0xfc; + } + if (mask & CPSR_GE) { + env->GE = (val >> 16) & 0xf; + } + + /* + * In a V7 implementation that includes the security extensions but does + * not include Virtualization Extensions the SCR.FW and SCR.AW bits control + * whether non-secure software is allowed to change the CPSR_F and CPSR_A + * bits respectively. + * + * In a V8 implementation, it is permitted for privileged software to + * change the CPSR A/F bits regardless of the SCR.AW/FW bits. + */ + if (write_type != CPSRWriteRaw && !arm_feature(env, ARM_FEATURE_V8) && + arm_feature(env, ARM_FEATURE_EL3) && + !arm_feature(env, ARM_FEATURE_EL2) && + !arm_is_secure(env)) { + + changed_daif = (env->daif ^ val) & mask; + + if (changed_daif & CPSR_A) { + /* + * Check to see if we are allowed to change the masking of async + * abort exceptions from a non-secure state. + */ + if (!(env->cp15.scr_el3 & SCR_AW)) { + qemu_log_mask(LOG_GUEST_ERROR, + "Ignoring attempt to switch CPSR_A flag from " + "non-secure world with SCR.AW bit clear\n"); + mask &= ~CPSR_A; + } + } + + if (changed_daif & CPSR_F) { + /* + * Check to see if we are allowed to change the masking of FIQ + * exceptions from a non-secure state. + */ + if (!(env->cp15.scr_el3 & SCR_FW)) { + qemu_log_mask(LOG_GUEST_ERROR, + "Ignoring attempt to switch CPSR_F flag from " + "non-secure world with SCR.FW bit clear\n"); + mask &= ~CPSR_F; + } + + /* + * Check whether non-maskable FIQ (NMFI) support is enabled. + * If this bit is set software is not allowed to mask + * FIQs, but is allowed to set CPSR_F to 0. 
+ */ + if ((A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_NMFI) && + (val & CPSR_F)) { + qemu_log_mask(LOG_GUEST_ERROR, + "Ignoring attempt to enable CPSR_F flag " + "(non-maskable FIQ [NMFI] support enabled)\n"); + mask &= ~CPSR_F; + } + } + } + + env->daif &= ~(CPSR_AIF & mask); + env->daif |= val & CPSR_AIF & mask; + + if (write_type != CPSRWriteRaw && + ((env->uncached_cpsr ^ val) & mask & CPSR_M)) { + if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR) { + /* + * Note that we can only get here in USR mode if this is a + * gdb stub write; for this case we follow the architectural + * behaviour for guest writes in USR mode of ignoring an attempt + * to switch mode. (Those are caught by translate.c for writes + * triggered by guest instructions.) + */ + mask &= ~CPSR_M; + } else if (bad_mode_switch(env, val & CPSR_M, write_type)) { + /* + * Attempt to switch to an invalid mode: this is UNPREDICTABLE in + * v7, and has defined behaviour in v8: + * + leave CPSR.M untouched + * + allow changes to the other CPSR fields + * + set PSTATE.IL + * For user changes via the GDB stub, we don't set PSTATE.IL, + * as this would be unnecessarily harsh for a user error. + */ + mask &= ~CPSR_M; + if (write_type != CPSRWriteByGDBStub && + arm_feature(env, ARM_FEATURE_V8)) { + mask |= CPSR_IL; + val |= CPSR_IL; + } + qemu_log_mask(LOG_GUEST_ERROR, + "Illegal AArch32 mode switch attempt from %s to %s\n", + aarch32_mode_name(env->uncached_cpsr), + aarch32_mode_name(val)); + } else { + qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n", + write_type == CPSRWriteExceptionReturn ? + "Exception return from AArch32" : + "AArch32 mode switch from", + aarch32_mode_name(env->uncached_cpsr), + aarch32_mode_name(val), env->regs[15]); + switch_mode(env, val & CPSR_M); + } + } + mask &= ~CACHED_CPSR_BITS; + env->uncached_cpsr = (env->uncached_cpsr & ~mask) | (val & mask); +} diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index db1c8cb245..3add2c2439 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -103,3 +103,33 @@ bool arm_cpu_virtio_is_big_endian(CPUState *cs) cpu_synchronize_state(cs); return arm_cpu_data_is_big_endian(env); } + +void switch_mode(CPUARMState *env, int mode) +{ + int old_mode; + int i; + + old_mode = env->uncached_cpsr & CPSR_M; + if (mode == old_mode) { + return; + } + + if (old_mode == ARM_CPU_MODE_FIQ) { + memcpy(env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t)); + memcpy(env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t)); + } else if (mode == ARM_CPU_MODE_FIQ) { + memcpy(env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t)); + memcpy(env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t)); + } + + i = bank_number(old_mode); + env->banked_r13[i] = env->regs[13]; + env->banked_spsr[i] = env->spsr; + + i = bank_number(mode); + env->regs[13] = env->banked_r13[i]; + env->spsr = env->banked_spsr[i]; + + env->banked_r14[r14_bank_number(old_mode)] = env->regs[14]; + env->regs[14] = env->banked_r14[r14_bank_number(mode)]; +} diff --git a/target/arm/cpu-user.c b/target/arm/cpu-user.c new file mode 100644 index 0000000000..a72b7f5703 --- /dev/null +++ b/target/arm/cpu-user.c @@ -0,0 +1,24 @@ +/* + * ARM CPU user-mode only code + * + * This code is licensed under the GNU GPL v2 or later. 
+ * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "qemu/log.h" +#include "qom/object.h" +#include "qapi/qapi-commands-machine-target.h" +#include "qapi/error.h" +#include "cpu.h" +#include "internals.h" + +void switch_mode(CPUARMState *env, int mode) +{ + ARMCPU *cpu = env_archcpu(env); + + if (mode != ARM_CPU_MODE_USR) { + cpu_abort(CPU(cpu), "Tried to switch out of user mode\n"); + } +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index f54ece9b42..d32f9659bc 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -38,8 +38,6 @@ #include "cpu-mmu.h" #include "cpregs.h" -static void switch_mode(CPUARMState *env, int mode); - static int vfp_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg) { ARMCPU *cpu = env_archcpu(env); @@ -696,186 +694,6 @@ void arm_cpu_list(void) g_slist_free(list); } -static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type) -{ - /* Return true if it is not valid for us to switch to - * this CPU mode (ie all the UNPREDICTABLE cases in - * the ARM ARM CPSRWriteByInstr pseudocode). - */ - - /* Changes to or from Hyp via MSR and CPS are illegal. */ - if (write_type == CPSRWriteByInstr && - ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_HYP || - mode == ARM_CPU_MODE_HYP)) { - return 1; - } - - switch (mode) { - case ARM_CPU_MODE_USR: - return 0; - case ARM_CPU_MODE_SYS: - case ARM_CPU_MODE_SVC: - case ARM_CPU_MODE_ABT: - case ARM_CPU_MODE_UND: - case ARM_CPU_MODE_IRQ: - case ARM_CPU_MODE_FIQ: - /* Note that we don't implement the IMPDEF NSACR.RFR which in v7 - * allows FIQ mode to be Secure-only. (In v8 this doesn't exist.) - */ - /* If HCR.TGE is set then changes from Monitor to NS PL1 via MSR - * and CPS are treated as illegal mode changes. - */ - if (write_type == CPSRWriteByInstr && - (env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON && - (arm_hcr_el2_eff(env) & HCR_TGE)) { - return 1; - } - return 0; - case ARM_CPU_MODE_HYP: - return !arm_is_el2_enabled(env) || arm_current_el(env) < 2; - case ARM_CPU_MODE_MON: - return arm_current_el(env) < 3; - default: - return 1; - } -} - -uint32_t cpsr_read(CPUARMState *env) -{ - int ZF; - ZF = (env->ZF == 0); - return env->uncached_cpsr | (env->NF & 0x80000000) | (ZF << 30) | - (env->CF << 29) | ((env->VF & 0x80000000) >> 3) | (env->QF << 27) - | (env->thumb << 5) | ((env->condexec_bits & 3) << 25) - | ((env->condexec_bits & 0xfc) << 8) - | (env->GE << 16) | (env->daif & CPSR_AIF); -} - -void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask, - CPSRWriteType write_type) -{ - uint32_t changed_daif; - - if (mask & CPSR_NZCV) { - env->ZF = (~val) & CPSR_Z; - env->NF = val; - env->CF = (val >> 29) & 1; - env->VF = (val << 3) & 0x80000000; - } - if (mask & CPSR_Q) - env->QF = ((val & CPSR_Q) != 0); - if (mask & CPSR_T) - env->thumb = ((val & CPSR_T) != 0); - if (mask & CPSR_IT_0_1) { - env->condexec_bits &= ~3; - env->condexec_bits |= (val >> 25) & 3; - } - if (mask & CPSR_IT_2_7) { - env->condexec_bits &= 3; - env->condexec_bits |= (val >> 8) & 0xfc; - } - if (mask & CPSR_GE) { - env->GE = (val >> 16) & 0xf; - } - - /* In a V7 implementation that includes the security extensions but does - * not include Virtualization Extensions the SCR.FW and SCR.AW bits control - * whether non-secure software is allowed to change the CPSR_F and CPSR_A - * bits respectively. - * - * In a V8 implementation, it is permitted for privileged software to - * change the CPSR A/F bits regardless of the SCR.AW/FW bits. 
- */ - if (write_type != CPSRWriteRaw && !arm_feature(env, ARM_FEATURE_V8) && - arm_feature(env, ARM_FEATURE_EL3) && - !arm_feature(env, ARM_FEATURE_EL2) && - !arm_is_secure(env)) { - - changed_daif = (env->daif ^ val) & mask; - - if (changed_daif & CPSR_A) { - /* Check to see if we are allowed to change the masking of async - * abort exceptions from a non-secure state. - */ - if (!(env->cp15.scr_el3 & SCR_AW)) { - qemu_log_mask(LOG_GUEST_ERROR, - "Ignoring attempt to switch CPSR_A flag from " - "non-secure world with SCR.AW bit clear\n"); - mask &= ~CPSR_A; - } - } - - if (changed_daif & CPSR_F) { - /* Check to see if we are allowed to change the masking of FIQ - * exceptions from a non-secure state. - */ - if (!(env->cp15.scr_el3 & SCR_FW)) { - qemu_log_mask(LOG_GUEST_ERROR, - "Ignoring attempt to switch CPSR_F flag from " - "non-secure world with SCR.FW bit clear\n"); - mask &= ~CPSR_F; - } - - /* Check whether non-maskable FIQ (NMFI) support is enabled. - * If this bit is set software is not allowed to mask - * FIQs, but is allowed to set CPSR_F to 0. - */ - if ((A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_NMFI) && - (val & CPSR_F)) { - qemu_log_mask(LOG_GUEST_ERROR, - "Ignoring attempt to enable CPSR_F flag " - "(non-maskable FIQ [NMFI] support enabled)\n"); - mask &= ~CPSR_F; - } - } - } - - env->daif &= ~(CPSR_AIF & mask); - env->daif |= val & CPSR_AIF & mask; - - if (write_type != CPSRWriteRaw && - ((env->uncached_cpsr ^ val) & mask & CPSR_M)) { - if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR) { - /* Note that we can only get here in USR mode if this is a - * gdb stub write; for this case we follow the architectural - * behaviour for guest writes in USR mode of ignoring an attempt - * to switch mode. (Those are caught by translate.c for writes - * triggered by guest instructions.) - */ - mask &= ~CPSR_M; - } else if (bad_mode_switch(env, val & CPSR_M, write_type)) { - /* Attempt to switch to an invalid mode: this is UNPREDICTABLE in - * v7, and has defined behaviour in v8: - * + leave CPSR.M untouched - * + allow changes to the other CPSR fields - * + set PSTATE.IL - * For user changes via the GDB stub, we don't set PSTATE.IL, - * as this would be unnecessarily harsh for a user error. - */ - mask &= ~CPSR_M; - if (write_type != CPSRWriteByGDBStub && - arm_feature(env, ARM_FEATURE_V8)) { - mask |= CPSR_IL; - val |= CPSR_IL; - } - qemu_log_mask(LOG_GUEST_ERROR, - "Illegal AArch32 mode switch attempt from %s to %s\n", - aarch32_mode_name(env->uncached_cpsr), - aarch32_mode_name(val)); - } else { - qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n", - write_type == CPSRWriteExceptionReturn ? 
- "Exception return from AArch32" : - "AArch32 mode switch from", - aarch32_mode_name(env->uncached_cpsr), - aarch32_mode_name(val), env->regs[15]); - switch_mode(env, val & CPSR_M); - } - } - mask &= ~CACHED_CPSR_BITS; - env->uncached_cpsr = (env->uncached_cpsr & ~mask) | (val & mask); -} - /* Sign/zero extend */ uint32_t HELPER(sxtb16)(uint32_t x) { @@ -916,15 +734,6 @@ uint32_t HELPER(rbit)(uint32_t x) #ifdef CONFIG_USER_ONLY -static void switch_mode(CPUARMState *env, int mode) -{ - ARMCPU *cpu = env_archcpu(env); - - if (mode != ARM_CPU_MODE_USR) { - cpu_abort(CPU(cpu), "Tried to switch out of user mode\n"); - } -} - uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx, uint32_t cur_el, bool secure) { @@ -938,35 +747,6 @@ void aarch64_sync_64_to_32(CPUARMState *env) #else -static void switch_mode(CPUARMState *env, int mode) -{ - int old_mode; - int i; - - old_mode = env->uncached_cpsr & CPSR_M; - if (mode == old_mode) - return; - - if (old_mode == ARM_CPU_MODE_FIQ) { - memcpy (env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t)); - memcpy (env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t)); - } else if (mode == ARM_CPU_MODE_FIQ) { - memcpy (env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t)); - memcpy (env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t)); - } - - i = bank_number(old_mode); - env->banked_r13[i] = env->regs[13]; - env->banked_spsr[i] = env->spsr; - - i = bank_number(mode); - env->regs[13] = env->banked_r13[i]; - env->spsr = env->banked_spsr[i]; - - env->banked_r14[r14_bank_number(old_mode)] = env->regs[14]; - env->regs[14] = env->banked_r14[r14_bank_number(mode)]; -} - /* Physical Interrupt Target EL Lookup Table * * [ From ARM ARM section G1.13.4 (Table G1-15) ] diff --git a/target/arm/meson.build b/target/arm/meson.build index 8d6177c1fb..1f7375375e 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -32,6 +32,9 @@ arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files( )) arm_user_ss = ss.source_set() +arm_user_ss.add(files( + 'cpu-user.c', +)) subdir('tcg') From patchwork Fri Jun 4 15:52:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454101 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp582024jae; Fri, 4 Jun 2021 09:49:11 -0700 (PDT) X-Google-Smtp-Source: ABdhPJypI2IQFhpUc+XMVChal1hGkJES8Bxmxrp9tT6n1aV1+eTIYhcyENtAbI7Q5EYOs8ykJo+R X-Received: by 2002:a05:6102:124d:: with SMTP id p13mr3596282vsg.21.1622825351184; Fri, 04 Jun 2021 09:49:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825351; cv=none; d=google.com; s=arc-20160816; b=CH7RZ55IvHmSFAYP0FJ6xZQ8TepNAhm9p+zSlv5PYedp/9ZrWqAa9FHDSHDRoEvG6d 7fBvkhpw9gHOsQ2P/VsBKTG3rngEDo4NO9ULkCVsRGswSnlWIumiBuH/F4f0zC0blYK9 vvQKgkeXhEiBe6tQaDxqB87yIJh1qlueculxrK0AClG4thQ27UzqQuNKGhjwiJk11WZe qQriC8hFaKxbMMimtwbAxrNHF4Ocrc5aWmwIJ3xJcrjkjxjjqsk5OOqklobUw2xx5bPI mKpcciyjGcIgVsm3N56oenxWR4s9C9r56uf26NIEZDNsw/ADK7x2V22V0Drr7l7W+v7W CLwg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=1niJS7hSglD8mEiCtLQ9fXnqmC61tEmzNc7ZOj4a5VQ=; b=DKprGa1UlR/m6mYRlpv3WFFvQwYVZrszY18F+GaOyi8fEbyLpz6wmLkOniN6n1kYcL 5cuZctqSrQab/m4rnV2+xGyTf4nAI+scSXo8ELs1hOh/QftvTmYLobHVGEmiqwjSNuRb 
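Before moving on to the next patch, one note on the cpsr_write() interface whose AArch32 mode-switch handling is removed from tcg/helper.c above (it now lives in cpu-common.c, as the following patches in this series show): callers pass a mask selecting the bits actually being written together with a CPSRWriteType identifying the writer, which is how GDB-stub writes and exception returns get their special-cased behaviour. A minimal, hypothetical caller sketch, assuming the usual target/arm headers are in scope (the function name is invented for illustration):

/* Hypothetical caller sketch for cpsr_write(): only CPSR.M is selected by
 * the mask, so all other fields are left untouched, and on v8 an illegal
 * target mode leaves CPSR.M alone and sets CPSR.IL, per the logic above.
 */
static void example_switch_aarch32_mode(CPUARMState *env, uint32_t new_mode)
{
    cpsr_write(env, new_mode & CPSR_M, CPSR_M, CPSRWriteByInstr);
}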
(PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id s128sm6256282wme.6.2021.06.04.09.12.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:12:52 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 0EECA1FFBA; Fri, 4 Jun 2021 16:53:17 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 33/99] target/arm: add temporary stub for arm_rebuild_hflags Date: Fri, 4 Jun 2021 16:52:06 +0100 Message-Id: <20210604155312.15902-34-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::434; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x434.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana this should go away once the configuration and hw/arm is clean Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- hw/arm/boot.c | 5 ++++- target/arm/arm-powerctl.c | 8 +++++--- target/arm/tcg/tcg-stubs.c | 5 +++++ 3 files changed, 14 insertions(+), 4 deletions(-) -- 2.20.1 diff --git a/hw/arm/boot.c b/hw/arm/boot.c index d7b059225e..13eea9e372 100644 --- a/hw/arm/boot.c +++ b/hw/arm/boot.c @@ -26,6 +26,7 @@ #include "qemu/config-file.h" #include "qemu/option.h" #include "qemu/units.h" +#include "sysemu/tcg.h" /* Kernel boot protocol is specified in the kernel docs * Documentation/arm/Booting and Documentation/arm64/booting.txt @@ -796,7 +797,9 @@ static void do_cpu_reset(void *opaque) info->secondary_cpu_reset_hook(cpu, info); } } - arm_rebuild_hflags(env); + if (tcg_enabled()) { + arm_rebuild_hflags(env); + } } } diff --git a/target/arm/arm-powerctl.c b/target/arm/arm-powerctl.c index b75f813b40..a00624876c 100644 --- a/target/arm/arm-powerctl.c +++ b/target/arm/arm-powerctl.c @@ -15,6 +15,7 @@ #include "arm-powerctl.h" #include "qemu/log.h" #include "qemu/main-loop.h" +#include "sysemu/tcg.h" #ifndef DEBUG_ARM_POWERCTL #define DEBUG_ARM_POWERCTL 0 @@ -127,9 +128,10 @@ static void arm_set_cpu_on_async_work(CPUState *target_cpu_state, target_cpu->env.regs[0] = info->context_id; } - /* CP15 update requires rebuilding hflags */ - arm_rebuild_hflags(&target_cpu->env); - + if (tcg_enabled()) { + /* CP15 update requires rebuilding hflags */ + arm_rebuild_hflags(&target_cpu->env); + } /* Start the new CPU at the requested address */ cpu_set_pc(target_cpu_state, info->entry); diff --git a/target/arm/tcg/tcg-stubs.c b/target/arm/tcg/tcg-stubs.c index 14220d59a1..332f1b9cfb 100644 --- a/target/arm/tcg/tcg-stubs.c +++ b/target/arm/tcg/tcg-stubs.c @@ -14,3 +14,8 @@ void write_v7m_exception(CPUARMState *env, uint32_t new_exc) { g_assert_not_reached(); } + +void arm_rebuild_hflags(CPUARMState *env) +{ + 
g_assert_not_reached(); +} From patchwork Fri Jun 4 15:52:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454148 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp655362jae; Fri, 4 Jun 2021 11:27:06 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxC7Wg34V6W3osKJ0b4Xw+n0qRr4ZpNWhEtdB+GdeFcXuf4ITUgvTKpriuiIL49udI/fz9U X-Received: by 2002:a9d:5e8c:: with SMTP id f12mr4741770otl.18.1622831226343; Fri, 04 Jun 2021 11:27:06 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622831226; cv=none; d=google.com; s=arc-20160816; b=D+JsphVudt1b0RgieIlH0vP/NL2QWbmzh2mtovIOinFoZsEzLJjRvNlSjg5e6hLCyi Ve9er2b/zii/WDiNiU31+gbo1RTWLU/0c19ezBEWX+0eHzHr4Z1/Z0+06KDQn/Sgm6t7 Rho1fur/uLrLvHyEmATNL/e9PVyZqigbVWKpT4dY8xlYSesOoEO3mbt48/GJb89SkoeG Qs04ruyhkj1yQTdWvodCXt/sE2x+4Jueh4WBKMmAk/VEfj/BjbqY62kprfxbzIBBrou5 Vax40Uy7+ninEyKYwUSIrN+Uxy2Wt/6CeKU4hNFXBggqpVaEKNCt63cu+bKzdv4PTDt1 whJg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=Z78Q04mjV1LVORaYbrxDOeNi35S151UOcJPYVSic2GM=; b=lZbotqKe0ZEpcGN3VTkH5XMNlDXEbTlIMGHJV/vaIBPmvOFZciMx7aJZ1tepkpHRF3 dGNUXLj0Ou2BkaGFfP5P914HxvslLKi+weW0EShPo4Za37iwpS8UjiBtp3sXxLJefX2U ZEAPkWDS/ZQBOZGAguRwTFftzNvr3YPGrfJNi+O6WxdBtdH09aEWmXAdu1tXF0MvdKVE 5Uhln7/1+GTd4G8B0ptQg9jGLNYfsEdkH++jpJb2TeKLHxpb7WK9mXIGVP4TJ8ZQuNgy 6DAhORUqdCl60lGduM5adR9Ex8ySIzCBt+BdDRBpipDWShGmFV6NGMMWzqhDIEe09Wf6 95+A== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=pLtkDC7H; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
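The pattern introduced by the stub patch above is worth spelling out: code shared with KVM must not call TCG-only helpers unconditionally, so callers wrap arm_rebuild_hflags() in tcg_enabled() and the stub in tcg/tcg-stubs.c asserts if it is ever reached without TCG. A short, hypothetical illustration of the same guard, assuming the usual target/arm headers (cpu.h, sysemu/tcg.h) are in scope; the function name is invented and is not part of the patch:

/* Hypothetical sketch of the tcg_enabled() guard used above: hflags are a
 * TCG-only cache, so rebuild them only when the TCG accelerator is in use;
 * KVM-only builds link against the asserting stub instead.
 */
static void example_after_sysreg_update(ARMCPU *cpu)
{
    CPUARMState *env = &cpu->env;

    if (tcg_enabled()) {
        arm_rebuild_hflags(env);
    }
}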
[209.51.188.17]) by mx.google.com with ESMTPS id 184si3241435oig.14.2021.06.04.11.27.06 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 11:27:06 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=pLtkDC7H; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:49666 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpEXB-0003bi-NU for patch@linaro.org; Fri, 04 Jun 2021 14:27:05 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:42416) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpET5-00011x-I5 for qemu-devel@nongnu.org; Fri, 04 Jun 2021 14:22:51 -0400 Received: from mail-wr1-x430.google.com ([2a00:1450:4864:20::430]:43682) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpET3-0000Pe-BV for qemu-devel@nongnu.org; Fri, 04 Jun 2021 14:22:51 -0400 Received: by mail-wr1-x430.google.com with SMTP id u7so4751492wrs.10 for ; Fri, 04 Jun 2021 11:22:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Z78Q04mjV1LVORaYbrxDOeNi35S151UOcJPYVSic2GM=; b=pLtkDC7HeIb5DEaEcC2s1iOeouuq9rjcmMc+Kk30tJVcE1JDU/4vmaqp5SmD5mCm5M 9hj+qudJQ5wpDfqSmCPh/lyYumOBD6tOLgRFB1M+1ueJlBUnp6v3azI9VPbOlZz5Fpc1 MeAj7MYliku+hdpw28dZui6NktZB51xZ0PkoO2CHR11jOqw7LJlplIo/bXUqvMdDKfc8 9OUCrlAqhWCteLCR2SRfV5R5UwCO61Y162qoxVO/dm2TPwrMkv1rmXeQ0ZofhBbsKShW Vh4SQR2uvuRCKV3QSJDuVYkwaX8mag4gZaEWTcZnZz5Wn0R+7xMiBYpVdPpwOjjD/XEq sPRw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Z78Q04mjV1LVORaYbrxDOeNi35S151UOcJPYVSic2GM=; b=cC/ZsI4gexJONXIQ1hAt8tutFr5lsbZ1WG3d8OEzm3Mr3N46eRROqxyBgp3LBQoK3u GqE9QLrcavmrJcI1KwyokUA5QqIDB3/Ot+k2U2Qa4daDFvWDG4im9IOBNDq8YWv8yX5p MH5H6yab8lI1wWi5ABmaxry5xX+erfv6N+539GdNYAKLg/K+0m1ZAKIUXH2lnoR8ET7s 4gFM8cOaGv3n9dUlZs7rHK9imavzDUH1toS/W4NOfnKoIciuljGp9Oof9g7mgrkIHSY6 ZVBx26eN9nwKlAXFADNqJRskWQiCzTptqAV72KpngJS//FDifQudVBulr+J+OEbRrlss 8Yqw== X-Gm-Message-State: AOAM530Dsyt1TxifY5NYd6nKSgWEcEF5a6MzqGetCLYDg/u6QEJEszhg vG5F4FNgJQDj4RBeZ8zWco47JA== X-Received: by 2002:adf:ee85:: with SMTP id b5mr5168329wro.95.1622830967926; Fri, 04 Jun 2021 11:22:47 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id x7sm8156479wre.8.2021.06.04.11.22.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 11:22:45 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 2F1161FF7E; Fri, 4 Jun 2021 16:53:17 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 34/99] target/arm: move arm_hcr_el2_eff from tcg/ to common_cpu Date: Fri, 4 Jun 2021 16:52:07 +0100 Message-Id: <20210604155312.15902-35-alex.bennee@linaro.org> X-Mailer: git-send-email 
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::430; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x430.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana we will need this for KVM too, especially for Nested support. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-common.c | 68 +++++++++++++++++++++++++++++++++++++++++ target/arm/tcg/helper.c | 68 ----------------------------------------- 2 files changed, 68 insertions(+), 68 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c index 694e5d73f3..040e06392a 100644 --- a/target/arm/cpu-common.c +++ b/target/arm/cpu-common.c @@ -231,3 +231,71 @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask, mask &= ~CACHED_CPSR_BITS; env->uncached_cpsr = (env->uncached_cpsr & ~mask) | (val & mask); } + +/* + * Return the effective value of HCR_EL2. + * Bits that are not included here: + * RW (read from SCR_EL3.RW as needed) + */ +uint64_t arm_hcr_el2_eff(CPUARMState *env) +{ + uint64_t ret = env->cp15.hcr_el2; + + if (!arm_is_el2_enabled(env)) { + /* + * "This register has no effect if EL2 is not enabled in the + * current Security state". This is ARMv8.4-SecEL2 speak for + * !(SCR_EL3.NS==1 || SCR_EL3.EEL2==1). + * + * Prior to that, the language was "In an implementation that + * includes EL3, when the value of SCR_EL3.NS is 0 the PE behaves + * as if this field is 0 for all purposes other than a direct + * read or write access of HCR_EL2". With lots of enumeration + * on a per-field basis. In current QEMU, this is condition + * is arm_is_secure_below_el3. + * + * Since the v8.4 language applies to the entire register, and + * appears to be backward compatible, use that. + */ + return 0; + } + + /* + * For a cpu that supports both aarch64 and aarch32, we can set bits + * in HCR_EL2 (e.g. via EL3) that are RES0 when we enter EL2 as aa32. + * Ignore all of the bits in HCR+HCR2 that are not valid for aarch32. + */ + if (!arm_el_is_aa64(env, 2)) { + uint64_t aa32_valid; + + /* + * These bits are up-to-date as of ARMv8.6. + * For HCR, it's easiest to list just the 2 bits that are invalid. + * For HCR2, list those that are valid. + */ + aa32_valid = MAKE_64BIT_MASK(0, 32) & ~(HCR_RW | HCR_TDZ); + aa32_valid |= (HCR_CD | HCR_ID | HCR_TERR | HCR_TEA | HCR_MIOCNCE | + HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_TTLBIS); + ret &= aa32_valid; + } + + if (ret & HCR_TGE) { + /* These bits are up-to-date as of ARMv8.6. 
*/ + if (ret & HCR_E2H) { + ret &= ~(HCR_VM | HCR_FMO | HCR_IMO | HCR_AMO | + HCR_BSU_MASK | HCR_DC | HCR_TWI | HCR_TWE | + HCR_TID0 | HCR_TID2 | HCR_TPCP | HCR_TPU | + HCR_TDZ | HCR_CD | HCR_ID | HCR_MIOCNCE | + HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_ENSCXT | + HCR_TTLBIS | HCR_TTLBOS | HCR_TID5); + } else { + ret |= HCR_FMO | HCR_IMO | HCR_AMO; + } + ret &= ~(HCR_SWIO | HCR_PTW | HCR_VF | HCR_VI | HCR_VSE | + HCR_FB | HCR_TID1 | HCR_TID3 | HCR_TSC | HCR_TACR | + HCR_TSW | HCR_TTLB | HCR_TVM | HCR_HCD | HCR_TRVM | + HCR_TLOR); + } + + return ret; +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index d32f9659bc..e85e2bfed9 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -261,74 +261,6 @@ static int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg) } #endif /* TARGET_AARCH64 */ -/* - * Return the effective value of HCR_EL2. - * Bits that are not included here: - * RW (read from SCR_EL3.RW as needed) - */ -uint64_t arm_hcr_el2_eff(CPUARMState *env) -{ - uint64_t ret = env->cp15.hcr_el2; - - if (!arm_is_el2_enabled(env)) { - /* - * "This register has no effect if EL2 is not enabled in the - * current Security state". This is ARMv8.4-SecEL2 speak for - * !(SCR_EL3.NS==1 || SCR_EL3.EEL2==1). - * - * Prior to that, the language was "In an implementation that - * includes EL3, when the value of SCR_EL3.NS is 0 the PE behaves - * as if this field is 0 for all purposes other than a direct - * read or write access of HCR_EL2". With lots of enumeration - * on a per-field basis. In current QEMU, this is condition - * is arm_is_secure_below_el3. - * - * Since the v8.4 language applies to the entire register, and - * appears to be backward compatible, use that. - */ - return 0; - } - - /* - * For a cpu that supports both aarch64 and aarch32, we can set bits - * in HCR_EL2 (e.g. via EL3) that are RES0 when we enter EL2 as aa32. - * Ignore all of the bits in HCR+HCR2 that are not valid for aarch32. - */ - if (!arm_el_is_aa64(env, 2)) { - uint64_t aa32_valid; - - /* - * These bits are up-to-date as of ARMv8.6. - * For HCR, it's easiest to list just the 2 bits that are invalid. - * For HCR2, list those that are valid. - */ - aa32_valid = MAKE_64BIT_MASK(0, 32) & ~(HCR_RW | HCR_TDZ); - aa32_valid |= (HCR_CD | HCR_ID | HCR_TERR | HCR_TEA | HCR_MIOCNCE | - HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_TTLBIS); - ret &= aa32_valid; - } - - if (ret & HCR_TGE) { - /* These bits are up-to-date as of ARMv8.6. */ - if (ret & HCR_E2H) { - ret &= ~(HCR_VM | HCR_FMO | HCR_IMO | HCR_AMO | - HCR_BSU_MASK | HCR_DC | HCR_TWI | HCR_TWE | - HCR_TID0 | HCR_TID2 | HCR_TPCP | HCR_TPU | - HCR_TDZ | HCR_CD | HCR_ID | HCR_MIOCNCE | - HCR_TID4 | HCR_TICAB | HCR_TOCU | HCR_ENSCXT | - HCR_TTLBIS | HCR_TTLBOS | HCR_TID5); - } else { - ret |= HCR_FMO | HCR_IMO | HCR_AMO; - } - ret &= ~(HCR_SWIO | HCR_PTW | HCR_VF | HCR_VI | HCR_VSE | - HCR_FB | HCR_TID1 | HCR_TID3 | HCR_TSC | HCR_TACR | - HCR_TSW | HCR_TTLB | HCR_TVM | HCR_HCD | HCR_TRVM | - HCR_TLOR); - } - - return ret; -} - /* Return the exception level to which exceptions should be taken * via SVEAccessTrap. 
If an exception should be routed through * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should From patchwork Fri Jun 4 15:52:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454077 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp562846jae; Fri, 4 Jun 2021 09:26:04 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzjrkL93dhvd/8FzClxwlUIjnHMAuuf8qGAiVPPFfUPuRUVl8c+SLEccwloGIc4Ve4X8MGb X-Received: by 2002:a05:6102:115:: with SMTP id z21mr3476771vsq.22.1622823964465; Fri, 04 Jun 2021 09:26:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622823964; cv=none; d=google.com; s=arc-20160816; b=vEfY+9yDisBjionGDrtr4r0LJ3QzA+8sQm/QZjFCOzMTZaiWDe9C1PqwBSWLJlNc/X 8zYsx4El9xfXJtrUInBUhy3ixl+FL/onimH+IJCfXhag9vnvDope0BNh+YL2LPuZAkOr 7C5gHeyqyWvbefQTFUPAIrgHZddeKg0gr5F7pNnmiQbXEDqy2CxL+XMHm6zZkKNnMqiW i7IJoOB9fTcGFXoNO1+rCm3IEYodedoZVXsJX6stimC3jUIwSHx9obXAlK+sX15THj4W 1J7EESo8qAIu2DxYO93ygvWzcpjF2GOATnyBc1k94UzAx/YuR4dbqwotw8naWzptv5Tg xIEg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=PY6Huc02/raAvhVa8/W7Vw34VyLEbcG3dvBisB2BlcU=; b=rjfdqpJJettnywvbGM6D+RR8wET80UQAS7JJeOwALFaHTEvnQfd+SMnhoMNkk7jUeA zb+3Bc3Pnw6izepp3ho0OcZy3/3oNOQXasB5yBxaBvKA5qs3d59tC17MSzOYH5OIzL+u q8r1kt35KxrHaeDhNKoSvIRdmIIZgE5sZfVE914IHVHWLxkI1tiqBWn7dkjN2tLI/zba WRJ9uGTpR9VNYEmO4QoLTXdw3KF8llWT3y4MU5bQENLT9m26TTzBIulLh2PjD4m8XsYk 6v/MM+TmZL2HB1+7ekWnVclFFrNyciPJNFFcfzfd8WCAQ7TAOtCDkvHfqSRx9wAsgPEl w1PA== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=kjeZQjCZ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
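The relocated arm_hcr_el2_eff() is easiest to read from the caller's side: it already folds in the "EL2 not enabled" case, the AArch32 valid-bit masking and the TGE/E2H overrides, so callers test effective bits rather than reading env->cp15.hcr_el2 directly. A hedged sketch, assuming the usual target/arm headers are in scope (the helper name is invented for illustration):

/* Hypothetical caller sketch for arm_hcr_el2_eff(): with TGE set and E2H
 * clear, FMO/IMO/AMO read as 1 in the effective value (see the code above),
 * and HCR_FMO is what the interrupt-routing code checks when deciding
 * whether a physical FIQ should be taken to EL2.
 */
static bool example_fiq_routed_to_el2(CPUARMState *env)
{
    uint64_t hcr = arm_hcr_el2_eff(env);

    return (hcr & HCR_FMO) != 0;
}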
[209.51.188.17]) by mx.google.com with ESMTPS id c11si2888504uak.19.2021.06.04.09.26.03 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:26:03 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=kjeZQjCZ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:59938 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCe2-00063j-PU for patch@linaro.org; Fri, 04 Jun 2021 12:26:02 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:48230) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpCHc-0007jV-6h for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:02:52 -0400 Received: from mail-wr1-x431.google.com ([2a00:1450:4864:20::431]:37687) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpCHU-0005kQ-TO for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:02:51 -0400 Received: by mail-wr1-x431.google.com with SMTP id i94so4783159wri.4 for ; Fri, 04 Jun 2021 09:02:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=PY6Huc02/raAvhVa8/W7Vw34VyLEbcG3dvBisB2BlcU=; b=kjeZQjCZt9waA3GNBOzI5cTzOh2r0A7BOvskq7Ngzd30S6Zlinjpv+LLiHKDv2RjOy m27jcqnnu343yFmvDXJHzkGC62WNCa42I5Uhd2Z1ls5Do5utolGKtORZsa4cNz3QmPee hzhqXqBAeIEZ5O70vMxMBOCsfTufSVSUBd62ruB+zYXZWRpfKCg70pzwWW4Ncmv54pzr XDYPc0DDBr9Q+kMbmWj08Qdf+BApsSyxDzBD5O3fIgNOOm28nMxNLWktfUcO1Ws58Ljt B6DEXf9BV/n5rIxlQPME1r6cIgJh6hSOGP0daw2nNx3kWoQRdh0NfAgbQXPMQvnRjBh3 l0bQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=PY6Huc02/raAvhVa8/W7Vw34VyLEbcG3dvBisB2BlcU=; b=DNQFQLEHYm6lHPVEZJlf4rSfoRsw2obLIXCJSXWCdPj//JcZPV45h4Jh25r1zrbzEg Oqa2W+BHUg5hxI+oym4nTWJnjQ6iiE6AFferFWLT2lxQ1sA6D8oCLakEoEL6uuRLO+sm RkfTE11WDzhsvctPjtnGgTwNViyVmTH9W2xxbjlmoa1K5n1DVbDkYkd9SbTwLJfrgljd ZJ1H0yjwBL4yJ7LDmmNfikwYq39b7Upp8mX0hS5rMC3MZv2P23xt8k3BR8Er+Ewm4UzU HbbsXn77+3jJbYUe6UpQa/lf6Co4y+lUBAVY0c4n6PIkOc/NeWEeRcnqGYJGqrJvHWGi mz/A== X-Gm-Message-State: AOAM5317/I9s+P3YD+UV9FU2szZHFgbjrMN7UZjLIJBnfVgcmAQd5/T/ O1cROdnR0jm9ggm4pDKoVCvGH2LM/93IaA== X-Received: by 2002:a5d:6443:: with SMTP id d3mr4726439wrw.389.1622822563407; Fri, 04 Jun 2021 09:02:43 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id q11sm6996052wrx.80.2021.06.04.09.02.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:02:37 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 537FD1FFBB; Fri, 4 Jun 2021 16:53:17 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 35/99] target/arm: split vfp state setting from tcg helpers Date: Fri, 4 Jun 2021 16:52:08 +0100 Message-Id: <20210604155312.15902-36-alex.bennee@linaro.org> X-Mailer: 
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana cpu-vfp.c: vfp_get_fpsr and vfp_set_fpsr are needed also for KVM, so create a new cpu-vfp.c tcg/cpu-vfp.c: vfp_get_fpscr_from_host and vv are TCG-only, so we move the implementation to tcg/cpu-vfp.c Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-vfp.h | 29 +++++ target/arm/cpu-vfp.c | 97 +++++++++++++++++ target/arm/tcg/cpu-vfp.c | 146 +++++++++++++++++++++++++ target/arm/tcg/vfp_helper.c | 210 +----------------------------------- target/arm/meson.build | 1 + target/arm/tcg/meson.build | 1 + 6 files changed, 276 insertions(+), 208 deletions(-) create mode 100644 target/arm/cpu-vfp.h create mode 100644 target/arm/cpu-vfp.c create mode 100644 target/arm/tcg/cpu-vfp.c -- 2.20.1 diff --git a/target/arm/cpu-vfp.h b/target/arm/cpu-vfp.h new file mode 100644 index 0000000000..41e0d710a0 --- /dev/null +++ b/target/arm/cpu-vfp.h @@ -0,0 +1,29 @@ +/* + * ARM VFP floating-point operations internals + * + * Copyright (c) 2003 Fabrice Bellard + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . + */ + +#ifndef CPU_VFP_H +#define CPU_VFP_H + +#include "qemu/osdep.h" +#include "cpu.h" + +uint32_t vfp_get_fpscr_from_host(CPUARMState *env); +void vfp_set_fpscr_to_host(CPUARMState *env, uint32_t val); + +#endif /* CPU_VFP_H */ diff --git a/target/arm/cpu-vfp.c b/target/arm/cpu-vfp.c new file mode 100644 index 0000000000..8ea615a916 --- /dev/null +++ b/target/arm/cpu-vfp.c @@ -0,0 +1,97 @@ +/* + * ARM VFP floating-point operations + * + * Copyright (c) 2003 Fabrice Bellard + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. 
+ * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "cpu-vfp.h" +#include "sysemu/tcg.h" + +uint32_t vfp_get_fpscr(CPUARMState *env) +{ + uint32_t i, fpscr; + + fpscr = env->vfp.xregs[ARM_VFP_FPSCR] + | (env->vfp.vec_len << 16) + | (env->vfp.vec_stride << 20); + + /* + * M-profile LTPSIZE overlaps A-profile Stride; whichever of the + * two is not applicable to this CPU will always be zero. + */ + fpscr |= env->v7m.ltpsize << 16; + + if (tcg_enabled()) { + fpscr |= vfp_get_fpscr_from_host(env); + } + + i = env->vfp.qc[0] | env->vfp.qc[1] | env->vfp.qc[2] | env->vfp.qc[3]; + fpscr |= i ? FPCR_QC : 0; + + return fpscr; +} + +void vfp_set_fpscr(CPUARMState *env, uint32_t val) +{ + /* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */ + if (!cpu_isar_feature(any_fp16, env_archcpu(env))) { + val &= ~FPCR_FZ16; + } + + if (tcg_enabled()) { + vfp_set_fpscr_to_host(env, val); + } + + if (!arm_feature(env, ARM_FEATURE_M)) { + /* + * Short-vector length and stride; on M-profile these bits + * are used for different purposes. + * We can't make this conditional be "if MVFR0.FPShVec != 0", + * because in v7A no-short-vector-support cores still had to + * allow Stride/Len to be written with the only effect that + * some insns are required to UNDEF if the guest sets them. + * + * TODO: if M-profile MVE implemented, set LTPSIZE. + */ + env->vfp.vec_len = extract32(val, 16, 3); + env->vfp.vec_stride = extract32(val, 20, 2); + } + + if (arm_feature(env, ARM_FEATURE_NEON)) { + /* + * The bit we set within fpscr_q is arbitrary; the register as a + * whole being zero/non-zero is what counts. + * TODO: M-profile MVE also has a QC bit. + */ + env->vfp.qc[0] = val & FPCR_QC; + env->vfp.qc[1] = 0; + env->vfp.qc[2] = 0; + env->vfp.qc[3] = 0; + } + + /* + * We don't implement trapped exception handling, so the + * trap enable bits, IDE|IXE|UFE|OFE|DZE|IOE are all RAZ/WI (not RES0!) + * + * The exception flags IOC|DZC|OFC|UFC|IXC|IDC are stored in + * fp_status; QC, Len and Stride are stored separately earlier. + * Clear out all of those and the RES0 bits: only NZCV, AHP, DN, + * FZ, RMode and FZ16 are kept in vfp.xregs[FPSCR]. + */ + env->vfp.xregs[ARM_VFP_FPSCR] = val & 0xf7c80000; +} diff --git a/target/arm/tcg/cpu-vfp.c b/target/arm/tcg/cpu-vfp.c new file mode 100644 index 0000000000..bb88abf1ba --- /dev/null +++ b/target/arm/tcg/cpu-vfp.c @@ -0,0 +1,146 @@ +/* + * ARM VFP floating-point operations + * + * Copyright (c) 2003 Fabrice Bellard + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . 
+ */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "qemu/log.h" +#include "internals.h" +#include "fpu/softfloat.h" +#include "cpu-vfp.h" + +/* Convert host exception flags to vfp form. */ +static inline int vfp_exceptbits_from_host(int host_bits) +{ + int target_bits = 0; + + if (host_bits & float_flag_invalid) { + target_bits |= 1; + } + if (host_bits & float_flag_divbyzero) { + target_bits |= 2; + } + if (host_bits & float_flag_overflow) { + target_bits |= 4; + } + if (host_bits & (float_flag_underflow | float_flag_output_denormal)) { + target_bits |= 8; + } + if (host_bits & float_flag_inexact) { + target_bits |= 0x10; + } + if (host_bits & float_flag_input_denormal) { + target_bits |= 0x80; + } + return target_bits; +} + +/* Convert vfp exception flags to target form. */ +static inline int vfp_exceptbits_to_host(int target_bits) +{ + int host_bits = 0; + + if (target_bits & 1) { + host_bits |= float_flag_invalid; + } + if (target_bits & 2) { + host_bits |= float_flag_divbyzero; + } + if (target_bits & 4) { + host_bits |= float_flag_overflow; + } + if (target_bits & 8) { + host_bits |= float_flag_underflow; + } + if (target_bits & 0x10) { + host_bits |= float_flag_inexact; + } + if (target_bits & 0x80) { + host_bits |= float_flag_input_denormal; + } + return host_bits; +} + +uint32_t vfp_get_fpscr_from_host(CPUARMState *env) +{ + uint32_t i; + + i = get_float_exception_flags(&env->vfp.fp_status); + i |= get_float_exception_flags(&env->vfp.standard_fp_status); + /* FZ16 does not generate an input denormal exception. */ + i |= (get_float_exception_flags(&env->vfp.fp_status_f16) + & ~float_flag_input_denormal); + i |= (get_float_exception_flags(&env->vfp.standard_fp_status_f16) + & ~float_flag_input_denormal); + return vfp_exceptbits_from_host(i); +} + +void vfp_set_fpscr_to_host(CPUARMState *env, uint32_t val) +{ + int i; + uint32_t changed = env->vfp.xregs[ARM_VFP_FPSCR]; + + changed ^= val; + if (changed & (3 << 22)) { + i = (val >> 22) & 3; + switch (i) { + case FPROUNDING_TIEEVEN: + i = float_round_nearest_even; + break; + case FPROUNDING_POSINF: + i = float_round_up; + break; + case FPROUNDING_NEGINF: + i = float_round_down; + break; + case FPROUNDING_ZERO: + i = float_round_to_zero; + break; + } + set_float_rounding_mode(i, &env->vfp.fp_status); + set_float_rounding_mode(i, &env->vfp.fp_status_f16); + } + if (changed & FPCR_FZ16) { + bool ftz_enabled = val & FPCR_FZ16; + set_flush_to_zero(ftz_enabled, &env->vfp.fp_status_f16); + set_flush_to_zero(ftz_enabled, &env->vfp.standard_fp_status_f16); + set_flush_inputs_to_zero(ftz_enabled, &env->vfp.fp_status_f16); + set_flush_inputs_to_zero(ftz_enabled, &env->vfp.standard_fp_status_f16); + } + if (changed & FPCR_FZ) { + bool ftz_enabled = val & FPCR_FZ; + set_flush_to_zero(ftz_enabled, &env->vfp.fp_status); + set_flush_inputs_to_zero(ftz_enabled, &env->vfp.fp_status); + } + if (changed & FPCR_DN) { + bool dnan_enabled = val & FPCR_DN; + set_default_nan_mode(dnan_enabled, &env->vfp.fp_status); + set_default_nan_mode(dnan_enabled, &env->vfp.fp_status_f16); + } + + /* + * The exception flags are ORed together when we read fpscr so we + * only need to preserve the current state in one of our + * float_status values. 
+ */ + i = vfp_exceptbits_to_host(val); + set_float_exception_flags(i, &env->vfp.fp_status); + set_float_exception_flags(0, &env->vfp.fp_status_f16); + set_float_exception_flags(0, &env->vfp.standard_fp_status); + set_float_exception_flags(0, &env->vfp.standard_fp_status_f16); +} diff --git a/target/arm/tcg/vfp_helper.c b/target/arm/tcg/vfp_helper.c index 01b9d8557f..521719f327 100644 --- a/target/arm/tcg/vfp_helper.c +++ b/target/arm/tcg/vfp_helper.c @@ -30,220 +30,14 @@ Single precision routines have a "s" suffix, double precision a "d" suffix. */ -#ifdef CONFIG_TCG - -/* Convert host exception flags to vfp form. */ -static inline int vfp_exceptbits_from_host(int host_bits) -{ - int target_bits = 0; - - if (host_bits & float_flag_invalid) { - target_bits |= 1; - } - if (host_bits & float_flag_divbyzero) { - target_bits |= 2; - } - if (host_bits & float_flag_overflow) { - target_bits |= 4; - } - if (host_bits & (float_flag_underflow | float_flag_output_denormal)) { - target_bits |= 8; - } - if (host_bits & float_flag_inexact) { - target_bits |= 0x10; - } - if (host_bits & float_flag_input_denormal) { - target_bits |= 0x80; - } - return target_bits; -} - -/* Convert vfp exception flags to target form. */ -static inline int vfp_exceptbits_to_host(int target_bits) -{ - int host_bits = 0; - - if (target_bits & 1) { - host_bits |= float_flag_invalid; - } - if (target_bits & 2) { - host_bits |= float_flag_divbyzero; - } - if (target_bits & 4) { - host_bits |= float_flag_overflow; - } - if (target_bits & 8) { - host_bits |= float_flag_underflow; - } - if (target_bits & 0x10) { - host_bits |= float_flag_inexact; - } - if (target_bits & 0x80) { - host_bits |= float_flag_input_denormal; - } - return host_bits; -} - -static uint32_t vfp_get_fpscr_from_host(CPUARMState *env) -{ - uint32_t i; - - i = get_float_exception_flags(&env->vfp.fp_status); - i |= get_float_exception_flags(&env->vfp.standard_fp_status); - /* FZ16 does not generate an input denormal exception. 
*/ - i |= (get_float_exception_flags(&env->vfp.fp_status_f16) - & ~float_flag_input_denormal); - i |= (get_float_exception_flags(&env->vfp.standard_fp_status_f16) - & ~float_flag_input_denormal); - return vfp_exceptbits_from_host(i); -} - -static void vfp_set_fpscr_to_host(CPUARMState *env, uint32_t val) -{ - int i; - uint32_t changed = env->vfp.xregs[ARM_VFP_FPSCR]; - - changed ^= val; - if (changed & (3 << 22)) { - i = (val >> 22) & 3; - switch (i) { - case FPROUNDING_TIEEVEN: - i = float_round_nearest_even; - break; - case FPROUNDING_POSINF: - i = float_round_up; - break; - case FPROUNDING_NEGINF: - i = float_round_down; - break; - case FPROUNDING_ZERO: - i = float_round_to_zero; - break; - } - set_float_rounding_mode(i, &env->vfp.fp_status); - set_float_rounding_mode(i, &env->vfp.fp_status_f16); - } - if (changed & FPCR_FZ16) { - bool ftz_enabled = val & FPCR_FZ16; - set_flush_to_zero(ftz_enabled, &env->vfp.fp_status_f16); - set_flush_to_zero(ftz_enabled, &env->vfp.standard_fp_status_f16); - set_flush_inputs_to_zero(ftz_enabled, &env->vfp.fp_status_f16); - set_flush_inputs_to_zero(ftz_enabled, &env->vfp.standard_fp_status_f16); - } - if (changed & FPCR_FZ) { - bool ftz_enabled = val & FPCR_FZ; - set_flush_to_zero(ftz_enabled, &env->vfp.fp_status); - set_flush_inputs_to_zero(ftz_enabled, &env->vfp.fp_status); - } - if (changed & FPCR_DN) { - bool dnan_enabled = val & FPCR_DN; - set_default_nan_mode(dnan_enabled, &env->vfp.fp_status); - set_default_nan_mode(dnan_enabled, &env->vfp.fp_status_f16); - } - - /* - * The exception flags are ORed together when we read fpscr so we - * only need to preserve the current state in one of our - * float_status values. - */ - i = vfp_exceptbits_to_host(val); - set_float_exception_flags(i, &env->vfp.fp_status); - set_float_exception_flags(0, &env->vfp.fp_status_f16); - set_float_exception_flags(0, &env->vfp.standard_fp_status); - set_float_exception_flags(0, &env->vfp.standard_fp_status_f16); -} - -#else - -static uint32_t vfp_get_fpscr_from_host(CPUARMState *env) -{ - return 0; -} - -static void vfp_set_fpscr_to_host(CPUARMState *env, uint32_t val) -{ -} - -#endif - uint32_t HELPER(vfp_get_fpscr)(CPUARMState *env) { - uint32_t i, fpscr; - - fpscr = env->vfp.xregs[ARM_VFP_FPSCR] - | (env->vfp.vec_len << 16) - | (env->vfp.vec_stride << 20); - - /* - * M-profile LTPSIZE overlaps A-profile Stride; whichever of the - * two is not applicable to this CPU will always be zero. - */ - fpscr |= env->v7m.ltpsize << 16; - - fpscr |= vfp_get_fpscr_from_host(env); - - i = env->vfp.qc[0] | env->vfp.qc[1] | env->vfp.qc[2] | env->vfp.qc[3]; - fpscr |= i ? FPCR_QC : 0; - - return fpscr; -} - -uint32_t vfp_get_fpscr(CPUARMState *env) -{ - return HELPER(vfp_get_fpscr)(env); + return vfp_get_fpscr(env); } void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val) { - /* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */ - if (!cpu_isar_feature(any_fp16, env_archcpu(env))) { - val &= ~FPCR_FZ16; - } - - vfp_set_fpscr_to_host(env, val); - - if (!arm_feature(env, ARM_FEATURE_M)) { - /* - * Short-vector length and stride; on M-profile these bits - * are used for different purposes. - * We can't make this conditional be "if MVFR0.FPShVec != 0", - * because in v7A no-short-vector-support cores still had to - * allow Stride/Len to be written with the only effect that - * some insns are required to UNDEF if the guest sets them. - * - * TODO: if M-profile MVE implemented, set LTPSIZE. 
- */ - env->vfp.vec_len = extract32(val, 16, 3); - env->vfp.vec_stride = extract32(val, 20, 2); - } - - if (arm_feature(env, ARM_FEATURE_NEON)) { - /* - * The bit we set within fpscr_q is arbitrary; the register as a - * whole being zero/non-zero is what counts. - * TODO: M-profile MVE also has a QC bit. - */ - env->vfp.qc[0] = val & FPCR_QC; - env->vfp.qc[1] = 0; - env->vfp.qc[2] = 0; - env->vfp.qc[3] = 0; - } - - /* - * We don't implement trapped exception handling, so the - * trap enable bits, IDE|IXE|UFE|OFE|DZE|IOE are all RAZ/WI (not RES0!) - * - * The exception flags IOC|DZC|OFC|UFC|IXC|IDC are stored in - * fp_status; QC, Len and Stride are stored separately earlier. - * Clear out all of those and the RES0 bits: only NZCV, AHP, DN, - * FZ, RMode and FZ16 are kept in vfp.xregs[FPSCR]. - */ - env->vfp.xregs[ARM_VFP_FPSCR] = val & 0xf7c80000; -} - -void vfp_set_fpscr(CPUARMState *env, uint32_t val) -{ - HELPER(vfp_set_fpscr)(env, val); + vfp_set_fpscr(env, val); } #ifdef CONFIG_TCG diff --git a/target/arm/meson.build b/target/arm/meson.build index 1f7375375e..4bc44e1db2 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -4,6 +4,7 @@ arm_ss.add(files( 'cpu.c', 'cpu-common.c', 'cpu-mmu.c', + 'cpu-vfp.c', 'cpustate-list.c', 'gdbstub.c', 'cpu_tcg.c', diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index 78c34742ec..64a86fd94c 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -21,6 +21,7 @@ arm_ss.add(when: 'CONFIG_TCG', if_true: files( 'translate-vfp.c', 'helper.c', 'cpregs.c', + 'cpu-vfp.c', 'iwmmxt_helper.c', 'm_helper.c', 'neon_helper.c', From patchwork Fri Jun 4 15:52:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454070 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp553480jae; Fri, 4 Jun 2021 09:14:37 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyZGkC9KLngUF/M7iRQp00zMKqpvF+HDXHMo3PPwXYqdKw8mGaG4NGrlxdghqXF2pg+1JcP X-Received: by 2002:a17:906:8345:: with SMTP id b5mr4848148ejy.14.1622823277582; Fri, 04 Jun 2021 09:14:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622823277; cv=none; d=google.com; s=arc-20160816; b=y03YjObwHH5zDCRM+sLKubXKtxVNoFEGB9t53nyJk+oymly19bBYLCs3t5rqs6CiYK yKnmUAa3WuB7UXKy78it2mojXGecgJh+6i5dsFTYBCRlIG7ijQwMrkfxRSPTkVlbRSCB IQQuYENXen4fD6R+as4HuKEI23WrOmjrqhFAe0HOm1CZtAUjjitYc7/0J9JEJAH6vCRt 5dL39mKybMLwJYGYaEgssMHoGQKHwrKL958H9w8MS/J+dxZ/OQBvbxBx0fzEqNCIn37c Jy3cIJRkU98/ElMEx11rkqfetH7z2TWrTdih3h8WDKKaoYpJagPQ1v/sqx1bODyAfMI8 NTRA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=Cn3xyAgideBI5BCmgu7Gt0NwbqrNIOTnH45mDz31IIw=; b=Tl8hhsZu2WCYK5LTkpW5xM6O62g7yYLM3373oX+J1gLmhasfpOeS2xlnZlYDp085Kg xDGr52YIP3x65pqA981H9u9z6YB2N2aewtz75Yee24rSEC011haL5yJHYQe4dIpA7Fo9 XhAANYGktUnORixEC3ZPKJ1dUKWKlX1QyD3Uv5l4yzUlWcXCJduJhNq41LjsIO/5tmxQ MXfyYvBd8JiqZTRqA0+Hgmst6iv29ulW18nTjZcnTNJel2bD5EkWWHmHw86Naflvmcfj uR2jUfFZklMJ7cvm2Wk/yMN/Dnl8Dtoqw1t4VRm6W/ZdL+BFCkIYaDpnBjIfqcLbs6aC vw8Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=lmCmFVu9; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 
as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id h8si4801742edz.253.2021.06.04.09.14.37 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 09:14:37 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=lmCmFVu9; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:51816 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpCSy-00065j-He for patch@linaro.org; Fri, 04 Jun 2021 12:14:36 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:44724) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpC8g-0003Hd-VL for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:39 -0400 Received: from mail-wm1-x332.google.com ([2a00:1450:4864:20::332]:46609) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpC8d-00009c-OD for qemu-devel@nongnu.org; Fri, 04 Jun 2021 11:53:38 -0400 Received: by mail-wm1-x332.google.com with SMTP id h22-20020a05600c3516b02901a826f84095so915817wmq.5 for ; Fri, 04 Jun 2021 08:53:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Cn3xyAgideBI5BCmgu7Gt0NwbqrNIOTnH45mDz31IIw=; b=lmCmFVu9dBcfNaXqIg9xLnLz+tn44MNuz1dERwHLoNAp82MxyQSMST+TUpv0nodjLo 54e+QIEwYcLla+Y8P4Z9bfwiFKmHe9lEfGrH8zqcIgcD/FjOY/pobgtVE4YGQ+mxlSs1 kUDBHI/LimmZAFUj3FRbVIIAER5YsdHugARyQ+6LvGND8l5JetIJD2+mj09gLhcYYndt m16/noRk3QkM3dH3bXoQCDly5DF8v4jxskroqA4ZLpiqcggCNCr2qVibTgUM7WPkWRQt TW0QXBii/yUnvvkLievncYSMP9aezfqVN9+OhreBMqwfdBDgacCRHXt2qF+ZjICLFg9Z TEhA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Cn3xyAgideBI5BCmgu7Gt0NwbqrNIOTnH45mDz31IIw=; b=EPdbJwOaTupMydd9DpCtaBxgVTMSaixiFBrPwjhZ4fHAJg+6xHMgvesiUMmU/S+O0M BTyus8qBiIUBK2PVbhz03Nvuez8jHzQfuW1wHf1Vazy0gDMPEauefH3wfSleZbu01Yud g6WLBDe/e2b/wE2vs/tFrTtfEIEQ5ZmDq0o1SlrwZEpL7iMTOxylh6Q8VbR/NAaQSRsE xyOdXAJpeMYVna77AYoJroFCZpObIgHfRRSSD6tp0fpipXXgASKR4sVJ/r203wE79r1H E1P99MkcZJvNZ0R/oOV9FbnNrKN6MdKtLYfbS2u2VLY4lV5vFxZkircDay5yaoO2nlW6 llDQ== X-Gm-Message-State: AOAM530CAEZASON1sei8MgBXolxFe66Fmr9I7ADzZ+9lOXDwcX7dIMXi z2Q2OJZkEEBpUMSQCdpcm8FxkA== X-Received: by 2002:a1c:7402:: with SMTP id p2mr4273222wmc.88.1622822014181; Fri, 04 Jun 2021 08:53:34 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id l16sm9356859wmj.47.2021.06.04.08.53.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:31 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 7295E1FFBC; Fri, 4 Jun 2021 16:53:17 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= 
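Stepping back to the vfp_get_fpscr()/vfp_set_fpscr() split above: the accelerator-neutral half composes Len/Stride/LTPSIZE and the cumulative QC flag, and only merges the softfloat exception flags via vfp_get_fpscr_from_host() when TCG is enabled. A hedged sketch of a consumer (the function name is invented for illustration):

/* Hypothetical sketch: read the cumulative saturation (QC) flag through the
 * new accelerator-neutral vfp_get_fpscr(); it ORs env->vfp.qc[] into
 * FPCR_QC exactly as shown in cpu-vfp.c above, under both TCG and KVM.
 */
static bool example_neon_saturation_seen(CPUARMState *env)
{
    return (vfp_get_fpscr(env) & FPCR_QC) != 0;
}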
To: qemu-devel@nongnu.org Subject: [PATCH v16 36/99] target/arm: move arm_mmu_idx* to cpu-mmu Date: Fri, 4 Jun 2021 16:52:09 +0100 Message-Id: <20210604155312.15902-37-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::332; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x332.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-mmu.c | 95 +++++++++++++++++++++++++++++++++++++++++ target/arm/tcg/helper.c | 95 ----------------------------------------- 2 files changed, 95 insertions(+), 95 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-mmu.c b/target/arm/cpu-mmu.c index f463f8458e..c6ac90a61e 100644 --- a/target/arm/cpu-mmu.c +++ b/target/arm/cpu-mmu.c @@ -122,3 +122,98 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, .using64k = using64k, }; } + +/* Return the exception level we're running at if this is our mmu_idx */ +int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) +{ + if (mmu_idx & ARM_MMU_IDX_M) { + return mmu_idx & ARM_MMU_IDX_M_PRIV; + } + + switch (mmu_idx) { + case ARMMMUIdx_E10_0: + case ARMMMUIdx_E20_0: + case ARMMMUIdx_SE10_0: + case ARMMMUIdx_SE20_0: + return 0; + case ARMMMUIdx_E10_1: + case ARMMMUIdx_E10_1_PAN: + case ARMMMUIdx_SE10_1: + case ARMMMUIdx_SE10_1_PAN: + return 1; + case ARMMMUIdx_E2: + case ARMMMUIdx_E20_2: + case ARMMMUIdx_E20_2_PAN: + case ARMMMUIdx_SE2: + case ARMMMUIdx_SE20_2: + case ARMMMUIdx_SE20_2_PAN: + return 2; + case ARMMMUIdx_SE3: + return 3; + default: + g_assert_not_reached(); + } +} + +#ifndef CONFIG_TCG +ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate) +{ + g_assert_not_reached(); +} +#endif + +ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el) +{ + ARMMMUIdx idx; + uint64_t hcr; + + if (arm_feature(env, ARM_FEATURE_M)) { + return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure); + } + + /* See ARM pseudo-function ELIsInHost. */ + switch (el) { + case 0: + hcr = arm_hcr_el2_eff(env); + if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { + idx = ARMMMUIdx_E20_0; + } else { + idx = ARMMMUIdx_E10_0; + } + break; + case 1: + if (env->pstate & PSTATE_PAN) { + idx = ARMMMUIdx_E10_1_PAN; + } else { + idx = ARMMMUIdx_E10_1; + } + break; + case 2: + /* Note that TGE does not apply at EL2. 
*/ + if (arm_hcr_el2_eff(env) & HCR_E2H) { + if (env->pstate & PSTATE_PAN) { + idx = ARMMMUIdx_E20_2_PAN; + } else { + idx = ARMMMUIdx_E20_2; + } + } else { + idx = ARMMMUIdx_E2; + } + break; + case 3: + return ARMMMUIdx_SE3; + default: + g_assert_not_reached(); + } + + if (arm_is_secure_below_el3(env)) { + idx &= ~ARM_MMU_IDX_A_NS; + } + + return idx; +} + +ARMMMUIdx arm_mmu_idx(CPUARMState *env) +{ + return arm_mmu_idx_el(env, arm_current_el(env)); +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index e85e2bfed9..a4630b4039 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -2093,101 +2093,6 @@ int fp_exception_el(CPUARMState *env, int cur_el) return 0; } -/* Return the exception level we're running at if this is our mmu_idx */ -int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) -{ - if (mmu_idx & ARM_MMU_IDX_M) { - return mmu_idx & ARM_MMU_IDX_M_PRIV; - } - - switch (mmu_idx) { - case ARMMMUIdx_E10_0: - case ARMMMUIdx_E20_0: - case ARMMMUIdx_SE10_0: - case ARMMMUIdx_SE20_0: - return 0; - case ARMMMUIdx_E10_1: - case ARMMMUIdx_E10_1_PAN: - case ARMMMUIdx_SE10_1: - case ARMMMUIdx_SE10_1_PAN: - return 1; - case ARMMMUIdx_E2: - case ARMMMUIdx_E20_2: - case ARMMMUIdx_E20_2_PAN: - case ARMMMUIdx_SE2: - case ARMMMUIdx_SE20_2: - case ARMMMUIdx_SE20_2_PAN: - return 2; - case ARMMMUIdx_SE3: - return 3; - default: - g_assert_not_reached(); - } -} - -#ifndef CONFIG_TCG -ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate) -{ - g_assert_not_reached(); -} -#endif - -ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el) -{ - ARMMMUIdx idx; - uint64_t hcr; - - if (arm_feature(env, ARM_FEATURE_M)) { - return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure); - } - - /* See ARM pseudo-function ELIsInHost. */ - switch (el) { - case 0: - hcr = arm_hcr_el2_eff(env); - if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) { - idx = ARMMMUIdx_E20_0; - } else { - idx = ARMMMUIdx_E10_0; - } - break; - case 1: - if (env->pstate & PSTATE_PAN) { - idx = ARMMMUIdx_E10_1_PAN; - } else { - idx = ARMMMUIdx_E10_1; - } - break; - case 2: - /* Note that TGE does not apply at EL2. 
*/ - if (arm_hcr_el2_eff(env) & HCR_E2H) { - if (env->pstate & PSTATE_PAN) { - idx = ARMMMUIdx_E20_2_PAN; - } else { - idx = ARMMMUIdx_E20_2; - } - } else { - idx = ARMMMUIdx_E2; - } - break; - case 3: - return ARMMMUIdx_SE3; - default: - g_assert_not_reached(); - } - - if (arm_is_secure_below_el3(env)) { - idx &= ~ARM_MMU_IDX_A_NS; - } - - return idx; -} - -ARMMMUIdx arm_mmu_idx(CPUARMState *env) -{ - return arm_mmu_idx_el(env, arm_current_el(env)); -} - #ifndef CONFIG_USER_ONLY ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env) { From patchwork Fri Jun 4 15:52:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454144 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp653173jae; Fri, 4 Jun 2021 11:23:53 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxne62zcZda898ACe8D+jbZVcfWFJac9LEokB3Qm7EwjSeUtx/geJp+8YWc613QQs5R+1qW X-Received: by 2002:a9d:7303:: with SMTP id e3mr4852824otk.216.1622831032947; Fri, 04 Jun 2021 11:23:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622831032; cv=none; d=google.com; s=arc-20160816; b=EqmI4axXJbRchwdwKbNUqC1muNz0NH0XZFhfLwZldXU8jZvAnhtG0tghsZDUcHeJSx VNuML7rnYXzKy8prFhgHV5lvL4+XQipqnyr/sykou9F3EN8xQAR5trV3DmQ5SRveMrTF 5nQKvqC5K+t47LfkJCKUYpJ1DI3qo6vGkHmH7j++4iSvOi8ThP2hc2oqiDYySf6b4uxE 0ur+s6T7OJv4D5wJ7jKpn0BzUPRiAql/MAYU9gGUiw++hrMN6YNN7zo0D782qZMNGOFt 0fzZfrJML47QoaidfuKQZ/VIRYxLLbUMIkKfYKuiNv+QxpEDj49DiL78L5dWE5aLg0iG +Yfg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=Fs4mHduSC7s6AOHhF5ZFH6VJI9TQw9c8e27N4ga4IHk=; b=vW6CNk5la0LdTR98fNR8r12YLACSQPAFhdBXXKApTvorIhpf3viGOvf8D5H65pdrhp 49bNqoFtR2XEp/Gj4f5xWs4LPxPa7O0wmZxAIZiDApVm1WIBgfLs6wjZPgfjNlQNrJEO njh60lsIn7jtBXPY1c3psGP1gpIIXWh6zp4iQsLOWUnZSyJRNLNKFhN4P71VSQrBym7d iagiSrCbbC+BKnwFmw/kegnFuAvkm1EusZMD97PKwVxsCZ+a6Rxt8RDY1BEzXxiVST1y rWhosUv5wbz/T4qjbT9ptJy0ZAsQIANR2S4Mv+2YRqUOiuwRypmsMOgPpX/3yWhTNQvm eqqg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=Zh1DBO0X; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id u12si2814327oic.60.2021.06.04.11.23.52 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 11:23:52 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=Zh1DBO0X; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:34100 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpEU4-0001gY-8f for patch@linaro.org; Fri, 04 Jun 2021 14:23:52 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:42370) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpET4-0000w1-6Q for qemu-devel@nongnu.org; Fri, 04 Jun 2021 14:22:50 -0400 Received: from mail-wm1-x32f.google.com ([2a00:1450:4864:20::32f]:46755) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpET0-0000Nr-RO for qemu-devel@nongnu.org; Fri, 04 Jun 2021 14:22:49 -0400 Received: by mail-wm1-x32f.google.com with SMTP id h22-20020a05600c3516b02901a826f84095so1132628wmq.5 for ; Fri, 04 Jun 2021 11:22:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Fs4mHduSC7s6AOHhF5ZFH6VJI9TQw9c8e27N4ga4IHk=; b=Zh1DBO0X4/prlBXa4eTG/cGDvDbmsYYxiDHeHH9PZIqK0MpqCcOy7uFZdUBwJr21Bz kYWhxtwC8YrhMEV6UP8OKobH2ZnOQcH2ZVGI0dJ3fxU/ewTm1EQhHVwSwsajpmTHUssB piqKQ981vZQE/WI3JvPk/uCSE6qQieRdb3C6ohSPkb0YneMxPqL5WTMzJfFCCm3cSHer CzY2310Dbbnf6Aue0xfxGh8s7VH3D2lJcuWrFvlAr811KKEKRNX50V1u75UQfQPDLpmw sBO14K8fqQumR6v9fX9lUlXt8/KCOAUiNixfT6iV5hEQqqkBlY/u6n0wGcvYkZIlUVIy BnBw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Fs4mHduSC7s6AOHhF5ZFH6VJI9TQw9c8e27N4ga4IHk=; b=ah/yPV2MHxoEaW5qLDZYEsVAvEsLwTco1x+pPwfB5m2hz/AFu7TwHi5KneYhmuWw0/ wjlLHDGQfmJzTxIhvGJX0LE7hFXtPPsBpqCan0+aWyhD98Fng/34xrXh9/EocyX9ZQLs On7zdOF4rHrnmXg5CabqZtQVgwgYxlDKLghC/ZPcSmKYtODUKygKhHOgem9Z+IWeCP6t Y5249oym6sm1ZSVNMOfjQK1N5g0I/F1De3XcM7aFbzB8D2T4Lg6PRPsimTPI0YakytHk Jllg/UuhMx7gqH9Lb866t7HFc93SwGMzpw8H/ZIpUMV0fVwqC51jRq3xsQrCPMBh5caE 2Avg== X-Gm-Message-State: AOAM532w9mJHZ4tHleSewpJIgclDK6PkHS+80bD9BO/pAuHS1KSA6P/+ gCXhr0jKNB67eHWEEZ3bGdqSIg== X-Received: by 2002:a1c:bc06:: with SMTP id m6mr4965227wmf.74.1622830965405; Fri, 04 Jun 2021 11:22:45 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id q19sm9085241wmc.44.2021.06.04.11.22.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 11:22:38 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 8F0421FFBD; Fri, 4 Jun 2021 16:53:17 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 37/99] target/arm: move sve_zcr_len_for_el to common_cpu Date: Fri, 4 Jun 2021 16:52:10 +0100 Message-Id: <20210604155312.15902-38-alex.bennee@linaro.org> 
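Returning to the arm_mmu_idx helpers moved in the previous patch: arm_mmu_idx_el() maps an exception level (plus the HCR_E2H/TGE and PAN state) to an ARMMMUIdx, and arm_mmu_idx() simply applies it to the current EL. A hedged sketch of typical use (the wrapper name is invented; arm_to_core_mmu_idx() is the existing helper that strips the ARM_MMU_IDX_* tag bits):

/* Hypothetical sketch: derive the core softmmu index for the current state.
 * arm_mmu_idx() picks the EL/PAN-aware ARMMMUIdx; arm_to_core_mmu_idx()
 * then reduces it to the core mmu_idx value used for memory accesses.
 */
static int example_current_core_mmu_idx(CPUARMState *env)
{
    ARMMMUIdx idx = arm_mmu_idx(env);

    return arm_to_core_mmu_idx(idx);
}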
X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32f; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32f.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana it is required by arch-dump.c and cpu.c, so apparently we need this for KVM too Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-common.c | 43 +++++++++++++++++++++++++++++++++++++++++ target/arm/tcg/helper.c | 33 ------------------------------- 2 files changed, 43 insertions(+), 33 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c index 040e06392a..a34f7f19d8 100644 --- a/target/arm/cpu-common.c +++ b/target/arm/cpu-common.c @@ -299,3 +299,46 @@ uint64_t arm_hcr_el2_eff(CPUARMState *env) return ret; } + +/* + * these are AARCH64-only, but due to the chain of dependencies, + * between HELPER prototypes, hflags, cpreg definitions and functions in + * tcg/ etc, it becomes incredibly messy to add what should be here: + * + * #ifdef TARGET_AARCH64 + */ + +static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) +{ + uint32_t end_len; + + end_len = start_len &= 0xf; + if (!test_bit(start_len, cpu->sve_vq_map)) { + end_len = find_last_bit(cpu->sve_vq_map, start_len); + assert(end_len < start_len); + } + return end_len; +} + +/* + * Given that SVE is enabled, return the vector length for EL. + */ +uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) +{ + ARMCPU *cpu = env_archcpu(env); + uint32_t zcr_len = cpu->sve_max_vq - 1; + + if (el <= 1) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]); + } + if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]); + } + if (arm_feature(env, ARM_FEATURE_EL3)) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]); + } + + return sve_zcr_get_valid_len(cpu, zcr_len); +} + +/* #endif TARGET_AARCH64 , see matching comment above */ diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index a4630b4039..93fa3fa2a9 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -322,39 +322,6 @@ int sve_exception_el(CPUARMState *env, int el) return 0; } -static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) -{ - uint32_t end_len; - - end_len = start_len &= 0xf; - if (!test_bit(start_len, cpu->sve_vq_map)) { - end_len = find_last_bit(cpu->sve_vq_map, start_len); - assert(end_len < start_len); - } - return end_len; -} - -/* - * Given that SVE is enabled, return the vector length for EL. 
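 *
 * A worked example (illustrative numbers, not taken from this patch): with
 * cpu->sve_max_vq == 16 the starting value is 15; at EL1 with ZCR_EL1 == 3
 * it is clamped to 3, and if bit 3 is set in cpu->sve_vq_map the function
 * returns 3, i.e. a vector length of (3 + 1) * 128 = 512 bits. If bit 3
 * were clear, sve_zcr_get_valid_len() would drop back to the largest
 * supported length below it.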
- */ -uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) -{ - ARMCPU *cpu = env_archcpu(env); - uint32_t zcr_len = cpu->sve_max_vq - 1; - - if (el <= 1) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]); - } - if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]); - } - if (arm_feature(env, ARM_FEATURE_EL3)) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]); - } - - return sve_zcr_get_valid_len(cpu, zcr_len); -} - void hw_watchpoint_update(ARMCPU *cpu, int n) { CPUARMState *env = &cpu->env; From patchwork Fri Jun 4 15:52:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454105 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp584472jae; Fri, 4 Jun 2021 09:52:21 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzzm2vu/MqOBigkFWcRek+vmh96q28PXZQbefG+9WxDz/U5ks+3m1TkJWrFInnJjv3RK6iO X-Received: by 2002:ab0:6646:: with SMTP id b6mr4314470uaq.1.1622825541652; Fri, 04 Jun 2021 09:52:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825541; cv=none; d=google.com; s=arc-20160816; b=Wz8H4PQTfoYUgLSjvQv/T/g55AZmCi1DVpB4d6Ji0dnlF+lM+d4kMYrwGpSACQ1FSM /FhczjeVb8gY47o+00efDpYHBLhiv3UH5bugloVA77HX1o6698qgJNjz0PjZBHNBFxj6 JrFRk20xl3PyxBOBESMCYLzRME4im9w8voaFSJ2/GOeYQG99zaSabQSpWzmGTEHn7DXr RA7ihOi5K6NVduaVV3/a/92HdEZckdwGqhgCuuCAMA69Izl7SKah64SgUNeFC211VesD zYV2ujrwmzZIhMYHmoscLr0gi+sP8LUsuql5EaP9Quq8QeFpgTetJX7UnevYImbzzWJm 0L5g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=g+Q0O4/5dNcGlD5UswsObaT0lA+we8GvoPB9oAtz7uU=; b=iGlOyLCPMIse5ntoRpU1zEdLmxd6Ey6o2RbvG0oxlUpeF2b7Sj3kStULep/itHIn5E f7khcWRLmkotqqqo7FgW6u8qFQZuh6LEKaZaswzpihGUwPzReeggKrxFz/R2y9QWrs0Z QUTpmo7NGYFZEnL0Z7TlPykg6KmMyYXU3ltUr/75Ox+1sMsHemOPejr/xuW/pcwcOe3f SvXX7fNISBZaM5iX4HcSAcku5IXE216vtSqBp6h82uQIGU17XGlU3quh/DHCx6OTNYMV EeVMWWixI+yPmvz0mIUgtSp3Q5X+AqcDXQqiI+ClJVQjbXycGaL97mi+QyByLCM/BfK6 qYDw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=ZUqdxDtO; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 38/99] target/arm: move arm_sctlr away from tcg helpers
Date: Fri, 4 Jun 2021 16:52:11 +0100
Message-Id: <20210604155312.15902-39-alex.bennee@linaro.org>
X-Mailer: git-send-email
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::433; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x433.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana this function is used for kvm too, add it to the cpu-common module. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-common.c | 11 +++++++++++ target/arm/tcg/helper.c | 11 ----------- 2 files changed, 11 insertions(+), 11 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c index a34f7f19d8..93aea216cc 100644 --- a/target/arm/cpu-common.c +++ b/target/arm/cpu-common.c @@ -342,3 +342,14 @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) } /* #endif TARGET_AARCH64 , see matching comment above */ + +uint64_t arm_sctlr(CPUARMState *env, int el) +{ + /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */ + if (el == 0) { + ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0); + el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0) + ? 2 : 1; + } + return env->cp15.sctlr_el[el]; +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 93fa3fa2a9..b9ea043f20 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -1675,17 +1675,6 @@ void arm_cpu_do_interrupt(CPUState *cs) } #endif /* !CONFIG_USER_ONLY */ -uint64_t arm_sctlr(CPUARMState *env, int el) -{ - /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */ - if (el == 0) { - ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0); - el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0) - ? 2 : 1; - } - return env->cp15.sctlr_el[el]; -} - /* Returns true if the stage 1 translation regime is using LPAE format page * tables. Used when raising alignment exceptions, whose FSR changes depending * on whether the long or short descriptor format is in use. 
*/ From patchwork Fri Jun 4 15:52:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454106 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp584881jae; Fri, 4 Jun 2021 09:52:55 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyVZvlfCnAxapMiI7unBxwOv3ad/HECUzCFrrA0rPkpx61ADGQ7TCpJfxN1UNor3FVQhW6g X-Received: by 2002:a05:6e02:ec3:: with SMTP id i3mr5042625ilk.120.1622825575206; Fri, 04 Jun 2021 09:52:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825575; cv=none; d=google.com; s=arc-20160816; b=Bpn86WkiJbpTUm8aoKfQ3ZVJXbH5+XuetL9t4Oi6McBTRfFlkHk96Wxt7IyulvFP+l K4dQFDF1UEsagCAXxHRri4pvIxVPbnbxGyp03FuQliiFA8sKIC+E/X5WY+sKm2bBXJp1 vXDd9jJlM3Hx9Gu49VuvR+/Iwkp9nH7ixVZVEd2fY5DCIYyVHd8BLGNHoFV90rm51G2a OZmiuCxlVr242zUUSFPcuVebeOEoK3VDYy0wDDCqxJTtTw4HxCNmcCqZ7PovLLud1y9x 9Vntn3EjpRvDYnno1S31EAX1g002FG4Zy0ypAj9Tg+O6EoWUsVjGlfzrZajDJgd1qoNB TlXA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=FmT2fJZ1uVMYQwf4DRBAass7AtI+GiCA7uzNGZJ41H4=; b=hDLI6e83b1t9Rlx1dNGBYUpBs17r+ziYcbt14xC6QLHEUvY5Y8swnQSdTF3T16pV4U g5HYBZzkuGCB9y1IY7wUzDT6hd7WZRFKLN3Q7suskVw4wmY1oTQzhSH2mcdxCZLxCAqN Zt0yyzYpNW1XE35ic3vtbvwoBpoOEhiITUmzRBJ5LDfLKu+d3YRCZE1WjA9LwEomr3Qn GJdsqp0f7PwsW+0RLcQ8zNPsJdpvGOKO0g7PeFQ/BVO6xZ7zWWuR07fTepmmFiQ4xLTd ZBtauiSwJqd6FRN8EGGlVvC9ELxVpHWIAB4eF9fDUdbBNZQZh+/N50hbFWlne56dClrN FW5g== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=aJ3Haq27; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 39/99] target/arm: move arm_cpu_list to common_cpu
Date: Fri, 4 Jun 2021 16:52:12 +0100
Message-Id: <20210604155312.15902-40-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-common.c | 42 +++++++++++++++++++++++++++++++++++++++++ target/arm/tcg/helper.c | 41 ---------------------------------------- 2 files changed, 42 insertions(+), 41 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c index 93aea216cc..f4a3780e9e 100644 --- a/target/arm/cpu-common.c +++ b/target/arm/cpu-common.c @@ -8,6 +8,7 @@ #include "qemu/osdep.h" #include "qemu/log.h" +#include "qemu/qemu-print.h" #include "qom/object.h" #include "qapi/qapi-commands-machine-target.h" #include "qapi/error.h" @@ -353,3 +354,44 @@ uint64_t arm_sctlr(CPUARMState *env, int el) } return env->cp15.sctlr_el[el]; } + +/* Sort alphabetically by type name, except for "any". */ +static gint arm_cpu_list_compare(gconstpointer a, gconstpointer b) +{ + ObjectClass *class_a = (ObjectClass *)a; + ObjectClass *class_b = (ObjectClass *)b; + const char *name_a, *name_b; + + name_a = object_class_get_name(class_a); + name_b = object_class_get_name(class_b); + if (strcmp(name_a, "any-" TYPE_ARM_CPU) == 0) { + return 1; + } else if (strcmp(name_b, "any-" TYPE_ARM_CPU) == 0) { + return -1; + } else { + return strcmp(name_a, name_b); + } +} + +static void arm_cpu_list_entry(gpointer data, gpointer user_data) +{ + ObjectClass *oc = data; + const char *typename; + char *name; + + typename = object_class_get_name(oc); + name = g_strndup(typename, strlen(typename) - strlen("-" TYPE_ARM_CPU)); + qemu_printf(" %s\n", name); + g_free(name); +} + +void arm_cpu_list(void) +{ + GSList *list; + + list = object_class_get_list(TYPE_ARM_CPU, false); + list = g_slist_sort(list, arm_cpu_list_compare); + qemu_printf("Available CPUs:\n"); + g_slist_foreach(list, arm_cpu_list_entry, NULL); + g_slist_free(list); +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index b9ea043f20..0e3f403e56 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -552,47 +552,6 @@ void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu) } -/* Sort alphabetically by type name, except for "any". 
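 *
 * For illustration (the exact model names depend on the build), the listing
 * produced by arm_cpu_list(), typically reached via "-cpu help", looks like:
 *
 *   Available CPUs:
 *     arm1026
 *     cortex-a15
 *     cortex-a7
 *     ...
 *     any
 *
 * "any" appears last only because of the special case in
 * arm_cpu_list_compare(); a plain strcmp() ordering would have put it first.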
*/ -static gint arm_cpu_list_compare(gconstpointer a, gconstpointer b) -{ - ObjectClass *class_a = (ObjectClass *)a; - ObjectClass *class_b = (ObjectClass *)b; - const char *name_a, *name_b; - - name_a = object_class_get_name(class_a); - name_b = object_class_get_name(class_b); - if (strcmp(name_a, "any-" TYPE_ARM_CPU) == 0) { - return 1; - } else if (strcmp(name_b, "any-" TYPE_ARM_CPU) == 0) { - return -1; - } else { - return strcmp(name_a, name_b); - } -} - -static void arm_cpu_list_entry(gpointer data, gpointer user_data) -{ - ObjectClass *oc = data; - const char *typename; - char *name; - - typename = object_class_get_name(oc); - name = g_strndup(typename, strlen(typename) - strlen("-" TYPE_ARM_CPU)); - qemu_printf(" %s\n", name); - g_free(name); -} - -void arm_cpu_list(void) -{ - GSList *list; - - list = object_class_get_list(TYPE_ARM_CPU, false); - list = g_slist_sort(list, arm_cpu_list_compare); - qemu_printf("Available CPUs:\n"); - g_slist_foreach(list, arm_cpu_list_entry, NULL); - g_slist_free(list); -} - /* Sign/zero extend */ uint32_t HELPER(sxtb16)(uint32_t x) { From patchwork Fri Jun 4 15:52:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454116 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp598819jae; Fri, 4 Jun 2021 10:09:52 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzSX46iOcmMp5h0eNo0AnceLz/ky7PkvyCLM9YC03NtaLqjbjb0DeVbb9vXMGZRiBqAMwae X-Received: by 2002:a05:6402:1d06:: with SMTP id dg6mr5622619edb.132.1622826592721; Fri, 04 Jun 2021 10:09:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826592; cv=none; d=google.com; s=arc-20160816; b=M7zKt9T4st3NYxaGj3+c44d1m1cPLyu8HD8bNQy3NQ9apU/sS/crMWUGxA4aULggQ/ G0+rs1rJDq4BVKRdtcNZGTjmT3IiGilIpoZHJ5tnXTM1xq5C//Ypb40/ixaqwg4TwU07 UDyaxSg3VdDhOkarnti4dq4rN+X/Q4drZ8srAkUM+7NkQKLCD0huPH6jBpnSa8k679dv IlvpDl82TmG3015QKzsw7jPLzT+clYLux6kbGawRs5AN6Otca5eMo1YZtHq4sNOwNOXn n3++4Xf+hWJmOubobd2xwyqulHNQovmjMG+v5rBX7VUGlYFI7axBS7xhXppXUqiqDkXu OBrA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=sclbEbvsQp39DT+lWM9cgWgO57dq75LgW1XTapf4q3Q=; b=pfDj6f/thekpwLAVBSY5MX/pHTliaa4RC7PHuU8E3N5BI3zSZq2L+mw3ZWsCwKg6ZR +7LF/qkz3ycpHQxLsX6zMHu9LwXsVO7bpzhtxizPttZ/Da7a5CG8AdlBWD7b/TmiNs9v uJZ2AZNtKHNhNB7U/SXpASlSGap11YoTjHNwAet6JGX4Z1OeGLPThCEta3WsJua0pGv/ X+OrdKm9rKzPeezBuUYCAeYx0xl2mFwcA26lzKAbeKyBKFq++xoaUOgFzDOM/+8ersb5 RefRClpLj6K+59h7jZdErLRpVEXNQ+xjMvkJM6E2ekBP5l4YY6xTgQFwVCrngoIw+DPa ejmw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=XWV4BxMG; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 40/99] target/arm: move aarch64_sync_32_to_64 (and vv) to cpu code
Date: Fri, 4 Jun 2021 16:52:13 +0100
Message-Id: <20210604155312.15902-41-alex.bennee@linaro.org>
X-Mailer: git-send-email
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana and arm_phys_excp_target_el since it is tied up inside the same #ifdef block. aarch64_sync_32_to_64 and aarch64_sync_64_to_32 are mixed in with the TCG helpers, but they shouldn't, as they are needed for KVM too. kvm_arch_get_registers() { if (!is_a64(env)) { aarch64_sync_64_to_32(env); } write_kvmstate_to_list(cpu); write_list_to_cpustate(cpu); ... } kvm_arch_put_registers() { if (!is_a64(env)) { aarch64_sync_32_to_64(env); } write_cpustate_to_list(cpu, true); write_list_to_kvmstate(cpu, level) ... } Move to the cpu module. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-sysemu.c | 215 +++++++++++++++++++++++++++++++++++++ target/arm/cpu-user.c | 11 ++ target/arm/tcg/helper.c | 232 +--------------------------------------- 3 files changed, 229 insertions(+), 229 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 3add2c2439..7a314bf805 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -133,3 +133,218 @@ void switch_mode(CPUARMState *env, int mode) env->banked_r14[r14_bank_number(old_mode)] = env->regs[14]; env->regs[14] = env->banked_r14[r14_bank_number(mode)]; } + +/* + * Function used to synchronize QEMU's AArch64 register set with AArch32 + * register set. This is necessary when switching between AArch32 and AArch64 + * execution state. + */ +void aarch64_sync_32_to_64(CPUARMState *env) +{ + int i; + uint32_t mode = env->uncached_cpsr & CPSR_M; + + /* We can blanket copy R[0:7] to X[0:7] */ + for (i = 0; i < 8; i++) { + env->xregs[i] = env->regs[i]; + } + + /* + * Unless we are in FIQ mode, x8-x12 come from the user registers r8-r12. + * Otherwise, they come from the banked user regs. + */ + if (mode == ARM_CPU_MODE_FIQ) { + for (i = 8; i < 13; i++) { + env->xregs[i] = env->usr_regs[i - 8]; + } + } else { + for (i = 8; i < 13; i++) { + env->xregs[i] = env->regs[i]; + } + } + + /* + * Registers x13-x23 are the various mode SP and FP registers. Registers + * r13 and r14 are only copied if we are in that mode, otherwise we copy + * from the mode banked register. 
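 *
 * A summary of the assignments below (a reading aid, not part of the patch):
 * x13/x14 take SP/LR of USR (or the live r13/r14 when already in USR or SYS
 * mode), x15 takes SP_hyp, x16/x17 take LR/SP of IRQ, x18/x19 of SVC,
 * x20/x21 of ABT, x22/x23 of UND, and x24-x30 take r8-r14 of FIQ.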
+ */ + if (mode == ARM_CPU_MODE_USR || mode == ARM_CPU_MODE_SYS) { + env->xregs[13] = env->regs[13]; + env->xregs[14] = env->regs[14]; + } else { + env->xregs[13] = env->banked_r13[bank_number(ARM_CPU_MODE_USR)]; + /* HYP is an exception in that it is copied from r14 */ + if (mode == ARM_CPU_MODE_HYP) { + env->xregs[14] = env->regs[14]; + } else { + env->xregs[14] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)]; + } + } + + if (mode == ARM_CPU_MODE_HYP) { + env->xregs[15] = env->regs[13]; + } else { + env->xregs[15] = env->banked_r13[bank_number(ARM_CPU_MODE_HYP)]; + } + + if (mode == ARM_CPU_MODE_IRQ) { + env->xregs[16] = env->regs[14]; + env->xregs[17] = env->regs[13]; + } else { + env->xregs[16] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)]; + env->xregs[17] = env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)]; + } + + if (mode == ARM_CPU_MODE_SVC) { + env->xregs[18] = env->regs[14]; + env->xregs[19] = env->regs[13]; + } else { + env->xregs[18] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)]; + env->xregs[19] = env->banked_r13[bank_number(ARM_CPU_MODE_SVC)]; + } + + if (mode == ARM_CPU_MODE_ABT) { + env->xregs[20] = env->regs[14]; + env->xregs[21] = env->regs[13]; + } else { + env->xregs[20] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)]; + env->xregs[21] = env->banked_r13[bank_number(ARM_CPU_MODE_ABT)]; + } + + if (mode == ARM_CPU_MODE_UND) { + env->xregs[22] = env->regs[14]; + env->xregs[23] = env->regs[13]; + } else { + env->xregs[22] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)]; + env->xregs[23] = env->banked_r13[bank_number(ARM_CPU_MODE_UND)]; + } + + /* + * Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ + * mode, then we can copy from r8-r14. Otherwise, we copy from the + * FIQ bank for r8-r14. + */ + if (mode == ARM_CPU_MODE_FIQ) { + for (i = 24; i < 31; i++) { + env->xregs[i] = env->regs[i - 16]; /* X[24:30] <- R[8:14] */ + } + } else { + for (i = 24; i < 29; i++) { + env->xregs[i] = env->fiq_regs[i - 24]; + } + env->xregs[29] = env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)]; + env->xregs[30] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)]; + } + + env->pc = env->regs[15]; +} + +/* + * Function used to synchronize QEMU's AArch32 register set with AArch64 + * register set. This is necessary when switching between AArch32 and AArch64 + * execution state. + */ +void aarch64_sync_64_to_32(CPUARMState *env) +{ + int i; + uint32_t mode = env->uncached_cpsr & CPSR_M; + + /* We can blanket copy X[0:7] to R[0:7] */ + for (i = 0; i < 8; i++) { + env->regs[i] = env->xregs[i]; + } + + /* + * Unless we are in FIQ mode, r8-r12 come from the user registers x8-x12. + * Otherwise, we copy x8-x12 into the banked user regs. + */ + if (mode == ARM_CPU_MODE_FIQ) { + for (i = 8; i < 13; i++) { + env->usr_regs[i - 8] = env->xregs[i]; + } + } else { + for (i = 8; i < 13; i++) { + env->regs[i] = env->xregs[i]; + } + } + + /* + * Registers r13 & r14 depend on the current mode. + * If we are in a given mode, we copy the corresponding x registers to r13 + * and r14. Otherwise, we copy the x register to the banked r13 and r14 + * for the mode. 
+ */ + if (mode == ARM_CPU_MODE_USR || mode == ARM_CPU_MODE_SYS) { + env->regs[13] = env->xregs[13]; + env->regs[14] = env->xregs[14]; + } else { + env->banked_r13[bank_number(ARM_CPU_MODE_USR)] = env->xregs[13]; + + /* + * HYP is an exception in that it does not have its own banked r14 but + * shares the USR r14 + */ + if (mode == ARM_CPU_MODE_HYP) { + env->regs[14] = env->xregs[14]; + } else { + env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)] = env->xregs[14]; + } + } + + if (mode == ARM_CPU_MODE_HYP) { + env->regs[13] = env->xregs[15]; + } else { + env->banked_r13[bank_number(ARM_CPU_MODE_HYP)] = env->xregs[15]; + } + + if (mode == ARM_CPU_MODE_IRQ) { + env->regs[14] = env->xregs[16]; + env->regs[13] = env->xregs[17]; + } else { + env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[16]; + env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[17]; + } + + if (mode == ARM_CPU_MODE_SVC) { + env->regs[14] = env->xregs[18]; + env->regs[13] = env->xregs[19]; + } else { + env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)] = env->xregs[18]; + env->banked_r13[bank_number(ARM_CPU_MODE_SVC)] = env->xregs[19]; + } + + if (mode == ARM_CPU_MODE_ABT) { + env->regs[14] = env->xregs[20]; + env->regs[13] = env->xregs[21]; + } else { + env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)] = env->xregs[20]; + env->banked_r13[bank_number(ARM_CPU_MODE_ABT)] = env->xregs[21]; + } + + if (mode == ARM_CPU_MODE_UND) { + env->regs[14] = env->xregs[22]; + env->regs[13] = env->xregs[23]; + } else { + env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)] = env->xregs[22]; + env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23]; + } + + /* + * Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ + * mode, then we can copy to r8-r14. Otherwise, we copy to the + * FIQ bank for r8-r14. + */ + if (mode == ARM_CPU_MODE_FIQ) { + for (i = 24; i < 31; i++) { + env->regs[i - 16] = env->xregs[i]; /* X[24:30] -> R[8:14] */ + } + } else { + for (i = 24; i < 29; i++) { + env->fiq_regs[i - 24] = env->xregs[i]; + } + env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[29]; + env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[30]; + } + + env->regs[15] = env->pc; +} diff --git a/target/arm/cpu-user.c b/target/arm/cpu-user.c index a72b7f5703..0225089e46 100644 --- a/target/arm/cpu-user.c +++ b/target/arm/cpu-user.c @@ -22,3 +22,14 @@ void switch_mode(CPUARMState *env, int mode) cpu_abort(CPU(cpu), "Tried to switch out of user mode\n"); } } + +void aarch64_sync_64_to_32(CPUARMState *env) +{ + g_assert_not_reached(); +} + +uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx, + uint32_t cur_el, bool secure) +{ + return 1; +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 0e3f403e56..9dd83911f2 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -590,22 +590,10 @@ uint32_t HELPER(rbit)(uint32_t x) return revbit32(x); } -#ifdef CONFIG_USER_ONLY - -uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx, - uint32_t cur_el, bool secure) -{ - return 1; -} - -void aarch64_sync_64_to_32(CPUARMState *env) -{ - g_assert_not_reached(); -} - -#else +#ifndef CONFIG_USER_ONLY -/* Physical Interrupt Target EL Lookup Table +/* + * Physical Interrupt Target EL Lookup Table * * [ From ARM ARM section G1.13.4 (Table G1-15) ] * @@ -754,220 +742,6 @@ void arm_log_exception(int idx) } } -/* - * Function used to synchronize QEMU's AArch64 register set with AArch32 - * register set. 
This is necessary when switching between AArch32 and AArch64 - * execution state. - */ -void aarch64_sync_32_to_64(CPUARMState *env) -{ - int i; - uint32_t mode = env->uncached_cpsr & CPSR_M; - - /* We can blanket copy R[0:7] to X[0:7] */ - for (i = 0; i < 8; i++) { - env->xregs[i] = env->regs[i]; - } - - /* - * Unless we are in FIQ mode, x8-x12 come from the user registers r8-r12. - * Otherwise, they come from the banked user regs. - */ - if (mode == ARM_CPU_MODE_FIQ) { - for (i = 8; i < 13; i++) { - env->xregs[i] = env->usr_regs[i - 8]; - } - } else { - for (i = 8; i < 13; i++) { - env->xregs[i] = env->regs[i]; - } - } - - /* - * Registers x13-x23 are the various mode SP and FP registers. Registers - * r13 and r14 are only copied if we are in that mode, otherwise we copy - * from the mode banked register. - */ - if (mode == ARM_CPU_MODE_USR || mode == ARM_CPU_MODE_SYS) { - env->xregs[13] = env->regs[13]; - env->xregs[14] = env->regs[14]; - } else { - env->xregs[13] = env->banked_r13[bank_number(ARM_CPU_MODE_USR)]; - /* HYP is an exception in that it is copied from r14 */ - if (mode == ARM_CPU_MODE_HYP) { - env->xregs[14] = env->regs[14]; - } else { - env->xregs[14] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)]; - } - } - - if (mode == ARM_CPU_MODE_HYP) { - env->xregs[15] = env->regs[13]; - } else { - env->xregs[15] = env->banked_r13[bank_number(ARM_CPU_MODE_HYP)]; - } - - if (mode == ARM_CPU_MODE_IRQ) { - env->xregs[16] = env->regs[14]; - env->xregs[17] = env->regs[13]; - } else { - env->xregs[16] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)]; - env->xregs[17] = env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)]; - } - - if (mode == ARM_CPU_MODE_SVC) { - env->xregs[18] = env->regs[14]; - env->xregs[19] = env->regs[13]; - } else { - env->xregs[18] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)]; - env->xregs[19] = env->banked_r13[bank_number(ARM_CPU_MODE_SVC)]; - } - - if (mode == ARM_CPU_MODE_ABT) { - env->xregs[20] = env->regs[14]; - env->xregs[21] = env->regs[13]; - } else { - env->xregs[20] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)]; - env->xregs[21] = env->banked_r13[bank_number(ARM_CPU_MODE_ABT)]; - } - - if (mode == ARM_CPU_MODE_UND) { - env->xregs[22] = env->regs[14]; - env->xregs[23] = env->regs[13]; - } else { - env->xregs[22] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)]; - env->xregs[23] = env->banked_r13[bank_number(ARM_CPU_MODE_UND)]; - } - - /* - * Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ - * mode, then we can copy from r8-r14. Otherwise, we copy from the - * FIQ bank for r8-r14. - */ - if (mode == ARM_CPU_MODE_FIQ) { - for (i = 24; i < 31; i++) { - env->xregs[i] = env->regs[i - 16]; /* X[24:30] <- R[8:14] */ - } - } else { - for (i = 24; i < 29; i++) { - env->xregs[i] = env->fiq_regs[i - 24]; - } - env->xregs[29] = env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)]; - env->xregs[30] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)]; - } - - env->pc = env->regs[15]; -} - -/* - * Function used to synchronize QEMU's AArch32 register set with AArch64 - * register set. This is necessary when switching between AArch32 and AArch64 - * execution state. - */ -void aarch64_sync_64_to_32(CPUARMState *env) -{ - int i; - uint32_t mode = env->uncached_cpsr & CPSR_M; - - /* We can blanket copy X[0:7] to R[0:7] */ - for (i = 0; i < 8; i++) { - env->regs[i] = env->xregs[i]; - } - - /* - * Unless we are in FIQ mode, r8-r12 come from the user registers x8-x12. - * Otherwise, we copy x8-x12 into the banked user regs. 
- */ - if (mode == ARM_CPU_MODE_FIQ) { - for (i = 8; i < 13; i++) { - env->usr_regs[i - 8] = env->xregs[i]; - } - } else { - for (i = 8; i < 13; i++) { - env->regs[i] = env->xregs[i]; - } - } - - /* - * Registers r13 & r14 depend on the current mode. - * If we are in a given mode, we copy the corresponding x registers to r13 - * and r14. Otherwise, we copy the x register to the banked r13 and r14 - * for the mode. - */ - if (mode == ARM_CPU_MODE_USR || mode == ARM_CPU_MODE_SYS) { - env->regs[13] = env->xregs[13]; - env->regs[14] = env->xregs[14]; - } else { - env->banked_r13[bank_number(ARM_CPU_MODE_USR)] = env->xregs[13]; - - /* - * HYP is an exception in that it does not have its own banked r14 but - * shares the USR r14 - */ - if (mode == ARM_CPU_MODE_HYP) { - env->regs[14] = env->xregs[14]; - } else { - env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)] = env->xregs[14]; - } - } - - if (mode == ARM_CPU_MODE_HYP) { - env->regs[13] = env->xregs[15]; - } else { - env->banked_r13[bank_number(ARM_CPU_MODE_HYP)] = env->xregs[15]; - } - - if (mode == ARM_CPU_MODE_IRQ) { - env->regs[14] = env->xregs[16]; - env->regs[13] = env->xregs[17]; - } else { - env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[16]; - env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[17]; - } - - if (mode == ARM_CPU_MODE_SVC) { - env->regs[14] = env->xregs[18]; - env->regs[13] = env->xregs[19]; - } else { - env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)] = env->xregs[18]; - env->banked_r13[bank_number(ARM_CPU_MODE_SVC)] = env->xregs[19]; - } - - if (mode == ARM_CPU_MODE_ABT) { - env->regs[14] = env->xregs[20]; - env->regs[13] = env->xregs[21]; - } else { - env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)] = env->xregs[20]; - env->banked_r13[bank_number(ARM_CPU_MODE_ABT)] = env->xregs[21]; - } - - if (mode == ARM_CPU_MODE_UND) { - env->regs[14] = env->xregs[22]; - env->regs[13] = env->xregs[23]; - } else { - env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)] = env->xregs[22]; - env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23]; - } - - /* Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ - * mode, then we can copy to r8-r14. Otherwise, we copy to the - * FIQ bank for r8-r14. 
- */ - if (mode == ARM_CPU_MODE_FIQ) { - for (i = 24; i < 31; i++) { - env->regs[i - 16] = env->xregs[i]; /* X[24:30] -> R[8:14] */ - } - } else { - for (i = 24; i < 29; i++) { - env->fiq_regs[i - 24] = env->xregs[i]; - } - env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[29]; - env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)] = env->xregs[30]; - } - - env->regs[15] = env->pc; -} - static void take_aarch32_exception(CPUARMState *env, int new_mode, uint32_t mask, uint32_t offset, uint32_t newpc) From patchwork Fri Jun 4 15:52:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454093 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp572930jae; Fri, 4 Jun 2021 09:38:16 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzgy8+BKT1rBZpFDr/XVesfAgbZJ14W1DpwQacageeZnc0S8daCSM53r3/gaA7zZxPaqG4A X-Received: by 2002:a1f:aed7:: with SMTP id x206mr3131321vke.12.1622824696294; Fri, 04 Jun 2021 09:38:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824696; cv=none; d=google.com; s=arc-20160816; b=lNq/tpIsoQIxX24ZPHr6NaOVkI6rf79Mn07WlmDaeoC1o2okGOgL53t0zTQOAbUPPa F0iGiIPIacWwOvN9j9Lk/ueuqCWckxE5Ru3PFbTETMMwy8EOpSbkzw0vtMnKTNTchP7A un80E4u0AxV3TEfyJa18oleYjMCIVi+7YkG5tb8AYBPw0qaATUY9Tyyracenf2I3q8Yn HXb8YGJWQIyJdaawuTIuIbH3vUaGYpcpsyCfZR/NBfVoTIkz5o3+4TR4U3vT9T1D5wE9 4Y75lQLFGnW6zYefrlJKC21AmXCZhX4cDQyxtQRfvO3WAoIdpT23h+STytUdUtX8fNrz Fv+A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=YEMn/TZVYqcd/40Lz0kIER2IvEdqHAsgJm9milarnoM=; b=ohV9JjLlbtr7e9hnVmm7SaJxKVH7FD7ooaF61mpPmtw+8E+Qabnbr5nL/vljS6v1YU 4W4z5mnJfBZmV0ZlEHAKnE3UsYQSKL/ogrlVdL8hsvardQCgbhD9iDb8OlyhDiE3SqPA a4tJnW+b2OdEZv57g1rnxnh3QP3cNIQ2Xzd/GEkXw0aV+82gLiYUi4btkTa3ALWg41al ey/8Srx+Z51SldO78MEyLCdCZIUKJixvjaslRdlB5Erhz45W7z7qvrqTn25yUvB84XO2 ol/dWyTCxh3bf5O2xWczBsni7KtehAdjNE/Ht1fhliv5eSOowYYauFnjnXIH8HdRrGVp 1M3g== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=qoeJ8YdJ; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 41/99] target/arm: new cpu32 ARM 32 bit CPU Class
Date: Fri, 4 Jun 2021 16:52:14 +0100
Message-Id: <20210604155312.15902-42-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
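The patch below introduces arm32_cpu_register() as the registration entry
point for 32-bit CPU models. As a minimal sketch of how a model is registered
through it (the model name and init function here are only illustrative; they
stand in for the entries cpu_tcg.c already carries):

    static const ARMCPUInfo example_arm32_cpu = {
        .name = "cortex-a7",            /* illustrative model name */
        .initfn = cortex_a7_initfn,     /* existing instance init hook */
    };

    /* replaces the old arm_cpu_register(&example_arm32_cpu) call */
    arm32_cpu_register(&example_arm32_cpu);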
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana In the ARM CPU class hierarchy, the ancestor TYPE_ARM_CPU is fundamentally a 32 bit CPU Class. The child TYPE_AARCH64_CPU overrides the class to make it a 64 bit CPU Class. Explicitly put the 32bit CPU Class implementation in a cpu32.c, along with the 32bit CPU Class model registration function. In later changes, we will further split 32bit and 64bit code. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-qom.h | 3 -- target/arm/cpu32.h | 28 ++++++++++ target/arm/cpu.c | 55 ++----------------- target/arm/cpu32.c | 118 +++++++++++++++++++++++++++++++++++++++++ target/arm/cpu64.c | 2 +- target/arm/cpu_tcg.c | 3 +- target/arm/meson.build | 6 ++- 7 files changed, 159 insertions(+), 56 deletions(-) create mode 100644 target/arm/cpu32.h create mode 100644 target/arm/cpu32.c -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h index a22bd506d0..0d41a346b9 100644 --- a/target/arm/cpu-qom.h +++ b/target/arm/cpu-qom.h @@ -38,9 +38,6 @@ typedef struct ARMCPUInfo { void (*class_init)(ObjectClass *oc, void *data); } ARMCPUInfo; -void arm_cpu_register(const ARMCPUInfo *info); -void aarch64_cpu_register(const ARMCPUInfo *info); - /** * ARMCPUClass: * @parent_realize: The parent class' realize handler. diff --git a/target/arm/cpu32.h b/target/arm/cpu32.h new file mode 100644 index 0000000000..211fad6f55 --- /dev/null +++ b/target/arm/cpu32.h @@ -0,0 +1,28 @@ +/* + * QEMU ARM CPU models (32bit) + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#ifndef ARM_CPU32_H +#define ARM_CPU32_H + +void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags); +void arm32_cpu_class_init(ObjectClass *oc, void *data); +void arm32_cpu_register(const ARMCPUInfo *info); + +#endif /* ARM_CPU32_H */ diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 7bb406efd2..b9b300944d 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -30,6 +30,7 @@ #ifdef CONFIG_TCG #include "hw/core/tcg-cpu-ops.h" #endif /* CONFIG_TCG */ +#include "cpu32.h" #include "internals.h" #include "exec/exec-all.h" #include "hw/qdev-properties.h" @@ -853,7 +854,7 @@ static inline void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags) #endif -static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags) +void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags) { ARMCPU *cpu = ARM_CPU(cs); CPUARMState *env = &cpu->env; @@ -1856,17 +1857,6 @@ static Property arm_cpu_properties[] = { DEFINE_PROP_END_OF_LIST() }; -static gchar *arm_gdb_arch_name(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - - if (arm_feature(env, ARM_FEATURE_IWMMXT)) { - return g_strdup("iwmmxt"); - } - return g_strdup("arm"); -} - #ifndef CONFIG_USER_ONLY #include "hw/core/sysemu-cpu-ops.h" @@ -1912,16 +1902,11 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data) cc->class_by_name = arm_cpu_class_by_name; cc->has_work = arm_cpu_has_work; - cc->dump_state = arm_cpu_dump_state; cc->set_pc = arm_cpu_set_pc; - cc->gdb_read_register = arm_cpu_gdb_read_register; - cc->gdb_write_register = arm_cpu_gdb_write_register; #ifndef CONFIG_USER_ONLY cc->sysemu_ops = &arm_sysemu_ops; #endif - cc->gdb_num_core_regs = 26; - cc->gdb_core_xml_file = "arm-core.xml"; - cc->gdb_arch_name = arm_gdb_arch_name; + cc->gdb_get_dynamic_xml = arm_gdb_get_dynamic_xml; cc->gdb_stop_before_watchpoint = true; cc->disas_set_info = arm_disas_set_info; @@ -1929,6 +1914,8 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data) #ifdef CONFIG_TCG cc->tcg_ops = &arm_tcg_ops; #endif /* CONFIG_TCG */ + + arm32_cpu_class_init(oc, data); } #ifdef CONFIG_KVM @@ -1951,38 +1938,6 @@ static const TypeInfo host_arm_cpu_type_info = { #endif -static void arm_cpu_instance_init(Object *obj) -{ - ARMCPUClass *acc = ARM_CPU_GET_CLASS(obj); - - acc->info->initfn(obj); - arm_cpu_post_init(obj); -} - -static void cpu_register_class_init(ObjectClass *oc, void *data) -{ - ARMCPUClass *acc = ARM_CPU_CLASS(oc); - - acc->info = data; -} - -void arm_cpu_register(const ARMCPUInfo *info) -{ - TypeInfo type_info = { - .parent = TYPE_ARM_CPU, - .instance_size = sizeof(ARMCPU), - .instance_align = __alignof__(ARMCPU), - .instance_init = arm_cpu_instance_init, - .class_size = sizeof(ARMCPUClass), - .class_init = info->class_init ?: cpu_register_class_init, - .class_data = (void *)info, - }; - - type_info.name = g_strdup_printf("%s-" TYPE_ARM_CPU, info->name); - type_register(&type_info); - g_free((void *)type_info.name); -} - static const TypeInfo arm_cpu_type_info = { .name = TYPE_ARM_CPU, .parent = TYPE_CPU, diff --git a/target/arm/cpu32.c b/target/arm/cpu32.c new file mode 100644 index 0000000000..39fb112a04 --- /dev/null +++ b/target/arm/cpu32.c @@ -0,0 +1,118 @@ +/* + * QEMU ARM CPU models (32bit) + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as 
published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qemu/qemu-print.h" +#include "qemu-common.h" +#include "target/arm/idau.h" +#include "qemu/module.h" +#include "qapi/error.h" +#include "qapi/visitor.h" +#include "cpu.h" +#include "cpregs.h" +#include "internals.h" +#include "exec/exec-all.h" +#include "hw/qdev-properties.h" +#if !defined(CONFIG_USER_ONLY) +#include "hw/loader.h" +#include "hw/boards.h" +#endif +#include "sysemu/sysemu.h" +#include "sysemu/tcg.h" +#include "sysemu/hw_accel.h" +#include "kvm_arm.h" +#include "disas/capstone.h" +#include "fpu/softfloat.h" +#include "cpu-mmu.h" +#include "cpu32.h" + +/* we can move this to tcg/ after the cleanup of ARM boards configurations */ +static const ARMCPUInfo arm32_cpus[] = { +}; + +static gchar *arm_gdb_arch_name(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + if (arm_feature(env, ARM_FEATURE_IWMMXT)) { + return g_strdup("iwmmxt"); + } + return g_strdup("arm"); +} + +void arm32_cpu_class_init(ObjectClass *oc, void *data) +{ + CPUClass *cc = CPU_CLASS(oc); + + cc->gdb_read_register = arm_cpu_gdb_read_register; + cc->gdb_write_register = arm_cpu_gdb_write_register; + cc->gdb_num_core_regs = 26; + cc->gdb_core_xml_file = "arm-core.xml"; + cc->gdb_arch_name = arm_gdb_arch_name; + cc->dump_state = arm_cpu_dump_state; +} + +static void arm32_cpu_instance_init(Object *obj) +{ + ARMCPUClass *acc = ARM_CPU_GET_CLASS(obj); + + acc->info->initfn(obj); + arm_cpu_post_init(obj); +} + +static void arm32_cpu_register_class_init(ObjectClass *oc, void *data) +{ + ARMCPUClass *acc = ARM_CPU_CLASS(oc); + + acc->info = data; +} + +void arm32_cpu_register(const ARMCPUInfo *info) +{ + TypeInfo type_info = { + .parent = TYPE_ARM_CPU, + .instance_size = sizeof(ARMCPU), + .instance_align = __alignof__(ARMCPU), + .instance_init = arm32_cpu_instance_init, + .class_size = sizeof(ARMCPUClass), + .class_init = info->class_init ?: arm32_cpu_register_class_init, + .class_data = (void *)info, + }; + + type_info.name = g_strdup_printf("%s-" TYPE_ARM_CPU, info->name); + type_register(&type_info); + g_free((void *)type_info.name); +} + +static void arm32_cpu_register_types(void) +{ + const size_t cpu_count = ARRAY_SIZE(arm32_cpus); + + if (cpu_count) { + size_t i; + + for (i = 0; i < cpu_count; ++i) { + arm32_cpu_register(&arm32_cpus[i]); + } + } +} + +type_init(arm32_cpu_register_types) diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index 5354069c63..4ff55fb0f0 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -860,7 +860,7 @@ static void cpu_register_class_init(ObjectClass *oc, void *data) acc->info = data; } -void aarch64_cpu_register(const ARMCPUInfo *info) +static void aarch64_cpu_register(const ARMCPUInfo *info) { TypeInfo type_info = { .parent = TYPE_AARCH64_CPU, diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c index d973239d78..09eff9bfd2 100644 --- a/target/arm/cpu_tcg.c +++ b/target/arm/cpu_tcg.c @@ -19,6 +19,7 @@ #include "hw/boards.h" #endif #include "cpregs.h" +#include "cpu32.h" /* CPU models. 
These are not needed for the AArch64 linux-user build. */ #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64) @@ -1072,7 +1073,7 @@ static void arm_tcg_cpu_register_types(void) type_register_static(&idau_interface_type_info); for (i = 0; i < ARRAY_SIZE(arm_tcg_cpus); ++i) { - arm_cpu_register(&arm_tcg_cpus[i]); + arm32_cpu_register(&arm_tcg_cpus[i]); } } diff --git a/target/arm/meson.build b/target/arm/meson.build index 4bc44e1db2..0ccd2fb0bc 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -2,12 +2,12 @@ arm_ss = ss.source_set() arm_ss.add(files( 'cpregs.c', 'cpu.c', + 'cpu32.c', 'cpu-common.c', 'cpu-mmu.c', 'cpu-vfp.c', 'cpustate-list.c', 'gdbstub.c', - 'cpu_tcg.c', )) arm_ss.add(zlib) @@ -18,6 +18,10 @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files( 'gdbstub64.c', )) +arm_ss.add(when: 'CONFIG_TCG', if_true: files( + 'cpu_tcg.c', +)) + arm_softmmu_ss = ss.source_set() arm_softmmu_ss.add(files( 'arch_dump.c', From patchwork Fri Jun 4 15:52:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454145
From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 42/99] target/arm: split 32bit and 64bit arm dump state Date: Fri, 4 Jun 2021 16:52:15 +0100 Message-Id: <20210604155312.15902-43-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::429; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x429.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu32.h | 2 +- target/arm/cpu.c | 225 --------------------------------------------- target/arm/cpu32.c | 85 ++++++++++++++++- target/arm/cpu64.c | 142 ++++++++++++++++++++++++++++ 4 files changed, 227 insertions(+), 227 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu32.h b/target/arm/cpu32.h index 211fad6f55..128d0c9247 100644 --- a/target/arm/cpu32.h +++ b/target/arm/cpu32.h @@ -21,7 +21,7 @@ #ifndef ARM_CPU32_H #define ARM_CPU32_H -void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags); +void arm32_cpu_dump_state(CPUState *cs, FILE *f, int flags); void arm32_cpu_class_init(ObjectClass *oc, void *data); void arm32_cpu_register(const ARMCPUInfo *info); diff --git a/target/arm/cpu.c b/target/arm/cpu.c index b9b300944d..97d562bbd5 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -19,7 +19,6 @@ */ #include "qemu/osdep.h" -#include "qemu/qemu-print.h" #include "qemu-common.h" #include "target/arm/idau.h" #include "qemu/module.h" @@ -716,230 +715,6 @@ static void arm_disas_set_info(CPUState *cpu, disassemble_info *info) #endif } -#ifdef TARGET_AARCH64 - -static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - uint32_t psr = pstate_read(env); - int i; - int el = arm_current_el(env); - const char *ns_status; - - qemu_fprintf(f, " PC=%016" PRIx64 " ", env->pc); - for (i = 0; i < 32; i++) { - if (i == 31) { - qemu_fprintf(f, " SP=%016" PRIx64 "\n", env->xregs[i]); - } else { - qemu_fprintf(f, "X%02d=%016" PRIx64 "%s", i, env->xregs[i], - (i + 2) % 3 ? " " : "\n"); - } - } - - if (arm_feature(env, ARM_FEATURE_EL3) && el != 3) { - ns_status = env->cp15.scr_el3 & SCR_NS ? "NS " : "S "; - } else { - ns_status = ""; - } - qemu_fprintf(f, "PSTATE=%08x %c%c%c%c %sEL%d%c", - psr, - psr & PSTATE_N ? 'N' : '-', - psr & PSTATE_Z ? 'Z' : '-', - psr & PSTATE_C ? 'C' : '-', - psr & PSTATE_V ? 'V' : '-', - ns_status, - el, - psr & PSTATE_SP ? 
'h' : 't'); - - if (cpu_isar_feature(aa64_bti, cpu)) { - qemu_fprintf(f, " BTYPE=%d", (psr & PSTATE_BTYPE) >> 10); - } - if (!(flags & CPU_DUMP_FPU)) { - qemu_fprintf(f, "\n"); - return; - } - if (fp_exception_el(env, el) != 0) { - qemu_fprintf(f, " FPU disabled\n"); - return; - } - qemu_fprintf(f, " FPCR=%08x FPSR=%08x\n", - vfp_get_fpcr(env), vfp_get_fpsr(env)); - - if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) { - int j, zcr_len = sve_zcr_len_for_el(env, el); - - for (i = 0; i <= FFR_PRED_NUM; i++) { - bool eol; - if (i == FFR_PRED_NUM) { - qemu_fprintf(f, "FFR="); - /* It's last, so end the line. */ - eol = true; - } else { - qemu_fprintf(f, "P%02d=", i); - switch (zcr_len) { - case 0: - eol = i % 8 == 7; - break; - case 1: - eol = i % 6 == 5; - break; - case 2: - case 3: - eol = i % 3 == 2; - break; - default: - /* More than one quadword per predicate. */ - eol = true; - break; - } - } - for (j = zcr_len / 4; j >= 0; j--) { - int digits; - if (j * 4 + 4 <= zcr_len + 1) { - digits = 16; - } else { - digits = (zcr_len % 4 + 1) * 4; - } - qemu_fprintf(f, "%0*" PRIx64 "%s", digits, - env->vfp.pregs[i].p[j], - j ? ":" : eol ? "\n" : " "); - } - } - - for (i = 0; i < 32; i++) { - if (zcr_len == 0) { - qemu_fprintf(f, "Z%02d=%016" PRIx64 ":%016" PRIx64 "%s", - i, env->vfp.zregs[i].d[1], - env->vfp.zregs[i].d[0], i & 1 ? "\n" : " "); - } else if (zcr_len == 1) { - qemu_fprintf(f, "Z%02d=%016" PRIx64 ":%016" PRIx64 - ":%016" PRIx64 ":%016" PRIx64 "\n", - i, env->vfp.zregs[i].d[3], env->vfp.zregs[i].d[2], - env->vfp.zregs[i].d[1], env->vfp.zregs[i].d[0]); - } else { - for (j = zcr_len; j >= 0; j--) { - bool odd = (zcr_len - j) % 2 != 0; - if (j == zcr_len) { - qemu_fprintf(f, "Z%02d[%x-%x]=", i, j, j - 1); - } else if (!odd) { - if (j > 0) { - qemu_fprintf(f, " [%x-%x]=", j, j - 1); - } else { - qemu_fprintf(f, " [%x]=", j); - } - } - qemu_fprintf(f, "%016" PRIx64 ":%016" PRIx64 "%s", - env->vfp.zregs[i].d[j * 2 + 1], - env->vfp.zregs[i].d[j * 2], - odd || j == 0 ? "\n" : ":"); - } - } - } - } else { - for (i = 0; i < 32; i++) { - uint64_t *q = aa64_vfp_qreg(env, i); - qemu_fprintf(f, "Q%02d=%016" PRIx64 ":%016" PRIx64 "%s", - i, q[1], q[0], (i & 1 ? "\n" : " ")); - } - } -} - -#else - -static inline void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags) -{ - g_assert_not_reached(); -} - -#endif - -void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - int i; - - if (is_a64(env)) { - aarch64_cpu_dump_state(cs, f, flags); - return; - } - - for (i = 0; i < 16; i++) { - qemu_fprintf(f, "R%02d=%08x", i, env->regs[i]); - if ((i % 4) == 3) { - qemu_fprintf(f, "\n"); - } else { - qemu_fprintf(f, " "); - } - } - - if (arm_feature(env, ARM_FEATURE_M)) { - uint32_t xpsr = xpsr_read(env); - const char *mode; - const char *ns_status = ""; - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - ns_status = env->v7m.secure ? "S " : "NS "; - } - - if (xpsr & XPSR_EXCP) { - mode = "handler"; - } else { - if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_NPRIV_MASK) { - mode = "unpriv-thread"; - } else { - mode = "priv-thread"; - } - } - - qemu_fprintf(f, "XPSR=%08x %c%c%c%c %c %s%s\n", - xpsr, - xpsr & XPSR_N ? 'N' : '-', - xpsr & XPSR_Z ? 'Z' : '-', - xpsr & XPSR_C ? 'C' : '-', - xpsr & XPSR_V ? 'V' : '-', - xpsr & XPSR_T ? 
'T' : 'A', - ns_status, - mode); - } else { - uint32_t psr = cpsr_read(env); - const char *ns_status = ""; - - if (arm_feature(env, ARM_FEATURE_EL3) && - (psr & CPSR_M) != ARM_CPU_MODE_MON) { - ns_status = env->cp15.scr_el3 & SCR_NS ? "NS " : "S "; - } - - qemu_fprintf(f, "PSR=%08x %c%c%c%c %c %s%s%d\n", - psr, - psr & CPSR_N ? 'N' : '-', - psr & CPSR_Z ? 'Z' : '-', - psr & CPSR_C ? 'C' : '-', - psr & CPSR_V ? 'V' : '-', - psr & CPSR_T ? 'T' : 'A', - ns_status, - aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26); - } - - if (flags & CPU_DUMP_FPU) { - int numvfpregs = 0; - if (cpu_isar_feature(aa32_simd_r32, cpu)) { - numvfpregs = 32; - } else if (cpu_isar_feature(aa32_vfp_simd, cpu)) { - numvfpregs = 16; - } - for (i = 0; i < numvfpregs; i++) { - uint64_t v = *aa32_vfp_dreg(env, i); - qemu_fprintf(f, "s%02d=%08x s%02d=%08x d%02d=%016" PRIx64 "\n", - i * 2, (uint32_t)v, - i * 2 + 1, (uint32_t)(v >> 32), - i, v); - } - qemu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env)); - } -} - uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz) { uint32_t Aff1 = idx / clustersz; diff --git a/target/arm/cpu32.c b/target/arm/cpu32.c index 39fb112a04..c03f420ba2 100644 --- a/target/arm/cpu32.c +++ b/target/arm/cpu32.c @@ -58,6 +58,89 @@ static gchar *arm_gdb_arch_name(CPUState *cs) return g_strdup("arm"); } +void arm32_cpu_dump_state(CPUState *cs, FILE *f, int flags) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + int i; + + assert(!is_a64(env)); + + for (i = 0; i < 16; i++) { + qemu_fprintf(f, "R%02d=%08x", i, env->regs[i]); + if ((i % 4) == 3) { + qemu_fprintf(f, "\n"); + } else { + qemu_fprintf(f, " "); + } + } + + if (arm_feature(env, ARM_FEATURE_M)) { + uint32_t xpsr = xpsr_read(env); + const char *mode; + const char *ns_status = ""; + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + ns_status = env->v7m.secure ? "S " : "NS "; + } + + if (xpsr & XPSR_EXCP) { + mode = "handler"; + } else { + if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_NPRIV_MASK) { + mode = "unpriv-thread"; + } else { + mode = "priv-thread"; + } + } + + qemu_fprintf(f, "XPSR=%08x %c%c%c%c %c %s%s\n", + xpsr, + xpsr & XPSR_N ? 'N' : '-', + xpsr & XPSR_Z ? 'Z' : '-', + xpsr & XPSR_C ? 'C' : '-', + xpsr & XPSR_V ? 'V' : '-', + xpsr & XPSR_T ? 'T' : 'A', + ns_status, + mode); + } else { + uint32_t psr = cpsr_read(env); + const char *ns_status = ""; + + if (arm_feature(env, ARM_FEATURE_EL3) && + (psr & CPSR_M) != ARM_CPU_MODE_MON) { + ns_status = env->cp15.scr_el3 & SCR_NS ? "NS " : "S "; + } + + qemu_fprintf(f, "PSR=%08x %c%c%c%c %c %s%s%d\n", + psr, + psr & CPSR_N ? 'N' : '-', + psr & CPSR_Z ? 'Z' : '-', + psr & CPSR_C ? 'C' : '-', + psr & CPSR_V ? 'V' : '-', + psr & CPSR_T ? 'T' : 'A', + ns_status, + aarch32_mode_name(psr), (psr & 0x10) ? 
32 : 26); + } + + if (flags & CPU_DUMP_FPU) { + int numvfpregs = 0; + if (cpu_isar_feature(aa32_simd_r32, cpu)) { + numvfpregs = 32; + } else if (cpu_isar_feature(aa32_vfp_simd, cpu)) { + numvfpregs = 16; + } + for (i = 0; i < numvfpregs; i++) { + uint64_t v = *aa32_vfp_dreg(env, i); + qemu_fprintf(f, "s%02d=%08x s%02d=%08x d%02d=%016" PRIx64 "\n", + i * 2, (uint32_t)v, + i * 2 + 1, (uint32_t)(v >> 32), + i, v); + } + qemu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env)); + } +} + void arm32_cpu_class_init(ObjectClass *oc, void *data) { CPUClass *cc = CPU_CLASS(oc); @@ -67,7 +150,7 @@ void arm32_cpu_class_init(ObjectClass *oc, void *data) cc->gdb_num_core_regs = 26; cc->gdb_core_xml_file = "arm-core.xml"; cc->gdb_arch_name = arm_gdb_arch_name; - cc->dump_state = arm_cpu_dump_state; + cc->dump_state = arm32_cpu_dump_state; } static void arm32_cpu_instance_init(Object *obj) diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index 4ff55fb0f0..7cd73ae0b6 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -20,7 +20,9 @@ #include "qemu/osdep.h" #include "qapi/error.h" +#include "qemu/qemu-print.h" #include "cpu.h" +#include "cpu32.h" #ifdef CONFIG_TCG #include "hw/core/tcg-cpu-ops.h" #endif /* CONFIG_TCG */ @@ -828,6 +830,145 @@ static gchar *aarch64_gdb_arch_name(CPUState *cs) return g_strdup("aarch64"); } +static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + uint32_t psr = pstate_read(env); + int i; + int el = arm_current_el(env); + const char *ns_status; + + qemu_fprintf(f, " PC=%016" PRIx64 " ", env->pc); + for (i = 0; i < 32; i++) { + if (i == 31) { + qemu_fprintf(f, " SP=%016" PRIx64 "\n", env->xregs[i]); + } else { + qemu_fprintf(f, "X%02d=%016" PRIx64 "%s", i, env->xregs[i], + (i + 2) % 3 ? " " : "\n"); + } + } + + if (arm_feature(env, ARM_FEATURE_EL3) && el != 3) { + ns_status = env->cp15.scr_el3 & SCR_NS ? "NS " : "S "; + } else { + ns_status = ""; + } + qemu_fprintf(f, "PSTATE=%08x %c%c%c%c %sEL%d%c", + psr, + psr & PSTATE_N ? 'N' : '-', + psr & PSTATE_Z ? 'Z' : '-', + psr & PSTATE_C ? 'C' : '-', + psr & PSTATE_V ? 'V' : '-', + ns_status, + el, + psr & PSTATE_SP ? 'h' : 't'); + + if (cpu_isar_feature(aa64_bti, cpu)) { + qemu_fprintf(f, " BTYPE=%d", (psr & PSTATE_BTYPE) >> 10); + } + if (!(flags & CPU_DUMP_FPU)) { + qemu_fprintf(f, "\n"); + return; + } + if (fp_exception_el(env, el) != 0) { + qemu_fprintf(f, " FPU disabled\n"); + return; + } + qemu_fprintf(f, " FPCR=%08x FPSR=%08x\n", + vfp_get_fpcr(env), vfp_get_fpsr(env)); + + if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) { + int j, zcr_len = sve_zcr_len_for_el(env, el); + + for (i = 0; i <= FFR_PRED_NUM; i++) { + bool eol; + if (i == FFR_PRED_NUM) { + qemu_fprintf(f, "FFR="); + /* It's last, so end the line. */ + eol = true; + } else { + qemu_fprintf(f, "P%02d=", i); + switch (zcr_len) { + case 0: + eol = i % 8 == 7; + break; + case 1: + eol = i % 6 == 5; + break; + case 2: + case 3: + eol = i % 3 == 2; + break; + default: + /* More than one quadword per predicate. */ + eol = true; + break; + } + } + for (j = zcr_len / 4; j >= 0; j--) { + int digits; + if (j * 4 + 4 <= zcr_len + 1) { + digits = 16; + } else { + digits = (zcr_len % 4 + 1) * 4; + } + qemu_fprintf(f, "%0*" PRIx64 "%s", digits, + env->vfp.pregs[i].p[j], + j ? ":" : eol ? 
"\n" : " "); + } + } + + for (i = 0; i < 32; i++) { + if (zcr_len == 0) { + qemu_fprintf(f, "Z%02d=%016" PRIx64 ":%016" PRIx64 "%s", + i, env->vfp.zregs[i].d[1], + env->vfp.zregs[i].d[0], i & 1 ? "\n" : " "); + } else if (zcr_len == 1) { + qemu_fprintf(f, "Z%02d=%016" PRIx64 ":%016" PRIx64 + ":%016" PRIx64 ":%016" PRIx64 "\n", + i, env->vfp.zregs[i].d[3], env->vfp.zregs[i].d[2], + env->vfp.zregs[i].d[1], env->vfp.zregs[i].d[0]); + } else { + for (j = zcr_len; j >= 0; j--) { + bool odd = (zcr_len - j) % 2 != 0; + if (j == zcr_len) { + qemu_fprintf(f, "Z%02d[%x-%x]=", i, j, j - 1); + } else if (!odd) { + if (j > 0) { + qemu_fprintf(f, " [%x-%x]=", j, j - 1); + } else { + qemu_fprintf(f, " [%x]=", j); + } + } + qemu_fprintf(f, "%016" PRIx64 ":%016" PRIx64 "%s", + env->vfp.zregs[i].d[j * 2 + 1], + env->vfp.zregs[i].d[j * 2], + odd || j == 0 ? "\n" : ":"); + } + } + } + } else { + for (i = 0; i < 32; i++) { + uint64_t *q = aa64_vfp_qreg(env, i); + qemu_fprintf(f, "Q%02d=%016" PRIx64 ":%016" PRIx64 "%s", + i, q[1], q[0], (i & 1 ? "\n" : " ")); + } + } +} + +static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + if (is_a64(env)) { + aarch64_cpu_dump_state(cs, f, flags); + } else { + arm32_cpu_dump_state(cs, f, flags); + } +} + static void aarch64_cpu_class_init(ObjectClass *oc, void *data) { CPUClass *cc = CPU_CLASS(oc); @@ -837,6 +978,7 @@ static void aarch64_cpu_class_init(ObjectClass *oc, void *data) cc->gdb_num_core_regs = 34; cc->gdb_core_xml_file = "aarch64-core.xml"; cc->gdb_arch_name = aarch64_gdb_arch_name; + cc->dump_state = arm_cpu_dump_state; object_class_property_add_bool(oc, "aarch64", aarch64_cpu_get_aarch64, aarch64_cpu_set_aarch64); From patchwork Fri Jun 4 15:52:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454092 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp572345jae; Fri, 4 Jun 2021 09:37:28 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxJnr/4jLhmLR8BNhcjjedLkh/4KRmeYWPMs6xjF/nhbl3eB2CQqdv8gTeMLdV/0XA/74i3 X-Received: by 2002:a67:386:: with SMTP id 128mr3654427vsd.16.1622824648496; Fri, 04 Jun 2021 09:37:28 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824648; cv=none; d=google.com; s=arc-20160816; b=ZAXbkHtkYY7AS/dbL6UtIOYmmpm0Tpmh8trqow7cvtZ+y1jvlRXJ36EQl8yX0oSTMr avE1A0ycbXEqFCwQm8diz2Wl1gNkQ+Mo3MEA3UasYeGno/pK/5COZhNebHqHJGuMKHkX gcwI3Jg3zwqhdxDB4DfbAmDdXi8B2fNabtF2MPox0yxPCLN4DqMeScRYERfHKcy10hjX SdSQqOpPoGJWDLYM2IaKFlRdt8Ii3tXvHq5xKqIAoE4XhcXqPVznV+gi1U66AjZXieFG tsXz1DpioruVxuTzafXiYUwFREY7QZ6YbzKsBGm84rwaFsc6+hFQmf8O18F2+21BACal lO+w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=pdzAt2AXoQ9lDlpGyixAMgmXVwO3iRo3io77IDow6qA=; b=cB5Hr+QmeWH74yw9bjojtd6AA8lNaMpXw6cMN2MgocZuoRDDrqwZBD6VudNtbVre9V DPiwvutlIpCwRXuqE4u0DqRnIDBKsbXwdl9p30YaKFqe8R1wCel8gGqVWJkpZ8KYiabg 4tsNR9ln1Jv9X7UVpzIWzldEAQVW8fjn+bNdxeH63uuYVa8PjLvOaAa13opfgc3sBysb fKEXnEdURfHP5kYm3nHxCiU0vCIu/G9VygsjvkqrFuv/Kwe06AwYrlxXZTx+tyiNe81U vqEz1G/MmI3OzpfNtfuXk7NhvJD2F80QoQlQJX6wOuzxT1I7pABnaYGfKzhZAA0vKnD0 jzsw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google 
zen.linaroharston (Postfix) with ESMTP id 455B01FFC2; Fri, 4 Jun 2021 16:53:18 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 43/99] target/arm: move a15 cpu model away from the TCG-only models Date: Fri, 4 Jun 2021 16:52:16 +0100 Message-Id: <20210604155312.15902-44-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Cortex-A15 is the only ARM cpu class we need in KVM too. We will be able to move it to tcg/ once the board code and configurations are fixed. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu32.h | 4 +++ target/arm/cpu32.c | 73 ++++++++++++++++++++++++++++++++++++++++++++ target/arm/cpu_tcg.c | 67 ---------------------------------------- 3 files changed, 77 insertions(+), 67 deletions(-) -- 2.20.1 Acked-by: Richard Henderson diff --git a/target/arm/cpu32.h b/target/arm/cpu32.h index 128d0c9247..abd575d47d 100644 --- a/target/arm/cpu32.h +++ b/target/arm/cpu32.h @@ -21,8 +21,12 @@ #ifndef ARM_CPU32_H #define ARM_CPU32_H +#include "cpregs.h" + void arm32_cpu_dump_state(CPUState *cs, FILE *f, int flags); void arm32_cpu_class_init(ObjectClass *oc, void *data); void arm32_cpu_register(const ARMCPUInfo *info); +void cortex_a15_initfn(Object *obj); +extern const ARMCPRegInfo cortexa15_cp_reginfo[]; #endif /* ARM_CPU32_H */ diff --git a/target/arm/cpu32.c b/target/arm/cpu32.c index c03f420ba2..a6ba91ae08 100644 --- a/target/arm/cpu32.c +++ b/target/arm/cpu32.c @@ -43,8 +43,81 @@ #include "cpu-mmu.h" #include "cpu32.h" +#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64) + +#ifndef CONFIG_USER_ONLY +static uint64_t a15_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri) +{ + MachineState *ms = MACHINE(qdev_get_machine()); + + /* + * Linux wants the number of processors from here. + * Might as well set the interrupt-controller bit too. 
+ */ + return ((ms->smp.cpus - 1) << 24) | (1 << 23); +} +#endif + +const ARMCPRegInfo cortexa15_cp_reginfo[] = { +#ifndef CONFIG_USER_ONLY + { .name = "L2CTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 2, + .access = PL1_RW, .resetvalue = 0, .readfn = a15_l2ctlr_read, + .writefn = arm_cp_write_ignore, }, +#endif + { .name = "L2ECTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 3, + .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, + REGINFO_SENTINEL +}; + +void cortex_a15_initfn(Object *obj) +{ + ARMCPU *cpu = ARM_CPU(obj); + + cpu->dtb_compatible = "arm,cortex-a15"; + set_feature(&cpu->env, ARM_FEATURE_V7VE); + set_feature(&cpu->env, ARM_FEATURE_NEON); + set_feature(&cpu->env, ARM_FEATURE_THUMB2EE); + set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER); + set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS); + set_feature(&cpu->env, ARM_FEATURE_CBAR_RO); + set_feature(&cpu->env, ARM_FEATURE_EL2); + set_feature(&cpu->env, ARM_FEATURE_EL3); + set_feature(&cpu->env, ARM_FEATURE_PMU); + cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15; + cpu->midr = 0x412fc0f1; + cpu->reset_fpsid = 0x410430f0; + cpu->isar.mvfr0 = 0x10110222; + cpu->isar.mvfr1 = 0x11111111; + cpu->ctr = 0x8444c004; + cpu->reset_sctlr = 0x00c50078; + cpu->isar.id_pfr0 = 0x00001131; + cpu->isar.id_pfr1 = 0x00011011; + cpu->isar.id_dfr0 = 0x02010555; + cpu->id_afr0 = 0x00000000; + cpu->isar.id_mmfr0 = 0x10201105; + cpu->isar.id_mmfr1 = 0x20000000; + cpu->isar.id_mmfr2 = 0x01240000; + cpu->isar.id_mmfr3 = 0x02102211; + cpu->isar.id_isar0 = 0x02101110; + cpu->isar.id_isar1 = 0x13112111; + cpu->isar.id_isar2 = 0x21232041; + cpu->isar.id_isar3 = 0x11112131; + cpu->isar.id_isar4 = 0x10011142; + cpu->isar.dbgdidr = 0x3515f021; + cpu->clidr = 0x0a200023; + cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */ + cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */ + cpu->ccsidr[2] = 0x711fe07a; /* 4096K L2 unified cache */ + define_arm_cp_regs(cpu, cortexa15_cp_reginfo); +} + +#endif /* !CONFIG_USER_ONLY || !TARGET_AARCH64 */ + /* we can move this to tcg/ after the cleanup of ARM boards configurations */ static const ARMCPUInfo arm32_cpus[] = { +#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64) + { .name = "cortex-a15", .initfn = cortex_a15_initfn }, +#endif /* !CONFIG_USER_ONLY || !TARGET_AARCH64 */ }; static gchar *arm_gdb_arch_name(CPUState *cs) diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c index 09eff9bfd2..fe422498c7 100644 --- a/target/arm/cpu_tcg.c +++ b/target/arm/cpu_tcg.c @@ -378,30 +378,6 @@ static void cortex_a9_initfn(Object *obj) define_arm_cp_regs(cpu, cortexa9_cp_reginfo); } -#ifndef CONFIG_USER_ONLY -static uint64_t a15_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri) -{ - MachineState *ms = MACHINE(qdev_get_machine()); - - /* - * Linux wants the number of processors from here. - * Might as well set the interrupt-controller bit too. 
- */ - return ((ms->smp.cpus - 1) << 24) | (1 << 23); -} -#endif - -static const ARMCPRegInfo cortexa15_cp_reginfo[] = { -#ifndef CONFIG_USER_ONLY - { .name = "L2CTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 2, - .access = PL1_RW, .resetvalue = 0, .readfn = a15_l2ctlr_read, - .writefn = arm_cp_write_ignore, }, -#endif - { .name = "L2ECTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 3, - .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 }, - REGINFO_SENTINEL -}; - static void cortex_a7_initfn(Object *obj) { ARMCPU *cpu = ARM_CPU(obj); @@ -448,48 +424,6 @@ static void cortex_a7_initfn(Object *obj) define_arm_cp_regs(cpu, cortexa15_cp_reginfo); /* Same as A15 */ } -static void cortex_a15_initfn(Object *obj) -{ - ARMCPU *cpu = ARM_CPU(obj); - - cpu->dtb_compatible = "arm,cortex-a15"; - set_feature(&cpu->env, ARM_FEATURE_V7VE); - set_feature(&cpu->env, ARM_FEATURE_NEON); - set_feature(&cpu->env, ARM_FEATURE_THUMB2EE); - set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER); - set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS); - set_feature(&cpu->env, ARM_FEATURE_CBAR_RO); - set_feature(&cpu->env, ARM_FEATURE_EL2); - set_feature(&cpu->env, ARM_FEATURE_EL3); - set_feature(&cpu->env, ARM_FEATURE_PMU); - cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15; - cpu->midr = 0x412fc0f1; - cpu->reset_fpsid = 0x410430f0; - cpu->isar.mvfr0 = 0x10110222; - cpu->isar.mvfr1 = 0x11111111; - cpu->ctr = 0x8444c004; - cpu->reset_sctlr = 0x00c50078; - cpu->isar.id_pfr0 = 0x00001131; - cpu->isar.id_pfr1 = 0x00011011; - cpu->isar.id_dfr0 = 0x02010555; - cpu->id_afr0 = 0x00000000; - cpu->isar.id_mmfr0 = 0x10201105; - cpu->isar.id_mmfr1 = 0x20000000; - cpu->isar.id_mmfr2 = 0x01240000; - cpu->isar.id_mmfr3 = 0x02102211; - cpu->isar.id_isar0 = 0x02101110; - cpu->isar.id_isar1 = 0x13112111; - cpu->isar.id_isar2 = 0x21232041; - cpu->isar.id_isar3 = 0x11112131; - cpu->isar.id_isar4 = 0x10011142; - cpu->isar.dbgdidr = 0x3515f021; - cpu->clidr = 0x0a200023; - cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */ - cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */ - cpu->ccsidr[2] = 0x711fe07a; /* 4096K L2 unified cache */ - define_arm_cp_regs(cpu, cortexa15_cp_reginfo); -} - static void cortex_m0_initfn(Object *obj) { ARMCPU *cpu = ARM_CPU(obj); @@ -1022,7 +956,6 @@ static const ARMCPUInfo arm_tcg_cpus[] = { { .name = "cortex-a7", .initfn = cortex_a7_initfn }, { .name = "cortex-a8", .initfn = cortex_a8_initfn }, { .name = "cortex-a9", .initfn = cortex_a9_initfn }, - { .name = "cortex-a15", .initfn = cortex_a15_initfn }, { .name = "cortex-m0", .initfn = cortex_m0_initfn, .class_init = arm_v7m_class_init }, { .name = "cortex-m3", .initfn = cortex_m3_initfn, From patchwork Fri Jun 4 15:52:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454088 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp570066jae; Fri, 4 Jun 2021 09:34:31 -0700 (PDT) X-Google-Smtp-Source: ABdhPJySLWZ5VheRVyDJ5qTEi8y2Re9hDiu5WqL62n9Kj/fJbvcwW76sO63LftWZ1VYIs4KeMv9M X-Received: by 2002:a67:d819:: with SMTP id e25mr2590093vsj.18.1622824471075; Fri, 04 Jun 2021 09:34:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824471; cv=none; d=google.com; s=arc-20160816; b=s9n1gcpaaOdSnlo8GUp3tngXBBKcJeFuEcavmrjTYhhgMSylxoj9oUpwHAGU8QaQqe 9VDYANHRqsjLrxMQPMufuGs4lATvhIklqyW9fjfcbRDNbTvsb3pH990RO6AZI/DsiGGp 7K3r93m5b8x9iJ1XhENWem28LxFyjTT+/zti9fkRxGhrih5jPZPGQzQh0zS1DrnZNHuL 
:references:mime-version:content-transfer-encoding; bh=CQtnS34z/bTDeAgJgcVIAWI8dJCWeYn70HUU46NLGbY=; b=BYnhGMpNIhk1ib0L+H84+5rXubrrjjbeDhSpMWySzGOYWBQiAQkl/KG648+dFBvvSJ M9mN516NxhxPwSu2GxwgAhk+x0RSM+K9jpYYFJURRzZnStIcoDb2Ev2DM59X7rMjPYSf F8prIRkYoWlIFRV44p7SznbsNudPCPmEC+ra0RuhhU9Mx9jeqbyBYBkQVweMjX7e+qyQ yRmx3VV3a3QkXreBD4Ia9kLGna+Ko55E2H5SE9D9CPi4H1zjUnO8AgYXAHb8bZoGgt3m +cqKSU1gn/qQ7l5UCznpvAJFqYbZqeKMfn37ZUtZxwGSNFfi5Hi420B2e55ponieHOLy ODSw== X-Gm-Message-State: AOAM533AT65owVk5Bvg40e0b38DU5EwSi4wb7V+cTaQX3tL4NRXKxGn1 Oy6E3NPtC8Xh6MJiBXSAceDDLg== X-Received: by 2002:a05:6000:1282:: with SMTP id f2mr4511680wrx.67.1622822565245; Fri, 04 Jun 2021 09:02:45 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id p1sm6127310wmc.11.2021.06.04.09.02.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:02:43 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 5A4751FFC3; Fri, 4 Jun 2021 16:53:18 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 44/99] target/arm: fixup sve_exception_el code style before move Date: Fri, 4 Jun 2021 16:52:17 +0100 Message-Id: <20210604155312.15902-45-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::434; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x434.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana before moving over sve_exception_el from the helper code, cleanup the style. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/tcg/helper.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 9dd83911f2..1c69a69d5a 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -261,7 +261,8 @@ static int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg) } #endif /* TARGET_AARCH64 */ -/* Return the exception level to which exceptions should be taken +/* + * Return the exception level to which exceptions should be taken * via SVEAccessTrap. If an exception should be routed through * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should * take care of raising that exception. @@ -275,7 +276,8 @@ int sve_exception_el(CPUARMState *env, int el) if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { bool disabled = false; - /* The CPACR.ZEN controls traps to EL1: + /* + * The CPACR.ZEN controls traps to EL1: * 0, 2 : trap EL0 and EL1 accesses * 1 : trap only EL0 accesses * 3 : trap no accesses @@ -301,7 +303,8 @@ int sve_exception_el(CPUARMState *env, int el) } } - /* CPTR_EL2. 
Since TZ and TFP are positive, + * they will be zero when EL2 is not present. */ if (el <= 2 && arm_is_el2_enabled(env)) { From patchwork Fri Jun 4 15:52:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454141
From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 45/99] target/arm: move sve_exception_el out of TCG helpers Date: Fri, 4 Jun 2021 16:52:18 +0100 Message-Id: <20210604155312.15902-46-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42d; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana we need this for KVM too. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-sysemu.c | 62 +++++++++++++++++++++++++++++++++++++++ target/arm/cpu-user.c | 5 ++++ target/arm/tcg/helper.c | 64 ----------------------------------------- 3 files changed, 67 insertions(+), 64 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 7a314bf805..7cc721fe68 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -348,3 +348,65 @@ void aarch64_sync_64_to_32(CPUARMState *env) env->regs[15] = env->pc; } + +/* + * Return the exception level to which exceptions should be taken + * via SVEAccessTrap. If an exception should be routed through + * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should + * take care of raising that exception. + * C.f. the ARM pseudocode function CheckSVEEnabled. + */ +int sve_exception_el(CPUARMState *env, int el) +{ + uint64_t hcr_el2 = arm_hcr_el2_eff(env); + + if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { + bool disabled = false; + + /* + * The CPACR.ZEN controls traps to EL1: + * 0, 2 : trap EL0 and EL1 accesses + * 1 : trap only EL0 accesses + * 3 : trap no accesses + */ + if (!extract32(env->cp15.cpacr_el1, 16, 1)) { + disabled = true; + } else if (!extract32(env->cp15.cpacr_el1, 17, 1)) { + disabled = el == 0; + } + if (disabled) { + /* route_to_el2 */ + return hcr_el2 & HCR_TGE ? 2 : 1; + } + + /* Check CPACR.FPEN. */ + if (!extract32(env->cp15.cpacr_el1, 20, 1)) { + disabled = true; + } else if (!extract32(env->cp15.cpacr_el1, 21, 1)) { + disabled = el == 0; + } + if (disabled) { + return 0; + } + } + + /* + * CPTR_EL2. Since TZ and TFP are positive, + * they will be zero when EL2 is not present. + */ + if (el <= 2 && arm_is_el2_enabled(env)) { + if (env->cp15.cptr_el[2] & CPTR_TZ) { + return 2; + } + if (env->cp15.cptr_el[2] & CPTR_TFP) { + return 0; + } + } + + /* CPTR_EL3. Since EZ is negative we must check for EL3. 
*/ + if (arm_feature(env, ARM_FEATURE_EL3) + && !(env->cp15.cptr_el[3] & CPTR_EZ)) { + return 3; + } + return 0; +} diff --git a/target/arm/cpu-user.c b/target/arm/cpu-user.c index 0225089e46..39093ade76 100644 --- a/target/arm/cpu-user.c +++ b/target/arm/cpu-user.c @@ -33,3 +33,8 @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx, { return 1; } + +int sve_exception_el(CPUARMState *env, int el) +{ + return 0; +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 1c69a69d5a..8372089260 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -261,70 +261,6 @@ static int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg) } #endif /* TARGET_AARCH64 */ -/* - * Return the exception level to which exceptions should be taken - * via SVEAccessTrap. If an exception should be routed through - * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should - * take care of raising that exception. - * C.f. the ARM pseudocode function CheckSVEEnabled. - */ -int sve_exception_el(CPUARMState *env, int el) -{ -#ifndef CONFIG_USER_ONLY - uint64_t hcr_el2 = arm_hcr_el2_eff(env); - - if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { - bool disabled = false; - - /* - * The CPACR.ZEN controls traps to EL1: - * 0, 2 : trap EL0 and EL1 accesses - * 1 : trap only EL0 accesses - * 3 : trap no accesses - */ - if (!extract32(env->cp15.cpacr_el1, 16, 1)) { - disabled = true; - } else if (!extract32(env->cp15.cpacr_el1, 17, 1)) { - disabled = el == 0; - } - if (disabled) { - /* route_to_el2 */ - return hcr_el2 & HCR_TGE ? 2 : 1; - } - - /* Check CPACR.FPEN. */ - if (!extract32(env->cp15.cpacr_el1, 20, 1)) { - disabled = true; - } else if (!extract32(env->cp15.cpacr_el1, 21, 1)) { - disabled = el == 0; - } - if (disabled) { - return 0; - } - } - - /* - * CPTR_EL2. Since TZ and TFP are positive, - * they will be zero when EL2 is not present. - */ - if (el <= 2 && arm_is_el2_enabled(env)) { - if (env->cp15.cptr_el[2] & CPTR_TZ) { - return 2; - } - if (env->cp15.cptr_el[2] & CPTR_TFP) { - return 0; - } - } - - /* CPTR_EL3. Since EZ is negative we must check for EL3. 
*/ - if (arm_feature(env, ARM_FEATURE_EL3) - && !(env->cp15.cptr_el[3] & CPTR_EZ)) { - return 3; - } -#endif - return 0; -} - void hw_watchpoint_update(ARMCPU *cpu, int n) { CPUARMState *env = &cpu->env; From patchwork Fri Jun 4 15:52:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454146
From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 46/99] target/arm: fix comments style of fp_exception_el before moving it Date: Fri, 4 Jun 2021 16:52:19 +0100 Message-Id: <20210604155312.15902-47-alex.bennee@linaro.org> X-Mailer:
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42a; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/tcg/helper.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 8372089260..d4cafdbd95 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -1625,13 +1625,15 @@ uint32_t HELPER(crc32c)(uint32_t acc, uint32_t val, uint32_t bytes) return crc32c(acc, buf, bytes) ^ 0xffffffff; } -/* Return the exception level to which FP-disabled exceptions should +/* + * Return the exception level to which FP-disabled exceptions should * be taken, or 0 if FP is enabled. */ int fp_exception_el(CPUARMState *env, int cur_el) { #ifndef CONFIG_USER_ONLY - /* CPACR and the CPTR registers don't exist before v6, so FP is + /* + * CPACR and the CPTR registers don't exist before v6, so FP is * always accessible */ if (!arm_feature(env, ARM_FEATURE_V6)) { @@ -1654,7 +1656,8 @@ int fp_exception_el(CPUARMState *env, int cur_el) return 0; } - /* The CPACR controls traps to EL1, or PL1 if we're 32 bit: + /* + * The CPACR controls traps to EL1, or PL1 if we're 32 bit: * 0, 2 : trap EL0 and EL1/PL1 accesses * 1 : trap only EL0 accesses * 3 : trap no accesses @@ -1701,7 +1704,8 @@ int fp_exception_el(CPUARMState *env, int cur_el) } } - /* For the CPTR registers we don't need to guard with an ARM_FEATURE + /* + * For the CPTR registers we don't need to guard with an ARM_FEATURE * check because zero bits in the registers mean "don't trap". 
*/
From patchwork Fri Jun 4 15:52:20 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454128
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 47/99] target/arm: move fp_exception_el out of TCG helpers
Date: Fri, 4 Jun 2021 16:52:20 +0100
Message-Id: <20210604155312.15902-48-alex.bennee@linaro.org>
X-Mailer: git-send-email
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::430; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x430.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-sysemu.c | 100 ++++++++++++++++++++++++++++++++++++++++ target/arm/cpu-user.c | 5 ++ target/arm/tcg/helper.c | 100 ---------------------------------------- 3 files changed, 105 insertions(+), 100 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 7cc721fe68..128616d90d 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -410,3 +410,103 @@ int sve_exception_el(CPUARMState *env, int el) } return 0; } + +/* + * Return the exception level to which FP-disabled exceptions should + * be taken, or 0 if FP is enabled. + */ +int fp_exception_el(CPUARMState *env, int cur_el) +{ +#ifndef CONFIG_USER_ONLY + /* + * CPACR and the CPTR registers don't exist before v6, so FP is + * always accessible + */ + if (!arm_feature(env, ARM_FEATURE_V6)) { + return 0; + } + + if (arm_feature(env, ARM_FEATURE_M)) { + /* CPACR can cause a NOCP UsageFault taken to current security state */ + if (!v7m_cpacr_pass(env, env->v7m.secure, cur_el != 0)) { + return 1; + } + + if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure) { + if (!extract32(env->v7m.nsacr, 10, 1)) { + /* FP insns cause a NOCP UsageFault taken to Secure */ + return 3; + } + } + + return 0; + } + + /* + * The CPACR controls traps to EL1, or PL1 if we're 32 bit: + * 0, 2 : trap EL0 and EL1/PL1 accesses + * 1 : trap only EL0 accesses + * 3 : trap no accesses + * This register is ignored if E2H+TGE are both set. + */ + if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { + int fpen = extract32(env->cp15.cpacr_el1, 20, 2); + + switch (fpen) { + case 0: + case 2: + if (cur_el == 0 || cur_el == 1) { + /* Trap to PL1, which might be EL1 or EL3 */ + if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) { + return 3; + } + return 1; + } + if (cur_el == 3 && !is_a64(env)) { + /* Secure PL1 running at EL3 */ + return 3; + } + break; + case 1: + if (cur_el == 0) { + return 1; + } + break; + case 3: + break; + } + } + + /* + * The NSACR allows A-profile AArch32 EL3 and M-profile secure mode + * to control non-secure access to the FPU. It doesn't have any + * effect if EL3 is AArch64 or if EL3 doesn't exist at all. + */ + if ((arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && + cur_el <= 2 && !arm_is_secure_below_el3(env))) { + if (!extract32(env->cp15.nsacr, 10, 1)) { + /* FP insns act as UNDEF */ + return cur_el == 2 ? 
2 : 1; + } + } + + /* + * For the CPTR registers we don't need to guard with an ARM_FEATURE + * check because zero bits in the registers mean "don't trap". + */ + + /* CPTR_EL2 : present in v7VE or v8 */ + if (cur_el <= 2 && extract32(env->cp15.cptr_el[2], 10, 1) + && arm_is_el2_enabled(env)) { + /* Trap FP ops at EL2, NS-EL1 or NS-EL0 to EL2 */ + return 2; + } + + /* CPTR_EL3 : present in v8 */ + if (extract32(env->cp15.cptr_el[3], 10, 1)) { + /* Trap all FP ops to EL3 */ + return 3; + } +#endif + return 0; +} diff --git a/target/arm/cpu-user.c b/target/arm/cpu-user.c index 39093ade76..6a1a1fa273 100644 --- a/target/arm/cpu-user.c +++ b/target/arm/cpu-user.c @@ -38,3 +38,8 @@ int sve_exception_el(CPUARMState *env, int el) { return 0; } + +int fp_exception_el(CPUARMState *env, int el) +{ + return 0; +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index d4cafdbd95..e55209491f 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -1625,106 +1625,6 @@ uint32_t HELPER(crc32c)(uint32_t acc, uint32_t val, uint32_t bytes) return crc32c(acc, buf, bytes) ^ 0xffffffff; } -/* - * Return the exception level to which FP-disabled exceptions should - * be taken, or 0 if FP is enabled. - */ -int fp_exception_el(CPUARMState *env, int cur_el) -{ -#ifndef CONFIG_USER_ONLY - /* - * CPACR and the CPTR registers don't exist before v6, so FP is - * always accessible - */ - if (!arm_feature(env, ARM_FEATURE_V6)) { - return 0; - } - - if (arm_feature(env, ARM_FEATURE_M)) { - /* CPACR can cause a NOCP UsageFault taken to current security state */ - if (!v7m_cpacr_pass(env, env->v7m.secure, cur_el != 0)) { - return 1; - } - - if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure) { - if (!extract32(env->v7m.nsacr, 10, 1)) { - /* FP insns cause a NOCP UsageFault taken to Secure */ - return 3; - } - } - - return 0; - } - - /* - * The CPACR controls traps to EL1, or PL1 if we're 32 bit: - * 0, 2 : trap EL0 and EL1/PL1 accesses - * 1 : trap only EL0 accesses - * 3 : trap no accesses - * This register is ignored if E2H+TGE are both set. - */ - if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { - int fpen = extract32(env->cp15.cpacr_el1, 20, 2); - - switch (fpen) { - case 0: - case 2: - if (cur_el == 0 || cur_el == 1) { - /* Trap to PL1, which might be EL1 or EL3 */ - if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) { - return 3; - } - return 1; - } - if (cur_el == 3 && !is_a64(env)) { - /* Secure PL1 running at EL3 */ - return 3; - } - break; - case 1: - if (cur_el == 0) { - return 1; - } - break; - case 3: - break; - } - } - - /* - * The NSACR allows A-profile AArch32 EL3 and M-profile secure mode - * to control non-secure access to the FPU. It doesn't have any - * effect if EL3 is AArch64 or if EL3 doesn't exist at all. - */ - if ((arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) && - cur_el <= 2 && !arm_is_secure_below_el3(env))) { - if (!extract32(env->cp15.nsacr, 10, 1)) { - /* FP insns act as UNDEF */ - return cur_el == 2 ? 2 : 1; - } - } - - /* - * For the CPTR registers we don't need to guard with an ARM_FEATURE - * check because zero bits in the registers mean "don't trap". 
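The function moved above keeps the same contract on both sides of the split: it returns 0 when FP is accessible at the current exception level, or the EL (1, 2 or 3) that should take the FP-disabled trap. The fragment below is only an illustrative sketch of how that return value is meant to be consumed; it is not part of the patch (in-tree, the result is cached in the translation flags rather than queried on every access):

/* Illustrative sketch, not part of this series: consuming fp_exception_el(). */
static bool fp_access_allowed(CPUARMState *env, int *target_el)
{
    /* 0 means "FP enabled"; 1..3 name the EL that takes the trap. */
    *target_el = fp_exception_el(env, arm_current_el(env));
    return *target_el == 0;
}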
- */ - - /* CPTR_EL2 : present in v7VE or v8 */ - if (cur_el <= 2 && extract32(env->cp15.cptr_el[2], 10, 1) - && arm_is_el2_enabled(env)) { - /* Trap FP ops at EL2, NS-EL1 or NS-EL0 to EL2 */ - return 2; - } - - /* CPTR_EL3 : present in v8 */ - if (extract32(env->cp15.cptr_el[3], 10, 1)) { - /* Trap all FP ops to EL3 */ - return 3; - } -#endif - return 0; -} - #ifndef CONFIG_USER_ONLY ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env) {
From patchwork Fri Jun 4 15:52:21 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454130
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 48/99] target/arm: remove now useless ifndef from fp_exception_el
Date: Fri, 4 Jun 2021 16:52:21 +0100
Message-Id: <20210604155312.15902-49-alex.bennee@linaro.org>
X-Mailer: git-send-email
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0
Cc: Peter Maydell, qemu-arm@nongnu.org, Alex Bennée, Claudio Fontana
Sender: "Qemu-devel"
From: Claudio Fontana

after moving the code of fp_exception_el to a sysemu-only module, we can remove the #ifndef CONFIG_USER_ONLY.

Signed-off-by: Claudio Fontana
Signed-off-by: Alex Bennée
---
 target/arm/cpu-sysemu.c | 2 --
 1 file changed, 2 deletions(-)
-- 2.20.1

diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 128616d90d..0d80a0161c 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -417,7 +417,6 @@ int sve_exception_el(CPUARMState *env, int el) */ int fp_exception_el(CPUARMState *env, int cur_el) { -#ifndef CONFIG_USER_ONLY /* * CPACR and the CPTR registers don't exist before v6, so FP is * always accessible @@ -507,6 +506,5 @@ int fp_exception_el(CPUARMState *env, int cur_el) /* Trap all FP ops to EL3 */ return 3; } -#endif return 0; }
From patchwork Fri Jun 4 15:52:22 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454132
Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id CDBD01FFC8; Fri, 4 Jun 2021 16:53:18 +0100
(BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 49/99] target/arm: make further preparation for the exception code to move Date: Fri, 4 Jun 2021 16:52:22 +0100 Message-Id: <20210604155312.15902-50-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana the exception code in tcg/ needs some adjustment before being exposed to KVM-only builds. We need to call arm_rebuild_hflags only when TCG is enabled, or we will error out. The direct call to helper_rebuild_hflags_a64(env, new_el) will not be possible when extracting out to common code, it seems safe to replace it with a call to arm_rebuild_hflags, since the write to pstate is already done. Also, some CONFIG_TCG needs to be extended further, so that all the tcg-only code is marked as such. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/tcg/helper.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index e55209491f..7a9eaec5cb 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -755,7 +755,9 @@ static void take_aarch32_exception(CPUARMState *env, int new_mode, env->regs[14] = env->regs[15] + offset; } env->regs[15] = newpc; - arm_rebuild_hflags(env); + if (tcg_enabled()) { + arm_rebuild_hflags(env); + } } static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs) @@ -1242,7 +1244,11 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs) pstate_write(env, PSTATE_DAIF | new_mode); env->aarch64 = 1; aarch64_restore_sp(env, new_el); - helper_rebuild_hflags_a64(env, new_el); + + if (tcg_enabled()) { + /* pstate already written, so we can use arm_rebuild_hflags here */ + arm_rebuild_hflags(env); + } env->pc = addr; @@ -1306,6 +1312,7 @@ void arm_cpu_do_interrupt(CPUState *cs) env->exception.syndrome); } +#ifdef CONFIG_TCG if (arm_is_psci_call(cpu, cs->exception_index)) { arm_handle_psci_call(cpu); qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); @@ -1317,7 +1324,6 @@ void arm_cpu_do_interrupt(CPUState *cs) * that caused the exception, not the target exception level, so * must be handled here. 
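The guard described above matters because the hflags are a TCG-side cache of CPU state: with KVM (or any other non-TCG accelerator) nothing consumes them, and the EL-specific TCG helper cannot be called from code that will be shared with KVM-only builds. A minimal sketch of the resulting pattern, mirroring the hunks in this patch (nothing new is added here):

    pstate_write(env, PSTATE_DAIF | new_mode);
    env->aarch64 = 1;
    aarch64_restore_sp(env, new_el);

    if (tcg_enabled()) {
        /* PSTATE has already been written, so the generic rebuild is
         * equivalent to the old helper_rebuild_hflags_a64(env, new_el) call. */
        arm_rebuild_hflags(env);
    }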
*/ -#ifdef CONFIG_TCG if (cs->exception_index == EXCP_SEMIHOST) { handle_semihosting(cs); return;
From patchwork Fri Jun 4 15:52:23 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454129
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 50/99] target/arm: fix style of arm_cpu_do_interrupt functions before move
Date: Fri, 4 Jun 2021 16:52:23 +0100
Message-Id: <20210604155312.15902-51-alex.bennee@linaro.org>
X-Mailer:
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::432; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x432.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana before refactoring the exception code, fix the style of the functions being moved. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/tcg/helper.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 7a9eaec5cb..5b32329895 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -896,10 +896,11 @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs) new_mode = ARM_CPU_MODE_UND; addr = 0x04; mask = CPSR_I; - if (env->thumb) + if (env->thumb) { offset = 2; - else + } else { offset = 4; + } break; case EXCP_SWI: new_mode = ARM_CPU_MODE_SVC; @@ -985,7 +986,8 @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs) /* High vectors. When enabled, base address cannot be remapped. */ addr += 0xffff0000; } else { - /* ARM v7 architectures provide a vector base address register to remap + /* + * ARM v7 architectures provide a vector base address register to remap * the interrupt vector table. * This register is only followed in non-monitor mode, and is banked. * Note: only bits 31:5 are valid. @@ -1094,7 +1096,8 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs) aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); if (cur_el < new_el) { - /* Entry vector offset depends on whether the implemented EL + /* + * Entry vector offset depends on whether the implemented EL * immediately lower than the target level is using AArch32 or AArch64 */ bool is_aa64; @@ -1285,7 +1288,8 @@ static void handle_semihosting(CPUState *cs) } #endif -/* Handle a CPU exception for A and R profile CPUs. +/* + * Handle a CPU exception for A and R profile CPUs. * Do any appropriate logging, handle PSCI calls, and then hand off * to the AArch64-entry or AArch32-entry function depending on the * target exception level's register width. @@ -1330,7 +1334,8 @@ void arm_cpu_do_interrupt(CPUState *cs) } #endif - /* Hooks may change global state so BQL should be held, also the + /* + * Hooks may change global state so BQL should be held, also the * BQL needs to be held for any modification of * cs->interrupt_request. 
*/
From patchwork Fri Jun 4 15:52:24 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454104
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 51/99] target/arm: move exception code out of tcg/helper.c
Date: Fri, 4 Jun 2021 16:52:24 +0100
Message-Id:
<20210604155312.15902-52-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32d; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32d.google.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, URI_NOVOWEL=0.5 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana cpu-sysemu.c: we need this sysemu code for KVM too, so we move the code to cpu-sysemu.c so we can reach a builable state. There will be further split later on in dedicated exception modules for 32 and 64bit, after we make more necessary changes to be able to split TARGET_AARCH64-only code. tcg/sysemu/tcg-cpu.c: the TCG-specific code we put in tcg/sysemu/, in preparation for the addition of the tcg-cpu accel-cpu ARM subclass. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/tcg/tcg-cpu.h | 31 ++ target/arm/cpu-sysemu.c | 670 +++++++++++++++++++++++++++ target/arm/tcg/helper.c | 734 ------------------------------ target/arm/tcg/sysemu/tcg-cpu.c | 73 +++ target/arm/tcg/sysemu/meson.build | 1 + 5 files changed, 775 insertions(+), 734 deletions(-) create mode 100644 target/arm/tcg/tcg-cpu.h create mode 100644 target/arm/tcg/sysemu/tcg-cpu.c -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/tcg/tcg-cpu.h b/target/arm/tcg/tcg-cpu.h new file mode 100644 index 0000000000..0ee8ba073b --- /dev/null +++ b/target/arm/tcg/tcg-cpu.h @@ -0,0 +1,31 @@ +/* + * QEMU ARM CPU + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ +#ifndef ARM_TCG_CPU_H +#define ARM_TCG_CPU_H + +#include "cpu.h" + +#ifndef CONFIG_USER_ONLY +/* Do semihosting call and set the appropriate return value. 
*/ +void handle_semihosting(CPUState *cs); + +#endif /* !CONFIG_USER_ONLY */ + +#endif /* ARM_TCG_CPU_H */ diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 0d80a0161c..0e872b2e55 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -19,10 +19,14 @@ */ #include "qemu/osdep.h" +#include "qemu/log.h" +#include "qemu/main-loop.h" #include "cpu.h" #include "internals.h" #include "sysemu/hw_accel.h" #include "kvm_arm.h" +#include "sysemu/tcg.h" +#include "tcg/tcg-cpu.h" void arm_cpu_set_irq(void *opaque, int irq, int level) { @@ -508,3 +512,669 @@ int fp_exception_el(CPUARMState *env, int cur_el) } return 0; } + +static void take_aarch32_exception(CPUARMState *env, int new_mode, + uint32_t mask, uint32_t offset, + uint32_t newpc) +{ + int new_el; + + /* Change the CPU state so as to actually take the exception. */ + switch_mode(env, new_mode); + + /* + * For exceptions taken to AArch32 we must clear the SS bit in both + * PSTATE and in the old-state value we save to SPSR_, so zero it now. + */ + env->pstate &= ~PSTATE_SS; + env->spsr = cpsr_read(env); + /* Clear IT bits. */ + env->condexec_bits = 0; + /* Switch to the new mode, and to the correct instruction set. */ + env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode; + + /* This must be after mode switching. */ + new_el = arm_current_el(env); + + /* Set new mode endianness */ + env->uncached_cpsr &= ~CPSR_E; + if (env->cp15.sctlr_el[new_el] & SCTLR_EE) { + env->uncached_cpsr |= CPSR_E; + } + /* J and IL must always be cleared for exception entry */ + env->uncached_cpsr &= ~(CPSR_IL | CPSR_J); + env->daif |= mask; + + if (new_mode == ARM_CPU_MODE_HYP) { + env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0; + env->elr_el[2] = env->regs[15]; + } else { + /* CPSR.PAN is normally preserved preserved unless... */ + if (cpu_isar_feature(aa32_pan, env_archcpu(env))) { + switch (new_el) { + case 3: + if (!arm_is_secure_below_el3(env)) { + /* ... the target is EL3, from non-secure state. */ + env->uncached_cpsr &= ~CPSR_PAN; + break; + } + /* ... the target is EL3, from secure state ... */ + /* fall through */ + case 1: + /* ... the target is EL1 and SCTLR.SPAN is 0. */ + if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPAN)) { + env->uncached_cpsr |= CPSR_PAN; + } + break; + } + } + /* + * this is a lie, as there was no c1_sys on V4T/V5, but who cares + * and we should just guard the thumb mode on V4 + */ + if (arm_feature(env, ARM_FEATURE_V4T)) { + env->thumb = + (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) != 0; + } + env->regs[14] = env->regs[15] + offset; + } + env->regs[15] = newpc; + if (tcg_enabled()) { + arm_rebuild_hflags(env); + } +} + +static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs) +{ + /* + * Handle exception entry to Hyp mode; this is sufficiently + * different to entry to other AArch32 modes that we handle it + * separately here. + * + * The vector table entry used is always the 0x14 Hyp mode entry point, + * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp. + * The offset applied to the preferred return address is always zero + * (see DDI0487C.a section G1.12.3). + * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values. + */ + uint32_t addr, mask; + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + switch (cs->exception_index) { + case EXCP_UDEF: + addr = 0x04; + break; + case EXCP_SWI: + addr = 0x14; + break; + case EXCP_BKPT: + /* Fall through to prefetch abort. 
*/ + case EXCP_PREFETCH_ABORT: + env->cp15.ifar_s = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n", + (uint32_t)env->exception.vaddress); + addr = 0x0c; + break; + case EXCP_DATA_ABORT: + env->cp15.dfar_s = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n", + (uint32_t)env->exception.vaddress); + addr = 0x10; + break; + case EXCP_IRQ: + addr = 0x18; + break; + case EXCP_FIQ: + addr = 0x1c; + break; + case EXCP_HVC: + addr = 0x08; + break; + case EXCP_HYP_TRAP: + addr = 0x14; + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + } + + if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) { + if (!arm_feature(env, ARM_FEATURE_V8)) { + /* + * QEMU syndrome values are v8-style. v7 has the IL bit + * UNK/SBZP for "field not valid" cases, where v8 uses RES1. + * If this is a v7 CPU, squash the IL bit in those cases. + */ + if (cs->exception_index == EXCP_PREFETCH_ABORT || + (cs->exception_index == EXCP_DATA_ABORT && + !(env->exception.syndrome & ARM_EL_ISV)) || + syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) { + env->exception.syndrome &= ~ARM_EL_IL; + } + } + env->cp15.esr_el[2] = env->exception.syndrome; + } + + if (arm_current_el(env) != 2 && addr < 0x14) { + addr = 0x14; + } + + mask = 0; + if (!(env->cp15.scr_el3 & SCR_EA)) { + mask |= CPSR_A; + } + if (!(env->cp15.scr_el3 & SCR_IRQ)) { + mask |= CPSR_I; + } + if (!(env->cp15.scr_el3 & SCR_FIQ)) { + mask |= CPSR_F; + } + + addr += env->cp15.hvbar; + + take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr); +} + +static void arm_cpu_do_interrupt_aarch32(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + uint32_t addr; + uint32_t mask; + int new_mode; + uint32_t offset; + uint32_t moe; + + /* If this is a debug exception we must update the DBGDSCR.MOE bits */ + switch (syn_get_ec(env->exception.syndrome)) { + case EC_BREAKPOINT: + case EC_BREAKPOINT_SAME_EL: + moe = 1; + break; + case EC_WATCHPOINT: + case EC_WATCHPOINT_SAME_EL: + moe = 10; + break; + case EC_AA32_BKPT: + moe = 3; + break; + case EC_VECTORCATCH: + moe = 5; + break; + default: + moe = 0; + break; + } + + if (moe) { + env->cp15.mdscr_el1 = deposit64(env->cp15.mdscr_el1, 2, 4, moe); + } + + if (env->exception.target_el == 2) { + arm_cpu_do_interrupt_aarch32_hyp(cs); + return; + } + + switch (cs->exception_index) { + case EXCP_UDEF: + new_mode = ARM_CPU_MODE_UND; + addr = 0x04; + mask = CPSR_I; + if (env->thumb) { + offset = 2; + } else { + offset = 4; + } + break; + case EXCP_SWI: + new_mode = ARM_CPU_MODE_SVC; + addr = 0x08; + mask = CPSR_I; + /* The PC already points to the next instruction. */ + offset = 0; + break; + case EXCP_BKPT: + /* Fall through to prefetch abort. 
*/ + case EXCP_PREFETCH_ABORT: + A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr); + A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress); + qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n", + env->exception.fsr, (uint32_t)env->exception.vaddress); + new_mode = ARM_CPU_MODE_ABT; + addr = 0x0c; + mask = CPSR_A | CPSR_I; + offset = 4; + break; + case EXCP_DATA_ABORT: + A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr); + A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress); + qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n", + env->exception.fsr, + (uint32_t)env->exception.vaddress); + new_mode = ARM_CPU_MODE_ABT; + addr = 0x10; + mask = CPSR_A | CPSR_I; + offset = 8; + break; + case EXCP_IRQ: + new_mode = ARM_CPU_MODE_IRQ; + addr = 0x18; + /* Disable IRQ and imprecise data aborts. */ + mask = CPSR_A | CPSR_I; + offset = 4; + if (env->cp15.scr_el3 & SCR_IRQ) { + /* IRQ routed to monitor mode */ + new_mode = ARM_CPU_MODE_MON; + mask |= CPSR_F; + } + break; + case EXCP_FIQ: + new_mode = ARM_CPU_MODE_FIQ; + addr = 0x1c; + /* Disable FIQ, IRQ and imprecise data aborts. */ + mask = CPSR_A | CPSR_I | CPSR_F; + if (env->cp15.scr_el3 & SCR_FIQ) { + /* FIQ routed to monitor mode */ + new_mode = ARM_CPU_MODE_MON; + } + offset = 4; + break; + case EXCP_VIRQ: + new_mode = ARM_CPU_MODE_IRQ; + addr = 0x18; + /* Disable IRQ and imprecise data aborts. */ + mask = CPSR_A | CPSR_I; + offset = 4; + break; + case EXCP_VFIQ: + new_mode = ARM_CPU_MODE_FIQ; + addr = 0x1c; + /* Disable FIQ, IRQ and imprecise data aborts. */ + mask = CPSR_A | CPSR_I | CPSR_F; + offset = 4; + break; + case EXCP_SMC: + new_mode = ARM_CPU_MODE_MON; + addr = 0x08; + mask = CPSR_A | CPSR_I | CPSR_F; + offset = 0; + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + return; /* Never happens. Keep compiler happy. */ + } + + if (new_mode == ARM_CPU_MODE_MON) { + addr += env->cp15.mvbar; + } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) { + /* High vectors. When enabled, base address cannot be remapped. */ + addr += 0xffff0000; + } else { + /* + * ARM v7 architectures provide a vector base address register to remap + * the interrupt vector table. + * This register is only followed in non-monitor mode, and is banked. + * Note: only bits 31:5 are valid. + */ + addr += A32_BANKED_CURRENT_REG_GET(env, vbar); + } + + if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON) { + env->cp15.scr_el3 &= ~SCR_NS; + } + + take_aarch32_exception(env, new_mode, mask, offset, addr); +} + +static int aarch64_regnum(CPUARMState *env, int aarch32_reg) +{ + /* + * Return the register number of the AArch64 view of the AArch32 + * register @aarch32_reg. The CPUARMState CPSR is assumed to still + * be that of the AArch32 mode the exception came from. + */ + int mode = env->uncached_cpsr & CPSR_M; + + switch (aarch32_reg) { + case 0 ... 7: + return aarch32_reg; + case 8 ... 12: + return mode == ARM_CPU_MODE_FIQ ? 
aarch32_reg + 16 : aarch32_reg; + case 13: + switch (mode) { + case ARM_CPU_MODE_USR: + case ARM_CPU_MODE_SYS: + return 13; + case ARM_CPU_MODE_HYP: + return 15; + case ARM_CPU_MODE_IRQ: + return 17; + case ARM_CPU_MODE_SVC: + return 19; + case ARM_CPU_MODE_ABT: + return 21; + case ARM_CPU_MODE_UND: + return 23; + case ARM_CPU_MODE_FIQ: + return 29; + default: + g_assert_not_reached(); + } + case 14: + switch (mode) { + case ARM_CPU_MODE_USR: + case ARM_CPU_MODE_SYS: + case ARM_CPU_MODE_HYP: + return 14; + case ARM_CPU_MODE_IRQ: + return 16; + case ARM_CPU_MODE_SVC: + return 18; + case ARM_CPU_MODE_ABT: + return 20; + case ARM_CPU_MODE_UND: + return 22; + case ARM_CPU_MODE_FIQ: + return 30; + default: + g_assert_not_reached(); + } + case 15: + return 31; + default: + g_assert_not_reached(); + } +} + +static uint32_t cpsr_read_for_spsr_elx(CPUARMState *env) +{ + uint32_t ret = cpsr_read(env); + + /* Move DIT to the correct location for SPSR_ELx */ + if (ret & CPSR_DIT) { + ret &= ~CPSR_DIT; + ret |= PSTATE_DIT; + } + /* Merge PSTATE.SS into SPSR_ELx */ + ret |= env->pstate & PSTATE_SS; + + return ret; +} + +/* Handle exception entry to a target EL which is using AArch64 */ +static void arm_cpu_do_interrupt_aarch64(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + unsigned int new_el = env->exception.target_el; + target_ulong addr = env->cp15.vbar_el[new_el]; + unsigned int new_mode = aarch64_pstate_mode(new_el, true); + unsigned int old_mode; + unsigned int cur_el = arm_current_el(env); + int rt; + + /* + * Note that new_el can never be 0. If cur_el is 0, then + * el0_a64 is is_a64(), else el0_a64 is ignored. + */ + aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); + + if (cur_el < new_el) { + /* + * Entry vector offset depends on whether the implemented EL + * immediately lower than the target level is using AArch32 or AArch64 + */ + bool is_aa64; + uint64_t hcr; + + switch (new_el) { + case 3: + is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0; + break; + case 2: + hcr = arm_hcr_el2_eff(env); + if ((hcr & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { + is_aa64 = (hcr & HCR_RW) != 0; + break; + } + /* fall through */ + case 1: + is_aa64 = is_a64(env); + break; + default: + g_assert_not_reached(); + } + + if (is_aa64) { + addr += 0x400; + } else { + addr += 0x600; + } + } else if (pstate_read(env) & PSTATE_SP) { + addr += 0x200; + } + + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + case EXCP_DATA_ABORT: + env->cp15.far_el[new_el] = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n", + env->cp15.far_el[new_el]); + /* fall through */ + case EXCP_BKPT: + case EXCP_UDEF: + case EXCP_SWI: + case EXCP_HVC: + case EXCP_HYP_TRAP: + case EXCP_SMC: + switch (syn_get_ec(env->exception.syndrome)) { + case EC_ADVSIMDFPACCESSTRAP: + /* + * QEMU internal FP/SIMD syndromes from AArch32 include the + * TA and coproc fields which are only exposed if the exception + * is taken to AArch32 Hyp mode. Mask them out to get a valid + * AArch64 format syndrome. + */ + env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20); + break; + case EC_CP14RTTRAP: + case EC_CP15RTTRAP: + case EC_CP14DTTRAP: + /* + * For a trap on AArch32 MRC/MCR/LDC/STC the Rt field is currently + * the raw register field from the insn; when taking this to + * AArch64 we must convert it to the AArch64 view of the register + * number. Notice that we read a 4-bit AArch32 register number and + * write back a 5-bit AArch64 one. 
+ */ + rt = extract32(env->exception.syndrome, 5, 4); + rt = aarch64_regnum(env, rt); + env->exception.syndrome = deposit32(env->exception.syndrome, + 5, 5, rt); + break; + case EC_CP15RRTTRAP: + case EC_CP14RRTTRAP: + /* Similarly for MRRC/MCRR traps for Rt and Rt2 fields */ + rt = extract32(env->exception.syndrome, 5, 4); + rt = aarch64_regnum(env, rt); + env->exception.syndrome = deposit32(env->exception.syndrome, + 5, 5, rt); + rt = extract32(env->exception.syndrome, 10, 4); + rt = aarch64_regnum(env, rt); + env->exception.syndrome = deposit32(env->exception.syndrome, + 10, 5, rt); + break; + } + env->cp15.esr_el[new_el] = env->exception.syndrome; + break; + case EXCP_IRQ: + case EXCP_VIRQ: + addr += 0x80; + break; + case EXCP_FIQ: + case EXCP_VFIQ: + addr += 0x100; + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + } + + if (is_a64(env)) { + old_mode = pstate_read(env); + aarch64_save_sp(env, arm_current_el(env)); + env->elr_el[new_el] = env->pc; + } else { + old_mode = cpsr_read_for_spsr_elx(env); + env->elr_el[new_el] = env->regs[15]; + + aarch64_sync_32_to_64(env); + + env->condexec_bits = 0; + } + env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode; + + qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n", + env->elr_el[new_el]); + + if (cpu_isar_feature(aa64_pan, cpu)) { + /* The value of PSTATE.PAN is normally preserved, except when ... */ + new_mode |= old_mode & PSTATE_PAN; + switch (new_el) { + case 2: + /* ... the target is EL2 with HCR_EL2.{E2H,TGE} == '11' ... */ + if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) + != (HCR_E2H | HCR_TGE)) { + break; + } + /* fall through */ + case 1: + /* ... the target is EL1 ... */ + /* ... and SCTLR_ELx.SPAN == 0, then set to 1. */ + if ((env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0) { + new_mode |= PSTATE_PAN; + } + break; + } + } + if (cpu_isar_feature(aa64_mte, cpu)) { + new_mode |= PSTATE_TCO; + } + + pstate_write(env, PSTATE_DAIF | new_mode); + env->aarch64 = 1; + aarch64_restore_sp(env, new_el); + + if (tcg_enabled()) { + /* pstate already written, so we can use arm_rebuild_hflags here */ + arm_rebuild_hflags(env); + } + + env->pc = addr; + + qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n", + new_el, env->pc, pstate_read(env)); +} + +void arm_log_exception(int idx) +{ + if (qemu_loglevel_mask(CPU_LOG_INT)) { + const char *exc = NULL; + static const char * const excnames[] = { + [EXCP_UDEF] = "Undefined Instruction", + [EXCP_SWI] = "SVC", + [EXCP_PREFETCH_ABORT] = "Prefetch Abort", + [EXCP_DATA_ABORT] = "Data Abort", + [EXCP_IRQ] = "IRQ", + [EXCP_FIQ] = "FIQ", + [EXCP_BKPT] = "Breakpoint", + [EXCP_EXCEPTION_EXIT] = "QEMU v7M exception exit", + [EXCP_KERNEL_TRAP] = "QEMU intercept of kernel commpage", + [EXCP_HVC] = "Hypervisor Call", + [EXCP_HYP_TRAP] = "Hypervisor Trap", + [EXCP_SMC] = "Secure Monitor Call", + [EXCP_VIRQ] = "Virtual IRQ", + [EXCP_VFIQ] = "Virtual FIQ", + [EXCP_SEMIHOST] = "Semihosting call", + [EXCP_NOCP] = "v7M NOCP UsageFault", + [EXCP_INVSTATE] = "v7M INVSTATE UsageFault", + [EXCP_STKOF] = "v8M STKOF UsageFault", + [EXCP_LAZYFP] = "v7M exception during lazy FP stacking", + [EXCP_LSERR] = "v8M LSERR UsageFault", + [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault", + }; + + if (idx >= 0 && idx < ARRAY_SIZE(excnames)) { + exc = excnames[idx]; + } + if (!exc) { + exc = "unknown"; + } + qemu_log_mask(CPU_LOG_INT, "Taking exception %d [%s]\n", idx, exc); + } +} + +/* + * Handle a CPU exception for A and R profile CPUs. 
+ * Do any appropriate logging, handle PSCI calls, and then hand off + * to the AArch64-entry or AArch32-entry function depending on the + * target exception level's register width. + * + * Note: this is used for both TCG (as the do_interrupt tcg op), + * and KVM to re-inject guest debug exceptions, and to + * inject a Synchronous-External-Abort. + */ +void arm_cpu_do_interrupt(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + unsigned int new_el = env->exception.target_el; + + assert(!arm_feature(env, ARM_FEATURE_M)); + + arm_log_exception(cs->exception_index); + qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(env), + new_el); + if (qemu_loglevel_mask(CPU_LOG_INT) + && !excp_is_internal(cs->exception_index)) { + qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n", + syn_get_ec(env->exception.syndrome), + env->exception.syndrome); + } + +#ifdef CONFIG_TCG + if (arm_is_psci_call(cpu, cs->exception_index)) { + arm_handle_psci_call(cpu); + qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); + return; + } + /* + * Semihosting semantics depend on the register width of the code + * that caused the exception, not the target exception level, so + * must be handled here. + */ + if (cs->exception_index == EXCP_SEMIHOST) { + handle_semihosting(cs); + return; + } +#endif /* CONFIG_TCG */ + /* + * Hooks may change global state so BQL should be held, also the + * BQL needs to be held for any modification of + * cs->interrupt_request. + */ + g_assert(qemu_mutex_iothread_locked()); + arm_call_pre_el_change_hook(cpu); + + assert(!excp_is_internal(cs->exception_index)); + if (arm_el_is_aa64(env, new_el)) { + arm_cpu_do_interrupt_aarch64(cs); + } else { + arm_cpu_do_interrupt_aarch32(cs); + } + + arm_call_el_change_hook(cpu); + + if (tcg_enabled()) { + cs->interrupt_request |= CPU_INTERRUPT_EXITTB; + } +} diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 5b32329895..a8b1efdb36 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -7,34 +7,13 @@ */ #include "qemu/osdep.h" -#include "qemu/units.h" -#include "target/arm/idau.h" -#include "trace.h" #include "cpu.h" #include "internals.h" #include "exec/gdbstub.h" #include "exec/helper-proto.h" -#include "qemu/host-utils.h" -#include "qemu/main-loop.h" -#include "qemu/bitops.h" #include "qemu/crc32c.h" -#include "qemu/qemu-print.h" -#include "exec/exec-all.h" #include /* For crc32 */ -#include "hw/irq.h" -#include "semihosting/semihost.h" -#include "sysemu/cpus.h" -#include "sysemu/cpu-timers.h" -#include "sysemu/kvm.h" -#include "sysemu/tcg.h" -#include "qemu/range.h" -#include "qapi/error.h" -#include "qemu/guest-random.h" -#ifdef CONFIG_TCG #include "arm_ldst.h" -#include "exec/cpu_ldst.h" -#include "semihosting/common-semi.h" -#endif #include "cpu-mmu.h" #include "cpregs.h" @@ -643,719 +622,6 @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx, return target_el; } -void arm_log_exception(int idx) -{ - if (qemu_loglevel_mask(CPU_LOG_INT)) { - const char *exc = NULL; - static const char * const excnames[] = { - [EXCP_UDEF] = "Undefined Instruction", - [EXCP_SWI] = "SVC", - [EXCP_PREFETCH_ABORT] = "Prefetch Abort", - [EXCP_DATA_ABORT] = "Data Abort", - [EXCP_IRQ] = "IRQ", - [EXCP_FIQ] = "FIQ", - [EXCP_BKPT] = "Breakpoint", - [EXCP_EXCEPTION_EXIT] = "QEMU v7M exception exit", - [EXCP_KERNEL_TRAP] = "QEMU intercept of kernel commpage", - [EXCP_HVC] = "Hypervisor Call", - [EXCP_HYP_TRAP] = "Hypervisor Trap", - [EXCP_SMC] = "Secure Monitor Call", - 
[EXCP_VIRQ] = "Virtual IRQ", - [EXCP_VFIQ] = "Virtual FIQ", - [EXCP_SEMIHOST] = "Semihosting call", - [EXCP_NOCP] = "v7M NOCP UsageFault", - [EXCP_INVSTATE] = "v7M INVSTATE UsageFault", - [EXCP_STKOF] = "v8M STKOF UsageFault", - [EXCP_LAZYFP] = "v7M exception during lazy FP stacking", - [EXCP_LSERR] = "v8M LSERR UsageFault", - [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault", - }; - - if (idx >= 0 && idx < ARRAY_SIZE(excnames)) { - exc = excnames[idx]; - } - if (!exc) { - exc = "unknown"; - } - qemu_log_mask(CPU_LOG_INT, "Taking exception %d [%s]\n", idx, exc); - } -} - -static void take_aarch32_exception(CPUARMState *env, int new_mode, - uint32_t mask, uint32_t offset, - uint32_t newpc) -{ - int new_el; - - /* Change the CPU state so as to actually take the exception. */ - switch_mode(env, new_mode); - - /* - * For exceptions taken to AArch32 we must clear the SS bit in both - * PSTATE and in the old-state value we save to SPSR_, so zero it now. - */ - env->pstate &= ~PSTATE_SS; - env->spsr = cpsr_read(env); - /* Clear IT bits. */ - env->condexec_bits = 0; - /* Switch to the new mode, and to the correct instruction set. */ - env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode; - - /* This must be after mode switching. */ - new_el = arm_current_el(env); - - /* Set new mode endianness */ - env->uncached_cpsr &= ~CPSR_E; - if (env->cp15.sctlr_el[new_el] & SCTLR_EE) { - env->uncached_cpsr |= CPSR_E; - } - /* J and IL must always be cleared for exception entry */ - env->uncached_cpsr &= ~(CPSR_IL | CPSR_J); - env->daif |= mask; - - if (cpu_isar_feature(aa32_ssbs, env_archcpu(env))) { - if (env->cp15.sctlr_el[new_el] & SCTLR_DSSBS_32) { - env->uncached_cpsr |= CPSR_SSBS; - } else { - env->uncached_cpsr &= ~CPSR_SSBS; - } - } - - if (new_mode == ARM_CPU_MODE_HYP) { - env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0; - env->elr_el[2] = env->regs[15]; - } else { - /* CPSR.PAN is normally preserved preserved unless... */ - if (cpu_isar_feature(aa32_pan, env_archcpu(env))) { - switch (new_el) { - case 3: - if (!arm_is_secure_below_el3(env)) { - /* ... the target is EL3, from non-secure state. */ - env->uncached_cpsr &= ~CPSR_PAN; - break; - } - /* ... the target is EL3, from secure state ... */ - /* fall through */ - case 1: - /* ... the target is EL1 and SCTLR.SPAN is 0. */ - if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPAN)) { - env->uncached_cpsr |= CPSR_PAN; - } - break; - } - } - /* - * this is a lie, as there was no c1_sys on V4T/V5, but who cares - * and we should just guard the thumb mode on V4 - */ - if (arm_feature(env, ARM_FEATURE_V4T)) { - env->thumb = - (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) != 0; - } - env->regs[14] = env->regs[15] + offset; - } - env->regs[15] = newpc; - if (tcg_enabled()) { - arm_rebuild_hflags(env); - } -} - -static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs) -{ - /* - * Handle exception entry to Hyp mode; this is sufficiently - * different to entry to other AArch32 modes that we handle it - * separately here. - * - * The vector table entry used is always the 0x14 Hyp mode entry point, - * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp. - * The offset applied to the preferred return address is always zero - * (see DDI0487C.a section G1.12.3). - * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values. 
- */ - uint32_t addr, mask; - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - - switch (cs->exception_index) { - case EXCP_UDEF: - addr = 0x04; - break; - case EXCP_SWI: - addr = 0x14; - break; - case EXCP_BKPT: - /* Fall through to prefetch abort. */ - case EXCP_PREFETCH_ABORT: - env->cp15.ifar_s = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n", - (uint32_t)env->exception.vaddress); - addr = 0x0c; - break; - case EXCP_DATA_ABORT: - env->cp15.dfar_s = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n", - (uint32_t)env->exception.vaddress); - addr = 0x10; - break; - case EXCP_IRQ: - addr = 0x18; - break; - case EXCP_FIQ: - addr = 0x1c; - break; - case EXCP_HVC: - addr = 0x08; - break; - case EXCP_HYP_TRAP: - addr = 0x14; - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - } - - if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) { - if (!arm_feature(env, ARM_FEATURE_V8)) { - /* - * QEMU syndrome values are v8-style. v7 has the IL bit - * UNK/SBZP for "field not valid" cases, where v8 uses RES1. - * If this is a v7 CPU, squash the IL bit in those cases. - */ - if (cs->exception_index == EXCP_PREFETCH_ABORT || - (cs->exception_index == EXCP_DATA_ABORT && - !(env->exception.syndrome & ARM_EL_ISV)) || - syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) { - env->exception.syndrome &= ~ARM_EL_IL; - } - } - env->cp15.esr_el[2] = env->exception.syndrome; - } - - if (arm_current_el(env) != 2 && addr < 0x14) { - addr = 0x14; - } - - mask = 0; - if (!(env->cp15.scr_el3 & SCR_EA)) { - mask |= CPSR_A; - } - if (!(env->cp15.scr_el3 & SCR_IRQ)) { - mask |= CPSR_I; - } - if (!(env->cp15.scr_el3 & SCR_FIQ)) { - mask |= CPSR_F; - } - - addr += env->cp15.hvbar; - - take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr); -} - -static void arm_cpu_do_interrupt_aarch32(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - uint32_t addr; - uint32_t mask; - int new_mode; - uint32_t offset; - uint32_t moe; - - /* If this is a debug exception we must update the DBGDSCR.MOE bits */ - switch (syn_get_ec(env->exception.syndrome)) { - case EC_BREAKPOINT: - case EC_BREAKPOINT_SAME_EL: - moe = 1; - break; - case EC_WATCHPOINT: - case EC_WATCHPOINT_SAME_EL: - moe = 10; - break; - case EC_AA32_BKPT: - moe = 3; - break; - case EC_VECTORCATCH: - moe = 5; - break; - default: - moe = 0; - break; - } - - if (moe) { - env->cp15.mdscr_el1 = deposit64(env->cp15.mdscr_el1, 2, 4, moe); - } - - if (env->exception.target_el == 2) { - arm_cpu_do_interrupt_aarch32_hyp(cs); - return; - } - - switch (cs->exception_index) { - case EXCP_UDEF: - new_mode = ARM_CPU_MODE_UND; - addr = 0x04; - mask = CPSR_I; - if (env->thumb) { - offset = 2; - } else { - offset = 4; - } - break; - case EXCP_SWI: - new_mode = ARM_CPU_MODE_SVC; - addr = 0x08; - mask = CPSR_I; - /* The PC already points to the next instruction. */ - offset = 0; - break; - case EXCP_BKPT: - /* Fall through to prefetch abort. 
*/ - case EXCP_PREFETCH_ABORT: - A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr); - A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress); - qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n", - env->exception.fsr, (uint32_t)env->exception.vaddress); - new_mode = ARM_CPU_MODE_ABT; - addr = 0x0c; - mask = CPSR_A | CPSR_I; - offset = 4; - break; - case EXCP_DATA_ABORT: - A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr); - A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress); - qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n", - env->exception.fsr, - (uint32_t)env->exception.vaddress); - new_mode = ARM_CPU_MODE_ABT; - addr = 0x10; - mask = CPSR_A | CPSR_I; - offset = 8; - break; - case EXCP_IRQ: - new_mode = ARM_CPU_MODE_IRQ; - addr = 0x18; - /* Disable IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I; - offset = 4; - if (env->cp15.scr_el3 & SCR_IRQ) { - /* IRQ routed to monitor mode */ - new_mode = ARM_CPU_MODE_MON; - mask |= CPSR_F; - } - break; - case EXCP_FIQ: - new_mode = ARM_CPU_MODE_FIQ; - addr = 0x1c; - /* Disable FIQ, IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I | CPSR_F; - if (env->cp15.scr_el3 & SCR_FIQ) { - /* FIQ routed to monitor mode */ - new_mode = ARM_CPU_MODE_MON; - } - offset = 4; - break; - case EXCP_VIRQ: - new_mode = ARM_CPU_MODE_IRQ; - addr = 0x18; - /* Disable IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I; - offset = 4; - break; - case EXCP_VFIQ: - new_mode = ARM_CPU_MODE_FIQ; - addr = 0x1c; - /* Disable FIQ, IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I | CPSR_F; - offset = 4; - break; - case EXCP_SMC: - new_mode = ARM_CPU_MODE_MON; - addr = 0x08; - mask = CPSR_A | CPSR_I | CPSR_F; - offset = 0; - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - return; /* Never happens. Keep compiler happy. */ - } - - if (new_mode == ARM_CPU_MODE_MON) { - addr += env->cp15.mvbar; - } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) { - /* High vectors. When enabled, base address cannot be remapped. */ - addr += 0xffff0000; - } else { - /* - * ARM v7 architectures provide a vector base address register to remap - * the interrupt vector table. - * This register is only followed in non-monitor mode, and is banked. - * Note: only bits 31:5 are valid. - */ - addr += A32_BANKED_CURRENT_REG_GET(env, vbar); - } - - if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON) { - env->cp15.scr_el3 &= ~SCR_NS; - } - - take_aarch32_exception(env, new_mode, mask, offset, addr); -} - -static int aarch64_regnum(CPUARMState *env, int aarch32_reg) -{ - /* - * Return the register number of the AArch64 view of the AArch32 - * register @aarch32_reg. The CPUARMState CPSR is assumed to still - * be that of the AArch32 mode the exception came from. - */ - int mode = env->uncached_cpsr & CPSR_M; - - switch (aarch32_reg) { - case 0 ... 7: - return aarch32_reg; - case 8 ... 12: - return mode == ARM_CPU_MODE_FIQ ? 
aarch32_reg + 16 : aarch32_reg; - case 13: - switch (mode) { - case ARM_CPU_MODE_USR: - case ARM_CPU_MODE_SYS: - return 13; - case ARM_CPU_MODE_HYP: - return 15; - case ARM_CPU_MODE_IRQ: - return 17; - case ARM_CPU_MODE_SVC: - return 19; - case ARM_CPU_MODE_ABT: - return 21; - case ARM_CPU_MODE_UND: - return 23; - case ARM_CPU_MODE_FIQ: - return 29; - default: - g_assert_not_reached(); - } - case 14: - switch (mode) { - case ARM_CPU_MODE_USR: - case ARM_CPU_MODE_SYS: - case ARM_CPU_MODE_HYP: - return 14; - case ARM_CPU_MODE_IRQ: - return 16; - case ARM_CPU_MODE_SVC: - return 18; - case ARM_CPU_MODE_ABT: - return 20; - case ARM_CPU_MODE_UND: - return 22; - case ARM_CPU_MODE_FIQ: - return 30; - default: - g_assert_not_reached(); - } - case 15: - return 31; - default: - g_assert_not_reached(); - } -} - -static uint32_t cpsr_read_for_spsr_elx(CPUARMState *env) -{ - uint32_t ret = cpsr_read(env); - - /* Move DIT to the correct location for SPSR_ELx */ - if (ret & CPSR_DIT) { - ret &= ~CPSR_DIT; - ret |= PSTATE_DIT; - } - /* Merge PSTATE.SS into SPSR_ELx */ - ret |= env->pstate & PSTATE_SS; - - return ret; -} - -/* Handle exception entry to a target EL which is using AArch64 */ -static void arm_cpu_do_interrupt_aarch64(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - unsigned int new_el = env->exception.target_el; - target_ulong addr = env->cp15.vbar_el[new_el]; - unsigned int new_mode = aarch64_pstate_mode(new_el, true); - unsigned int old_mode; - unsigned int cur_el = arm_current_el(env); - int rt; - - /* - * Note that new_el can never be 0. If cur_el is 0, then - * el0_a64 is is_a64(), else el0_a64 is ignored. - */ - aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); - - if (cur_el < new_el) { - /* - * Entry vector offset depends on whether the implemented EL - * immediately lower than the target level is using AArch32 or AArch64 - */ - bool is_aa64; - uint64_t hcr; - - switch (new_el) { - case 3: - is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0; - break; - case 2: - hcr = arm_hcr_el2_eff(env); - if ((hcr & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { - is_aa64 = (hcr & HCR_RW) != 0; - break; - } - /* fall through */ - case 1: - is_aa64 = is_a64(env); - break; - default: - g_assert_not_reached(); - } - - if (is_aa64) { - addr += 0x400; - } else { - addr += 0x600; - } - } else if (pstate_read(env) & PSTATE_SP) { - addr += 0x200; - } - - switch (cs->exception_index) { - case EXCP_PREFETCH_ABORT: - case EXCP_DATA_ABORT: - env->cp15.far_el[new_el] = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n", - env->cp15.far_el[new_el]); - /* fall through */ - case EXCP_BKPT: - case EXCP_UDEF: - case EXCP_SWI: - case EXCP_HVC: - case EXCP_HYP_TRAP: - case EXCP_SMC: - switch (syn_get_ec(env->exception.syndrome)) { - case EC_ADVSIMDFPACCESSTRAP: - /* - * QEMU internal FP/SIMD syndromes from AArch32 include the - * TA and coproc fields which are only exposed if the exception - * is taken to AArch32 Hyp mode. Mask them out to get a valid - * AArch64 format syndrome. - */ - env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20); - break; - case EC_CP14RTTRAP: - case EC_CP15RTTRAP: - case EC_CP14DTTRAP: - /* - * For a trap on AArch32 MRC/MCR/LDC/STC the Rt field is currently - * the raw register field from the insn; when taking this to - * AArch64 we must convert it to the AArch64 view of the register - * number. Notice that we read a 4-bit AArch32 register number and - * write back a 5-bit AArch64 one. 
- */ - rt = extract32(env->exception.syndrome, 5, 4); - rt = aarch64_regnum(env, rt); - env->exception.syndrome = deposit32(env->exception.syndrome, - 5, 5, rt); - break; - case EC_CP15RRTTRAP: - case EC_CP14RRTTRAP: - /* Similarly for MRRC/MCRR traps for Rt and Rt2 fields */ - rt = extract32(env->exception.syndrome, 5, 4); - rt = aarch64_regnum(env, rt); - env->exception.syndrome = deposit32(env->exception.syndrome, - 5, 5, rt); - rt = extract32(env->exception.syndrome, 10, 4); - rt = aarch64_regnum(env, rt); - env->exception.syndrome = deposit32(env->exception.syndrome, - 10, 5, rt); - break; - } - env->cp15.esr_el[new_el] = env->exception.syndrome; - break; - case EXCP_IRQ: - case EXCP_VIRQ: - addr += 0x80; - break; - case EXCP_FIQ: - case EXCP_VFIQ: - addr += 0x100; - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - } - - if (is_a64(env)) { - old_mode = pstate_read(env); - aarch64_save_sp(env, arm_current_el(env)); - env->elr_el[new_el] = env->pc; - } else { - old_mode = cpsr_read_for_spsr_elx(env); - env->elr_el[new_el] = env->regs[15]; - - aarch64_sync_32_to_64(env); - - env->condexec_bits = 0; - } - env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode; - - qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n", - env->elr_el[new_el]); - - if (cpu_isar_feature(aa64_pan, cpu)) { - /* The value of PSTATE.PAN is normally preserved, except when ... */ - new_mode |= old_mode & PSTATE_PAN; - switch (new_el) { - case 2: - /* ... the target is EL2 with HCR_EL2.{E2H,TGE} == '11' ... */ - if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) - != (HCR_E2H | HCR_TGE)) { - break; - } - /* fall through */ - case 1: - /* ... the target is EL1 ... */ - /* ... and SCTLR_ELx.SPAN == 0, then set to 1. */ - if ((env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0) { - new_mode |= PSTATE_PAN; - } - break; - } - } - if (cpu_isar_feature(aa64_mte, cpu)) { - new_mode |= PSTATE_TCO; - } - - if (cpu_isar_feature(aa64_ssbs, cpu)) { - if (env->cp15.sctlr_el[new_el] & SCTLR_DSSBS_64) { - new_mode |= PSTATE_SSBS; - } else { - new_mode &= ~PSTATE_SSBS; - } - } - - pstate_write(env, PSTATE_DAIF | new_mode); - env->aarch64 = 1; - aarch64_restore_sp(env, new_el); - - if (tcg_enabled()) { - /* pstate already written, so we can use arm_rebuild_hflags here */ - arm_rebuild_hflags(env); - } - - env->pc = addr; - - qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n", - new_el, env->pc, pstate_read(env)); -} - -/* - * Do semihosting call and set the appropriate return value. All the - * permission and validity checks have been done at translate time. - * - * We only see semihosting exceptions in TCG only as they are not - * trapped to the hypervisor in KVM. - */ -#ifdef CONFIG_TCG -static void handle_semihosting(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - - if (is_a64(env)) { - qemu_log_mask(CPU_LOG_INT, - "...handling as semihosting call 0x%" PRIx64 "\n", - env->xregs[0]); - env->xregs[0] = do_common_semihosting(cs); - env->pc += 4; - } else { - qemu_log_mask(CPU_LOG_INT, - "...handling as semihosting call 0x%x\n", - env->regs[0]); - env->regs[0] = do_common_semihosting(cs); - env->regs[15] += env->thumb ? 2 : 4; - } -} -#endif - -/* - * Handle a CPU exception for A and R profile CPUs. - * Do any appropriate logging, handle PSCI calls, and then hand off - * to the AArch64-entry or AArch32-entry function depending on the - * target exception level's register width. 
- * - * Note: this is used for both TCG (as the do_interrupt tcg op), - * and KVM to re-inject guest debug exceptions, and to - * inject a Synchronous-External-Abort. - */ -void arm_cpu_do_interrupt(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - unsigned int new_el = env->exception.target_el; - - assert(!arm_feature(env, ARM_FEATURE_M)); - - arm_log_exception(cs->exception_index); - qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(env), - new_el); - if (qemu_loglevel_mask(CPU_LOG_INT) - && !excp_is_internal(cs->exception_index)) { - qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n", - syn_get_ec(env->exception.syndrome), - env->exception.syndrome); - } - -#ifdef CONFIG_TCG - if (arm_is_psci_call(cpu, cs->exception_index)) { - arm_handle_psci_call(cpu); - qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); - return; - } - - /* - * Semihosting semantics depend on the register width of the code - * that caused the exception, not the target exception level, so - * must be handled here. - */ - if (cs->exception_index == EXCP_SEMIHOST) { - handle_semihosting(cs); - return; - } -#endif - - /* - * Hooks may change global state so BQL should be held, also the - * BQL needs to be held for any modification of - * cs->interrupt_request. - */ - g_assert(qemu_mutex_iothread_locked()); - - arm_call_pre_el_change_hook(cpu); - - assert(!excp_is_internal(cs->exception_index)); - if (arm_el_is_aa64(env, new_el)) { - arm_cpu_do_interrupt_aarch64(cs); - } else { - arm_cpu_do_interrupt_aarch32(cs); - } - - arm_call_el_change_hook(cpu); - - if (!kvm_enabled()) { - cs->interrupt_request |= CPU_INTERRUPT_EXITTB; - } -} #endif /* !CONFIG_USER_ONLY */ /* Returns true if the stage 1 translation regime is using LPAE format page diff --git a/target/arm/tcg/sysemu/tcg-cpu.c b/target/arm/tcg/sysemu/tcg-cpu.c new file mode 100644 index 0000000000..af9d3905d7 --- /dev/null +++ b/target/arm/tcg/sysemu/tcg-cpu.c @@ -0,0 +1,73 @@ +/* + * QEMU ARM TCG CPU (sysemu code) + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qemu/qemu-print.h" +#include "qemu-common.h" +#include "target/arm/idau.h" +#include "qemu/module.h" +#include "qapi/error.h" +#include "qapi/visitor.h" +#include "cpu.h" +#include "hw/core/tcg-cpu-ops.h" +#include "semihosting/common-semi.h" +#include "cpregs.h" +#include "internals.h" +#include "exec/exec-all.h" +#include "hw/qdev-properties.h" +#if !defined(CONFIG_USER_ONLY) +#include "hw/loader.h" +#include "hw/boards.h" +#endif +#include "sysemu/sysemu.h" +#include "sysemu/tcg.h" +#include "sysemu/hw_accel.h" +#include "kvm_arm.h" +#include "disas/capstone.h" +#include "fpu/softfloat.h" +#include "cpu-mmu.h" +#include "tcg/tcg-cpu.h" + +/* + * Do semihosting call and set the appropriate return value. All the + * permission and validity checks have been done at translate time. 
+ * + * We only see semihosting exceptions in TCG only as they are not + * trapped to the hypervisor in KVM. + */ +void handle_semihosting(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + if (is_a64(env)) { + qemu_log_mask(CPU_LOG_INT, + "...handling as semihosting call 0x%" PRIx64 "\n", + env->xregs[0]); + env->xregs[0] = do_common_semihosting(cs); + env->pc += 4; + } else { + qemu_log_mask(CPU_LOG_INT, + "...handling as semihosting call 0x%x\n", + env->regs[0]); + env->regs[0] = do_common_semihosting(cs); + env->regs[15] += env->thumb ? 2 : 4; + } +} diff --git a/target/arm/tcg/sysemu/meson.build b/target/arm/tcg/sysemu/meson.build index 26014851bd..56e4b5ccea 100644 --- a/target/arm/tcg/sysemu/meson.build +++ b/target/arm/tcg/sysemu/meson.build @@ -2,5 +2,6 @@ arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files( 'debug_helper.c', 'm_helper.c', 'mte_helper.c', + 'tcg-cpu.c', 'tlb_helper.c', ))
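As background to the handle_semihosting() implementation above (this is the Arm semihosting convention, not something introduced by this patch): on AArch64 the guest requests a semihosting operation with the HLT #0xF000 instruction, passing the operation number in W0 and a pointer to the parameter block in X1, and it receives the result back in X0. That is why the code reads xregs[0] for the log message, overwrites it with the return value of do_common_semihosting(), and then advances the PC past the 4-byte trap instruction (2 or 4 bytes on the AArch32 side, depending on Thumb). A guest-side wrapper might look like the following sketch; the function name is invented and the snippet is illustrative only:

static inline uint64_t semihosting_call(uint64_t op, void *params)
{
    /* W0/X0: operation number in, result out; X1: parameter block */
    register uint64_t x0 __asm__("x0") = op;
    register void *x1 __asm__("x1") = params;

    /*
     * HLT #0xF000 is the AArch64 semihosting trap that QEMU raises as
     * EXCP_SEMIHOST when semihosting is enabled.
     */
    __asm__ volatile("hlt #0xf000"
                     : "+r"(x0)
                     : "r"(x1)
                     : "memory");
    return x0;
}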
From patchwork Fri Jun 4 15:52:25 2021 From: Alex Bennée To: qemu-devel@nongnu.org Subject: [PATCH v16 52/99] target/arm: rename handle_semihosting to tcg_handle_semihosting Date: Fri, 4 Jun 2021 16:52:25 +0100 Message-Id: <20210604155312.15902-53-alex.bennee@linaro.org> In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> Cc: Peter Maydell, qemu-arm@nongnu.org, Alex Bennée, Claudio Fontana
From: Claudio Fontana make it clearer from the name that this is a tcg-only function. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/tcg/tcg-cpu.h | 2 +- target/arm/cpu-sysemu.c | 2 +- target/arm/tcg/sysemu/tcg-cpu.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/tcg/tcg-cpu.h b/target/arm/tcg/tcg-cpu.h index 0ee8ba073b..7e62f92d16 100644 --- a/target/arm/tcg/tcg-cpu.h +++ b/target/arm/tcg/tcg-cpu.h @@ -24,7 +24,7 @@ #ifndef CONFIG_USER_ONLY /* Do semihosting call and set the appropriate return value. */ -void handle_semihosting(CPUState *cs); +void tcg_handle_semihosting(CPUState *cs); #endif /* !CONFIG_USER_ONLY */ diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 0e872b2e55..7569241339 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -1153,7 +1153,7 @@ void arm_cpu_do_interrupt(CPUState *cs) * must be handled here. */ if (cs->exception_index == EXCP_SEMIHOST) { - handle_semihosting(cs); + tcg_handle_semihosting(cs); return; } #endif /* CONFIG_TCG */ diff --git a/target/arm/tcg/sysemu/tcg-cpu.c b/target/arm/tcg/sysemu/tcg-cpu.c index af9d3905d7..2c395f47e7 100644 --- a/target/arm/tcg/sysemu/tcg-cpu.c +++ b/target/arm/tcg/sysemu/tcg-cpu.c @@ -52,7 +52,7 @@ * We only see semihosting exceptions in TCG only as they are not * trapped to the hypervisor in KVM.
*/ -void handle_semihosting(CPUState *cs) +void tcg_handle_semihosting(CPUState *cs) { ARMCPU *cpu = ARM_CPU(cs); CPUARMState *env = &cpu->env;
From patchwork Fri Jun 4 15:52:26 2021 From: Alex Bennée To: qemu-devel@nongnu.org Subject: [PATCH v16 53/99] target/arm: replace CONFIG_TCG with tcg_enabled Date: Fri, 4 Jun 2021 16:52:26 +0100 Message-Id: <20210604155312.15902-54-alex.bennee@linaro.org> In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> Cc: Peter Maydell, qemu-arm@nongnu.org, Alex Bennée, Claudio Fontana
From: Claudio Fontana for "all" builds (tcg + kvm), we want to avoid doing the psci and semihosting checks if tcg is built-in, but not enabled. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-sysemu.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 7569241339..e83d55b9f7 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -1141,22 +1141,22 @@ void arm_cpu_do_interrupt(CPUState *cs) env->exception.syndrome); } -#ifdef CONFIG_TCG - if (arm_is_psci_call(cpu, cs->exception_index)) { - arm_handle_psci_call(cpu); - qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); - return; - } - /* - * Semihosting semantics depend on the register width of the code - * that caused the exception, not the target exception level, so - * must be handled here. - */ - if (cs->exception_index == EXCP_SEMIHOST) { - tcg_handle_semihosting(cs); - return; + if (tcg_enabled()) { + if (arm_is_psci_call(cpu, cs->exception_index)) { + arm_handle_psci_call(cpu); + qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); + return; + } + /* + * Semihosting semantics depend on the register width of the code + * that caused the exception, not the target exception level, so + * must be handled here.
+ */ + if (cs->exception_index == EXCP_SEMIHOST) { + tcg_handle_semihosting(cs); + return; + } } -#endif /* CONFIG_TCG */ /* * Hooks may change global state so BQL should be held, also the * BQL needs to be held for any modification of * cs->interrupt_request.
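The hunk above captures the point of the commit message: #ifdef CONFIG_TCG is a build-time test that holds for any binary compiled with TCG support, even when that binary is launched with -accel kvm, whereas tcg_enabled() is a run-time test that is only true when TCG is the accelerator actually driving the guest (and it is defined to 0 in builds without CONFIG_TCG, so the branch still compiles away there). A minimal sketch of the difference, with invented function names standing in for the PSCI/semihosting handling shown above:

static void check_psci_and_semihosting(CPUState *cs)
{
    /* stand-in for the arm_is_psci_call()/EXCP_SEMIHOST handling */
    (void)cs;
}

static void guest_exit_old_style(CPUState *cs)
{
#ifdef CONFIG_TCG
    /* entered by any binary *built* with TCG, even under -accel kvm */
    check_psci_and_semihosting(cs);
#endif
}

static void guest_exit_new_style(CPUState *cs)
{
    if (tcg_enabled()) {
        /* entered only when TCG is the accelerator chosen at run time */
        check_psci_and_semihosting(cs);
    }
}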
From patchwork Fri Jun 4 15:52:27 2021 From: Alex Bennée To: qemu-devel@nongnu.org Subject: [PATCH v16 54/99] target/arm: move TCGCPUOps to tcg/tcg-cpu.c Date: Fri, 4 Jun 2021 16:52:27 +0100 Message-Id: <20210604155312.15902-55-alex.bennee@linaro.org> In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> Cc: Peter Maydell, qemu-arm@nongnu.org, Alex Bennée, Claudio Fontana
From: Claudio Fontana move the TCGCPUOps interface to tcg/tcg-cpu.c in preparation for the addition of the TCG accel-cpu class. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu.h | 1 - target/arm/internals.h | 5 - target/arm/tcg/tcg-cpu.h | 6 + target/arm/cpu-sysemu.c | 4 + target/arm/cpu.c | 210 +--------------------------------- target/arm/cpu_tcg.c | 2 +- target/arm/tcg/helper.c | 1 + target/arm/tcg/tcg-cpu.c | 229 +++++++++++++++++++++++++++++++++++++ target/arm/tcg/meson.build | 1 + 9 files changed, 244 insertions(+), 215 deletions(-) create mode 100644 target/arm/tcg/tcg-cpu.c -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index c5ead3365f..e528873ed3 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -1031,7 +1031,6 @@ extern const VMStateDescription vmstate_arm_cpu; void arm_cpu_do_interrupt(CPUState *cpu); void arm_v7m_cpu_do_interrupt(CPUState *cpu); -bool arm_cpu_exec_interrupt(CPUState *cpu, int int_req); int arm_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg); int arm_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg); diff --git a/target/arm/internals.h b/target/arm/internals.h index c41f91f1c0..227a80ec21 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -173,11 +173,6 @@ static inline int r14_bank_number(int mode) void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu); void arm_translate_init(void); -#ifdef CONFIG_TCG -void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb); -#endif /* CONFIG_TCG */ - - enum arm_fprounding { FPROUNDING_TIEEVEN, FPROUNDING_POSINF, diff --git a/target/arm/tcg/tcg-cpu.h b/target/arm/tcg/tcg-cpu.h index 7e62f92d16..d93c6a6749 100644 --- a/target/arm/tcg/tcg-cpu.h +++ b/target/arm/tcg/tcg-cpu.h @@ -21,6 +21,12 @@ #define ARM_TCG_CPU_H #include "cpu.h" +#include "hw/core/tcg-cpu-ops.h" + +void arm_cpu_synchronize_from_tb(CPUState *cs, + const TranslationBlock *tb); + +extern struct TCGCPUOps arm_tcg_ops; #ifndef CONFIG_USER_ONLY /* Do semihosting call and set the appropriate return value.
*/ diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index e83d55b9f7..c09c89eeac 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -28,6 +28,10 @@ #include "sysemu/tcg.h" #include "tcg/tcg-cpu.h" +#ifdef CONFIG_TCG +#include "tcg/tcg-cpu.h" +#endif /* CONFIG_TCG */ + void arm_cpu_set_irq(void *opaque, int irq, int level) { ARMCPU *cpu = opaque; diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 97d562bbd5..192700fe8f 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -27,7 +27,7 @@ #include "cpu.h" #include "cpregs.h" #ifdef CONFIG_TCG -#include "hw/core/tcg-cpu-ops.h" +#include "tcg/tcg-cpu.h" #endif /* CONFIG_TCG */ #include "cpu32.h" #include "internals.h" @@ -58,25 +58,6 @@ static void arm_cpu_set_pc(CPUState *cs, vaddr value) } } -#ifdef CONFIG_TCG -void arm_cpu_synchronize_from_tb(CPUState *cs, - const TranslationBlock *tb) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - - /* - * It's OK to look at env for the current mode here, because it's - * never possible for an AArch64 TB to chain to an AArch32 TB. - */ - if (is_a64(env)) { - env->pc = tb->pc; - } else { - env->regs[15] = tb->pc; - } -} -#endif /* CONFIG_TCG */ - static bool arm_cpu_has_work(CPUState *cs) { ARMCPU *cpu = ARM_CPU(cs); @@ -442,175 +423,6 @@ static void arm_cpu_reset(DeviceState *dev) } } -static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx, - unsigned int target_el, - unsigned int cur_el, bool secure, - uint64_t hcr_el2) -{ - CPUARMState *env = cs->env_ptr; - bool pstate_unmasked; - bool unmasked = false; - - /* - * Don't take exceptions if they target a lower EL. - * This check should catch any exceptions that would not be taken - * but left pending. - */ - if (cur_el > target_el) { - return false; - } - - switch (excp_idx) { - case EXCP_FIQ: - pstate_unmasked = !(env->daif & PSTATE_F); - break; - - case EXCP_IRQ: - pstate_unmasked = !(env->daif & PSTATE_I); - break; - - case EXCP_VFIQ: - if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) { - /* VFIQs are only taken when hypervized. */ - return false; - } - return !(env->daif & PSTATE_F); - case EXCP_VIRQ: - if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) { - /* VIRQs are only taken when hypervized. */ - return false; - } - return !(env->daif & PSTATE_I); - default: - g_assert_not_reached(); - } - - /* - * Use the target EL, current execution state and SCR/HCR settings to - * determine whether the corresponding CPSR bit is used to mask the - * interrupt. - */ - if ((target_el > cur_el) && (target_el != 1)) { - /* Exceptions targeting a higher EL may not be maskable */ - if (arm_feature(env, ARM_FEATURE_AARCH64)) { - /* - * 64-bit masking rules are simple: exceptions to EL3 - * can't be masked, and exceptions to EL2 can only be - * masked from Secure state. The HCR and SCR settings - * don't affect the masking logic, only the interrupt routing. - */ - if (target_el == 3 || !secure || (env->cp15.scr_el3 & SCR_EEL2)) { - unmasked = true; - } - } else { - /* - * The old 32-bit-only environment has a more complicated - * masking setup. HCR and SCR bits not only affect interrupt - * routing but also change the behaviour of masking. - */ - bool hcr, scr; - - switch (excp_idx) { - case EXCP_FIQ: - /* - * If FIQs are routed to EL3 or EL2 then there are cases where - * we override the CPSR.F in determining if the exception is - * masked or not. If neither of these are set then we fall back - * to the CPSR.F setting otherwise we further assess the state - * below. 
- */ - hcr = hcr_el2 & HCR_FMO; - scr = (env->cp15.scr_el3 & SCR_FIQ); - - /* - * When EL3 is 32-bit, the SCR.FW bit controls whether the - * CPSR.F bit masks FIQ interrupts when taken in non-secure - * state. If SCR.FW is set then FIQs can be masked by CPSR.F - * when non-secure but only when FIQs are only routed to EL3. - */ - scr = scr && !((env->cp15.scr_el3 & SCR_FW) && !hcr); - break; - case EXCP_IRQ: - /* - * When EL3 execution state is 32-bit, if HCR.IMO is set then - * we may override the CPSR.I masking when in non-secure state. - * The SCR.IRQ setting has already been taken into consideration - * when setting the target EL, so it does not have a further - * affect here. - */ - hcr = hcr_el2 & HCR_IMO; - scr = false; - break; - default: - g_assert_not_reached(); - } - - if ((scr || hcr) && !secure) { - unmasked = true; - } - } - } - - /* - * The PSTATE bits only mask the interrupt if we have not overriden the - * ability above. - */ - return unmasked || pstate_unmasked; -} - -bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request) -{ - CPUClass *cc = CPU_GET_CLASS(cs); - CPUARMState *env = cs->env_ptr; - uint32_t cur_el = arm_current_el(env); - bool secure = arm_is_secure(env); - uint64_t hcr_el2 = arm_hcr_el2_eff(env); - uint32_t target_el; - uint32_t excp_idx; - - /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */ - - if (interrupt_request & CPU_INTERRUPT_FIQ) { - excp_idx = EXCP_FIQ; - target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure); - if (arm_excp_unmasked(cs, excp_idx, target_el, - cur_el, secure, hcr_el2)) { - goto found; - } - } - if (interrupt_request & CPU_INTERRUPT_HARD) { - excp_idx = EXCP_IRQ; - target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure); - if (arm_excp_unmasked(cs, excp_idx, target_el, - cur_el, secure, hcr_el2)) { - goto found; - } - } - if (interrupt_request & CPU_INTERRUPT_VIRQ) { - excp_idx = EXCP_VIRQ; - target_el = 1; - if (arm_excp_unmasked(cs, excp_idx, target_el, - cur_el, secure, hcr_el2)) { - goto found; - } - } - if (interrupt_request & CPU_INTERRUPT_VFIQ) { - excp_idx = EXCP_VFIQ; - target_el = 1; - if (arm_excp_unmasked(cs, excp_idx, target_el, - cur_el, secure, hcr_el2)) { - goto found; - } - } - return false; - - found: - cs->exception_index = excp_idx; - env->exception.target_el = target_el; - cc->tcg_ops->do_interrupt(cs); - return true; -} - void arm_cpu_update_virq(ARMCPU *cpu) { /* @@ -1015,6 +827,7 @@ static void arm_cpu_finalizefn(Object *obj) QLIST_REMOVE(hook, node); g_free(hook); } + #ifndef CONFIG_USER_ONLY if (cpu->pmu_timer) { timer_free(cpu->pmu_timer); @@ -1644,25 +1457,6 @@ static const struct SysemuCPUOps arm_sysemu_ops = { .legacy_vmsd = &vmstate_arm_cpu, }; #endif - -#ifdef CONFIG_TCG -static const struct TCGCPUOps arm_tcg_ops = { - .initialize = arm_translate_init, - .synchronize_from_tb = arm_cpu_synchronize_from_tb, - .cpu_exec_interrupt = arm_cpu_exec_interrupt, - .tlb_fill = arm_cpu_tlb_fill, - .debug_excp_handler = arm_debug_excp_handler, - -#if !defined(CONFIG_USER_ONLY) - .do_interrupt = arm_cpu_do_interrupt, - .do_transaction_failed = arm_cpu_do_transaction_failed, - .do_unaligned_access = arm_cpu_do_unaligned_access, - .adjust_watchpoint_address = arm_adjust_watchpoint_address, - .debug_check_watchpoint = arm_debug_check_watchpoint, -#endif /* !CONFIG_USER_ONLY */ -}; -#endif /* CONFIG_TCG */ - static void arm_cpu_class_init(ObjectClass *oc, void *data) { ARMCPUClass *acc = ARM_CPU_CLASS(oc); diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c index 
fe422498c7..4606ad8436 100644 --- a/target/arm/cpu_tcg.c +++ b/target/arm/cpu_tcg.c @@ -11,7 +11,7 @@ #include "qemu/osdep.h" #include "cpu.h" #ifdef CONFIG_TCG -#include "hw/core/tcg-cpu-ops.h" +#include "tcg/tcg-cpu.h" #endif /* CONFIG_TCG */ #include "internals.h" #include "target/arm/idau.h" diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index a8b1efdb36..38cc7c6a3d 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -16,6 +16,7 @@ #include "arm_ldst.h" #include "cpu-mmu.h" #include "cpregs.h" +#include "tcg-cpu.h" static int vfp_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg) { diff --git a/target/arm/tcg/tcg-cpu.c b/target/arm/tcg/tcg-cpu.c new file mode 100644 index 0000000000..9fd996d908 --- /dev/null +++ b/target/arm/tcg/tcg-cpu.c @@ -0,0 +1,229 @@ +/* + * QEMU ARM CPU + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "cpu.h" +#include "tcg-cpu.h" +#include "hw/core/tcg-cpu-ops.h" +#include "cpregs.h" +#include "internals.h" +#include "exec/exec-all.h" + +void arm_cpu_synchronize_from_tb(CPUState *cs, + const TranslationBlock *tb) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + /* + * It's OK to look at env for the current mode here, because it's + * never possible for an AArch64 TB to chain to an AArch32 TB. + */ + if (is_a64(env)) { + env->pc = tb->pc; + } else { + env->regs[15] = tb->pc; + } +} + +static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx, + unsigned int target_el, + unsigned int cur_el, bool secure, + uint64_t hcr_el2) +{ + CPUARMState *env = cs->env_ptr; + bool pstate_unmasked; + bool unmasked = false; + + /* + * Don't take exceptions if they target a lower EL. + * This check should catch any exceptions that would not be taken + * but left pending. + */ + if (cur_el > target_el) { + return false; + } + + switch (excp_idx) { + case EXCP_FIQ: + pstate_unmasked = !(env->daif & PSTATE_F); + break; + + case EXCP_IRQ: + pstate_unmasked = !(env->daif & PSTATE_I); + break; + + case EXCP_VFIQ: + if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) { + /* VFIQs are only taken when hypervized. */ + return false; + } + return !(env->daif & PSTATE_F); + case EXCP_VIRQ: + if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) { + /* VIRQs are only taken when hypervized. */ + return false; + } + return !(env->daif & PSTATE_I); + default: + g_assert_not_reached(); + } + + /* + * Use the target EL, current execution state and SCR/HCR settings to + * determine whether the corresponding CPSR bit is used to mask the + * interrupt. + */ + if ((target_el > cur_el) && (target_el != 1)) { + /* Exceptions targeting a higher EL may not be maskable */ + if (arm_feature(env, ARM_FEATURE_AARCH64)) { + /* + * 64-bit masking rules are simple: exceptions to EL3 + * can't be masked, and exceptions to EL2 can only be + * masked from Secure state. 
The HCR and SCR settings + * don't affect the masking logic, only the interrupt routing. + */ + if (target_el == 3 || !secure || (env->cp15.scr_el3 & SCR_EEL2)) { + unmasked = true; + } + } else { + /* + * The old 32-bit-only environment has a more complicated + * masking setup. HCR and SCR bits not only affect interrupt + * routing but also change the behaviour of masking. + */ + bool hcr, scr; + + switch (excp_idx) { + case EXCP_FIQ: + /* + * If FIQs are routed to EL3 or EL2 then there are cases where + * we override the CPSR.F in determining if the exception is + * masked or not. If neither of these are set then we fall back + * to the CPSR.F setting otherwise we further assess the state + * below. + */ + hcr = hcr_el2 & HCR_FMO; + scr = (env->cp15.scr_el3 & SCR_FIQ); + + /* + * When EL3 is 32-bit, the SCR.FW bit controls whether the + * CPSR.F bit masks FIQ interrupts when taken in non-secure + * state. If SCR.FW is set then FIQs can be masked by CPSR.F + * when non-secure but only when FIQs are only routed to EL3. + */ + scr = scr && !((env->cp15.scr_el3 & SCR_FW) && !hcr); + break; + case EXCP_IRQ: + /* + * When EL3 execution state is 32-bit, if HCR.IMO is set then + * we may override the CPSR.I masking when in non-secure state. + * The SCR.IRQ setting has already been taken into consideration + * when setting the target EL, so it does not have a further + * affect here. + */ + hcr = hcr_el2 & HCR_IMO; + scr = false; + break; + default: + g_assert_not_reached(); + } + + if ((scr || hcr) && !secure) { + unmasked = true; + } + } + } + + /* + * The PSTATE bits only mask the interrupt if we have not overriden the + * ability above. + */ + return unmasked || pstate_unmasked; +} + +static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request) +{ + CPUClass *cc = CPU_GET_CLASS(cs); + CPUARMState *env = cs->env_ptr; + uint32_t cur_el = arm_current_el(env); + bool secure = arm_is_secure(env); + uint64_t hcr_el2 = arm_hcr_el2_eff(env); + uint32_t target_el; + uint32_t excp_idx; + + /* The prioritization of interrupts is IMPLEMENTATION DEFINED. 
*/ + + if (interrupt_request & CPU_INTERRUPT_FIQ) { + excp_idx = EXCP_FIQ; + target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure); + if (arm_excp_unmasked(cs, excp_idx, target_el, + cur_el, secure, hcr_el2)) { + goto found; + } + } + if (interrupt_request & CPU_INTERRUPT_HARD) { + excp_idx = EXCP_IRQ; + target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure); + if (arm_excp_unmasked(cs, excp_idx, target_el, + cur_el, secure, hcr_el2)) { + goto found; + } + } + if (interrupt_request & CPU_INTERRUPT_VIRQ) { + excp_idx = EXCP_VIRQ; + target_el = 1; + if (arm_excp_unmasked(cs, excp_idx, target_el, + cur_el, secure, hcr_el2)) { + goto found; + } + } + if (interrupt_request & CPU_INTERRUPT_VFIQ) { + excp_idx = EXCP_VFIQ; + target_el = 1; + if (arm_excp_unmasked(cs, excp_idx, target_el, + cur_el, secure, hcr_el2)) { + goto found; + } + } + return false; + + found: + cs->exception_index = excp_idx; + env->exception.target_el = target_el; + cc->tcg_ops->do_interrupt(cs); + return true; +} + +struct TCGCPUOps arm_tcg_ops = { + .initialize = arm_translate_init, + .synchronize_from_tb = arm_cpu_synchronize_from_tb, + .cpu_exec_interrupt = arm_cpu_exec_interrupt, + .tlb_fill = arm_cpu_tlb_fill, + .debug_excp_handler = arm_debug_excp_handler, + +#if !defined(CONFIG_USER_ONLY) + .do_interrupt = arm_cpu_do_interrupt, + .do_transaction_failed = arm_cpu_do_transaction_failed, + .do_unaligned_access = arm_cpu_do_unaligned_access, + .adjust_watchpoint_address = arm_adjust_watchpoint_address, + .debug_check_watchpoint = arm_debug_check_watchpoint, +#endif /* !CONFIG_USER_ONLY */ +}; diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index 64a86fd94c..4e690eea6c 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -31,6 +31,7 @@ arm_ss.add(when: 'CONFIG_TCG', if_true: files( 'vfp_helper.c', 'crypto_helper.c', 'debug_helper.c', + 'tcg-cpu.c', ), if_false: files( 'tcg-stubs.c', From patchwork Fri Jun 4 15:52:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454143 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp653123jae; Fri, 4 Jun 2021 11:23:47 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxWQUkillZf4dlryUDFFmy2xCdTOFzbE1E5suH2F/weqD9jYWPzRAmY0b/W5E5M043S8fR+ X-Received: by 2002:a02:394a:: with SMTP id w10mr5203961jae.107.1622831027573; Fri, 04 Jun 2021 11:23:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622831027; cv=none; d=google.com; s=arc-20160816; b=1HR+kbt5QXkKloE4PGPW13dI6t4Yhn+IYE+v/TDzdCdoHBn5XjIvw8RqD/yEA0Q33E btlsJxTNyF4qKawtpYZJ6G0UUKA7pzrkHCpoMvaHBoQgksfoSsjMkubbUrhNvSUtZx7W dAwT8CwO/yjdoXh1EEdK2hL0AdTWkRlxw6wiGYl15J7A1auhQ0siqpTHTfY4279r7p0W MdWfmVgvRuVOfpDcBnV1EDuUkHt/4nFywJ5TKYjqravMN/Bx1Nf5dAB71PPitj2bhYD7 8q0Stp0GVjTlI18h5ALe2dHoSyRWVFmNx930pG8aexu1PP9ABRSJJyqY/Q/YW/l7VuAo nfXw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=FUsBx0TJprnzGFmcrQF1BJlipqIhjCqwyWs1QwVqKdE=; b=uXrbHRy767Du1X/5DIgyebYR+aSAEjlfOloh+cF4dVwAHl2LddUbWfO3eIbwk3aH4W 5wpvS5QZ0jtHgybI5kddIRNgI6aKU+Q+HNriOBC7t0ezRuaRpkq/obQXmPfE+mBKLaD3 pp44U4FMINN/nKhiMTwDKhvHyQ5WPN4mbocONvAEYYSTXli5M1Y0irw0sHvKbSdDHRgz 
([51.148.130.216]) by smtp.gmail.com with ESMTPSA id a12sm6236678wmj.36.2021.06.04.11.22.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 11:22:38 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 8D2041FF90; Fri, 4 Jun 2021 16:53:19 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 55/99] target/arm: move cpu_tcg to tcg/tcg-cpu-models.c Date: Fri, 4 Jun 2021 16:52:28 +0100 Message-Id: <20210604155312.15902-56-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32b; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana move the module containing cpu models definitions for 32bit TCG-only CPUs to tcg/ and rename it for clarity. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/{cpu_tcg.c => tcg/tcg-cpu-models.c} | 7 +------ target/arm/meson.build | 4 ---- target/arm/tcg/meson.build | 1 + 3 files changed, 2 insertions(+), 10 deletions(-) rename target/arm/{cpu_tcg.c => tcg/tcg-cpu-models.c} (99%) -- 2.20.1 diff --git a/target/arm/cpu_tcg.c b/target/arm/tcg/tcg-cpu-models.c similarity index 99% rename from target/arm/cpu_tcg.c rename to target/arm/tcg/tcg-cpu-models.c index 4606ad8436..91af2174a1 100644 --- a/target/arm/cpu_tcg.c +++ b/target/arm/tcg/tcg-cpu-models.c @@ -1,5 +1,5 @@ /* - * QEMU ARM TCG CPUs. + * QEMU ARM TCG-only CPUs. * * Copyright (c) 2012 SUSE LINUX Products GmbH * @@ -9,10 +9,7 @@ */ #include "qemu/osdep.h" -#include "cpu.h" -#ifdef CONFIG_TCG #include "tcg/tcg-cpu.h" -#endif /* CONFIG_TCG */ #include "internals.h" #include "target/arm/idau.h" #if !defined(CONFIG_USER_ONLY) @@ -24,7 +21,6 @@ /* CPU models. These are not needed for the AArch64 linux-user build. 
*/ #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64) -#ifdef CONFIG_TCG static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request) { CPUClass *cc = CPU_GET_CLASS(cs); @@ -48,7 +44,6 @@ static bool arm_v7m_cpu_exec_interrupt(CPUState *cs, int interrupt_request) } return ret; } -#endif /* CONFIG_TCG */ static void arm926_initfn(Object *obj) { diff --git a/target/arm/meson.build b/target/arm/meson.build index 0ccd2fb0bc..8d0c12b2fc 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -18,10 +18,6 @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files( 'gdbstub64.c', )) -arm_ss.add(when: 'CONFIG_TCG', if_true: files( - 'cpu_tcg.c', -)) - arm_softmmu_ss = ss.source_set() arm_softmmu_ss.add(files( 'arch_dump.c', diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index 4e690eea6c..5b36a13a24 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -32,6 +32,7 @@ arm_ss.add(when: 'CONFIG_TCG', if_true: files( 'crypto_helper.c', 'debug_helper.c', 'tcg-cpu.c', + 'tcg-cpu-models.c', ), if_false: files( 'tcg-stubs.c', From patchwork Fri Jun 4 15:52:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454121 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp603765jae; Fri, 4 Jun 2021 10:15:44 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyJkeLetqyDg0eENlJSnJOABfwsL7Nwa+J4qO9v14dj6gxR9FG7ND5x98qjku1Eso4Dvx+r X-Received: by 2002:ab0:5961:: with SMTP id o30mr4464285uad.127.1622826944196; Fri, 04 Jun 2021 10:15:44 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826944; cv=none; d=google.com; s=arc-20160816; b=pL+mNEF3lcS91kfW1LS5cZxpKMFUoMxyazgfXRvJVXws7bhnvDoyfKcp91d0GwuGcc CJVVpcyQr6cloU/CJDMOm3gv6uJFTypcvClEwrPUxqth0CwJXNjjdz1S6jyWHe2q3+IZ YewVxRuynpI02aAqc6sR5j2cnLfEnZ3EryQ5elQ2gxdFIRrbgZxCONbL4gfajc9wMdQ0 IABcgsTw/VCzHL9tRfDgWG//SWgCk/hp3W7SrohBbmIRUuv3jzzYlBf6oirucpW95PQM KaP4AvWFTkg4uouGfKUWXgja+1mo6HC+yhyTuy35j0hZhj0vVATOoPDlFzBgFIYoDUze jG9Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=5E665klXVT6SgVYAKD/ACEUSAkv3dyFCri5JuwiYWIQ=; b=PVgAmXPGW/ZUVJ8Zz2ELtUXnGgtmqcsPE3+ve13/RbkOTmwNwNiyfuoM+nrLexk1Od ly9dxt8/PdhUZl6eQFET1XAZe11QfrUm7J6rMiZBoydxvzKGy23D+CjfmhBfHsqvMojJ U1gA07x52n1rMjAyEkmlH9eKUcT7gEq05WcFAh4rvTW8OQrVHxNAffdot8Y7dI+BD5JM iVyoOODp21UwcxYWv6oEJIrQS9H5fl3Gl+WKI3Pa3MDs4VAeDZPI4weEmm3kZ1Bnz4Br pp/83T2CasikSCbdmAKSb121OH+e0aUZrta7GUr2Earr0iEtgYfIZksdyBoCEbJsuVaZ JMHg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=E5VZBNng; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 56/99] target/arm: wrap call to aarch64_sve_change_el in tcg_enabled()
Date: Fri, 4 Jun 2021 16:52:29 +0100
Message-Id: <20210604155312.15902-57-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Cc: Alex Bennée, qemu-arm@nongnu.org, Richard Henderson, Claudio Fontana,
 Peter Maydell

From: Claudio Fontana

After this patch it is possible to build a KVM-only QEMU:

  ./configure --disable-tcg --enable-kvm

Signed-off-by: Claudio Fontana
Reviewed-by: Richard Henderson
Signed-off-by: Alex Bennée
---
 target/arm/cpu-sysemu.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--
2.20.1

diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c
index c09c89eeac..2d3fe4f643 100644
--- a/target/arm/cpu-sysemu.c
+++ b/target/arm/cpu-sysemu.c
@@ -917,11 +917,13 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     unsigned int cur_el = arm_current_el(env);
     int rt;
 
-    /*
-     * Note that new_el can never be 0. If cur_el is 0, then
-     * el0_a64 is is_a64(), else el0_a64 is ignored.
-     */
-    aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    if (tcg_enabled()) {
+        /*
+         * Note that new_el can never be 0. If cur_el is 0, then
+         * el0_a64 is is_a64(), else el0_a64 is ignored.
+         */
+        aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    }
 
     if (cur_el < new_el) {
         /*
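For anyone wanting to try the configuration this patch enables, a KVM-only
configure, build and smoke test might look like the sketch below. This is
only an illustration of the build described in the commit message; the
target list, job count, binary path, machine options and kernel image are
assumed examples and are not taken from this series:

  # assumes an AArch64 host with KVM available (/dev/kvm present)
  ./configure --target-list=aarch64-softmmu --disable-tcg --enable-kvm
  make -j"$(nproc)"
  # with TCG compiled out there is no fallback accelerator, so the
  # resulting binary is only useful with -accel kvm
  ./qemu-system-aarch64 -accel kvm -M virt -cpu host -nographic \
      -kernel /path/to/Image
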
From patchwork Fri Jun 4 15:52:30 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454127
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 57/99] target/arm: remove kvm include file for PSCI and arm-powerctl
Date: Fri, 4 Jun 2021 16:52:30 +0100
Message-Id: <20210604155312.15902-58-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Cc: Alex Bennée, qemu-arm@nongnu.org, Richard Henderson, Claudio Fontana,
 Peter Maydell

From: Claudio Fontana

The QEMU PSCI implementation is not used for KVM, so we do not need the
kvm constants header.

Signed-off-by: Claudio Fontana
Reviewed-by: Richard Henderson
Signed-off-by: Alex Bennée
---
 target/arm/arm-powerctl.h | 2 --
 target/arm/psci.c         | 1 -
 2 files changed, 3 deletions(-)

--
2.20.1

diff --git a/target/arm/arm-powerctl.h b/target/arm/arm-powerctl.h
index 37c8a04f0a..35e048ce14 100644
--- a/target/arm/arm-powerctl.h
+++ b/target/arm/arm-powerctl.h
@@ -11,8 +11,6 @@
 #ifndef QEMU_ARM_POWERCTL_H
 #define QEMU_ARM_POWERCTL_H
 
-#include "kvm-consts.h"
-
 #define QEMU_ARM_POWERCTL_RET_SUCCESS QEMU_PSCI_RET_SUCCESS
 #define QEMU_ARM_POWERCTL_INVALID_PARAM QEMU_PSCI_RET_INVALID_PARAMS
 #define QEMU_ARM_POWERCTL_ALREADY_ON QEMU_PSCI_RET_ALREADY_ON
diff --git a/target/arm/psci.c b/target/arm/psci.c
index 6709e28013..800c4a55d8 100644
--- a/target/arm/psci.c
+++ b/target/arm/psci.c
@@ -19,7 +19,6 @@
 #include "qemu/osdep.h"
 #include "cpu.h"
 #include "exec/helper-proto.h"
-#include "kvm-consts.h"
 #include "qemu/main-loop.h"
 #include "sysemu/runstate.h"
 #include "internals.h"
From patchwork Fri Jun 4 15:52:31 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454103
t9mr4611736wrz.372.1622822584283; Fri, 04 Jun 2021 09:03:04 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id p20sm8680500wmq.10.2021.06.04.09.02.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:02:56 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 06CC91FFD0; Fri, 4 Jun 2021 16:53:20 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 58/99] target/arm: move kvm-const.h, kvm.c, kvm64.c, kvm_arm.h to kvm/ Date: Fri, 4 Jun 2021 16:52:31 +0100 Message-Id: <20210604155312.15902-59-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::432; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x432.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Alistair Francis , "Michael S. Tsirkin" , Radoslaw Biernacki , Richard Henderson , Shannon Zhao , qemu-arm@nongnu.org, Claudio Fontana , "Edgar E. Iglesias" , Igor Mammedov , Leif Lindholm , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana and adapt the code including the header references, and trace-events / trace.h Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- meson.build | 2 +- target/arm/cpu.h | 2 +- target/arm/{ => kvm}/kvm-consts.h | 0 target/arm/{ => kvm}/kvm_arm.h | 0 target/arm/kvm/trace.h | 1 + target/arm/trace.h | 1 - hw/arm/sbsa-ref.c | 2 +- hw/arm/virt-acpi-build.c | 2 +- hw/arm/virt.c | 2 +- hw/arm/xlnx-versal.c | 2 +- hw/arm/xlnx-zynqmp.c | 2 +- hw/cpu/a15mpcore.c | 2 +- hw/intc/arm_gic_kvm.c | 2 +- hw/intc/arm_gicv3_its_kvm.c | 2 +- hw/intc/arm_gicv3_kvm.c | 2 +- target/arm/cpu-sysemu.c | 2 +- target/arm/cpu.c | 2 +- target/arm/cpu32.c | 2 +- target/arm/cpu64.c | 2 +- target/arm/{ => kvm}/kvm.c | 0 target/arm/{ => kvm}/kvm64.c | 0 target/arm/machine.c | 2 +- target/arm/monitor.c | 2 +- target/arm/tcg/sysemu/tcg-cpu.c | 1 - MAINTAINERS | 2 +- target/arm/kvm/meson.build | 4 ++++ target/arm/{ => kvm}/trace-events | 0 target/arm/meson.build | 3 +-- 28 files changed, 24 insertions(+), 22 deletions(-) rename target/arm/{ => kvm}/kvm-consts.h (100%) rename target/arm/{ => kvm}/kvm_arm.h (100%) create mode 100644 target/arm/kvm/trace.h delete mode 100644 target/arm/trace.h rename target/arm/{ => kvm}/kvm.c (100%) rename target/arm/{ => kvm}/kvm64.c (100%) create mode 100644 target/arm/kvm/meson.build rename target/arm/{ => kvm}/trace-events (100%) -- 2.20.1 diff --git a/meson.build b/meson.build index eb22030571..e2a22984b8 100644 --- a/meson.build +++ b/meson.build @@ -1859,8 +1859,8 @@ if have_system or have_user trace_events_subdirs += [ 'accel/tcg', 'hw/core', - 'target/arm', 'target/arm/tcg', + 'target/arm/kvm', 'target/hppa', 'target/i386', 'target/i386/kvm', diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 
e528873ed3..f57fa9b9f5 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -20,7 +20,7 @@ #ifndef ARM_CPU_H #define ARM_CPU_H -#include "kvm-consts.h" +#include "kvm/kvm-consts.h" #include "hw/registerfields.h" #include "cpu-qom.h" #include "exec/cpu-defs.h" diff --git a/target/arm/kvm-consts.h b/target/arm/kvm/kvm-consts.h similarity index 100% rename from target/arm/kvm-consts.h rename to target/arm/kvm/kvm-consts.h diff --git a/target/arm/kvm_arm.h b/target/arm/kvm/kvm_arm.h similarity index 100% rename from target/arm/kvm_arm.h rename to target/arm/kvm/kvm_arm.h diff --git a/target/arm/kvm/trace.h b/target/arm/kvm/trace.h new file mode 100644 index 0000000000..c688745b90 --- /dev/null +++ b/target/arm/kvm/trace.h @@ -0,0 +1 @@ +#include "trace/trace-target_arm_kvm.h" diff --git a/target/arm/trace.h b/target/arm/trace.h deleted file mode 100644 index 60372d8e26..0000000000 --- a/target/arm/trace.h +++ /dev/null @@ -1 +0,0 @@ -#include "trace/trace-target_arm.h" diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c index 43c19b4923..38ac4ca2cd 100644 --- a/hw/arm/sbsa-ref.c +++ b/hw/arm/sbsa-ref.c @@ -28,7 +28,7 @@ #include "sysemu/runstate.h" #include "sysemu/sysemu.h" #include "exec/hwaddr.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "hw/arm/boot.h" #include "hw/block/flash.h" #include "hw/boards.h" diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c index 60fe2e65a7..bfd7f58eec 100644 --- a/hw/arm/virt-acpi-build.c +++ b/hw/arm/virt-acpi-build.c @@ -51,7 +51,7 @@ #include "sysemu/numa.h" #include "sysemu/reset.h" #include "sysemu/tpm.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "migration/vmstate.h" #include "hw/acpi/ghes.h" diff --git a/hw/arm/virt.c b/hw/arm/virt.c index 840758666d..4573c3daf5 100644 --- a/hw/arm/virt.c +++ b/hw/arm/virt.c @@ -63,7 +63,7 @@ #include "hw/intc/arm_gic.h" #include "hw/intc/arm_gicv3_common.h" #include "hw/irq.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "hw/firmware/smbios.h" #include "qapi/visitor.h" #include "qapi/qapi-visit-common.h" diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c index fb776834f7..d42e19ab5a 100644 --- a/hw/arm/xlnx-versal.c +++ b/hw/arm/xlnx-versal.c @@ -18,7 +18,7 @@ #include "sysemu/sysemu.h" #include "sysemu/kvm.h" #include "hw/arm/boot.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "hw/misc/unimp.h" #include "hw/arm/xlnx-versal.h" diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c index 3597e8db4d..0af49c713a 100644 --- a/hw/arm/xlnx-zynqmp.c +++ b/hw/arm/xlnx-zynqmp.c @@ -23,7 +23,7 @@ #include "hw/boards.h" #include "sysemu/kvm.h" #include "sysemu/sysemu.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #define GIC_NUM_SPI_INTR 160 diff --git a/hw/cpu/a15mpcore.c b/hw/cpu/a15mpcore.c index 774ca9987a..670d07a98c 100644 --- a/hw/cpu/a15mpcore.c +++ b/hw/cpu/a15mpcore.c @@ -25,7 +25,7 @@ #include "hw/irq.h" #include "hw/qdev-properties.h" #include "sysemu/kvm.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" static void a15mp_priv_set_irq(void *opaque, int irq, int level) { diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c index 7d2a13273a..9b45b3cad4 100644 --- a/hw/intc/arm_gic_kvm.c +++ b/hw/intc/arm_gic_kvm.c @@ -24,7 +24,7 @@ #include "qemu/module.h" #include "migration/blocker.h" #include "sysemu/kvm.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "gic_internal.h" #include "vgic_common.h" #include "qom/object.h" diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c index b554d2ede0..5322e1bcaf 
100644 --- a/hw/intc/arm_gicv3_its_kvm.c +++ b/hw/intc/arm_gicv3_its_kvm.c @@ -25,7 +25,7 @@ #include "hw/qdev-properties.h" #include "sysemu/runstate.h" #include "sysemu/kvm.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "migration/blocker.h" #include "qom/object.h" diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c index 96c7e8b80c..086b0ba0d3 100644 --- a/hw/intc/arm_gicv3_kvm.c +++ b/hw/intc/arm_gicv3_kvm.c @@ -26,7 +26,7 @@ #include "qemu/module.h" #include "sysemu/kvm.h" #include "sysemu/runstate.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "gicv3_internal.h" #include "vgic_common.h" #include "migration/blocker.h" diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 2d3fe4f643..26467c640b 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -24,7 +24,7 @@ #include "cpu.h" #include "internals.h" #include "sysemu/hw_accel.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "sysemu/tcg.h" #include "tcg/tcg-cpu.h" diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 192700fe8f..9b81cbe386 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -39,7 +39,7 @@ #endif #include "sysemu/tcg.h" #include "sysemu/hw_accel.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "disas/capstone.h" #include "fpu/softfloat.h" #include "cpu-mmu.h" diff --git a/target/arm/cpu32.c b/target/arm/cpu32.c index a6ba91ae08..56f02ca891 100644 --- a/target/arm/cpu32.c +++ b/target/arm/cpu32.c @@ -37,7 +37,7 @@ #include "sysemu/sysemu.h" #include "sysemu/tcg.h" #include "sysemu/hw_accel.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "disas/capstone.h" #include "fpu/softfloat.h" #include "cpu-mmu.h" diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index 7cd73ae0b6..f5ead76374 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -31,7 +31,7 @@ #include "hw/loader.h" #endif #include "sysemu/kvm.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "qapi/visitor.h" #include "hw/qdev-properties.h" #include "cpregs.h" diff --git a/target/arm/kvm.c b/target/arm/kvm/kvm.c similarity index 100% rename from target/arm/kvm.c rename to target/arm/kvm/kvm.c diff --git a/target/arm/kvm64.c b/target/arm/kvm/kvm64.c similarity index 100% rename from target/arm/kvm64.c rename to target/arm/kvm/kvm64.c diff --git a/target/arm/machine.c b/target/arm/machine.c index 2982e8d7f4..595ab94237 100644 --- a/target/arm/machine.c +++ b/target/arm/machine.c @@ -3,7 +3,7 @@ #include "qemu/error-report.h" #include "sysemu/kvm.h" #include "sysemu/tcg.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "internals.h" #include "migration/cpu.h" #include "cpregs.h" diff --git a/target/arm/monitor.c b/target/arm/monitor.c index 80c64fa355..0c72bf7c31 100644 --- a/target/arm/monitor.c +++ b/target/arm/monitor.c @@ -22,7 +22,7 @@ #include "qemu/osdep.h" #include "hw/boards.h" -#include "kvm_arm.h" +#include "kvm/kvm_arm.h" #include "qapi/error.h" #include "qapi/visitor.h" #include "qapi/qobject-input-visitor.h" diff --git a/target/arm/tcg/sysemu/tcg-cpu.c b/target/arm/tcg/sysemu/tcg-cpu.c index 2c395f47e7..6ab49ba614 100644 --- a/target/arm/tcg/sysemu/tcg-cpu.c +++ b/target/arm/tcg/sysemu/tcg-cpu.c @@ -39,7 +39,6 @@ #include "sysemu/sysemu.h" #include "sysemu/tcg.h" #include "sysemu/hw_accel.h" -#include "kvm_arm.h" #include "disas/capstone.h" #include "fpu/softfloat.h" #include "cpu-mmu.h" diff --git a/MAINTAINERS b/MAINTAINERS index 1ff68116b0..24e55954d4 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -373,7 +373,7 @@ ARM 
KVM CPUs M: Peter Maydell L: qemu-arm@nongnu.org S: Maintained -F: target/arm/kvm.c +F: target/arm/kvm/kvm.c MIPS KVM CPUs M: Huacai Chen diff --git a/target/arm/kvm/meson.build b/target/arm/kvm/meson.build new file mode 100644 index 0000000000..e92010fa3f --- /dev/null +++ b/target/arm/kvm/meson.build @@ -0,0 +1,4 @@ +arm_ss.add(when: 'CONFIG_KVM', if_true: files( + 'kvm.c', + 'kvm64.c', +)) diff --git a/target/arm/trace-events b/target/arm/kvm/trace-events similarity index 100% rename from target/arm/trace-events rename to target/arm/kvm/trace-events diff --git a/target/arm/meson.build b/target/arm/meson.build index 8d0c12b2fc..448e94861f 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -11,8 +11,6 @@ arm_ss.add(files( )) arm_ss.add(zlib) -arm_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c', 'kvm64.c'), if_false: files('kvm-stub.c')) - arm_ss.add(when: 'TARGET_AARCH64', if_true: files( 'cpu64.c', 'gdbstub64.c', @@ -38,6 +36,7 @@ arm_user_ss.add(files( )) subdir('tcg') +subdir('kvm') target_arch += {'arm': arm_ss} target_softmmu_arch += {'arm': arm_softmmu_ss} From patchwork Fri Jun 4 15:52:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454099 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp580201jae; Fri, 4 Jun 2021 09:46:51 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyR7PzE74xv1kb0H0rY1a/MBb7BVhnzrtq8aV5aFWFaL5VLpOAupPaWx12oZpjISm0HJpU+ X-Received: by 2002:ab0:6cf2:: with SMTP id l18mr4347366uai.86.1622825211041; Fri, 04 Jun 2021 09:46:51 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825211; cv=none; d=google.com; s=arc-20160816; b=FLvWeMkklPedQ3Py72Ou68XqADvwjo1tLfH6gvx5e9P3tHcTmzGESfJhy8jszp5Ncy m4P9qV8p6z6gVqczDkK4CqHNuhVb3jIvsQpSmNuDEQVA9g0DaW6AO116HBG+pI1gzzdX 0hyaUzAjzn2ygOjf+HH52OEU1vroAwP+rBzTbYG86xTUrqadWuSIhawA8XfBCnhTHFIX A/O+11961NpU5v58Yk6vh8Y7wmgnrdpKrf01NTouLz4UUyzoepxS2+buEX2j13uqndGg 3Ef+TeBOQO98QUW2pSucyikTxDg3TLcNaQDfxT3Zc8kyK9kFUiYziRx5tA/dhilOUdwy uKVw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=wONPc0qYV05aYWd43IzxOdQbvCsObFB2L5QRCtxqT+c=; b=NOJY45wO7eG8R9RhaPDxE+PFKhGmJWoC8sZzGpuotTyIGqcRc7RobPqelFMy14gqAk a+0YQp4hh/XohHRf0t6DCy4xDGw83OcjhIz1xJgCI81r/ZZ/tIPzH/PM6IzSsG7BUFGd +FIA832RP4Yy3THLOBBWwvO0x27VaSW3HbL3PlB+i9OcZJUQndTQnmKFNcn9LOaaSOaN 6C+91TkWW+k8FajYeUUk6Gwae90pdhuL+NPN15v5KByMmHRJNukd7v+0IybY5j9SmMZR La6PikN6xmAw7S/yblP+QQmO+49Cajp58D2DIve3zSDMDh+um4CknPef/ookgUR1+q2d 2WuA== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=blVjmi0T; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 59/99] MAINTAINERS: update arm kvm maintained files to all in target/arm/kvm/
Date: Fri, 4 Jun 2021 16:52:32 +0100
From: Claudio Fontana

Signed-off-by: Claudio Fontana
Signed-off-by: Alex Bennée
Reviewed-by: Richard Henderson
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 24e55954d4..95e836af49 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -373,7 +373,7 @@ ARM KVM CPUs
 M: Peter Maydell
 L: qemu-arm@nongnu.org
 S: Maintained
-F: target/arm/kvm/kvm.c
+F: target/arm/kvm/
 
 MIPS KVM CPUs
 M: Huacai Chen

From patchwork Fri Jun 4 15:52:33 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 60/99] target/arm: cleanup cpu includes
Date: Fri, 4 Jun 2021 16:52:33 +0100
From: Claudio Fontana

cpu.c, cpu32.c, cpu64.c and tcg/sysemu/tcg-cpu.c all need a good cleanup
of the header files they include.

Signed-off-by: Claudio Fontana
Acked-by: Richard Henderson
Signed-off-by: Alex Bennée
---
 target/arm/cpu.c                |  6 ++----
 target/arm/cpu32.c              | 14 --------------
 target/arm/cpu64.c              |  6 ------
 target/arm/tcg/sysemu/tcg-cpu.c | 22 +---------------------
 4 files changed, 3 insertions(+), 45 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 9b81cbe386..7e3726ff00 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -21,24 +21,22 @@
 #include "qemu/osdep.h"
 #include "qemu-common.h"
 #include "target/arm/idau.h"
-#include "qemu/module.h"
 #include "qapi/error.h"
-#include "qapi/visitor.h"
 #include "cpu.h"
 #include "cpregs.h"
+
 #ifdef CONFIG_TCG
 #include "tcg/tcg-cpu.h"
 #endif /* CONFIG_TCG */
 #include "cpu32.h"
-#include "internals.h"
 #include "exec/exec-all.h"
 #include "hw/qdev-properties.h"
 #if !defined(CONFIG_USER_ONLY)
 #include "hw/loader.h"
 #include "hw/boards.h"
 #endif
+
 #include "sysemu/tcg.h"
-#include "sysemu/hw_accel.h"
 #include "kvm/kvm_arm.h"
 #include "disas/capstone.h"
 #include "fpu/softfloat.h"
diff --git a/target/arm/cpu32.c b/target/arm/cpu32.c
index 56f02ca891..6c53245d66 100644
--- a/target/arm/cpu32.c
+++ b/target/arm/cpu32.c
@@ -20,26 +20,12 @@
 #include "qemu/osdep.h"
 #include "qemu/qemu-print.h"
-#include "qemu-common.h"
-#include "target/arm/idau.h"
 #include "qemu/module.h"
-#include "qapi/error.h"
-#include "qapi/visitor.h"
 #include "cpu.h"
 #include "cpregs.h"
-#include "internals.h"
-#include "exec/exec-all.h"
-#include "hw/qdev-properties.h"
 #if !defined(CONFIG_USER_ONLY)
-#include "hw/loader.h"
 #include "hw/boards.h"
 #endif
-#include "sysemu/sysemu.h"
-#include "sysemu/tcg.h"
-#include "sysemu/hw_accel.h"
-#include "kvm/kvm_arm.h"
-#include "disas/capstone.h"
-#include "fpu/softfloat.h"
 #include "cpu-mmu.h"
 #include "cpu32.h"
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index f5ead76374..a8ff1994ca 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -23,13 +23,7 @@
 #include "qemu/qemu-print.h"
 #include "cpu.h"
 #include "cpu32.h"
-#ifdef CONFIG_TCG
-#include "hw/core/tcg-cpu-ops.h"
-#endif /* CONFIG_TCG */
 #include "qemu/module.h"
-#if !defined(CONFIG_USER_ONLY)
-#include "hw/loader.h"
-#endif
 #include "sysemu/kvm.h"
 #include "kvm/kvm_arm.h"
 #include "qapi/visitor.h"
diff --git a/target/arm/tcg/sysemu/tcg-cpu.c b/target/arm/tcg/sysemu/tcg-cpu.c
index 6ab49ba614..327b2a5073 100644
--- a/target/arm/tcg/sysemu/tcg-cpu.c
+++ b/target/arm/tcg/sysemu/tcg-cpu.c
@@ -19,29 +19,9 @@
  */
 #include "qemu/osdep.h"
"qemu/qemu-print.h" -#include "qemu-common.h" -#include "target/arm/idau.h" -#include "qemu/module.h" -#include "qapi/error.h" -#include "qapi/visitor.h" #include "cpu.h" -#include "hw/core/tcg-cpu-ops.h" #include "semihosting/common-semi.h" -#include "cpregs.h" -#include "internals.h" -#include "exec/exec-all.h" -#include "hw/qdev-properties.h" -#if !defined(CONFIG_USER_ONLY) -#include "hw/loader.h" -#include "hw/boards.h" -#endif -#include "sysemu/sysemu.h" -#include "sysemu/tcg.h" -#include "sysemu/hw_accel.h" -#include "disas/capstone.h" -#include "fpu/softfloat.h" -#include "cpu-mmu.h" +#include "qemu/log.h" #include "tcg/tcg-cpu.h" /* From patchwork Fri Jun 4 15:52:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454078 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp563988jae; Fri, 4 Jun 2021 09:27:12 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwYRV5bgtHxVCXtqgpAVL7aggPDTOIWLcY8jawjnS//uKOkoWTeXcC32cT5ud1hXJM1cZCf X-Received: by 2002:a05:6102:2122:: with SMTP id f2mr3540999vsg.35.1622824032139; Fri, 04 Jun 2021 09:27:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824032; cv=none; d=google.com; s=arc-20160816; b=rANigFJFfsUA1f6BvD4qvgwnrNEegmtkltfcdduTN8aquzpNVsfiPeGtNRt7CMpxzw xKpPwMLJflMDD25xoyqlH9N92rzMIoEtlYavIsPNc3M5FFIIZcrPX+14xga4qoZ92UgN 2hft6LJefUhabziBe6Op2lEWAnDPCfxdiqhQvBxO9ZtE5BA1nUQXfYC56EvAF+52IX08 e0a0dtDvxILpHJWzVx0B0LZMV84ahO/YIZgzqn6mVgGurzLvDUqDu5peH9d7rMBN/Ght sc6idFHeh1+mR/XRB7MwlF1oOpBsX6UoY2/r0Gv1dMaHJoPqSIWDPvyIxofz98oU6xRP LE+g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=1M9DSMG90oxpkIftDhmS6X3YTXP3prGe70gnCR2W/W0=; b=qkzsi5CA7DsYYxxhozNR+qFpUjBLjb99geI98nCT7YWCTEggZf9prPgs5FeTY82ePd z8ofXX3w9OR+9ppb3o1p9+GGdJH6eyNJXQjOtsnRoDkjOB07bINQEi08aEBm6Gaa0pvG BWKjIueO7AxJlqnOhv4uBaXB3MlQK3tG911WkVULAWBW2YHAArZnPqv6YYSvR1NJC0Bx WKT+EZC9QV0W0iF37wQrC1KAbR0BsGis03LtNggaeYW8VOR+WxmBeL47/+dCSmugKe7w frRMoEXAEZcsDYVLHOqW6OIIJO1ZO9CBmt7gZ20HzvqB9rDTDj1NW9ILO1FcjpQ1lIn4 pxuw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=zqBSpXrV; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 61/99] target/arm: remove broad "else" statements when checking accels
Date: Fri, 4 Jun 2021 16:52:34 +0100
From: Claudio Fontana

There might be more accelerators than just KVM and TCG in the future, so
where appropriate replace broad "else" statements with an explicit
if (accel_enabled()) check. Also invert some !kvm_enabled() or
!tcg_enabled() checks where it seems appropriate to do so.

Note that to keep qtest happy we need to perform the gpio initialization
in the qtest_enabled() case as well. Hopefully we do not break any Xen
stuff.

Signed-off-by: Claudio Fontana
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Olaf Hering
Cc: Alex Bennée
Signed-off-by: Alex Bennée
Reviewed-by: Richard Henderson
---
 target/arm/cpu.c     |  9 +++++----
 target/arm/cpu64.c   |  9 +++++----
 target/arm/machine.c | 18 ++++++------------
 3 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 7e3726ff00..57f975f5dc 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -37,6 +37,7 @@
 #endif
 #include "sysemu/tcg.h"
+#include "sysemu/qtest.h"
 #include "kvm/kvm_arm.h"
 #include "disas/capstone.h"
 #include "fpu/softfloat.h"
@@ -564,7 +565,7 @@ static void arm_cpu_initfn(Object *obj)
          * the same interface as non-KVM CPUs.
          */
         qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
-    } else {
+    } else if (tcg_enabled() || qtest_enabled()) {
         qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
     }
@@ -741,14 +742,14 @@ void arm_cpu_post_init(Object *obj)
         ? cpu_isar_feature(aa64_fp_simd, cpu)
         : cpu_isar_feature(aa32_vfp, cpu)) {
         cpu->has_vfp = true;
-        if (!kvm_enabled()) {
+        if (tcg_enabled()) {
             qdev_property_add_static(DEVICE(obj), &arm_cpu_has_vfp_property);
         }
     }
     if (arm_feature(&cpu->env, ARM_FEATURE_NEON)) {
         cpu->has_neon = true;
-        if (!kvm_enabled()) {
+        if (tcg_enabled()) {
             qdev_property_add_static(DEVICE(obj), &arm_cpu_has_neon_property);
         }
     }
@@ -849,7 +850,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
          * We have not registered the cpu properties when KVM
          * is in use, so the user will not be able to set them.
          */
-        if (!kvm_enabled()) {
+        if (tcg_enabled()) {
             arm_cpu_pauth_finalize(cpu, &local_err);
             if (local_err != NULL) {
                 error_propagate(errp, local_err);
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index a8ff1994ca..e3d818275c 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -24,6 +24,7 @@
 #include "cpu.h"
 #include "cpu32.h"
 #include "qemu/module.h"
+#include "sysemu/tcg.h"
 #include "sysemu/kvm.h"
 #include "kvm/kvm_arm.h"
 #include "qapi/visitor.h"
@@ -297,7 +298,7 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
          */
         bitmap_andnot(tmp, kvm_supported, cpu->sve_vq_init, max_vq);
         bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq);
-    } else {
+    } else if (tcg_enabled()) {
         /* Propagate enabled bits down through required powers-of-two. */
         for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
             if (!test_bit(vq - 1, cpu->sve_vq_init)) {
@@ -334,7 +335,7 @@
                       "vector length must be enabled.\n");
             return;
         }
-    } else {
+    } else if (tcg_enabled()) {
         /* Disabling a power-of-two disables all larger lengths. */
         if (test_bit(0, cpu->sve_vq_init)) {
             error_setg(errp, "cannot disable sve128");
@@ -416,7 +417,7 @@
             }
             return;
         }
-    } else {
+    } else if (tcg_enabled()) {
         /* Ensure all required powers-of-two are enabled. */
         for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
             if (!test_bit(vq - 1, cpu->sve_vq_map)) {
@@ -610,7 +611,7 @@ static void aarch64_max_initfn(Object *obj)
     if (kvm_enabled()) {
         kvm_arm_set_cpu_features_from_host(cpu);
-    } else {
+    } else if (tcg_enabled()) {
         uint64_t t;
         uint32_t u;
         aarch64_a57_initfn(obj);
diff --git a/target/arm/machine.c b/target/arm/machine.c
index 595ab94237..4acdccc22d 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -638,9 +638,11 @@ static int cpu_pre_save(void *opaque)
     if (tcg_enabled()) {
         pmu_op_start(&cpu->env);
-    }
-
-    if (kvm_enabled()) {
+        if (!write_cpustate_to_list(cpu, false)) {
+            /* This should never fail. */
+            abort();
+        }
+    } else if (kvm_enabled()) {
         if (!write_kvmstate_to_list(cpu)) {
             /* This should never fail */
             abort();
@@ -651,11 +653,6 @@
          * write_kvmstate_to_list()
          */
         kvm_arm_cpu_pre_save(cpu);
-    } else {
-        if (!write_cpustate_to_list(cpu, false)) {
-            /* This should never fail. */
-            abort();
-        }
     }
     cpu->cpreg_vmstate_array_len = cpu->cpreg_array_len;
@@ -754,13 +751,10 @@ static int cpu_post_load(void *opaque, int version_id)
          */
         write_list_to_cpustate(cpu);
         kvm_arm_cpu_post_load(cpu);
-    } else {
+    } else if (tcg_enabled()) {
         if (!write_list_to_cpustate(cpu)) {
             return -1;
         }
-    }
-
-    if (tcg_enabled()) {
         hw_breakpoint_update_all(cpu);
         hw_watchpoint_update_all(cpu);
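The hunks above all apply the same shape of change. Roughly, as a
stand-alone sketch (init_for_kvm() and init_for_tcg() are placeholders for
whatever the real call site does; kvm_enabled(), tcg_enabled() and
qtest_enabled() are the existing helpers that QEMU provides via
"sysemu/kvm.h", "sysemu/tcg.h" and "sysemu/qtest.h"):

#include <stdbool.h>

/* Declarations stand in for the QEMU headers in this sketch. */
bool kvm_enabled(void);
bool tcg_enabled(void);
bool qtest_enabled(void);
void init_for_kvm(void);
void init_for_tcg(void);

/* Before: anything that is not KVM falls into a broad "else". */
static void init_irqs_before(void)
{
    if (kvm_enabled()) {
        init_for_kvm();
    } else {
        init_for_tcg();
    }
}

/* After: each accelerator (and qtest) is named explicitly, so a future
 * accelerator does not silently inherit the TCG-only path. */
static void init_irqs_after(void)
{
    if (kvm_enabled()) {
        init_for_kvm();
    } else if (tcg_enabled() || qtest_enabled()) {
        init_for_tcg();
    }
}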
From patchwork Fri Jun 4 15:52:35 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 62/99] target/arm: remove kvm-stub.c
Date: Fri, 4 Jun 2021 16:52:35 +0100
From: Claudio Fontana

The functions used in machine.c are now protected by if (kvm_enabled())
checks, so the stub is no longer needed.

Signed-off-by: Claudio Fontana
Reviewed-by: Richard Henderson
Signed-off-by: Alex Bennée
---
 target/arm/kvm-stub.c | 24 ------------------------
 1 file changed, 24 deletions(-)
 delete mode 100644 target/arm/kvm-stub.c

diff --git a/target/arm/kvm-stub.c b/target/arm/kvm-stub.c
deleted file mode 100644
index 56a7099e6b..0000000000
--- a/target/arm/kvm-stub.c
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * QEMU KVM ARM specific function stubs
- *
- * Copyright Linaro Limited 2013
- *
- * Author: Peter Maydell
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- *
- */
-#include "qemu/osdep.h"
-#include "cpu.h"
-#include "kvm_arm.h"
-
-bool write_kvmstate_to_list(ARMCPU *cpu)
-{
-    abort();
-}
-
-bool write_list_to_kvmstate(ARMCPU *cpu, int level)
-{
-    abort();
-}
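The step left implicit in the commit message: once every caller sits behind
if (kvm_enabled()), a TCG-only link no longer references the KVM-only
symbols, because kvm_enabled() is expected to evaluate to a constant 0 when
CONFIG_KVM is not defined, and the compiler discards the guarded branch. A
stand-alone sketch of that convention (the macro definition and the caller
here are illustrative only; write_kvmstate_to_list() matches the deleted
stub's signature):

#include <stdbool.h>
#include <stdlib.h>

typedef struct ARMCPU ARMCPU;              /* opaque for this sketch */
bool write_kvmstate_to_list(ARMCPU *cpu);  /* only compiled with CONFIG_KVM */

#ifndef CONFIG_KVM
#define kvm_enabled() (0)   /* constant: the branch below is folded away */
#else
bool kvm_enabled(void);
#endif

static void sync_kvm_regs(ARMCPU *cpu)
{
    if (kvm_enabled()) {
        /* In a !CONFIG_KVM build this call is never emitted, so no
         * link-time stub is needed for write_kvmstate_to_list(). */
        if (!write_kvmstate_to_list(cpu)) {
            abort();
        }
    }
}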
From patchwork Fri Jun 4 15:52:36 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 63/99] tests/qtest: skip bios-tables-test test_acpi_oem_fields_virt for KVM
Date: Fri, 4 Jun 2021 16:52:36 +0100

From: Claudio Fontana

The test is TCG-only.

Signed-off-by: Claudio Fontana
Cc: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
---
 tests/qtest/bios-tables-test.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
index 762d154b34..f8fe4b8efe 100644
--- a/tests/qtest/bios-tables-test.c
+++ b/tests/qtest/bios-tables-test.c
@@ -1484,6 +1484,13 @@ static void test_acpi_oem_fields_virt_tcg(void)
     };
     char *args;
+#ifndef CONFIG_TCG
+    if (data.tcg_only) {
+        g_test_skip("TCG disabled, skipping ACPI tcg_only test");
+        return;
+    }
+#endif /* CONFIG_TCG */
+
     args = test_acpi_create_args(&data, "-cpu cortex-a57 "OEM_TEST_ARGS, true);
     data.qts = qtest_init(args);
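Worth spelling out why the hunk calls g_test_skip() and returns rather than
simply compiling the test out: a skipped test is still registered and
reported by the glib test harness, so a KVM-only build shows the case as
skipped instead of silently losing coverage. A minimal stand-alone example
of the same guard (CONFIG_TCG comes from the build configuration; the test
body is a placeholder):

#include <glib.h>

static void test_tcg_only_case(void)
{
#ifndef CONFIG_TCG
    g_test_skip("TCG disabled, skipping TCG-only test");
    return;
#endif
    /* ... exercise the TCG-only behaviour here ... */
    g_assert_true(true);
}

int main(int argc, char **argv)
{
    g_test_init(&argc, &argv, NULL);
    g_test_add_func("/example/tcg-only", test_tcg_only_case);
    return g_test_run();
}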
From patchwork Fri Jun 4 15:52:37 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 64/99] tests: do not run test-hmp on all machines for ARM KVM-only
Date: Fri, 4 Jun 2021 16:52:37 +0100

From: Claudio Fontana

On ARM we currently list and build all machines, even when building
KVM-only, without TCG. Until we fix this (so that only KVM-compatible
machines are listed and built), test only the "virt" machine in this
case.

Signed-off-by: Claudio Fontana
Cc: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
---
 tests/qtest/test-hmp.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/tests/qtest/test-hmp.c b/tests/qtest/test-hmp.c
index 413eb95d2a..1d4b4f2f0e 100644
--- a/tests/qtest/test-hmp.c
+++ b/tests/qtest/test-hmp.c
@@ -157,8 +157,28 @@ int main(int argc, char **argv)
     g_test_init(&argc, &argv, NULL);
+    /*
+     * XXX currently we build also boards for ARM that are incompatible with KVM.
+     * We therefore need to check this explicitly, and only test virt for kvm-only
+     * arm builds.
+     * After we do the work of Kconfig etc to ensure that only KVM-compatible boards
+     * are built for the kvm-only build, we could remove this.
+     */
+#ifndef CONFIG_TCG
+    {
+        const char *arch = qtest_get_arch();
+
+        if (strcmp(arch, "arm") == 0 || strcmp(arch, "aarch64") == 0) {
+            add_machine_test_case("virt");
+            goto add_machine_test_done;
+        }
+    }
+#endif /* !CONFIG_TCG */
+
     qtest_cb_for_every_machine(add_machine_test_case, g_test_quick());
+    goto add_machine_test_done;
+add_machine_test_done:
     /* as none machine has no memory by default, add a test case with memory */
     qtest_add_data_func("hmp/none+2MB", g_strdup("none -m 2"), test_machine);

From patchwork Fri Jun 4 15:52:38 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 65/99] tests: device-introspect-test: cope with ARM TCG-only devices
Date: Fri, 4 Jun 2021 16:52:38 +0100
From: Claudio Fontana

Skip test_device_intro_concrete for now for the ARM KVM-only build, as on
ARM we currently build devices that are not compatible with a KVM-only
build. We can remove this workaround once Kconfig etc. ensure that only
KVM-compatible boards are listed and built for KVM-only builds.

Alternative implementation provided by Alex.

Suggested-by: Alex Bennée
Signed-off-by: Claudio Fontana
Cc: Philippe Mathieu-Daudé
Signed-off-by: Alex Bennée
---
 tests/qtest/device-introspect-test.c | 32 +++++++++++++++++++++++-----
 1 file changed, 27 insertions(+), 5 deletions(-)

diff --git a/tests/qtest/device-introspect-test.c b/tests/qtest/device-introspect-test.c
index bbec166dbc..cb8bf6e37d 100644
--- a/tests/qtest/device-introspect-test.c
+++ b/tests/qtest/device-introspect-test.c
@@ -305,6 +305,24 @@ static void test_abstract_interfaces(void)
     qtest_quit(qts);
 }
+/*
+ * XXX currently we build also boards for ARM that are incompatible with KVM.
+ * We therefore need to check this explicitly, and only test virt for kvm-only
+ * arm builds.
+ * After we do the work of Kconfig etc to ensure that only KVM-compatible boards
+ * are built for the kvm-only build, we could remove this.
+ */
+static bool skip_machine_tests(void)
+{
+#ifndef CONFIG_TCG
+    const char *arch = qtest_get_arch();
+    if (strcmp(arch, "arm") == 0 || strcmp(arch, "aarch64") == 0) {
+        return true;
+    }
+#endif /* !CONFIG_TCG */
+    return false;
+}
+
 static void add_machine_test_case(const char *mname)
 {
     char *path, *args;
@@ -329,11 +347,15 @@ int main(int argc, char **argv)
     qtest_add_func("device/introspect/none", test_device_intro_none);
     qtest_add_func("device/introspect/abstract", test_device_intro_abstract);
     qtest_add_func("device/introspect/abstract-interfaces", test_abstract_interfaces);
-    if (g_test_quick()) {
-        qtest_add_data_func("device/introspect/concrete/defaults/none",
-                            g_strdup(common_args), test_device_intro_concrete);
-    } else {
-        qtest_cb_for_every_machine(add_machine_test_case, true);
+
+    if (!skip_machine_tests()) {
+        if (g_test_quick()) {
+            qtest_add_data_func("device/introspect/concrete/defaults/none",
+                                g_strdup(common_args),
+                                test_device_intro_concrete);
+        } else {
+            qtest_cb_for_every_machine(add_machine_test_case, true);
+        }
     }
     return g_test_run();

From patchwork Fri Jun 4 15:52:39 2021
[209.51.188.17]) by mx.google.com with ESMTPS; Fri, 04 Jun 2021 10:35:39 -0700 (PDT)
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 66/99] tests: do not run qom-test on all machines for ARM KVM-only
Date: Fri, 4 Jun 2021 16:52:39 +0100
Message-Id: <20210604155312.15902-67-alex.bennee@linaro.org>
X-Mailer:
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42b; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Laurent Vivier , Thomas Huth , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , qemu-arm@nongnu.org, Claudio Fontana , Paolo Bonzini , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana on ARM we currently list and build all machines, even when building KVM-only, without TCG. Until we fix this (and we only list and build machines that are compatible with KVM), only test specifically using the "virt" machine in this case. Signed-off-by: Claudio Fontana Cc: Philippe Mathieu-Daudé Signed-off-by: Alex Bennée --- tests/qtest/qom-test.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) -- 2.20.1 diff --git a/tests/qtest/qom-test.c b/tests/qtest/qom-test.c index eb34af843b..b0a6d10148 100644 --- a/tests/qtest/qom-test.c +++ b/tests/qtest/qom-test.c @@ -90,7 +90,27 @@ int main(int argc, char **argv) { g_test_init(&argc, &argv, NULL); + /* + * XXX currently we build also boards for ARM that are incompatible with KVM. + * We therefore need to check this explicitly, and only test virt for kvm-only + * arm builds. + * After we do the work of Kconfig etc to ensure that only KVM-compatible boards + * are built for the kvm-only build, we could remove this. 
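For reference, what qom-test applies to each machine essentially amounts to booting the board and walking the QOM tree over QMP. A rough, simplified sketch of that core step (not the actual qom-test.c code; the helper name is invented and include paths may differ):

#include "qemu/osdep.h"
#include "qapi/qmp/qdict.h"
#include "libqtest.h"   /* may live under libqos/ depending on the tree */

static void qom_smoke_test_machine(const char *machine)
{
    /* boot the board with no default devices and list the QOM root */
    QTestState *qts = qtest_initf("-machine %s -nodefaults -display none", machine);
    QDict *resp = qtest_qmp(qts, "{ 'execute': 'qom-list',"
                                 " 'arguments': { 'path': '/machine' } }");

    g_assert(qdict_haskey(resp, "return"));
    qobject_unref(resp);
    qtest_quit(qts);
}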
+ */ +#ifndef CONFIG_TCG + { + const char *arch = qtest_get_arch(); + + if (strcmp(arch, "arm") == 0 || strcmp(arch, "aarch64") == 0) { + add_machine_test_case("virt"); + goto add_machine_test_done; + } + } +#endif /* !CONFIG_TCG */ + qtest_cb_for_every_machine(add_machine_test_case, g_test_quick()); + goto add_machine_test_done; + add_machine_test_done: return g_test_run(); } From patchwork Fri Jun 4 15:52:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454117 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp599122jae; Fri, 4 Jun 2021 10:10:15 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzqCJHEBRv3fcvp8/8oBD1e2/d5v50sYx+gEwybl8WLPUNRsT9RhypJGo+mvxiOAlJTRZqu X-Received: by 2002:aa7:c84a:: with SMTP id g10mr5763531edt.326.1622826614975; Fri, 04 Jun 2021 10:10:14 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826614; cv=none; d=google.com; s=arc-20160816; b=wC0i2PCBekily5A0hAskwuBH1iucmmFcCgXakyeBATmYgT2HPQ5eG9OJqn0i8hfJx9 orCsPp1PLhbFGT0JqSJO6sah7w2kAb8+N+KkW5bU4qlL5x30Ya2W1Kamm/3T/jm0wq9G +riFpYLq+9oBA8n+OSAV2gvDw6+XdETcoY+dr0OdtE4cyg4E/99fDcEcSSr7c4SvSS/B Ftm12aoJwKw2u1PJP2TmUOKrf3uR/+YpITsFVrcFsnoc5UkTyewEZmGvKvCj+RS/Ft+Q WI2kXG1WvraAFhtRWUVUlDMeY2DLKJtF08bVW+rYg0lvgLiLNWDp1GmaIALkJ5l/GrgQ ho/A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=VxtZzAI4CoksM/sdPCcSUcUhaEvNzz/ZBq2sF1VS0fg=; b=zb8s8wkhG3+4VzwSTKfy6x7BI2Lp3TfZKtWcHU5ttvxGSdpBLWVbM4vijovZIq0MKx +hInnO8LuwETtrwuvSIGtA5pFRO6psWHbShGtIG03pnnA31QFdmubBemER44ZLxGJJGi WTnnmIPRFujffVETn3SEdTaaU71bIx08qYH23UFynMau4vUWaSXrv5y577ewj18MgD5Z 333LB4ZQBo7Ixau8XkBIXmRCtPg02x54lb065GPI4JHwpZYm8MoNbYa0gY6CNG64Fq8f JSTKEx2iz733XHFYQos2fmdZXh6pRVLKlsucWLH2TXxnN7ytfC/w9ogBFpL3h26CjRxs uDNA== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=mmWyplAU; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS; Fri, 04 Jun 2021 10:10:14 -0700 (PDT)
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 67/99] target/arm: create kvm cpu accel class
Date: Fri, 4 Jun 2021 16:52:40 +0100
Message-Id: <20210604155312.15902-68-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To:
<20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42c; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42c.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana move init, realizefn and reset code into it. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/internals.h | 1 - target/arm/cpu-sysemu.c | 32 ---------- target/arm/cpu.c | 49 +++----------- target/arm/kvm/kvm-cpu.c | 128 +++++++++++++++++++++++++++++++++++++ target/arm/kvm/meson.build | 1 + 5 files changed, 137 insertions(+), 74 deletions(-) create mode 100644 target/arm/kvm/kvm-cpu.c -- 2.20.1 diff --git a/target/arm/internals.h b/target/arm/internals.h index 227a80ec21..522596d15f 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -1165,7 +1165,6 @@ static inline uint64_t useronly_maybe_clean_ptr(uint32_t desc, uint64_t ptr) #ifndef CONFIG_USER_ONLY void arm_cpu_set_irq(void *opaque, int irq, int level); -void arm_cpu_kvm_set_irq(void *opaque, int irq, int level); bool arm_cpu_virtio_is_big_endian(CPUState *cs); #endif /* !CONFIG_USER_ONLY */ diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index 26467c640b..fff55311f4 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -24,7 +24,6 @@ #include "cpu.h" #include "internals.h" #include "sysemu/hw_accel.h" -#include "kvm/kvm_arm.h" #include "sysemu/tcg.h" #include "tcg/tcg-cpu.h" @@ -72,37 +71,6 @@ void arm_cpu_set_irq(void *opaque, int irq, int level) } } -void arm_cpu_kvm_set_irq(void *opaque, int irq, int level) -{ -#ifdef CONFIG_KVM - ARMCPU *cpu = opaque; - CPUARMState *env = &cpu->env; - CPUState *cs = CPU(cpu); - uint32_t linestate_bit; - int irq_id; - - switch (irq) { - case ARM_CPU_IRQ: - irq_id = KVM_ARM_IRQ_CPU_IRQ; - linestate_bit = CPU_INTERRUPT_HARD; - break; - case ARM_CPU_FIQ: - irq_id = KVM_ARM_IRQ_CPU_FIQ; - linestate_bit = CPU_INTERRUPT_FIQ; - break; - default: - g_assert_not_reached(); - } - - if (level) { - env->irq_line_state |= linestate_bit; - } else { - env->irq_line_state &= ~linestate_bit; - } - kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level); -#endif -} - bool arm_cpu_virtio_is_big_endian(CPUState *cs) { ARMCPU *cpu = ARM_CPU(cs); diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 57f975f5dc..0ecbfa060c 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -42,6 +42,7 @@ #include "disas/capstone.h" #include "fpu/softfloat.h" #include "cpu-mmu.h" +#include "qemu/accel.h" static void arm_cpu_set_pc(CPUState *cs, vaddr value) { @@ -409,11 +410,6 @@ static void arm_cpu_reset(DeviceState *dev) &env->vfp.fp_status_f16); set_float_detect_tininess(float_tininess_before_rounding, &env->vfp.standard_fp_status_f16); -#ifndef CONFIG_USER_ONLY - if (kvm_enabled()) { - kvm_arm_reset_vcpu(cpu); - } 
-#endif if (tcg_enabled()) { hw_breakpoint_update_all(cpu); @@ -560,12 +556,7 @@ static void arm_cpu_initfn(Object *obj) #ifndef CONFIG_USER_ONLY /* Our inbound IRQ and FIQ lines */ - if (kvm_enabled()) { - /* VIRQ and VFIQ are unused with KVM but we add them to maintain - * the same interface as non-KVM CPUs. - */ - qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4); - } else if (tcg_enabled() || qtest_enabled()) { + if (tcg_enabled() || qtest_enabled()) { qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4); } @@ -809,6 +800,9 @@ void arm_cpu_post_init(Object *obj) } } #endif + + /* if required, do accelerator-specific cpu initializations */ + accel_cpu_instance_init(CPU(obj)); } static void arm_cpu_finalizefn(Object *obj) @@ -878,16 +872,13 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) Error *local_err = NULL; bool no_aa32 = false; - /* If we needed to query the host kernel for the CPU features + /* + * If we needed to query the host kernel for the CPU features * then it's possible that might have failed in the initfn, but * this is the first point where we can report it. */ if (cpu->host_cpu_probe_failed) { - if (!kvm_enabled()) { - error_setg(errp, "The 'host' CPU type can only be used with KVM"); - } else { - error_setg(errp, "Failed to retrieve host CPU features"); - } + error_setg(errp, "The 'host' CPU type can only be used with KVM"); return; } @@ -1486,26 +1477,6 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data) arm32_cpu_class_init(oc, data); } -#ifdef CONFIG_KVM -static void arm_host_initfn(Object *obj) -{ - ARMCPU *cpu = ARM_CPU(obj); - - kvm_arm_set_cpu_features_from_host(cpu); - if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { - aarch64_add_sve_properties(obj); - } - arm_cpu_post_init(obj); -} - -static const TypeInfo host_arm_cpu_type_info = { - .name = TYPE_ARM_HOST_CPU, - .parent = TYPE_AARCH64_CPU, - .instance_init = arm_host_initfn, -}; - -#endif - static const TypeInfo arm_cpu_type_info = { .name = TYPE_ARM_CPU, .parent = TYPE_CPU, @@ -1521,10 +1492,6 @@ static const TypeInfo arm_cpu_type_info = { static void arm_cpu_register_types(void) { type_register_static(&arm_cpu_type_info); - -#ifdef CONFIG_KVM - type_register_static(&host_arm_cpu_type_info); -#endif } type_init(arm_cpu_register_types) diff --git a/target/arm/kvm/kvm-cpu.c b/target/arm/kvm/kvm-cpu.c new file mode 100644 index 0000000000..5fbb127e61 --- /dev/null +++ b/target/arm/kvm/kvm-cpu.c @@ -0,0 +1,128 @@ +/* + * QEMU ARM CPU + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qemu-common.h" +#include "cpu.h" +#include "hw/core/accel-cpu.h" +#include "qapi/error.h" + +#include "kvm/kvm_arm.h" +#include "internals.h" + +static void arm_cpu_kvm_set_irq(void *opaque, int irq, int level) +{ + ARMCPU *cpu = opaque; + CPUARMState *env = &cpu->env; + CPUState *cs = CPU(cpu); + uint32_t linestate_bit; + int irq_id; + + switch (irq) { + case ARM_CPU_IRQ: + irq_id = KVM_ARM_IRQ_CPU_IRQ; + linestate_bit = CPU_INTERRUPT_HARD; + break; + case ARM_CPU_FIQ: + irq_id = KVM_ARM_IRQ_CPU_FIQ; + linestate_bit = CPU_INTERRUPT_FIQ; + break; + default: + g_assert_not_reached(); + } + + if (level) { + env->irq_line_state |= linestate_bit; + } else { + env->irq_line_state &= ~linestate_bit; + } + kvm_arm_set_irq(cs->cpu_index, KVM_ARM_IRQ_TYPE_CPU, irq_id, !!level); +} + +static void kvm_cpu_instance_init(CPUState *cs) +{ + /* + * VIRQ and VFIQ are unused with KVM but we add them to maintain + * the same interface as non-KVM CPUs. + */ + qdev_init_gpio_in(DEVICE(cs), arm_cpu_kvm_set_irq, 4); +} + +static bool kvm_cpu_realizefn(CPUState *cs, Error **errp) +{ + /* + * If we needed to query the host kernel for the CPU features + * then it's possible that might have failed in the initfn, but + * this is the first point where we can report it. + */ + ARMCPU *cpu = ARM_CPU(cs); + + if (cpu->host_cpu_probe_failed) { + error_setg(errp, "Failed to retrieve host CPU features"); + return false; + } + return true; +} + +static void host_cpu_instance_init(Object *obj) +{ + ARMCPU *cpu = ARM_CPU(obj); + + kvm_arm_set_cpu_features_from_host(cpu); + if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { + aarch64_add_sve_properties(obj); + } + arm_cpu_post_init(obj); +} + +static void kvm_cpu_reset(CPUState *cs) +{ + kvm_arm_reset_vcpu(ARM_CPU(cs)); +} + +static const TypeInfo host_cpu_type_info = { + .name = ARM_CPU_TYPE_NAME("host"), + .parent = TYPE_AARCH64_CPU, + .instance_init = host_cpu_instance_init, +}; + +static void kvm_cpu_accel_class_init(ObjectClass *oc, void *data) +{ + AccelCPUClass *acc = ACCEL_CPU_CLASS(oc); + + acc->cpu_realizefn = kvm_cpu_realizefn; + acc->cpu_instance_init = kvm_cpu_instance_init; + acc->cpu_reset = kvm_cpu_reset; +} + +static const TypeInfo kvm_cpu_accel_type_info = { + .name = ACCEL_CPU_NAME("kvm"), + .parent = TYPE_ACCEL_CPU, + .class_init = kvm_cpu_accel_class_init, + .abstract = true, +}; + +static void kvm_cpu_accel_register_types(void) +{ + type_register_static(&host_cpu_type_info); + type_register_static(&kvm_cpu_accel_type_info); +} + +type_init(kvm_cpu_accel_register_types); diff --git a/target/arm/kvm/meson.build b/target/arm/kvm/meson.build index e92010fa3f..ef58a29dd7 100644 --- a/target/arm/kvm/meson.build +++ b/target/arm/kvm/meson.build @@ -1,4 +1,5 @@ arm_ss.add(when: 'CONFIG_KVM', if_true: files( 'kvm.c', 'kvm64.c', + 'kvm-cpu.c', )) From patchwork Fri Jun 4 15:52:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454115 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp597304jae; Fri, 4 Jun 2021 10:08:07 -0700 (PDT) X-Google-Smtp-Source: ABdhPJybahBRu6Jr5RDclpaZEtbw2ToU5Kr5PbEwwBuqB1YUlBLMWEdWzSvtylZDc13cd18XX+X/ X-Received: by 2002:a17:906:6d43:: with SMTP id a3mr5233303ejt.142.1622826487347; Fri, 04 Jun 2021 10:08:07 -0700 (PDT) ARC-Seal: 
i=1; a=rsa-sha256; t=1622826487; cv=none; d=google.com; s=arc-20160816; b=P09+hEdLfdq7fHMR5nZ4CVMkiiAnjysvX1vOGcKP7ScczlLqzwWr3m1/EXBC3sIhAh iLwxzSoXzUoT8b7tgg+g0pYgLwdF5PTOgyhQSxl72t5+3EDsISi0Mu/WpbSE7JHLGT4m HxDnc/wAAtYbk532lK74Gijyk+fzmFwNRagts5FSNjDQxpfEhXTbtH9hW00iWo6cPZuG ckeI0Zq/Ql1q6kxr+fu50bJbbIgdCTzmcAdUOO5n7gU/ZI5tSzEhOEvXUbKrgoKkXAL+ YE5Z61hZrQfOSjT+LIOnpRAqbOFh6aHe5T2FFqqxklpi28Oe1idP/PiIGVtx6vjhjxBo OIpA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=D4Fg0Dja58X/pwaq0v8JkFhBHTMRtUReJTqk/3Hcw8E=; b=RfNBTLk1425rKG5GmDF5Pn+C0OgTCVWnKF/tvv4M8AevMBR6HW8YfM1oNuOzsr3raJ fmWtQC5DhRwg5GtrBTIhinPm2A3ChScdfXCxEaeXrsIWOMha11BSIukGv3pGVZtNPa+4 atQQfBWtu3ghROBMGwXkIbGWYbG9ZwZE39z0/csvBQBImsGmQLw/i1LDjuOMfQXkSCUH oo5yt7Mgkb5H3MMpAgtmm4DkBsK7dWskQ/VIwtLDCNxavkkgJgcFoFQY5oSS7XObFR1A wyAZWAyEXGQMkfxCADKGMd0tu1rxJMq5S+QCp6oWB15IKKyRooM5LmCsbSDD3BihD0qV HNZw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=KBtW9owH; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. [209.51.188.17]) by mx.google.com with ESMTPS id i16si5253559ejg.401.2021.06.04.10.08.07 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Fri, 04 Jun 2021 10:08:07 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=KBtW9owH; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1]:35076 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lpDIk-00045u-E3 for patch@linaro.org; Fri, 04 Jun 2021 13:08:06 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:33284) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lpCkV-0007vL-P3 for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:32:43 -0400 Received: from mail-wm1-x334.google.com ([2a00:1450:4864:20::334]:46786) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lpCkU-00026G-49 for qemu-devel@nongnu.org; Fri, 04 Jun 2021 12:32:43 -0400 Received: by mail-wm1-x334.google.com with SMTP id h22-20020a05600c3516b02901a826f84095so977393wmq.5 for ; Fri, 04 Jun 2021 09:32:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=D4Fg0Dja58X/pwaq0v8JkFhBHTMRtUReJTqk/3Hcw8E=; b=KBtW9owH4oqFDsQ/ei+FxAtn2PrZgaswjzShvdAI7I0dp6QEtTxRD4DwztT4zDlSYQ Fjs5coHf8Mf9grFOGpEik+zSUxyEWM14uFaNm69PGvKWG6IJikWrPA5FIcdveg+ZuEbn huO1IUDLKKjdrDsCAipIWlbARm9YkpsQwvVlrG9RIPMicT46QRQkKoEqRAc7pWFLY+6z 
SfoEHoedVc/tpdaa/sPonq/8Bk3Z7qkvxYDnydVBM+sAE2QS9LjY+y3NNohpFJjBLqVv SASzyHTlOnl0lSQ2O//o7js9vT0arieHEwYbRcychxMMYxtF36GETfFheMUN6m94HQId jdFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=D4Fg0Dja58X/pwaq0v8JkFhBHTMRtUReJTqk/3Hcw8E=; b=YZngKqTyCBsvbLzIil0V/QGVh2tsT4vxgytavZe8LxIHCaJTC4i6cKWirBaEshwIxs K5A3U2fSALP1XnxFfIylbOCgKd5X31gt5lOJOEefr+yxfpzGf8cHXAkiH1/KMkbGT+Ej GsEV+7+KcQ7s5V13m0n+CXgo++/pN3iuGIJEDxpM1BUGzbU9QB6MIShgTET/zntkbW38 ZOVllWsOOJt6JgIqj8yl6S73a1tRUuoDMVYwAafAGk+jXNbOhB8s67GSQxbAWFY4lyJ9 tzDhbflqGw/Bi+zEvrj6pnwqBJ1bLMD2ADm4oZ7VRmBdhJt8umhIc/e2bAgbI2nt2oIq DncQ== X-Gm-Message-State: AOAM532srcy40JmUKnQj2RvqgXThKVdPabnfNiw4IeUOMhQOgWmgpYmR OZ5QGNsQumvwzwOgu0eXuYZloA== X-Received: by 2002:a7b:c5d3:: with SMTP id n19mr4593580wmk.68.1622824360689; Fri, 04 Jun 2021 09:32:40 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id w11sm7439148wrv.89.2021.06.04.09.32.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:32:38 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id F2F7F1FFDC; Fri, 4 Jun 2021 16:53:20 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 68/99] target/arm: move kvm post init initialization to kvm cpu accel Date: Fri, 4 Jun 2021 16:52:41 +0100 Message-Id: <20210604155312.15902-69-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::334; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x334.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu.c | 4 ---- target/arm/kvm/kvm-cpu.c | 1 + 2 files changed, 1 insertion(+), 4 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 0ecbfa060c..003e58d8ee 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -778,10 +778,6 @@ void arm_cpu_post_init(Object *obj) qdev_property_add_static(DEVICE(cpu), &arm_cpu_gt_cntfrq_property); } - if (kvm_enabled()) { - kvm_arm_add_vcpu_properties(obj); - } - #ifndef CONFIG_USER_ONLY if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64) && cpu_isar_feature(aa64_mte, cpu)) { diff --git a/target/arm/kvm/kvm-cpu.c b/target/arm/kvm/kvm-cpu.c index 5fbb127e61..9f65010c0c 100644 --- a/target/arm/kvm/kvm-cpu.c +++ b/target/arm/kvm/kvm-cpu.c @@ -63,6 +63,7 @@ static void kvm_cpu_instance_init(CPUState *cs) * the same interface as non-KVM CPUs. 
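The reason kvm_arm_add_vcpu_properties() can simply move into kvm_cpu_instance_init() is the accel-cpu dispatch introduced earlier in the series: arm_cpu_post_init() calls accel_cpu_instance_init(), and generic code forwards that to the AccelCPUClass of the active accelerator if it provides a hook. A self-contained toy model of that dispatch (plain C, not QEMU code; types and field names are simplified stand-ins):

#include <stdio.h>

typedef struct AccelCPUClass {
    void (*cpu_instance_init)(void *cpu);   /* optional per-accel hook */
} AccelCPUClass;

typedef struct CPUState {
    const AccelCPUClass *accel;             /* filled in from the active accelerator */
} CPUState;

static void kvm_cpu_instance_init_hook(void *cpu)
{
    /* per-accel setup, e.g. adding KVM-only vcpu properties, runs here */
    puts("kvm: cpu instance init");
}

static const AccelCPUClass kvm_accel_cpu = {
    .cpu_instance_init = kvm_cpu_instance_init_hook,
};

static void accel_cpu_instance_init(CPUState *cs)
{
    /* generic code calls the hook only if the accelerator registered one */
    if (cs->accel && cs->accel->cpu_instance_init) {
        cs->accel->cpu_instance_init(cs);
    }
}

int main(void)
{
    CPUState cpu = { .accel = &kvm_accel_cpu };
    accel_cpu_instance_init(&cpu);
    return 0;
}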
*/ qdev_init_gpio_in(DEVICE(cs), arm_cpu_kvm_set_irq, 4); + kvm_arm_add_vcpu_properties(OBJECT(cs)); } static bool kvm_cpu_realizefn(CPUState *cs, Error **errp) From patchwork Fri Jun 4 15:52:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454110 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp591708jae; Fri, 4 Jun 2021 10:01:51 -0700 (PDT) X-Google-Smtp-Source: ABdhPJx+kFIm0g8NHdWJajwMea2ezmUAuEEzkS0Vx3SWfdqittBfx1g/3nTpY/4c3IX2yjWxbvST X-Received: by 2002:a2e:2e16:: with SMTP id u22mr4207620lju.322.1622826110820; Fri, 04 Jun 2021 10:01:50 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826110; cv=none; d=google.com; s=arc-20160816; b=kC3uaI5IVJbSfx6ufDHUPZ8IMs3/ewUXWy3r0TgShZnOPNU/bvbBwMxRlmOP/Dl9RW 19lnj7cDV3KmgEvjnoJeYpgkNNA4qC+fRTc8lXyG3dBd7o1yMUy/167VK6zQMgnYWvAN uKdrh8RipVe7NJmKIZynQVliXAS+uUlB9KOvqXwVOcrPg8Uwkjh0vBIYHq9sQfRzn8ZA y9BCwtKhZJSv8j1XO5rUGMobMk9STOuuFsyBuQIzObsr1c4Ol7zhF8mkZKnN6dnDyBQW v56IZ0gRGx8wsCETi2lrM9jKRpcaXXf/WL0wIm5QeasVLERjwcijM1GJpc0NTM4VHbZ3 /flw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=IZcluuGH+rF7DI3dlOPWDCggxxQXhI+T0BM1IJZ8hik=; b=Q44FU3R7jmyW/6LAuhWnXd0niWEt/o8WkZ3LD237JXOzdy8zUKMtlg1dd9/vBNyI/T dfQEuS1z/wWKduqebbj5kpTzOey/22prk9awwWmt54GST0yt/k3eNRhjR3r0CpGpkNXQ lJDAm6oBhSljHwCrMTheFZr0fhRc/ugnaPd1iLh4XzbwQsDZH0LX8CM4Vjfok9OUITIA BDyX8zXU64zufUepo7c9wGABiIvYUwapZWmlmUjucJ17TI13V1KsPVmSVk1t+rsWRAXk PoqLQizfUNVKij14J3AILn+Mkk7VRO/W2mrTFHqHGIlb8l0+ZgvnEY6QwGa/3AuqiKPm VbrQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=jgqtgZh3; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS; Fri, 04 Jun 2021 10:01:50 -0700 (PDT)
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 69/99] target/arm: add tcg cpu accel class
Date: Fri, 4 Jun 2021 16:52:42 +0100
Message-Id: <20210604155312.15902-70-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To:
<20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42b; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Paolo Bonzini , qemu-arm@nongnu.org, =?utf-8?q?Al?= =?utf-8?q?ex_Benn=C3=A9e?= , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana move init, realizefn and reset code into it. Signed-off-by: Claudio Fontana Cc: Paolo Bonzini Signed-off-by: Alex Bennée --- target/arm/tcg/tcg-cpu.h | 4 ++- target/arm/cpu.c | 44 ++------------------------ target/arm/tcg/sysemu/tcg-cpu.c | 27 ++++++++++++++++ target/arm/tcg/tcg-cpu-models.c | 10 +++--- target/arm/tcg/tcg-cpu.c | 55 +++++++++++++++++++++++++++++++-- 5 files changed, 92 insertions(+), 48 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/tcg-cpu.h b/target/arm/tcg/tcg-cpu.h index d93c6a6749..dd08587949 100644 --- a/target/arm/tcg/tcg-cpu.h +++ b/target/arm/tcg/tcg-cpu.h @@ -22,15 +22,17 @@ #include "cpu.h" #include "hw/core/tcg-cpu-ops.h" +#include "hw/core/accel-cpu.h" void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb); -extern struct TCGCPUOps arm_tcg_ops; +void tcg_arm_init_accel_cpu(AccelCPUClass *accel_cpu, CPUClass *cc); #ifndef CONFIG_USER_ONLY /* Do semihosting call and set the appropriate return value. */ void tcg_handle_semihosting(CPUState *cs); +bool tcg_cpu_realizefn(CPUState *cs, Error **errp); #endif /* !CONFIG_USER_ONLY */ diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 003e58d8ee..945dfbbe9d 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -410,12 +410,6 @@ static void arm_cpu_reset(DeviceState *dev) &env->vfp.fp_status_f16); set_float_detect_tininess(float_tininess_before_rounding, &env->vfp.standard_fp_status_f16); - - if (tcg_enabled()) { - hw_breakpoint_update_all(cpu); - hw_watchpoint_update_all(cpu); - arm_rebuild_hflags(env); - } } void arm_cpu_update_virq(ARMCPU *cpu) @@ -576,10 +570,6 @@ static void arm_cpu_initfn(Object *obj) cpu->dtb_compatible = "qemu,unknown"; cpu->psci_version = 1; /* By default assume PSCI v0.1 */ cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE; - - if (tcg_enabled()) { - cpu->psci_version = 2; /* TCG implements PSCI 0.2 */ - } } static Property arm_cpu_gt_cntfrq_property = @@ -868,34 +858,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) Error *local_err = NULL; bool no_aa32 = false; - /* - * If we needed to query the host kernel for the CPU features - * then it's possible that might have failed in the initfn, but - * this is the first point where we can report it. - */ - if (cpu->host_cpu_probe_failed) { - error_setg(errp, "The 'host' CPU type can only be used with KVM"); - return; - } - -#ifndef CONFIG_USER_ONLY - /* The NVIC and M-profile CPU are two halves of a single piece of - * hardware; trying to use one without the other is a command line - * error and will result in segfaults if not caught here. 
- */ - if (arm_feature(env, ARM_FEATURE_M)) { - if (!env->nvic) { - error_setg(errp, "This board cannot be used with Cortex-M CPUs"); - return; - } - } else { - if (env->nvic) { - error_setg(errp, "This board can only be used with Cortex-M CPUs"); - return; - } - } - -#ifdef CONFIG_TCG +#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY) { uint64_t scale; @@ -921,8 +884,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) cpu->gt_timer[GTIMER_HYPVIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale, arm_gt_hvtimer_cb, cpu); } -#endif /* CONFIG_TCG */ -#endif /* !CONFIG_USER_ONLY */ +#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */ cpu_exec_realizefn(cs, &local_err); if (local_err != NULL) { @@ -1467,7 +1429,7 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data) cc->disas_set_info = arm_disas_set_info; #ifdef CONFIG_TCG - cc->tcg_ops = &arm_tcg_ops; + cc->init_accel_cpu = tcg_arm_init_accel_cpu; #endif /* CONFIG_TCG */ arm32_cpu_class_init(oc, data); diff --git a/target/arm/tcg/sysemu/tcg-cpu.c b/target/arm/tcg/sysemu/tcg-cpu.c index 327b2a5073..115ac523dc 100644 --- a/target/arm/tcg/sysemu/tcg-cpu.c +++ b/target/arm/tcg/sysemu/tcg-cpu.c @@ -19,10 +19,13 @@ */ #include "qemu/osdep.h" +#include "qapi/error.h" +#include "qemu/timer.h" #include "cpu.h" #include "semihosting/common-semi.h" #include "qemu/log.h" #include "tcg/tcg-cpu.h" +#include "internals.h" /* * Do semihosting call and set the appropriate return value. All the @@ -50,3 +53,27 @@ void tcg_handle_semihosting(CPUState *cs) env->regs[15] += env->thumb ? 2 : 4; } } + +bool tcg_cpu_realizefn(CPUState *cs, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + /* + * The NVIC and M-profile CPU are two halves of a single piece of + * hardware; trying to use one without the other is a command line + * error and will result in segfaults if not caught here. 
+ */ + if (arm_feature(env, ARM_FEATURE_M)) { + if (!env->nvic) { + error_setg(errp, "This board cannot be used with Cortex-M CPUs"); + return false; + } + } else { + if (env->nvic) { + error_setg(errp, "This board can only be used with Cortex-M CPUs"); + return false; + } + } + return true; +} diff --git a/target/arm/tcg/tcg-cpu-models.c b/target/arm/tcg/tcg-cpu-models.c index 91af2174a1..975869f276 100644 --- a/target/arm/tcg/tcg-cpu-models.c +++ b/target/arm/tcg/tcg-cpu-models.c @@ -846,16 +846,18 @@ static const struct TCGCPUOps arm_v7m_tcg_ops = { }; #endif /* CONFIG_TCG */ +static void arm_v7m_init_accel_cpu(AccelCPUClass *accel_cpu, CPUClass *cc) +{ + cc->tcg_ops = &arm_v7m_tcg_ops; +} + static void arm_v7m_class_init(ObjectClass *oc, void *data) { ARMCPUClass *acc = ARM_CPU_CLASS(oc); CPUClass *cc = CPU_CLASS(oc); acc->info = data; -#ifdef CONFIG_TCG - cc->tcg_ops = &arm_v7m_tcg_ops; -#endif /* CONFIG_TCG */ - + cc->init_accel_cpu = arm_v7m_init_accel_cpu; cc->gdb_core_xml_file = "arm-m-profile.xml"; } diff --git a/target/arm/tcg/tcg-cpu.c b/target/arm/tcg/tcg-cpu.c index 9fd996d908..db677bc71c 100644 --- a/target/arm/tcg/tcg-cpu.c +++ b/target/arm/tcg/tcg-cpu.c @@ -20,8 +20,8 @@ #include "qemu/osdep.h" #include "cpu.h" +#include "qapi/error.h" #include "tcg-cpu.h" -#include "hw/core/tcg-cpu-ops.h" #include "cpregs.h" #include "internals.h" #include "exec/exec-all.h" @@ -212,7 +212,7 @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request) return true; } -struct TCGCPUOps arm_tcg_ops = { +static struct TCGCPUOps arm_tcg_ops = { .initialize = arm_translate_init, .synchronize_from_tb = arm_cpu_synchronize_from_tb, .cpu_exec_interrupt = arm_cpu_exec_interrupt, @@ -227,3 +227,54 @@ struct TCGCPUOps arm_tcg_ops = { .debug_check_watchpoint = arm_debug_check_watchpoint, #endif /* !CONFIG_USER_ONLY */ }; + +static void tcg_cpu_instance_init(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + + /* + * this would be the place to move TCG-specific props + * in future refactoring of cpu properties. 
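The cc->init_accel_cpu hook used above is the other half of the same idea: rather than the CPU class pointing at arm_tcg_ops unconditionally (which a KVM-only build cannot provide), the accelerator calls back into each CPU class so TCG-only ops are installed only when TCG is active. A standalone toy model of that double dispatch (plain C, simplified names, not QEMU code):

#include <stdio.h>

typedef struct AccelCPUClass {
    const char *name;
} AccelCPUClass;

typedef struct CPUClass {
    const char *name;
    const void *tcg_ops;    /* stand-in for a TCGCPUOps table */
    void (*init_accel_cpu)(AccelCPUClass *accel, struct CPUClass *cc);
} CPUClass;

static const int fake_arm_tcg_ops;      /* placeholder ops table */

static void toy_tcg_arm_init_accel_cpu(AccelCPUClass *accel, CPUClass *cc)
{
    cc->tcg_ops = &fake_arm_tcg_ops;    /* installed only under the TCG accel */
    printf("%s installed tcg_ops for %s\n", accel->name, cc->name);
}

int main(void)
{
    AccelCPUClass tcg = { .name = "tcg-accel" };
    CPUClass arm_cc = { .name = "arm-cpu",
                        .init_accel_cpu = toy_tcg_arm_init_accel_cpu };

    /* generic accel init offers each CPU class the callback, if set */
    if (arm_cc.init_accel_cpu) {
        arm_cc.init_accel_cpu(&tcg, &arm_cc);
    }
    return 0;
}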
+ */ + + cpu->psci_version = 2; /* TCG implements PSCI 0.2 */ +} + +static void tcg_cpu_reset(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + hw_breakpoint_update_all(cpu); + hw_watchpoint_update_all(cpu); + arm_rebuild_hflags(env); +} + +void tcg_arm_init_accel_cpu(AccelCPUClass *accel_cpu, CPUClass *cc) +{ + cc->tcg_ops = &arm_tcg_ops; +} + +static void tcg_cpu_accel_class_init(ObjectClass *oc, void *data) +{ + AccelCPUClass *acc = ACCEL_CPU_CLASS(oc); + +#ifndef CONFIG_USER_ONLY + acc->cpu_realizefn = tcg_cpu_realizefn; +#endif /* CONFIG_USER_ONLY */ + + acc->cpu_instance_init = tcg_cpu_instance_init; + acc->cpu_reset = tcg_cpu_reset; +} +static const TypeInfo tcg_cpu_accel_type_info = { + .name = ACCEL_CPU_NAME("tcg"), + + .parent = TYPE_ACCEL_CPU, + .class_init = tcg_cpu_accel_class_init, + .abstract = true, +}; +static void tcg_cpu_accel_register_types(void) +{ + type_register_static(&tcg_cpu_accel_type_info); +} +type_init(tcg_cpu_accel_register_types); From patchwork Fri Jun 4 15:52:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454083 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp566061jae; Fri, 4 Jun 2021 09:29:37 -0700 (PDT) X-Google-Smtp-Source: ABdhPJw1l+II/a4lWYbvIi/C7elk2cUzlc96T/r02taBx/lKFyL5wcevRtr3WD1B6dtjOu8xDuub X-Received: by 2002:a67:c90d:: with SMTP id w13mr3539686vsk.17.1622824177126; Fri, 04 Jun 2021 09:29:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824177; cv=none; d=google.com; s=arc-20160816; b=A6sZwmwBrZgok782CUkaU9TMnEQV/kq0U7YTdfoZk6+SYJcf9wB74o0zo6o7L4vvVt O1ZjOL7PrgqvhmewGzgEyt9IfK/qTb1HeQw6AjiQbEsKeeb/bygCTP0FTOASjHLl3YiV Q9QAhDzX4XG6LWSxNiq1KKNRE7VuQTQ3NdIuKnpjZag7I+6D5Nc5brS+j3K/76ZXGbhs u+GOl2C4eHq3zuinCOebrpFGqwTdOOYoMktxXEPxELRlbsef7Z8991VVzvgcoNRo81To CbIrENn0yh/Izt5o/TbvsnldzjcAxD6tgTEfj37UyS3rigg/bZgEAAragdOIyjqj+Bqy 9DRQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=wuU8nDRdcW3/99gtEo5hPanI3MwrgtDEgaxIpfRRswM=; b=F7Nq+9j35w8iwzAf9dPPcuXFk34eEzeK0Fpu8HteRVU6KVzby+k8MXQAPWI7obbugW hkhyxKqYsnTqNAGPg1GyM3tulqMUWvdDHH2D6CwqVRaBM06XBjsrIoscdyOtVDvg++2Y Mv2rEVqYVN1GF+UcCihYlxtZYZpBWM2vV+H4md5e8CzJRlbT4Og6tI/KdjnM1M9cBzEl psUJe7BHiLzYmx+qw6U9YitUeoHR7riyrcbtA3s/r1F0/TuxtPSCIr+4WEfCc97O7lOe +beY3sbqPhi94QKvdloYnNBlFL0Wzex7GuL7gBqUp5ZrJ2rSJdJBexL0MqRe0UgnSNi7 h4EA== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=cUSanbjg; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS; Fri, 04 Jun 2021 09:29:37 -0700 (PDT)
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 70/99] target/arm: move TCG gt timer creation code in tcg/
Date: Fri, 4 Jun 2021 16:52:43 +0100
Message-Id: <20210604155312.15902-71-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::435; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x435.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana we need to be careful not to use if (tcg_enabled()) here, because of the VMSTATE definitions in machine.c, which are only protected by CONFIG_TCG, and thus it would break the --enable-tcg --enable-kvm build. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/tcg/tcg-cpu.h | 1 + target/arm/cpu.c | 30 ++++--------------------- target/arm/tcg/sysemu/tcg-cpu.c | 40 +++++++++++++++++++++++++++++++++ 3 files changed, 45 insertions(+), 26 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/tcg-cpu.h b/target/arm/tcg/tcg-cpu.h index dd08587949..3e4ce2c355 100644 --- a/target/arm/tcg/tcg-cpu.h +++ b/target/arm/tcg/tcg-cpu.h @@ -33,6 +33,7 @@ void tcg_arm_init_accel_cpu(AccelCPUClass *accel_cpu, CPUClass *cc); /* Do semihosting call and set the appropriate return value. */ void tcg_handle_semihosting(CPUState *cs); bool tcg_cpu_realizefn(CPUState *cs, Error **errp); +bool tcg_cpu_realize_gt_timers(CPUState *cs, Error **errp); #endif /* !CONFIG_USER_ONLY */ diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 945dfbbe9d..2fef8ca471 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -859,32 +859,10 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) bool no_aa32 = false; #if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY) - { - uint64_t scale; - - if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) { - if (!cpu->gt_cntfrq_hz) { - error_setg(errp, "Invalid CNTFRQ: %"PRId64"Hz", - cpu->gt_cntfrq_hz); - return; - } - scale = gt_cntfrq_period_ns(cpu); - } else { - scale = GTIMER_SCALE; - } - - cpu->gt_timer[GTIMER_PHYS] = timer_new(QEMU_CLOCK_VIRTUAL, scale, - arm_gt_ptimer_cb, cpu); - cpu->gt_timer[GTIMER_VIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale, - arm_gt_vtimer_cb, cpu); - cpu->gt_timer[GTIMER_HYP] = timer_new(QEMU_CLOCK_VIRTUAL, scale, - arm_gt_htimer_cb, cpu); - cpu->gt_timer[GTIMER_SEC] = timer_new(QEMU_CLOCK_VIRTUAL, scale, - arm_gt_stimer_cb, cpu); - cpu->gt_timer[GTIMER_HYPVIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale, - arm_gt_hvtimer_cb, cpu); - } -#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */ + if (!tcg_cpu_realize_gt_timers(cs, errp)) { + return; + } +#endif cpu_exec_realizefn(cs, &local_err); if (local_err != NULL) { diff --git a/target/arm/tcg/sysemu/tcg-cpu.c b/target/arm/tcg/sysemu/tcg-cpu.c index 115ac523dc..1c6df15092 100644 --- a/target/arm/tcg/sysemu/tcg-cpu.c +++ b/target/arm/tcg/sysemu/tcg-cpu.c @@ -54,6 +54,46 @@ void tcg_handle_semihosting(CPUState *cs) } } +/* + * we cannot use tcg_enabled() to condition the call to this function, + * due to the fields VMSTATE definitions in machine.c : it would 
break + * the --enable-tcg --enable-kvm build. We need to run this code whenever + * CONFIG_TCG is true, regardless of the chosen accelerator. + * + * So we cannot call this from tcg_cpu_realizefn, as this needs to + * be called whenever TCG is built-in, regardless of whether it is + * enabled or not. + */ +bool tcg_cpu_realize_gt_timers(CPUState *cs, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + uint64_t scale; + + if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) { + if (!cpu->gt_cntfrq_hz) { + error_setg(errp, "Invalid CNTFRQ: %"PRId64"Hz", + cpu->gt_cntfrq_hz); + return false; + } + scale = gt_cntfrq_period_ns(cpu); + } else { + scale = GTIMER_SCALE; + } + + cpu->gt_timer[GTIMER_PHYS] = timer_new(QEMU_CLOCK_VIRTUAL, scale, + arm_gt_ptimer_cb, cpu); + cpu->gt_timer[GTIMER_VIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale, + arm_gt_vtimer_cb, cpu); + cpu->gt_timer[GTIMER_HYP] = timer_new(QEMU_CLOCK_VIRTUAL, scale, + arm_gt_htimer_cb, cpu); + cpu->gt_timer[GTIMER_SEC] = timer_new(QEMU_CLOCK_VIRTUAL, scale, + arm_gt_stimer_cb, cpu); + cpu->gt_timer[GTIMER_HYPVIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale, + arm_gt_hvtimer_cb, cpu); + return true; +} + bool tcg_cpu_realizefn(CPUState *cs, Error **errp) { ARMCPU *cpu = ARM_CPU(cs); From patchwork Fri Jun 4 15:52:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454098 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp577549jae; Fri, 4 Jun 2021 09:43:44 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxJBMPwvK6FvVNrkNQXeNyx3Q0yAuRz72Ay6QomaEtPeVrxo8mW6mNRrnj9sAyK2a4US8Up X-Received: by 2002:ab0:6147:: with SMTP id w7mr4603199uan.49.1622825024748; Fri, 04 Jun 2021 09:43:44 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825024; cv=none; d=google.com; s=arc-20160816; b=oKwpei/ogTQVJS0MDe2q9UGvZwjjWWuDTo0GbKUMgo3JCtBdkyRcDEXOFL/+87fPTZ btSYR1DJ4FIwx7a8rKXRfM4odkbiqy2UIjo/IS4gakeOTzUlfGxXUYwi2D0BmFdx7ogm 85ly1XbpxwRcuPuw2qjH6drcetltAbLqucpyk5QwveHKPakb1n7WsgwNVDljSKIsV5rP gofOd9oHHPjJ8qGgKMm4A4YkuZqCqaxpYhEsjSZ7P0MG+ayX2j/gWmXGF5dUqaSHhDm6 kzpwjthLgbrqvc1HB0bxXdUAl9058Sk6KGh+xJJG/CbOQspAsS+5NhpejZGwdtc5zfJ2 ZTFw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=+lRMEJ0Jroff3JWceP6oHhM/6tfNcxStzXU3u63x4aE=; b=hx/1lFTztpwsGAp5tEwKd1HWzZs4aBM1TkaXefyHZXy2qqnlNN4Y4Ag8RBzFfIkLNY p1/ryc51bP2lHnhhv5EG22Szgt9GHTWC8sm1FvPglq3JxWpgfvon2IwkPtIVAOrm20FZ h28qeFfacwS5dHjR9y6K4O2kZYGcKtMhfe0GAsZ2aYmcWNAnBevpZLmM0OkrMO6BzxUW eO9g1AjUa380VOD6Lk42z940yc41euIz+BrSciz6BXCIcpRGHpFXMjCuTy19GD9MHyX0 kSjlY+npzJKasuWdXeJEYvQH9nHg+Mgdah373b4OaM29/0yO9NHXnIwPd69BwgZBEU1f 3f3w== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=Td5J72B7; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS; Fri, 04 Jun 2021 09:43:44 -0700 (PDT)
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 71/99] target/arm: cpu-sve: new module
Date: Fri, 4 Jun 2021 16:52:44 +0100
Message-Id: <20210604155312.15902-72-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To:
<20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42d; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana extract the SVE-related cpu object properties and functions, and move them to a separate module. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-sve.h | 37 ++++ target/arm/cpu.h | 14 +- target/arm/cpu-sve.c | 358 +++++++++++++++++++++++++++++++++++++++ target/arm/cpu.c | 3 + target/arm/cpu64.c | 329 +---------------------------------- target/arm/kvm/kvm-cpu.c | 1 + target/arm/meson.build | 1 + 7 files changed, 408 insertions(+), 335 deletions(-) create mode 100644 target/arm/cpu-sve.h create mode 100644 target/arm/cpu-sve.c -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-sve.h b/target/arm/cpu-sve.h new file mode 100644 index 0000000000..692509d419 --- /dev/null +++ b/target/arm/cpu-sve.h @@ -0,0 +1,37 @@ +/* + * QEMU AArch64 CPU SVE Extensions for TARGET_AARCH64 + * + * Copyright (c) 2013 Linaro Ltd + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#ifndef CPU_SVE_H +#define CPU_SVE_H + +/* note: SVE is an AARCH64-only option, only include this for TARGET_AARCH64 */ + +#include "cpu.h" + +/* called by arm_cpu_finalize_features in realizefn */ +void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp); + +/* add the CPU SVE properties */ +void aarch64_add_sve_properties(Object *obj); + +/* add the CPU SVE properties specific to the "MAX" CPU */ +void aarch64_add_sve_properties_max(Object *obj); + +#endif /* CPU_SVE_H */ diff --git a/target/arm/cpu.h b/target/arm/cpu.h index f57fa9b9f5..b9b9bd8b01 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -173,7 +173,8 @@ typedef struct { #define VSTCR_SW VTCR_NSW #define VSTCR_SA VTCR_NSA -/* Define a maximum sized vector register. +/* + * Define a maximum sized vector register. * For 32-bit, this is a 128-bit NEON/AdvSIMD register. * For 64-bit, this is a 2048-bit SVE register. 
* @@ -201,13 +202,9 @@ typedef struct { #ifdef TARGET_AARCH64 # define ARM_MAX_VQ 16 -void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp); -void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp); #else # define ARM_MAX_VQ 1 -static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { } -static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { } -#endif +#endif /* TARGET_AARCH64 */ typedef struct ARMVectorReg { uint64_t d[2 * ARM_MAX_VQ] QEMU_ALIGNED(16); @@ -219,10 +216,13 @@ typedef struct ARMPredicateReg { uint64_t p[DIV_ROUND_UP(2 * ARM_MAX_VQ, 8)] QEMU_ALIGNED(16); } ARMPredicateReg; +void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp); /* In AArch32 mode, PAC keys do not exist at all. */ typedef struct ARMPACKey { uint64_t lo, hi; } ARMPACKey; +#else +static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { } #endif /* See the commentary above the TBFLAG field definitions. */ @@ -1059,7 +1059,6 @@ int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg); void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq); void aarch64_sve_change_el(CPUARMState *env, int old_el, int new_el, bool el0_a64); -void aarch64_add_sve_properties(Object *obj); /* * SVE registers are encoded in KVM's memory in an endianness-invariant format. @@ -1090,7 +1089,6 @@ static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { } static inline void aarch64_sve_change_el(CPUARMState *env, int o, int n, bool a) { } -static inline void aarch64_add_sve_properties(Object *obj) { } #endif void aarch64_sync_32_to_64(CPUARMState *env); diff --git a/target/arm/cpu-sve.c b/target/arm/cpu-sve.c new file mode 100644 index 0000000000..129fb9586e --- /dev/null +++ b/target/arm/cpu-sve.c @@ -0,0 +1,358 @@ +/* + * QEMU ARM CPU + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qapi/error.h" +#include "cpu.h" +#include "sysemu/tcg.h" +#include "sysemu/kvm.h" +#include "kvm/kvm_arm.h" +#include "qapi/visitor.h" +#include "cpu-sve.h" + +void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) +{ + /* + * If any vector lengths are explicitly enabled with sve properties, + * then all other lengths are implicitly disabled. If sve-max-vq is + * specified then it is the same as explicitly enabling all lengths + * up to and including the specified maximum, which means all larger + * lengths will be implicitly disabled. If no sve properties + * are enabled and sve-max-vq is not specified, then all lengths not + * explicitly disabled will be enabled. Additionally, all power-of-two + * vector lengths less than the maximum enabled length will be + * automatically enabled and all vector lengths larger than the largest + * disabled power-of-two vector length will be automatically disabled. + * Errors are generated if the user provided input that interferes with + * any of the above. 
Finally, if SVE is not disabled, then at least one + * vector length must be enabled. + */ + DECLARE_BITMAP(kvm_supported, ARM_MAX_VQ); + DECLARE_BITMAP(tmp, ARM_MAX_VQ); + uint32_t vq, max_vq = 0; + + /* Collect the set of vector lengths supported by KVM. */ + bitmap_zero(kvm_supported, ARM_MAX_VQ); + if (kvm_enabled() && kvm_arm_sve_supported()) { + kvm_arm_sve_get_vls(CPU(cpu), kvm_supported); + } else if (kvm_enabled()) { + assert(!cpu_isar_feature(aa64_sve, cpu)); + } + + /* + * Process explicit sve properties. + * From the properties, sve_vq_map implies sve_vq_init. + * Check first for any sve enabled. + */ + if (!bitmap_empty(cpu->sve_vq_map, ARM_MAX_VQ)) { + max_vq = find_last_bit(cpu->sve_vq_map, ARM_MAX_VQ) + 1; + + if (cpu->sve_max_vq && max_vq > cpu->sve_max_vq) { + error_setg(errp, "cannot enable sve%d", max_vq * 128); + error_append_hint(errp, "sve%d is larger than the maximum vector " + "length, sve-max-vq=%d (%d bits)\n", + max_vq * 128, cpu->sve_max_vq, + cpu->sve_max_vq * 128); + return; + } + + if (kvm_enabled()) { + /* + * For KVM we have to automatically enable all supported unitialized + * lengths, even when the smaller lengths are not all powers-of-two. + */ + bitmap_andnot(tmp, kvm_supported, cpu->sve_vq_init, max_vq); + bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq); + } else if (tcg_enabled()) { + /* Propagate enabled bits down through required powers-of-two. */ + for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { + if (!test_bit(vq - 1, cpu->sve_vq_init)) { + set_bit(vq - 1, cpu->sve_vq_map); + } + } + } + } else if (cpu->sve_max_vq == 0) { + /* + * No explicit bits enabled, and no implicit bits from sve-max-vq. + */ + if (!cpu_isar_feature(aa64_sve, cpu)) { + /* SVE is disabled and so are all vector lengths. Good. */ + return; + } + + if (kvm_enabled()) { + /* Disabling a supported length disables all larger lengths. */ + for (vq = 1; vq <= ARM_MAX_VQ; ++vq) { + if (test_bit(vq - 1, cpu->sve_vq_init) && + test_bit(vq - 1, kvm_supported)) { + break; + } + } + max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ; + bitmap_andnot(cpu->sve_vq_map, kvm_supported, + cpu->sve_vq_init, max_vq); + if (max_vq == 0 || bitmap_empty(cpu->sve_vq_map, max_vq)) { + error_setg(errp, "cannot disable sve%d", vq * 128); + error_append_hint(errp, "Disabling sve%d results in all " + "vector lengths being disabled.\n", + vq * 128); + error_append_hint(errp, "With SVE enabled, at least one " + "vector length must be enabled.\n"); + return; + } + } else if (tcg_enabled()) { + /* Disabling a power-of-two disables all larger lengths. */ + if (test_bit(0, cpu->sve_vq_init)) { + error_setg(errp, "cannot disable sve128"); + error_append_hint(errp, "Disabling sve128 results in all " + "vector lengths being disabled.\n"); + error_append_hint(errp, "With SVE enabled, at least one " + "vector length must be enabled.\n"); + return; + } + for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) { + if (test_bit(vq - 1, cpu->sve_vq_init)) { + break; + } + } + max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ; + bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq); + } + + max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1; + } + + /* + * Process the sve-max-vq property. + * Note that we know from the above that no bit above + * sve-max-vq is currently set. 
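+ *
+ * (Illustration added for clarity, not part of the original patch; it
+ * assumes the usual QEMU -cpu property syntax and follows the rules
+ * described in the comment at the top of this function:
+ *   -cpu max,sve-max-vq=4            -> sve128, sve256, sve384, sve512
+ *   -cpu max,sve384=on               -> sve128, sve256, sve384
+ *   -cpu max,sve-max-vq=4,sve384=off -> sve128, sve256, sve512
+ *   -cpu max,sve=off                 -> SVE disabled entirely.)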
+ */ + if (cpu->sve_max_vq != 0) { + max_vq = cpu->sve_max_vq; + + if (!test_bit(max_vq - 1, cpu->sve_vq_map) && + test_bit(max_vq - 1, cpu->sve_vq_init)) { + error_setg(errp, "cannot disable sve%d", max_vq * 128); + error_append_hint(errp, "The maximum vector length must be " + "enabled, sve-max-vq=%d (%d bits)\n", + max_vq, max_vq * 128); + return; + } + + /* Set all bits not explicitly set within sve-max-vq. */ + bitmap_complement(tmp, cpu->sve_vq_init, max_vq); + bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq); + } + + /* + * We should know what max-vq is now. Also, as we're done + * manipulating sve-vq-map, we ensure any bits above max-vq + * are clear, just in case anybody looks. + */ + assert(max_vq != 0); + bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq); + + if (kvm_enabled()) { + /* Ensure the set of lengths matches what KVM supports. */ + bitmap_xor(tmp, cpu->sve_vq_map, kvm_supported, max_vq); + if (!bitmap_empty(tmp, max_vq)) { + vq = find_last_bit(tmp, max_vq) + 1; + if (test_bit(vq - 1, cpu->sve_vq_map)) { + if (cpu->sve_max_vq) { + error_setg(errp, "cannot set sve-max-vq=%d", + cpu->sve_max_vq); + error_append_hint(errp, "This KVM host does not support " + "the vector length %d-bits.\n", + vq * 128); + error_append_hint(errp, "It may not be possible to use " + "sve-max-vq with this KVM host. Try " + "using only sve properties.\n"); + } else { + error_setg(errp, "cannot enable sve%d", vq * 128); + error_append_hint(errp, "This KVM host does not support " + "the vector length %d-bits.\n", + vq * 128); + } + } else { + error_setg(errp, "cannot disable sve%d", vq * 128); + error_append_hint(errp, "The KVM host requires all " + "supported vector lengths smaller " + "than %d bits to also be enabled.\n", + max_vq * 128); + } + return; + } + } else if (tcg_enabled()) { + /* Ensure all required powers-of-two are enabled. */ + for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { + if (!test_bit(vq - 1, cpu->sve_vq_map)) { + error_setg(errp, "cannot disable sve%d", vq * 128); + error_append_hint(errp, "sve%d is required as it " + "is a power-of-two length smaller than " + "the maximum, sve%d\n", + vq * 128, max_vq * 128); + return; + } + } + } + + /* + * Now that we validated all our vector lengths, the only question + * left to answer is if we even want SVE at all. + */ + if (!cpu_isar_feature(aa64_sve, cpu)) { + error_setg(errp, "cannot enable sve%d", max_vq * 128); + error_append_hint(errp, "SVE must be enabled to enable vector " + "lengths.\n"); + error_append_hint(errp, "Add sve=on to the CPU property list.\n"); + return; + } + + /* From now on sve_max_vq is the actual maximum supported length. */ + cpu->sve_max_vq = max_vq; +} + +static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(obj); + uint32_t value; + + /* All vector lengths are disabled when SVE is off. 
*/ + if (!cpu_isar_feature(aa64_sve, cpu)) { + value = 0; + } else { + value = cpu->sve_max_vq; + } + visit_type_uint32(v, name, &value, errp); +} + +static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(obj); + uint32_t max_vq; + + if (!visit_type_uint32(v, name, &max_vq, errp)) { + return; + } + + if (kvm_enabled() && !kvm_arm_sve_supported()) { + error_setg(errp, "cannot set sve-max-vq"); + error_append_hint(errp, "SVE not supported by KVM on this host\n"); + return; + } + + if (max_vq == 0 || max_vq > ARM_MAX_VQ) { + error_setg(errp, "unsupported SVE vector length"); + error_append_hint(errp, "Valid sve-max-vq in range [1-%d]\n", + ARM_MAX_VQ); + return; + } + + cpu->sve_max_vq = max_vq; +} + +/* + * Note that cpu_arm_get/set_sve_vq cannot use the simpler + * object_property_add_bool interface because they make use + * of the contents of "name" to determine which bit on which + * to operate. + */ +static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(obj); + uint32_t vq = atoi(&name[3]) / 128; + bool value; + + /* All vector lengths are disabled when SVE is off. */ + if (!cpu_isar_feature(aa64_sve, cpu)) { + value = false; + } else { + value = test_bit(vq - 1, cpu->sve_vq_map); + } + visit_type_bool(v, name, &value, errp); +} + +static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(obj); + uint32_t vq = atoi(&name[3]) / 128; + bool value; + + if (!visit_type_bool(v, name, &value, errp)) { + return; + } + + if (value && kvm_enabled() && !kvm_arm_sve_supported()) { + error_setg(errp, "cannot enable %s", name); + error_append_hint(errp, "SVE not supported by KVM on this host\n"); + return; + } + + if (value) { + set_bit(vq - 1, cpu->sve_vq_map); + } else { + clear_bit(vq - 1, cpu->sve_vq_map); + } + set_bit(vq - 1, cpu->sve_vq_init); +} + +static bool cpu_arm_get_sve(Object *obj, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(obj); + return cpu_isar_feature(aa64_sve, cpu); +} + +static void cpu_arm_set_sve(Object *obj, bool value, Error **errp) +{ + ARMCPU *cpu = ARM_CPU(obj); + uint64_t t; + + if (value && kvm_enabled() && !kvm_arm_sve_supported()) { + error_setg(errp, "'sve' feature not supported by KVM on this host"); + return; + } + + t = cpu->isar.id_aa64pfr0; + t = FIELD_DP64(t, ID_AA64PFR0, SVE, value); + cpu->isar.id_aa64pfr0 = t; +} + +void aarch64_add_sve_properties(Object *obj) +{ + uint32_t vq; + + object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve); + + for (vq = 1; vq <= ARM_MAX_VQ; ++vq) { + char name[8]; + sprintf(name, "sve%d", vq * 128); + object_property_add(obj, name, "bool", cpu_arm_get_sve_vq, cpu_arm_set_sve_vq, NULL, NULL); + } +} + +/* properties added for MAX CPU */ +void aarch64_add_sve_properties_max(Object *obj) +{ + object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq, cpu_max_set_sve_max_vq, NULL, NULL); +} diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 2fef8ca471..6db37b42d1 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -23,6 +23,7 @@ #include "target/arm/idau.h" #include "qapi/error.h" #include "cpu.h" +#include "cpu-sve.h" #include "cpregs.h" #ifdef CONFIG_TCG @@ -818,6 +819,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) { Error *local_err = NULL; +#ifdef TARGET_AARCH64 if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { arm_cpu_sve_finalize(cpu, &local_err); 
if (local_err != NULL) { @@ -838,6 +840,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) } } } +#endif /* TARGET_AARCH64 */ if (kvm_enabled()) { kvm_arm_steal_time_finalize(cpu, &local_err); diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index e3d818275c..3a6b7cf5d1 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -23,6 +23,7 @@ #include "qemu/qemu-print.h" #include "cpu.h" #include "cpu32.h" +#include "cpu-sve.h" #include "qemu/module.h" #include "sysemu/tcg.h" #include "sysemu/kvm.h" @@ -245,331 +246,6 @@ static void aarch64_a72_initfn(Object *obj) define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo); } -void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) -{ - /* - * If any vector lengths are explicitly enabled with sve properties, - * then all other lengths are implicitly disabled. If sve-max-vq is - * specified then it is the same as explicitly enabling all lengths - * up to and including the specified maximum, which means all larger - * lengths will be implicitly disabled. If no sve properties - * are enabled and sve-max-vq is not specified, then all lengths not - * explicitly disabled will be enabled. Additionally, all power-of-two - * vector lengths less than the maximum enabled length will be - * automatically enabled and all vector lengths larger than the largest - * disabled power-of-two vector length will be automatically disabled. - * Errors are generated if the user provided input that interferes with - * any of the above. Finally, if SVE is not disabled, then at least one - * vector length must be enabled. - */ - DECLARE_BITMAP(kvm_supported, ARM_MAX_VQ); - DECLARE_BITMAP(tmp, ARM_MAX_VQ); - uint32_t vq, max_vq = 0; - - /* Collect the set of vector lengths supported by KVM. */ - bitmap_zero(kvm_supported, ARM_MAX_VQ); - if (kvm_enabled() && kvm_arm_sve_supported()) { - kvm_arm_sve_get_vls(CPU(cpu), kvm_supported); - } else if (kvm_enabled()) { - assert(!cpu_isar_feature(aa64_sve, cpu)); - } - - /* - * Process explicit sve properties. - * From the properties, sve_vq_map implies sve_vq_init. - * Check first for any sve enabled. - */ - if (!bitmap_empty(cpu->sve_vq_map, ARM_MAX_VQ)) { - max_vq = find_last_bit(cpu->sve_vq_map, ARM_MAX_VQ) + 1; - - if (cpu->sve_max_vq && max_vq > cpu->sve_max_vq) { - error_setg(errp, "cannot enable sve%d", max_vq * 128); - error_append_hint(errp, "sve%d is larger than the maximum vector " - "length, sve-max-vq=%d (%d bits)\n", - max_vq * 128, cpu->sve_max_vq, - cpu->sve_max_vq * 128); - return; - } - - if (kvm_enabled()) { - /* - * For KVM we have to automatically enable all supported unitialized - * lengths, even when the smaller lengths are not all powers-of-two. - */ - bitmap_andnot(tmp, kvm_supported, cpu->sve_vq_init, max_vq); - bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq); - } else if (tcg_enabled()) { - /* Propagate enabled bits down through required powers-of-two. */ - for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { - if (!test_bit(vq - 1, cpu->sve_vq_init)) { - set_bit(vq - 1, cpu->sve_vq_map); - } - } - } - } else if (cpu->sve_max_vq == 0) { - /* - * No explicit bits enabled, and no implicit bits from sve-max-vq. - */ - if (!cpu_isar_feature(aa64_sve, cpu)) { - /* SVE is disabled and so are all vector lengths. Good. */ - return; - } - - if (kvm_enabled()) { - /* Disabling a supported length disables all larger lengths. */ - for (vq = 1; vq <= ARM_MAX_VQ; ++vq) { - if (test_bit(vq - 1, cpu->sve_vq_init) && - test_bit(vq - 1, kvm_supported)) { - break; - } - } - max_vq = vq <= ARM_MAX_VQ ? 
vq - 1 : ARM_MAX_VQ; - bitmap_andnot(cpu->sve_vq_map, kvm_supported, - cpu->sve_vq_init, max_vq); - if (max_vq == 0 || bitmap_empty(cpu->sve_vq_map, max_vq)) { - error_setg(errp, "cannot disable sve%d", vq * 128); - error_append_hint(errp, "Disabling sve%d results in all " - "vector lengths being disabled.\n", - vq * 128); - error_append_hint(errp, "With SVE enabled, at least one " - "vector length must be enabled.\n"); - return; - } - } else if (tcg_enabled()) { - /* Disabling a power-of-two disables all larger lengths. */ - if (test_bit(0, cpu->sve_vq_init)) { - error_setg(errp, "cannot disable sve128"); - error_append_hint(errp, "Disabling sve128 results in all " - "vector lengths being disabled.\n"); - error_append_hint(errp, "With SVE enabled, at least one " - "vector length must be enabled.\n"); - return; - } - for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) { - if (test_bit(vq - 1, cpu->sve_vq_init)) { - break; - } - } - max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ; - bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq); - } - - max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1; - } - - /* - * Process the sve-max-vq property. - * Note that we know from the above that no bit above - * sve-max-vq is currently set. - */ - if (cpu->sve_max_vq != 0) { - max_vq = cpu->sve_max_vq; - - if (!test_bit(max_vq - 1, cpu->sve_vq_map) && - test_bit(max_vq - 1, cpu->sve_vq_init)) { - error_setg(errp, "cannot disable sve%d", max_vq * 128); - error_append_hint(errp, "The maximum vector length must be " - "enabled, sve-max-vq=%d (%d bits)\n", - max_vq, max_vq * 128); - return; - } - - /* Set all bits not explicitly set within sve-max-vq. */ - bitmap_complement(tmp, cpu->sve_vq_init, max_vq); - bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq); - } - - /* - * We should know what max-vq is now. Also, as we're done - * manipulating sve-vq-map, we ensure any bits above max-vq - * are clear, just in case anybody looks. - */ - assert(max_vq != 0); - bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq); - - if (kvm_enabled()) { - /* Ensure the set of lengths matches what KVM supports. */ - bitmap_xor(tmp, cpu->sve_vq_map, kvm_supported, max_vq); - if (!bitmap_empty(tmp, max_vq)) { - vq = find_last_bit(tmp, max_vq) + 1; - if (test_bit(vq - 1, cpu->sve_vq_map)) { - if (cpu->sve_max_vq) { - error_setg(errp, "cannot set sve-max-vq=%d", - cpu->sve_max_vq); - error_append_hint(errp, "This KVM host does not support " - "the vector length %d-bits.\n", - vq * 128); - error_append_hint(errp, "It may not be possible to use " - "sve-max-vq with this KVM host. Try " - "using only sve properties.\n"); - } else { - error_setg(errp, "cannot enable sve%d", vq * 128); - error_append_hint(errp, "This KVM host does not support " - "the vector length %d-bits.\n", - vq * 128); - } - } else { - error_setg(errp, "cannot disable sve%d", vq * 128); - error_append_hint(errp, "The KVM host requires all " - "supported vector lengths smaller " - "than %d bits to also be enabled.\n", - max_vq * 128); - } - return; - } - } else if (tcg_enabled()) { - /* Ensure all required powers-of-two are enabled. 
*/ - for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { - if (!test_bit(vq - 1, cpu->sve_vq_map)) { - error_setg(errp, "cannot disable sve%d", vq * 128); - error_append_hint(errp, "sve%d is required as it " - "is a power-of-two length smaller than " - "the maximum, sve%d\n", - vq * 128, max_vq * 128); - return; - } - } - } - - /* - * Now that we validated all our vector lengths, the only question - * left to answer is if we even want SVE at all. - */ - if (!cpu_isar_feature(aa64_sve, cpu)) { - error_setg(errp, "cannot enable sve%d", max_vq * 128); - error_append_hint(errp, "SVE must be enabled to enable vector " - "lengths.\n"); - error_append_hint(errp, "Add sve=on to the CPU property list.\n"); - return; - } - - /* From now on sve_max_vq is the actual maximum supported length. */ - cpu->sve_max_vq = max_vq; -} - -static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) -{ - ARMCPU *cpu = ARM_CPU(obj); - uint32_t value; - - /* All vector lengths are disabled when SVE is off. */ - if (!cpu_isar_feature(aa64_sve, cpu)) { - value = 0; - } else { - value = cpu->sve_max_vq; - } - visit_type_uint32(v, name, &value, errp); -} - -static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) -{ - ARMCPU *cpu = ARM_CPU(obj); - uint32_t max_vq; - - if (!visit_type_uint32(v, name, &max_vq, errp)) { - return; - } - - if (kvm_enabled() && !kvm_arm_sve_supported()) { - error_setg(errp, "cannot set sve-max-vq"); - error_append_hint(errp, "SVE not supported by KVM on this host\n"); - return; - } - - if (max_vq == 0 || max_vq > ARM_MAX_VQ) { - error_setg(errp, "unsupported SVE vector length"); - error_append_hint(errp, "Valid sve-max-vq in range [1-%d]\n", - ARM_MAX_VQ); - return; - } - - cpu->sve_max_vq = max_vq; -} - -/* - * Note that cpu_arm_get/set_sve_vq cannot use the simpler - * object_property_add_bool interface because they make use - * of the contents of "name" to determine which bit on which - * to operate. - */ -static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) -{ - ARMCPU *cpu = ARM_CPU(obj); - uint32_t vq = atoi(&name[3]) / 128; - bool value; - - /* All vector lengths are disabled when SVE is off. 
*/ - if (!cpu_isar_feature(aa64_sve, cpu)) { - value = false; - } else { - value = test_bit(vq - 1, cpu->sve_vq_map); - } - visit_type_bool(v, name, &value, errp); -} - -static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) -{ - ARMCPU *cpu = ARM_CPU(obj); - uint32_t vq = atoi(&name[3]) / 128; - bool value; - - if (!visit_type_bool(v, name, &value, errp)) { - return; - } - - if (value && kvm_enabled() && !kvm_arm_sve_supported()) { - error_setg(errp, "cannot enable %s", name); - error_append_hint(errp, "SVE not supported by KVM on this host\n"); - return; - } - - if (value) { - set_bit(vq - 1, cpu->sve_vq_map); - } else { - clear_bit(vq - 1, cpu->sve_vq_map); - } - set_bit(vq - 1, cpu->sve_vq_init); -} - -static bool cpu_arm_get_sve(Object *obj, Error **errp) -{ - ARMCPU *cpu = ARM_CPU(obj); - return cpu_isar_feature(aa64_sve, cpu); -} - -static void cpu_arm_set_sve(Object *obj, bool value, Error **errp) -{ - ARMCPU *cpu = ARM_CPU(obj); - uint64_t t; - - if (value && kvm_enabled() && !kvm_arm_sve_supported()) { - error_setg(errp, "'sve' feature not supported by KVM on this host"); - return; - } - - t = cpu->isar.id_aa64pfr0; - t = FIELD_DP64(t, ID_AA64PFR0, SVE, value); - cpu->isar.id_aa64pfr0 = t; -} - -void aarch64_add_sve_properties(Object *obj) -{ - uint32_t vq; - - object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve); - - for (vq = 1; vq <= ARM_MAX_VQ; ++vq) { - char name[8]; - sprintf(name, "sve%d", vq * 128); - object_property_add(obj, name, "bool", cpu_arm_get_sve_vq, - cpu_arm_set_sve_vq, NULL, NULL); - } -} - void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { int arch_val = 0, impdef_val = 0; @@ -777,8 +453,7 @@ static void aarch64_max_initfn(Object *obj) } aarch64_add_sve_properties(obj); - object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq, - cpu_max_set_sve_max_vq, NULL, NULL); + aarch64_add_sve_properties_max(obj); } static const ARMCPUInfo aarch64_cpus[] = { diff --git a/target/arm/kvm/kvm-cpu.c b/target/arm/kvm/kvm-cpu.c index 9f65010c0c..a23831e3c6 100644 --- a/target/arm/kvm/kvm-cpu.c +++ b/target/arm/kvm/kvm-cpu.c @@ -21,6 +21,7 @@ #include "qemu/osdep.h" #include "qemu-common.h" #include "cpu.h" +#include "cpu-sve.h" #include "hw/core/accel-cpu.h" #include "qapi/error.h" diff --git a/target/arm/meson.build b/target/arm/meson.build index 448e94861f..bad5a659a7 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -13,6 +13,7 @@ arm_ss.add(zlib) arm_ss.add(when: 'TARGET_AARCH64', if_true: files( 'cpu64.c', + 'cpu-sve.c', 'gdbstub64.c', )) From patchwork Fri Jun 4 15:52:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454142 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp652872jae; Fri, 4 Jun 2021 11:23:22 -0700 (PDT) X-Google-Smtp-Source: ABdhPJz8xOofMU1MVmfJbAI3nJyAvvrNFp2hGJVLA/wJYpimEiKtSFA2X0E3TZjbPH5v9WQ6zQik X-Received: by 2002:a05:6830:1e37:: with SMTP id t23mr4683113otr.318.1622831002749; Fri, 04 Jun 2021 11:23:22 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622831002; cv=none; d=google.com; s=arc-20160816; b=jWjMnD8kowVeKAfI9BqKyHW0poBMijxzTTn/aBesWjq/GENC1b8yb6qIeOSuVbaGz6 QJ13+kqXH6ZBUObb0a0HJnqouf64N6OQYCv+AMERoE/1C90RWNrUfGmBizVwKmU88yMh vW6G9SLu29qYOkRMlxov3i2wu08JTx+Z9LqJJ8j0E2ImT6B9Cfs/ArchfFJlhjJXbsle /t3UzADIw9ZyyhXH5e5omLWvaPq0rPUS6bYHdS/q6lHV7lPhb5BCQcriIoy60lJUwLjd 
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, qemu-arm@nongnu.org, Alex Bennée, Claudio Fontana
Subject: [PATCH v16 72/99] target/arm: cpu-sve: rename functions according to module prefix
Date: Fri, 4 Jun 2021 16:52:45 +0100
Message-Id: <20210604155312.15902-73-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>

From: Claudio Fontana

External functions have the cpu_sve prefix, while for static functions it can be omitted.
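For reference, the renames introduced below (a summary distilled from the diff, not an exhaustive statement of the patch) are:

  arm_cpu_sve_finalize()           -> cpu_sve_finalize_features()
  aarch64_add_sve_properties()     -> cpu_sve_add_props()
  aarch64_add_sve_properties_max() -> cpu_sve_add_props_max()
  cpu_max_get_sve_max_vq()         -> get_prop_max_vq()   (static)
  cpu_max_set_sve_max_vq()         -> set_prop_max_vq()   (static)
  cpu_arm_get_sve_vq()             -> get_prop_vq()       (static)
  cpu_arm_set_sve_vq()             -> set_prop_vq()       (static)
  cpu_arm_get_sve()                -> get_prop_sve()      (static)
  cpu_arm_set_sve()                -> set_prop_sve()      (static)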
Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-sve.h | 6 +++--- target/arm/cpu-sve.c | 32 ++++++++++++++++---------------- target/arm/cpu.c | 2 +- target/arm/cpu64.c | 4 ++-- target/arm/kvm/kvm-cpu.c | 2 +- 5 files changed, 23 insertions(+), 23 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-sve.h b/target/arm/cpu-sve.h index 692509d419..ece36d2a0c 100644 --- a/target/arm/cpu-sve.h +++ b/target/arm/cpu-sve.h @@ -26,12 +26,12 @@ #include "cpu.h" /* called by arm_cpu_finalize_features in realizefn */ -void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp); +void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp); /* add the CPU SVE properties */ -void aarch64_add_sve_properties(Object *obj); +void cpu_sve_add_props(Object *obj); /* add the CPU SVE properties specific to the "MAX" CPU */ -void aarch64_add_sve_properties_max(Object *obj); +void cpu_sve_add_props_max(Object *obj); #endif /* CPU_SVE_H */ diff --git a/target/arm/cpu-sve.c b/target/arm/cpu-sve.c index 129fb9586e..da60330cc2 100644 --- a/target/arm/cpu-sve.c +++ b/target/arm/cpu-sve.c @@ -27,7 +27,7 @@ #include "qapi/visitor.h" #include "cpu-sve.h" -void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) +void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) { /* * If any vector lengths are explicitly enabled with sve properties, @@ -229,8 +229,8 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) cpu->sve_max_vq = max_vq; } -static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) +static void get_prop_max_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) { ARMCPU *cpu = ARM_CPU(obj); uint32_t value; @@ -244,8 +244,8 @@ static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name, visit_type_uint32(v, name, &value, errp); } -static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) +static void set_prop_max_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) { ARMCPU *cpu = ARM_CPU(obj); uint32_t max_vq; @@ -276,8 +276,8 @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name, * of the contents of "name" to determine which bit on which * to operate. 
*/ -static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) +static void get_prop_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) { ARMCPU *cpu = ARM_CPU(obj); uint32_t vq = atoi(&name[3]) / 128; @@ -292,8 +292,8 @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name, visit_type_bool(v, name, &value, errp); } -static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name, - void *opaque, Error **errp) +static void set_prop_vq(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) { ARMCPU *cpu = ARM_CPU(obj); uint32_t vq = atoi(&name[3]) / 128; @@ -317,13 +317,13 @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name, set_bit(vq - 1, cpu->sve_vq_init); } -static bool cpu_arm_get_sve(Object *obj, Error **errp) +static bool get_prop_sve(Object *obj, Error **errp) { ARMCPU *cpu = ARM_CPU(obj); return cpu_isar_feature(aa64_sve, cpu); } -static void cpu_arm_set_sve(Object *obj, bool value, Error **errp) +static void set_prop_sve(Object *obj, bool value, Error **errp) { ARMCPU *cpu = ARM_CPU(obj); uint64_t t; @@ -338,21 +338,21 @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp) cpu->isar.id_aa64pfr0 = t; } -void aarch64_add_sve_properties(Object *obj) +void cpu_sve_add_props(Object *obj) { uint32_t vq; - object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve); + object_property_add_bool(obj, "sve", get_prop_sve, set_prop_sve); for (vq = 1; vq <= ARM_MAX_VQ; ++vq) { char name[8]; sprintf(name, "sve%d", vq * 128); - object_property_add(obj, name, "bool", cpu_arm_get_sve_vq, cpu_arm_set_sve_vq, NULL, NULL); + object_property_add(obj, name, "bool", get_prop_vq, set_prop_vq, NULL, NULL); } } /* properties added for MAX CPU */ -void aarch64_add_sve_properties_max(Object *obj) +void cpu_sve_add_props_max(Object *obj) { - object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq, cpu_max_set_sve_max_vq, NULL, NULL); + object_property_add(obj, "sve-max-vq", "uint32", get_prop_max_vq, set_prop_max_vq, NULL, NULL); } diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 6db37b42d1..e4ad92ffec 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -821,7 +821,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) #ifdef TARGET_AARCH64 if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { - arm_cpu_sve_finalize(cpu, &local_err); + cpu_sve_finalize_features(cpu, &local_err); if (local_err != NULL) { error_propagate(errp, local_err); return; diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index 3a6b7cf5d1..03ed637bdb 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -452,8 +452,8 @@ static void aarch64_max_initfn(Object *obj) qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_impdef_property); } - aarch64_add_sve_properties(obj); - aarch64_add_sve_properties_max(obj); + cpu_sve_add_props(obj); + cpu_sve_add_props_max(obj); } static const ARMCPUInfo aarch64_cpus[] = { diff --git a/target/arm/kvm/kvm-cpu.c b/target/arm/kvm/kvm-cpu.c index a23831e3c6..09aede9319 100644 --- a/target/arm/kvm/kvm-cpu.c +++ b/target/arm/kvm/kvm-cpu.c @@ -89,7 +89,7 @@ static void host_cpu_instance_init(Object *obj) kvm_arm_set_cpu_features_from_host(cpu); if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { - aarch64_add_sve_properties(obj); + cpu_sve_add_props(obj); } arm_cpu_post_init(obj); } From patchwork Fri Jun 4 15:52:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
8bit
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 454124
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v16 73/99] target/arm: cpu-sve: split TCG and KVM functionality
Date: Fri, 4 Jun 2021 16:52:46 +0100
Message-Id:
<20210604155312.15902-74-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana put the KVM-specific and TCG-specific functionality in the respective subdirectories kvm/ and tcg/ Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/kvm/kvm-sve.h | 28 +++++++ target/arm/tcg/tcg-sve.h | 24 ++++++ target/arm/cpu-sve.c | 155 ++++++++++--------------------------- target/arm/kvm/kvm-sve.c | 118 ++++++++++++++++++++++++++++ target/arm/tcg/tcg-sve.c | 81 +++++++++++++++++++ target/arm/kvm/meson.build | 1 + target/arm/tcg/meson.build | 1 + 7 files changed, 296 insertions(+), 112 deletions(-) create mode 100644 target/arm/kvm/kvm-sve.h create mode 100644 target/arm/tcg/tcg-sve.h create mode 100644 target/arm/kvm/kvm-sve.c create mode 100644 target/arm/tcg/tcg-sve.c -- 2.20.1 diff --git a/target/arm/kvm/kvm-sve.h b/target/arm/kvm/kvm-sve.h new file mode 100644 index 0000000000..9a9556b916 --- /dev/null +++ b/target/arm/kvm/kvm-sve.h @@ -0,0 +1,28 @@ +/* + * QEMU AArch64 CPU SVE KVM interface + * + * Copyright 2021 SUSE LLC + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#ifndef KVM_SVE_H +#define KVM_SVE_H + +void kvm_sve_get_supported_lens(ARMCPU *cpu, + unsigned long *kvm_supported); + +void kvm_sve_enable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, uint32_t max_vq, + unsigned long *kvm_supported); + +uint32_t kvm_sve_disable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, + unsigned long *kvm_supported, Error **errp); + +bool kvm_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, + unsigned long *kvm_supported, Error **errp, + uint32_t sve_max_vq); + +#endif /* KVM_SVE_H */ diff --git a/target/arm/tcg/tcg-sve.h b/target/arm/tcg/tcg-sve.h new file mode 100644 index 0000000000..4bed809b9a --- /dev/null +++ b/target/arm/tcg/tcg-sve.h @@ -0,0 +1,24 @@ +/* + * QEMU AArch64 CPU SVE TCG interface + * + * Copyright 2021 SUSE LLC + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ */ + +#ifndef TCG_SVE_H +#define TCG_SVE_H + +/* note: SVE is an AARCH64-only option, only include this for TARGET_AARCH64 */ + +void tcg_sve_enable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, uint32_t max_vq); + +uint32_t tcg_sve_disable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, Error **errp); + +bool tcg_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, + Error **errp); + +#endif /* TCG_SVE_H */ diff --git a/target/arm/cpu-sve.c b/target/arm/cpu-sve.c index da60330cc2..5190e4a639 100644 --- a/target/arm/cpu-sve.c +++ b/target/arm/cpu-sve.c @@ -27,6 +27,28 @@ #include "qapi/visitor.h" #include "cpu-sve.h" +#include "tcg/tcg-sve.h" +#include "kvm/kvm-sve.h" + +static bool apply_max_vq(unsigned long *sve_vq_map, unsigned long *sve_vq_init, + uint32_t max_vq, Error **errp) +{ + DECLARE_BITMAP(tmp, ARM_MAX_VQ); + + if (!test_bit(max_vq - 1, sve_vq_map) && + test_bit(max_vq - 1, sve_vq_init)) { + error_setg(errp, "cannot disable sve%d", max_vq * 128); + error_append_hint(errp, "The maximum vector length must be " + "enabled, sve-max-vq=%d (%d bits)\n", + max_vq, max_vq * 128); + return false; + } + /* Set all bits not explicitly set within sve-max-vq. */ + bitmap_complement(tmp, sve_vq_init, max_vq); + bitmap_or(sve_vq_map, sve_vq_map, tmp, max_vq); + return true; +} + void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) { /* @@ -45,17 +67,11 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) * vector length must be enabled. */ DECLARE_BITMAP(kvm_supported, ARM_MAX_VQ); - DECLARE_BITMAP(tmp, ARM_MAX_VQ); - uint32_t vq, max_vq = 0; - - /* Collect the set of vector lengths supported by KVM. */ - bitmap_zero(kvm_supported, ARM_MAX_VQ); - if (kvm_enabled() && kvm_arm_sve_supported()) { - kvm_arm_sve_get_vls(CPU(cpu), kvm_supported); - } else if (kvm_enabled()) { - assert(!cpu_isar_feature(aa64_sve, cpu)); - } + uint32_t max_vq = 0; + if (kvm_enabled()) { + kvm_sve_get_supported_lens(cpu, kvm_supported); + } /* * Process explicit sve properties. * From the properties, sve_vq_map implies sve_vq_init. @@ -72,70 +88,28 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) cpu->sve_max_vq * 128); return; } - if (kvm_enabled()) { - /* - * For KVM we have to automatically enable all supported unitialized - * lengths, even when the smaller lengths are not all powers-of-two. - */ - bitmap_andnot(tmp, kvm_supported, cpu->sve_vq_init, max_vq); - bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq); + kvm_sve_enable_lens(cpu->sve_vq_map, cpu->sve_vq_init, max_vq, + kvm_supported); } else if (tcg_enabled()) { - /* Propagate enabled bits down through required powers-of-two. */ - for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { - if (!test_bit(vq - 1, cpu->sve_vq_init)) { - set_bit(vq - 1, cpu->sve_vq_map); - } - } + tcg_sve_enable_lens(cpu->sve_vq_map, cpu->sve_vq_init, max_vq); } } else if (cpu->sve_max_vq == 0) { - /* - * No explicit bits enabled, and no implicit bits from sve-max-vq. - */ + /* No explicit bits enabled, and no implicit bits from sve-max-vq. */ if (!cpu_isar_feature(aa64_sve, cpu)) { /* SVE is disabled and so are all vector lengths. Good. */ return; } - if (kvm_enabled()) { - /* Disabling a supported length disables all larger lengths. */ - for (vq = 1; vq <= ARM_MAX_VQ; ++vq) { - if (test_bit(vq - 1, cpu->sve_vq_init) && - test_bit(vq - 1, kvm_supported)) { - break; - } - } - max_vq = vq <= ARM_MAX_VQ ? 
vq - 1 : ARM_MAX_VQ; - bitmap_andnot(cpu->sve_vq_map, kvm_supported, - cpu->sve_vq_init, max_vq); - if (max_vq == 0 || bitmap_empty(cpu->sve_vq_map, max_vq)) { - error_setg(errp, "cannot disable sve%d", vq * 128); - error_append_hint(errp, "Disabling sve%d results in all " - "vector lengths being disabled.\n", - vq * 128); - error_append_hint(errp, "With SVE enabled, at least one " - "vector length must be enabled.\n"); - return; - } + max_vq = kvm_sve_disable_lens(cpu->sve_vq_map, cpu->sve_vq_init, + kvm_supported, errp); } else if (tcg_enabled()) { - /* Disabling a power-of-two disables all larger lengths. */ - if (test_bit(0, cpu->sve_vq_init)) { - error_setg(errp, "cannot disable sve128"); - error_append_hint(errp, "Disabling sve128 results in all " - "vector lengths being disabled.\n"); - error_append_hint(errp, "With SVE enabled, at least one " - "vector length must be enabled.\n"); - return; - } - for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) { - if (test_bit(vq - 1, cpu->sve_vq_init)) { - break; - } - } - max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ; - bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq); + max_vq = tcg_sve_disable_lens(cpu->sve_vq_map, cpu->sve_vq_init, + errp); + } + if (!max_vq) { + return; } - max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1; } @@ -146,21 +120,11 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) */ if (cpu->sve_max_vq != 0) { max_vq = cpu->sve_max_vq; - - if (!test_bit(max_vq - 1, cpu->sve_vq_map) && - test_bit(max_vq - 1, cpu->sve_vq_init)) { - error_setg(errp, "cannot disable sve%d", max_vq * 128); - error_append_hint(errp, "The maximum vector length must be " - "enabled, sve-max-vq=%d (%d bits)\n", - max_vq, max_vq * 128); + if (!apply_max_vq(cpu->sve_vq_map, cpu->sve_vq_init, max_vq, + errp)) { return; } - - /* Set all bits not explicitly set within sve-max-vq. */ - bitmap_complement(tmp, cpu->sve_vq_init, max_vq); - bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq); } - /* * We should know what max-vq is now. Also, as we're done * manipulating sve-vq-map, we ensure any bits above max-vq @@ -170,46 +134,13 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq); if (kvm_enabled()) { - /* Ensure the set of lengths matches what KVM supports. */ - bitmap_xor(tmp, cpu->sve_vq_map, kvm_supported, max_vq); - if (!bitmap_empty(tmp, max_vq)) { - vq = find_last_bit(tmp, max_vq) + 1; - if (test_bit(vq - 1, cpu->sve_vq_map)) { - if (cpu->sve_max_vq) { - error_setg(errp, "cannot set sve-max-vq=%d", - cpu->sve_max_vq); - error_append_hint(errp, "This KVM host does not support " - "the vector length %d-bits.\n", - vq * 128); - error_append_hint(errp, "It may not be possible to use " - "sve-max-vq with this KVM host. Try " - "using only sve properties.\n"); - } else { - error_setg(errp, "cannot enable sve%d", vq * 128); - error_append_hint(errp, "This KVM host does not support " - "the vector length %d-bits.\n", - vq * 128); - } - } else { - error_setg(errp, "cannot disable sve%d", vq * 128); - error_append_hint(errp, "The KVM host requires all " - "supported vector lengths smaller " - "than %d bits to also be enabled.\n", - max_vq * 128); - } + if (!kvm_sve_validate_lens(cpu->sve_vq_map, max_vq, kvm_supported, + errp, cpu->sve_max_vq)) { return; } } else if (tcg_enabled()) { - /* Ensure all required powers-of-two are enabled. 
*/ - for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { - if (!test_bit(vq - 1, cpu->sve_vq_map)) { - error_setg(errp, "cannot disable sve%d", vq * 128); - error_append_hint(errp, "sve%d is required as it " - "is a power-of-two length smaller than " - "the maximum, sve%d\n", - vq * 128, max_vq * 128); - return; - } + if (!tcg_sve_validate_lens(cpu->sve_vq_map, max_vq, errp)) { + return; } } diff --git a/target/arm/kvm/kvm-sve.c b/target/arm/kvm/kvm-sve.c new file mode 100644 index 0000000000..21dfee5b5c --- /dev/null +++ b/target/arm/kvm/kvm-sve.c @@ -0,0 +1,118 @@ +/* + * QEMU ARM CPU + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qapi/error.h" +#include "cpu.h" +#include "sysemu/kvm.h" +#include "kvm/kvm_arm.h" +#include "kvm/kvm-sve.h" + +void kvm_sve_get_supported_lens(ARMCPU *cpu, unsigned long *kvm_supported) +{ + /* Collect the set of vector lengths supported by KVM. */ + bitmap_zero(kvm_supported, ARM_MAX_VQ); + + if (kvm_arm_sve_supported()) { + kvm_arm_sve_get_vls(CPU(cpu), kvm_supported); + } else { + assert(!cpu_isar_feature(aa64_sve, cpu)); + } +} + +void kvm_sve_enable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, uint32_t max_vq, + unsigned long *kvm_supported) +{ + /* + * For KVM we have to automatically enable all supported unitialized + * lengths, even when the smaller lengths are not all powers-of-two. + */ + DECLARE_BITMAP(tmp, ARM_MAX_VQ); + + bitmap_andnot(tmp, kvm_supported, sve_vq_init, max_vq); + bitmap_or(sve_vq_map, sve_vq_map, tmp, max_vq); +} + +uint32_t kvm_sve_disable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, + unsigned long *kvm_supported, Error **errp) +{ + uint32_t max_vq, vq; + + /* Disabling a supported length disables all larger lengths. */ + for (vq = 1; vq <= ARM_MAX_VQ; ++vq) { + if (test_bit(vq - 1, sve_vq_init) && + test_bit(vq - 1, kvm_supported)) { + break; + } + } + + max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ; + bitmap_andnot(sve_vq_map, kvm_supported, sve_vq_init, max_vq); + + if (max_vq == 0 || bitmap_empty(sve_vq_map, max_vq)) { + error_setg(errp, "cannot disable sve%d", vq * 128); + error_append_hint(errp, "Disabling sve%d results in all " + "vector lengths being disabled.\n", + vq * 128); + error_append_hint(errp, "With SVE enabled, at least one " + "vector length must be enabled.\n"); + return 0; + } + + return max_vq; +} + +bool kvm_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, + unsigned long *kvm_supported, Error **errp, + uint32_t sve_max_vq) +{ + /* Ensure the set of lengths matches what KVM supports. 
*/ + DECLARE_BITMAP(tmp, ARM_MAX_VQ); + uint32_t vq; + + bitmap_xor(tmp, sve_vq_map, kvm_supported, max_vq); + if (bitmap_empty(tmp, max_vq)) { + return true; + } + + vq = find_last_bit(tmp, max_vq) + 1; + if (test_bit(vq - 1, sve_vq_map)) { + if (sve_max_vq) { + error_setg(errp, "cannot set sve-max-vq=%d", sve_max_vq); + error_append_hint(errp, "This KVM host does not support " + "the vector length %d-bits.\n", vq * 128); + error_append_hint(errp, "It may not be possible to use " + "sve-max-vq with this KVM host. Try " + "using only sve properties.\n"); + } else { + error_setg(errp, "cannot enable sve%d", vq * 128); + error_append_hint(errp, "This KVM host does not support " + "the vector length %d-bits.\n", vq * 128); + } + } else { + error_setg(errp, "cannot disable sve%d", vq * 128); + error_append_hint(errp, "The KVM host requires all " + "supported vector lengths smaller " + "than %d bits to also be enabled.\n", max_vq * 128); + } + return false; +} diff --git a/target/arm/tcg/tcg-sve.c b/target/arm/tcg/tcg-sve.c new file mode 100644 index 0000000000..99cfde1f41 --- /dev/null +++ b/target/arm/tcg/tcg-sve.c @@ -0,0 +1,81 @@ +/* + * QEMU ARM CPU + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qapi/error.h" +#include "cpu.h" +#include "sysemu/tcg.h" +#include "cpu-sve.h" +#include "tcg-sve.h" + +void tcg_sve_enable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, uint32_t max_vq) +{ + /* Propagate enabled bits down through required powers-of-two. */ + uint32_t vq; + + for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { + if (!test_bit(vq - 1, sve_vq_init)) { + set_bit(vq - 1, sve_vq_map); + } + } +} + +uint32_t tcg_sve_disable_lens(unsigned long *sve_vq_map, + unsigned long *sve_vq_init, Error **errp) +{ + /* Disabling a power-of-two disables all larger lengths. */ + uint32_t max_vq, vq; + + if (test_bit(0, sve_vq_init)) { + error_setg(errp, "cannot disable sve128"); + error_append_hint(errp, "Disabling sve128 results in all " + "vector lengths being disabled.\n"); + error_append_hint(errp, "With SVE enabled, at least one " + "vector length must be enabled.\n"); + return 0; + } + for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) { + if (test_bit(vq - 1, sve_vq_init)) { + break; + } + } + max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ; + bitmap_complement(sve_vq_map, sve_vq_init, max_vq); + return max_vq; +} + +bool tcg_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, + Error **errp) +{ + /* Ensure all required powers-of-two are enabled. 
*/ + uint32_t vq; + + for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) { + if (!test_bit(vq - 1, sve_vq_map)) { + error_setg(errp, "cannot disable sve%d", vq * 128); + error_append_hint(errp, "sve%d is required as it " + "is a power-of-two length smaller than " + "the maximum, sve%d\n", vq * 128, max_vq * 128); + return false; + } + } + return true; +} diff --git a/target/arm/kvm/meson.build b/target/arm/kvm/meson.build index ef58a29dd7..1ae62bd65c 100644 --- a/target/arm/kvm/meson.build +++ b/target/arm/kvm/meson.build @@ -2,4 +2,5 @@ arm_ss.add(when: 'CONFIG_KVM', if_true: files( 'kvm.c', 'kvm64.c', 'kvm-cpu.c', + 'kvm-sve.c', )) diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index 5b36a13a24..c289771e97 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -45,6 +45,7 @@ arm_ss.add(when: ['TARGET_AARCH64','CONFIG_TCG'], if_true: files( 'mte_helper.c', 'pauth_helper.c', 'sve_helper.c', + 'tcg-sve.c', )) subdir('user')
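For reference, the sve_vq_map convention both back ends above share is that bit (vq - 1) being set means the vector length vq * 128 bits is enabled, with vq running from 1 to ARM_MAX_VQ. Below is a rough standalone sketch of the power-of-two rule that tcg_sve_validate_lens() enforces, using a plain 16-bit mask instead of QEMU's bitmap API; MAX_VQ and pow2_rule_ok() are illustrative names, not QEMU code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_VQ 16   /* assumption: mirrors QEMU's ARM_MAX_VQ */

    /* true if every power-of-two length up to the largest enabled one is set */
    static bool pow2_rule_ok(uint16_t vq_map)
    {
        unsigned vq, max_vq = 0;

        for (vq = 1; vq <= MAX_VQ; vq++) {
            if (vq_map & (1u << (vq - 1))) {
                max_vq = vq;        /* largest enabled length, in 128-bit units */
            }
        }
        if (max_vq == 0) {
            return true;            /* SVE disabled entirely is acceptable */
        }
        for (vq = 1; vq <= max_vq; vq <<= 1) {
            if (!(vq_map & (1u << (vq - 1)))) {
                return false;       /* a required power-of-two length is missing */
            }
        }
        return true;
    }

    int main(void)
    {
        /* sve128 + sve256 + sve384 (vq = 1, 2, 3): accepted */
        printf("%d\n", pow2_rule_ok(0x7));
        /* sve256 alone (vq = 2): rejected, sve128 is a missing power of two */
        printf("%d\n", pow2_rule_ok(0x2));
        return 0;
    }

The KVM back end has no such structural rule; it only has to end up with a set of lengths the host actually supports, which is why the two validations now live in separate files.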
From patchwork Fri Jun 4 15:52:47 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454135
Delivered-To: patch@linaro.org
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 74/99] target/arm: cpu-sve: make cpu_sve_finalize_features return bool
Date: Fri, 4 Jun 2021 16:52:47 +0100
Message-Id: <20210604155312.15902-75-alex.bennee@linaro.org>
X-Mailer:
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::429; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x429.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana return false on error, true on success. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-sve.h | 2 +- target/arm/cpu-sve.c | 17 +++++++++-------- target/arm/cpu.c | 3 +-- 3 files changed, 11 insertions(+), 11 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-sve.h b/target/arm/cpu-sve.h index ece36d2a0c..6ab74b1d8f 100644 --- a/target/arm/cpu-sve.h +++ b/target/arm/cpu-sve.h @@ -26,7 +26,7 @@ #include "cpu.h" /* called by arm_cpu_finalize_features in realizefn */ -void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp); +bool cpu_sve_finalize_features(ARMCPU *cpu, Error **errp); /* add the CPU SVE properties */ void cpu_sve_add_props(Object *obj); diff --git a/target/arm/cpu-sve.c b/target/arm/cpu-sve.c index 5190e4a639..24bffbba8b 100644 --- a/target/arm/cpu-sve.c +++ b/target/arm/cpu-sve.c @@ -49,7 +49,7 @@ static bool apply_max_vq(unsigned long *sve_vq_map, unsigned long *sve_vq_init, return true; } -void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) +bool cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) { /* * If any vector lengths are explicitly enabled with sve properties, @@ -86,7 +86,7 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) "length, sve-max-vq=%d (%d bits)\n", max_vq * 128, cpu->sve_max_vq, cpu->sve_max_vq * 128); - return; + return false; } if (kvm_enabled()) { kvm_sve_enable_lens(cpu->sve_vq_map, cpu->sve_vq_init, max_vq, @@ -98,7 +98,7 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) /* No explicit bits enabled, and no implicit bits from sve-max-vq. */ if (!cpu_isar_feature(aa64_sve, cpu)) { /* SVE is disabled and so are all vector lengths. Good. 
*/ - return; + return true; } if (kvm_enabled()) { max_vq = kvm_sve_disable_lens(cpu->sve_vq_map, cpu->sve_vq_init, @@ -108,7 +108,7 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) errp); } if (!max_vq) { - return; + return false; } max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1; } @@ -122,7 +122,7 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) max_vq = cpu->sve_max_vq; if (!apply_max_vq(cpu->sve_vq_map, cpu->sve_vq_init, max_vq, errp)) { - return; + return false; } } /* @@ -136,11 +136,11 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) if (kvm_enabled()) { if (!kvm_sve_validate_lens(cpu->sve_vq_map, max_vq, kvm_supported, errp, cpu->sve_max_vq)) { - return; + return false; } } else if (tcg_enabled()) { if (!tcg_sve_validate_lens(cpu->sve_vq_map, max_vq, errp)) { - return; + return false; } } @@ -153,11 +153,12 @@ void cpu_sve_finalize_features(ARMCPU *cpu, Error **errp) error_append_hint(errp, "SVE must be enabled to enable vector " "lengths.\n"); error_append_hint(errp, "Add sve=on to the CPU property list.\n"); - return; + return false; } /* From now on sve_max_vq is the actual maximum supported length. */ cpu->sve_max_vq = max_vq; + return true; } static void get_prop_max_vq(Object *obj, Visitor *v, const char *name, diff --git a/target/arm/cpu.c b/target/arm/cpu.c index e4ad92ffec..0b20faaca0 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -821,8 +821,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) #ifdef TARGET_AARCH64 if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { - cpu_sve_finalize_features(cpu, &local_err); - if (local_err != NULL) { + if (!cpu_sve_finalize_features(cpu, &local_err)) { error_propagate(errp, local_err); return; }
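The change in this patch is purely about the calling convention: cpu_sve_finalize_features() already filled in *errp on failure, but callers had to allocate a local error object just to learn whether it failed. A minimal sketch of the convention it moves to, with a plain string pointer standing in for QEMU's Error type; finalize_vq() and its bounds are made up for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* false on error, true on success; *errp describes the failure */
    static bool finalize_vq(int max_vq, const char **errp)
    {
        if (max_vq < 1 || max_vq > 16) {
            *errp = "sve-max-vq out of range";
            return false;
        }
        return true;
    }

    int main(void)
    {
        const char *err = NULL;

        /* the caller branches on the return value, no local error dance */
        if (!finalize_vq(32, &err)) {
            fprintf(stderr, "finalize failed: %s\n", err);
            return 1;
        }
        return 0;
    }

This is the same shape the cpu.c hunk above switches arm_cpu_finalize_features() over to.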
From patchwork Fri Jun 4 15:52:48 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454066
Delivered-To: patch@linaro.org
Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id
E274D1FF96; Fri, 4 Jun 2021 16:53:21 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 75/99] target/arm: make is_aa64 and arm_el_is_aa64 a macro for !TARGET_AARCH64 Date: Fri, 4 Jun 2021 16:52:48 +0100 Message-Id: <20210604155312.15902-76-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42b; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana when TARGET_AARCH64 is not defined, it is helpful to make is_aa64() and arm_el_is_aa64 macros defined to "false". This way we can make more code TARGET_AARCH64-only. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu.h | 37 ++++++++++++++++++++++++------------- 1 file changed, 24 insertions(+), 13 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index b9b9bd8b01..8614948543 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -1060,6 +1060,11 @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq); void aarch64_sve_change_el(CPUARMState *env, int old_el, int new_el, bool el0_a64); +static inline bool is_a64(CPUARMState *env) +{ + return env->aarch64; +} + /* * SVE registers are encoded in KVM's memory in an endianness-invariant format. * The byte at offset i from the start of the in-memory representation contains @@ -1089,7 +1094,10 @@ static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { } static inline void aarch64_sve_change_el(CPUARMState *env, int o, int n, bool a) { } -#endif + +#define is_a64(env) ((void)env, false) + +#endif /* TARGET_AARCH64 */ void aarch64_sync_32_to_64(CPUARMState *env); void aarch64_sync_64_to_32(CPUARMState *env); @@ -1098,11 +1106,6 @@ int fp_exception_el(CPUARMState *env, int cur_el); int sve_exception_el(CPUARMState *env, int cur_el); uint32_t sve_zcr_len_for_el(CPUARMState *env, int el); -static inline bool is_a64(CPUARMState *env) -{ - return env->aarch64; -} - /* you can call this signal handler from your SIGBUS and SIGSEGV signal handlers to inform the virtual CPU of exceptions. non zero is returned if the signal was handled by the virtual CPU. */ @@ -2212,13 +2215,7 @@ static inline bool arm_is_el2_enabled(CPUARMState *env) } #endif -/** - * arm_hcr_el2_eff(): Return the effective value of HCR_EL2. - * E.g. when in secure state, fields in HCR_EL2 are suppressed, - * "for all purposes other than a direct read or write access of HCR_EL2." - * Not included here is HCR_RW. - */ -uint64_t arm_hcr_el2_eff(CPUARMState *env); +#ifdef TARGET_AARCH64 /* Return true if the specified exception level is running in AArch64 state. 
*/ static inline bool arm_el_is_aa64(CPUARMState *env, int el) @@ -2253,6 +2250,20 @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el) return aa64; } +#else + +#define arm_el_is_aa64(env, el) ((void)env, (void)el, false) + +#endif /* TARGET_AARCH64 */ + +/** + * arm_hcr_el2_eff(): Return the effective value of HCR_EL2. + * E.g. when in secure state, fields in HCR_EL2 are suppressed, + * "for all purposes other than a direct read or write access of HCR_EL2." + * Not included here is HCR_RW. + */ +uint64_t arm_hcr_el2_eff(CPUARMState *env); + /* Function for determing whether guest cp register reads and writes should * access the secure or non-secure bank of a cp register. When EL3 is * operating in AArch32 state, the NS-bit determines whether the secure
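The pattern used here for is_a64() and arm_el_is_aa64() is worth spelling out: when the feature is configured out, the query collapses to a constant false while still evaluating its arguments, so call sites compile unchanged and the guarded code becomes dead. A standalone sketch under an invented HAVE_AARCH64 switch; the struct and function names are illustrative, not QEMU's.

    #include <stdbool.h>
    #include <stdio.h>

    struct cpu_state { bool aarch64; };

    #ifdef HAVE_AARCH64
    static inline bool is_64bit(struct cpu_state *s)
    {
        return s->aarch64;
    }
    #else
    /* reference the argument to keep unused-variable warnings away, fold to false */
    #define is_64bit(s) ((void)(s), false)
    #endif

    int main(void)
    {
        struct cpu_state s = { .aarch64 = true };

        if (is_64bit(&s)) {     /* dead code when HAVE_AARCH64 is not defined */
            printf("AArch64 state\n");
        } else {
            printf("AArch32 state\n");
        }
        return 0;
    }

The (void) casts in the real macros above serve the same purpose as the one in this sketch.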
From patchwork Fri Jun 4 15:52:49 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454082
Delivered-To: patch@linaro.org
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 76/99] target/arm: restrict rebuild_hflags_a64 to TARGET_AARCH64
Date: Fri, 4 Jun 2021 16:52:49 +0100
Message-Id: <20210604155312.15902-77-alex.bennee@linaro.org>
X-Mailer: git-send-email
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32c; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32c.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana this work is in preparation of making sve_zcr_len_for_el AARCH64-only. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- v14 - fix merge failure with CPUARMTBflags update --- target/arm/helper-a64.h | 2 ++ target/arm/helper.h | 1 - target/arm/tcg/helper.c | 12 ++++++++++++ 3 files changed, 14 insertions(+), 1 deletion(-) -- 2.20.1 diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h index 7b706571bb..c89406e656 100644 --- a/target/arm/helper-a64.h +++ b/target/arm/helper-a64.h @@ -118,3 +118,5 @@ DEF_HELPER_FLAGS_2(st2g_stub, TCG_CALL_NO_WG, void, env, i64) DEF_HELPER_FLAGS_2(ldgm, TCG_CALL_NO_WG, i64, env, i64) DEF_HELPER_FLAGS_3(stgm, TCG_CALL_NO_WG, void, env, i64, i64) DEF_HELPER_FLAGS_3(stzgm_tags, TCG_CALL_NO_WG, void, env, i64, i64) + +DEF_HELPER_FLAGS_2(rebuild_hflags_a64, TCG_CALL_NO_RWG, void, env, int) diff --git a/target/arm/helper.h b/target/arm/helper.h index 23ccb0f72f..e8df4f7625 100644 --- a/target/arm/helper.h +++ b/target/arm/helper.h @@ -94,7 +94,6 @@ DEF_HELPER_FLAGS_1(rebuild_hflags_m32_newel, TCG_CALL_NO_RWG, void, env) DEF_HELPER_FLAGS_2(rebuild_hflags_m32, TCG_CALL_NO_RWG, void, env, int) DEF_HELPER_FLAGS_1(rebuild_hflags_a32_newel, TCG_CALL_NO_RWG, void, env) DEF_HELPER_FLAGS_2(rebuild_hflags_a32, TCG_CALL_NO_RWG, void, env, int) -DEF_HELPER_FLAGS_2(rebuild_hflags_a64, TCG_CALL_NO_RWG, void, env, int) DEF_HELPER_FLAGS_5(probe_access, TCG_CALL_NO_WG, void, env, tl, i32, i32, i32) diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 38cc7c6a3d..7136c82795 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -999,6 +999,8 @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el, return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags); } +#ifdef TARGET_AARCH64 + static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el, ARMMMUIdx mmu_idx) { @@ -1122,6 +1124,14 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el, return rebuild_hflags_common(env, fp_el, mmu_idx, flags); } +#else + +QEMU_ERROR("this should have been optimized away!") +CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el, + ARMMMUIdx mmu_idx); + +#endif /* TARGET_AARCH64 */ + static CPUARMTBFlags rebuild_hflags_internal(CPUARMState *env) { int el = arm_current_el(env); @@ -1183,6 +1193,7 @@ void HELPER(rebuild_hflags_a32)(CPUARMState *env, int el) env->hflags = rebuild_hflags_a32(env, fp_el, mmu_idx); } +#ifdef TARGET_AARCH64 void HELPER(rebuild_hflags_a64)(CPUARMState *env, int el) { int fp_el = fp_exception_el(env, el); @@ -1190,6 +1201,7 @@ void 
HELPER(rebuild_hflags_a64)(CPUARMState *env, int el) env->hflags = rebuild_hflags_a64(env, el, fp_el, mmu_idx); } +#endif /* TARGET_AARCH64 */ static inline void assert_hflags_rebuild_correctly(CPUARMState *env) {
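The QEMU_ERROR() marker above leans on the GCC error function attribute (that is what QEMU_ERROR expands to on compilers that support it): the helper keeps a declaration so the file still parses, but any call that survives dead-code elimination becomes a hard build error. A rough sketch with made-up names; it assumes GCC or a compatible compiler and an optimizing build, since the call below has to be folded away rather than diagnosed.

    #include <stdio.h>

    #ifdef HAVE_FEATURE
    #define feature_present() 1
    static int feature_helper(int x)
    {
        return x * 2;
    }
    #else
    #define feature_present() 0
    /* any call still present after optimization is a compile-time error */
    __attribute__((error("feature_helper called in a build without the feature")))
    int feature_helper(int x);
    #endif

    int main(void)
    {
        int r = 7;

        if (feature_present()) {    /* constant-folds to 0 without the feature */
            r = feature_helper(r);  /* ...so this call is eliminated, not flagged */
        }
        printf("%d\n", r);
        return 0;
    }

The idea in the patch is the same: the remaining callers of rebuild_hflags_a64() sit behind checks that themselves fold away in !TARGET_AARCH64 builds, so the error-attributed declaration is never referenced in the final object.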
From patchwork Fri Jun 4 15:52:50 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454063
Delivered-To: patch@linaro.org
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 77/99] target/arm: arch_dump: restrict ELFCLASS64 to AArch64
Date: Fri, 4 Jun 2021 16:52:50 +0100
Message-Id: <20210604155312.15902-78-alex.bennee@linaro.org>
X-Mailer: git-send-email
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::434; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x434.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana this will allow us to restrict more code to TARGET_AARCH64 Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- v16 - fix conflict now notes in arm_sysemu_ops --- target/arm/arch_dump.c | 12 +++++++----- target/arm/cpu.c | 2 ++ roms/u-boot | 2 +- 3 files changed, 10 insertions(+), 6 deletions(-) -- 2.20.1 diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c index 0184845310..9cc75a6fda 100644 --- a/target/arm/arch_dump.c +++ b/target/arm/arch_dump.c @@ -23,6 +23,8 @@ #include "elf.h" #include "sysemu/dump.h" +#ifdef TARGET_AARCH64 + /* struct user_pt_regs from arch/arm64/include/uapi/asm/ptrace.h */ struct aarch64_user_regs { uint64_t regs[31]; @@ -141,7 +143,6 @@ static int aarch64_write_elf64_prfpreg(WriteCoreDumpFunction f, return 0; } -#ifdef TARGET_AARCH64 static off_t sve_zreg_offset(uint32_t vq, int n) { off_t off = sizeof(struct aarch64_user_sve_header); @@ -229,7 +230,6 @@ static int aarch64_write_elf64_sve(WriteCoreDumpFunction f, return 0; } -#endif int arm_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs, int cpuid, void *opaque) @@ -272,15 +272,15 @@ int arm_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs, return ret; } -#ifdef TARGET_AARCH64 if (cpu_isar_feature(aa64_sve, cpu)) { ret = aarch64_write_elf64_sve(f, env, cpuid, s); } -#endif return ret; } +#endif /* TARGET_AARCH64 */ + /* struct pt_regs from arch/arm/include/asm/ptrace.h */ struct arm_user_regs { uint32_t regs[17]; @@ -449,12 +449,14 @@ ssize_t cpu_get_note_size(int class, int machine, int nr_cpus) size_t note_size; if (class == ELFCLASS64) { +#ifdef TARGET_AARCH64 note_size = AARCH64_PRSTATUS_NOTE_SIZE; note_size += AARCH64_PRFPREG_NOTE_SIZE; -#ifdef TARGET_AARCH64 if (cpu_isar_feature(aa64_sve, cpu)) { note_size += AARCH64_SVE_NOTE_SIZE(&cpu->env); } +#else + return -1; /* unsupported */ #endif } else { note_size = ARM_PRSTATUS_NOTE_SIZE; diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 0b20faaca0..b297d0e6aa 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -1380,7 +1380,9 @@ static const struct SysemuCPUOps arm_sysemu_ops = { .get_phys_page_attrs_debug = arm_cpu_get_phys_page_attrs_debug, .asidx_from_attrs = arm_asidx_from_attrs, .write_elf32_note = arm_cpu_write_elf32_note, +#ifdef TARGET_AARCH64 .write_elf64_note = arm_cpu_write_elf64_note, +#endif .virtio_is_big_endian = arm_cpu_virtio_is_big_endian, .legacy_vmsd = &vmstate_arm_cpu, }; diff --git a/roms/u-boot b/roms/u-boot index b46dd116ce..d3689267f9 160000 --- a/roms/u-boot +++ b/roms/u-boot @@ -1 +1 @@ -Subproject commit 
b46dd116ce03e235f2a7d4843c6278e1da44b5e1 +Subproject commit d3689267f92c5956e09cc7d1baa4700141662bff
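For context on the sizes the arch_dump.c hunks above compute: an ELF note record is a 12-byte header (namesz, descsz and type, each a 32-bit word) followed by the name and the descriptor, each padded in practice to a 4-byte boundary; core-dump notes such as NT_PRSTATUS follow that layout. A small self-contained sketch of the arithmetic, where the 392-byte payload is only an example value, not the real prstatus size:

    #include <stddef.h>
    #include <stdio.h>

    static size_t align4(size_t n)
    {
        return (n + 3) & ~(size_t)3;
    }

    /* header (3 x 32-bit words) + padded name + padded descriptor */
    static size_t elf_note_size(size_t namesz, size_t descsz)
    {
        return 12 + align4(namesz) + align4(descsz);
    }

    int main(void)
    {
        /* "CORE" plus its NUL terminator is a 5-byte name, padded to 8 */
        printf("%zu bytes\n", elf_note_size(5, 392));
        return 0;
    }

Returning -1 from cpu_get_note_size() when the build cannot emit ELFCLASS64 notes, as the hunk above does, signals that configuration as unsupported instead of producing a malformed note.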
From patchwork Fri Jun 4 15:52:51 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454090
Delivered-To: patch@linaro.org
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 78/99] target/arm: cpu-exceptions, cpu-exceptions-aa64: new modules
Date: Fri, 4 Jun 2021 16:52:51 +0100
Message-Id:
<20210604155312.15902-79-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32e; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32e.google.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, URI_NOVOWEL=0.5 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana after restricting AArch64-specific code to TARGET_AARCH64 builds, we can now extract the exception handling code from cpu-sysemu, and split its AArch64-specific part into its own module. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-exceptions-aa64.h | 28 ++ target/arm/cpu-exceptions-aa64.c | 276 +++++++++++++ target/arm/cpu-exceptions.c | 445 ++++++++++++++++++++ target/arm/cpu-sysemu.c | 672 ------------------------------- target/arm/cpu-user.c | 1 + target/arm/meson.build | 5 + 6 files changed, 755 insertions(+), 672 deletions(-) create mode 100644 target/arm/cpu-exceptions-aa64.h create mode 100644 target/arm/cpu-exceptions-aa64.c create mode 100644 target/arm/cpu-exceptions.c -- 2.20.1 diff --git a/target/arm/cpu-exceptions-aa64.h b/target/arm/cpu-exceptions-aa64.h new file mode 100644 index 0000000000..64f800a15d --- /dev/null +++ b/target/arm/cpu-exceptions-aa64.h @@ -0,0 +1,28 @@ +/* + * QEMU AArch64 CPU Exceptions Sysemu code + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#ifndef CPU_EXCEPTIONS_AA64_H +#define CPU_EXCEPTIONS_AA64_H + +#include "cpu.h" + +void arm_cpu_do_interrupt_aarch64(CPUState *cs); + +#endif /* CPU_EXCEPTIONS_AA64_H */ diff --git a/target/arm/cpu-exceptions-aa64.c b/target/arm/cpu-exceptions-aa64.c new file mode 100644 index 0000000000..7daaba0426 --- /dev/null +++ b/target/arm/cpu-exceptions-aa64.c @@ -0,0 +1,276 @@ +/* + * QEMU AArch64 CPU Exceptions Sysemu code + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qemu/log.h" +#include "cpu.h" +#include "internals.h" +#include "sysemu/tcg.h" + +#include "cpu-exceptions-aa64.h" + +static int aarch64_regnum(CPUARMState *env, int aarch32_reg) +{ + /* + * Return the register number of the AArch64 view of the AArch32 + * register @aarch32_reg. The CPUARMState CPSR is assumed to still + * be that of the AArch32 mode the exception came from. + */ + int mode = env->uncached_cpsr & CPSR_M; + + switch (aarch32_reg) { + case 0 ... 7: + return aarch32_reg; + case 8 ... 12: + return mode == ARM_CPU_MODE_FIQ ? aarch32_reg + 16 : aarch32_reg; + case 13: + switch (mode) { + case ARM_CPU_MODE_USR: + case ARM_CPU_MODE_SYS: + return 13; + case ARM_CPU_MODE_HYP: + return 15; + case ARM_CPU_MODE_IRQ: + return 17; + case ARM_CPU_MODE_SVC: + return 19; + case ARM_CPU_MODE_ABT: + return 21; + case ARM_CPU_MODE_UND: + return 23; + case ARM_CPU_MODE_FIQ: + return 29; + default: + g_assert_not_reached(); + } + case 14: + switch (mode) { + case ARM_CPU_MODE_USR: + case ARM_CPU_MODE_SYS: + case ARM_CPU_MODE_HYP: + return 14; + case ARM_CPU_MODE_IRQ: + return 16; + case ARM_CPU_MODE_SVC: + return 18; + case ARM_CPU_MODE_ABT: + return 20; + case ARM_CPU_MODE_UND: + return 22; + case ARM_CPU_MODE_FIQ: + return 30; + default: + g_assert_not_reached(); + } + case 15: + return 31; + default: + g_assert_not_reached(); + } +} + +static uint32_t cpsr_read_for_spsr_elx(CPUARMState *env) +{ + uint32_t ret = cpsr_read(env); + + /* Move DIT to the correct location for SPSR_ELx */ + if (ret & CPSR_DIT) { + ret &= ~CPSR_DIT; + ret |= PSTATE_DIT; + } + /* Merge PSTATE.SS into SPSR_ELx */ + ret |= env->pstate & PSTATE_SS; + + return ret; +} + +/* Handle exception entry to a target EL which is using AArch64 */ +void arm_cpu_do_interrupt_aarch64(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + unsigned int new_el = env->exception.target_el; + target_ulong addr = env->cp15.vbar_el[new_el]; + unsigned int new_mode = aarch64_pstate_mode(new_el, true); + unsigned int old_mode; + unsigned int cur_el = arm_current_el(env); + int rt; + + if (tcg_enabled()) { + /* + * Note that new_el can never be 0. If cur_el is 0, then + * el0_a64 is is_a64(), else el0_a64 is ignored. 
+ */ + aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); + } + + if (cur_el < new_el) { + /* + * Entry vector offset depends on whether the implemented EL + * immediately lower than the target level is using AArch32 or AArch64 + */ + bool is_aa64; + uint64_t hcr; + + switch (new_el) { + case 3: + is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0; + break; + case 2: + hcr = arm_hcr_el2_eff(env); + if ((hcr & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { + is_aa64 = (hcr & HCR_RW) != 0; + break; + } + /* fall through */ + case 1: + is_aa64 = is_a64(env); + break; + default: + g_assert_not_reached(); + } + + if (is_aa64) { + addr += 0x400; + } else { + addr += 0x600; + } + } else if (pstate_read(env) & PSTATE_SP) { + addr += 0x200; + } + + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + case EXCP_DATA_ABORT: + env->cp15.far_el[new_el] = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n", + env->cp15.far_el[new_el]); + /* fall through */ + case EXCP_BKPT: + case EXCP_UDEF: + case EXCP_SWI: + case EXCP_HVC: + case EXCP_HYP_TRAP: + case EXCP_SMC: + switch (syn_get_ec(env->exception.syndrome)) { + case EC_ADVSIMDFPACCESSTRAP: + /* + * QEMU internal FP/SIMD syndromes from AArch32 include the + * TA and coproc fields which are only exposed if the exception + * is taken to AArch32 Hyp mode. Mask them out to get a valid + * AArch64 format syndrome. + */ + env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20); + break; + case EC_CP14RTTRAP: + case EC_CP15RTTRAP: + case EC_CP14DTTRAP: + /* + * For a trap on AArch32 MRC/MCR/LDC/STC the Rt field is currently + * the raw register field from the insn; when taking this to + * AArch64 we must convert it to the AArch64 view of the register + * number. Notice that we read a 4-bit AArch32 register number and + * write back a 5-bit AArch64 one. + */ + rt = extract32(env->exception.syndrome, 5, 4); + rt = aarch64_regnum(env, rt); + env->exception.syndrome = deposit32(env->exception.syndrome, + 5, 5, rt); + break; + case EC_CP15RRTTRAP: + case EC_CP14RRTTRAP: + /* Similarly for MRRC/MCRR traps for Rt and Rt2 fields */ + rt = extract32(env->exception.syndrome, 5, 4); + rt = aarch64_regnum(env, rt); + env->exception.syndrome = deposit32(env->exception.syndrome, + 5, 5, rt); + rt = extract32(env->exception.syndrome, 10, 4); + rt = aarch64_regnum(env, rt); + env->exception.syndrome = deposit32(env->exception.syndrome, + 10, 5, rt); + break; + } + env->cp15.esr_el[new_el] = env->exception.syndrome; + break; + case EXCP_IRQ: + case EXCP_VIRQ: + addr += 0x80; + break; + case EXCP_FIQ: + case EXCP_VFIQ: + addr += 0x100; + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + } + + if (is_a64(env)) { + old_mode = pstate_read(env); + aarch64_save_sp(env, arm_current_el(env)); + env->elr_el[new_el] = env->pc; + } else { + old_mode = cpsr_read_for_spsr_elx(env); + env->elr_el[new_el] = env->regs[15]; + + aarch64_sync_32_to_64(env); + + env->condexec_bits = 0; + } + env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode; + + qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n", + env->elr_el[new_el]); + + if (cpu_isar_feature(aa64_pan, cpu)) { + /* The value of PSTATE.PAN is normally preserved, except when ... */ + new_mode |= old_mode & PSTATE_PAN; + switch (new_el) { + case 2: + /* ... the target is EL2 with HCR_EL2.{E2H,TGE} == '11' ... */ + if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) + != (HCR_E2H | HCR_TGE)) { + break; + } + /* fall through */ + case 1: + /* ... 
the target is EL1 ... */ + /* ... and SCTLR_ELx.SPAN == 0, then set to 1. */ + if ((env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0) { + new_mode |= PSTATE_PAN; + } + break; + } + } + if (cpu_isar_feature(aa64_mte, cpu)) { + new_mode |= PSTATE_TCO; + } + + pstate_write(env, PSTATE_DAIF | new_mode); + env->aarch64 = 1; + aarch64_restore_sp(env, new_el); + if (tcg_enabled()) { + arm_rebuild_hflags(env); + } + + env->pc = addr; + + qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n", + new_el, env->pc, pstate_read(env)); +} diff --git a/target/arm/cpu-exceptions.c b/target/arm/cpu-exceptions.c new file mode 100644 index 0000000000..9526436e5d --- /dev/null +++ b/target/arm/cpu-exceptions.c @@ -0,0 +1,445 @@ +/* + * QEMU ARM CPU Exceptions Sysemu code + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qemu/log.h" +#include "qemu/main-loop.h" +#include "cpu.h" +#include "internals.h" +#include "sysemu/tcg.h" +#include "tcg/tcg-cpu.h" +#include "cpu-exceptions-aa64.h" + +static void take_aarch32_exception(CPUARMState *env, int new_mode, + uint32_t mask, uint32_t offset, + uint32_t newpc) +{ + int new_el; + + /* Change the CPU state so as to actually take the exception. */ + switch_mode(env, new_mode); + + /* + * For exceptions taken to AArch32 we must clear the SS bit in both + * PSTATE and in the old-state value we save to SPSR_, so zero it now. + */ + env->pstate &= ~PSTATE_SS; + env->spsr = cpsr_read(env); + /* Clear IT bits. */ + env->condexec_bits = 0; + /* Switch to the new mode, and to the correct instruction set. */ + env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode; + + /* This must be after mode switching. */ + new_el = arm_current_el(env); + + /* Set new mode endianness */ + env->uncached_cpsr &= ~CPSR_E; + if (env->cp15.sctlr_el[new_el] & SCTLR_EE) { + env->uncached_cpsr |= CPSR_E; + } + /* J and IL must always be cleared for exception entry */ + env->uncached_cpsr &= ~(CPSR_IL | CPSR_J); + env->daif |= mask; + + if (new_mode == ARM_CPU_MODE_HYP) { + env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0; + env->elr_el[2] = env->regs[15]; + } else { + /* CPSR.PAN is normally preserved preserved unless... */ + if (cpu_isar_feature(aa32_pan, env_archcpu(env))) { + switch (new_el) { + case 3: + if (!arm_is_secure_below_el3(env)) { + /* ... the target is EL3, from non-secure state. */ + env->uncached_cpsr &= ~CPSR_PAN; + break; + } + /* ... the target is EL3, from secure state ... */ + /* fall through */ + case 1: + /* ... the target is EL1 and SCTLR.SPAN is 0. 
*/ + if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPAN)) { + env->uncached_cpsr |= CPSR_PAN; + } + break; + } + } + /* + * this is a lie, as there was no c1_sys on V4T/V5, but who cares + * and we should just guard the thumb mode on V4 + */ + if (arm_feature(env, ARM_FEATURE_V4T)) { + env->thumb = + (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) != 0; + } + env->regs[14] = env->regs[15] + offset; + } + env->regs[15] = newpc; + if (tcg_enabled()) { + arm_rebuild_hflags(env); + } +} + +static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs) +{ + /* + * Handle exception entry to Hyp mode; this is sufficiently + * different to entry to other AArch32 modes that we handle it + * separately here. + * + * The vector table entry used is always the 0x14 Hyp mode entry point, + * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp. + * The offset applied to the preferred return address is always zero + * (see DDI0487C.a section G1.12.3). + * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values. + */ + uint32_t addr, mask; + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + + switch (cs->exception_index) { + case EXCP_UDEF: + addr = 0x04; + break; + case EXCP_SWI: + addr = 0x14; + break; + case EXCP_BKPT: + /* Fall through to prefetch abort. */ + case EXCP_PREFETCH_ABORT: + env->cp15.ifar_s = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n", + (uint32_t)env->exception.vaddress); + addr = 0x0c; + break; + case EXCP_DATA_ABORT: + env->cp15.dfar_s = env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n", + (uint32_t)env->exception.vaddress); + addr = 0x10; + break; + case EXCP_IRQ: + addr = 0x18; + break; + case EXCP_FIQ: + addr = 0x1c; + break; + case EXCP_HVC: + addr = 0x08; + break; + case EXCP_HYP_TRAP: + addr = 0x14; + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + } + + if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) { + if (!arm_feature(env, ARM_FEATURE_V8)) { + /* + * QEMU syndrome values are v8-style. v7 has the IL bit + * UNK/SBZP for "field not valid" cases, where v8 uses RES1. + * If this is a v7 CPU, squash the IL bit in those cases. 
+ */ + if (cs->exception_index == EXCP_PREFETCH_ABORT || + (cs->exception_index == EXCP_DATA_ABORT && + !(env->exception.syndrome & ARM_EL_ISV)) || + syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) { + env->exception.syndrome &= ~ARM_EL_IL; + } + } + env->cp15.esr_el[2] = env->exception.syndrome; + } + + if (arm_current_el(env) != 2 && addr < 0x14) { + addr = 0x14; + } + + mask = 0; + if (!(env->cp15.scr_el3 & SCR_EA)) { + mask |= CPSR_A; + } + if (!(env->cp15.scr_el3 & SCR_IRQ)) { + mask |= CPSR_I; + } + if (!(env->cp15.scr_el3 & SCR_FIQ)) { + mask |= CPSR_F; + } + + addr += env->cp15.hvbar; + + take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr); +} + +static void arm_cpu_do_interrupt_aarch32(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + uint32_t addr; + uint32_t mask; + int new_mode; + uint32_t offset; + uint32_t moe; + + /* If this is a debug exception we must update the DBGDSCR.MOE bits */ + switch (syn_get_ec(env->exception.syndrome)) { + case EC_BREAKPOINT: + case EC_BREAKPOINT_SAME_EL: + moe = 1; + break; + case EC_WATCHPOINT: + case EC_WATCHPOINT_SAME_EL: + moe = 10; + break; + case EC_AA32_BKPT: + moe = 3; + break; + case EC_VECTORCATCH: + moe = 5; + break; + default: + moe = 0; + break; + } + + if (moe) { + env->cp15.mdscr_el1 = deposit64(env->cp15.mdscr_el1, 2, 4, moe); + } + + if (env->exception.target_el == 2) { + arm_cpu_do_interrupt_aarch32_hyp(cs); + return; + } + + switch (cs->exception_index) { + case EXCP_UDEF: + new_mode = ARM_CPU_MODE_UND; + addr = 0x04; + mask = CPSR_I; + if (env->thumb) { + offset = 2; + } else { + offset = 4; + } + break; + case EXCP_SWI: + new_mode = ARM_CPU_MODE_SVC; + addr = 0x08; + mask = CPSR_I; + /* The PC already points to the next instruction. */ + offset = 0; + break; + case EXCP_BKPT: + /* Fall through to prefetch abort. */ + case EXCP_PREFETCH_ABORT: + A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr); + A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress); + qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n", + env->exception.fsr, (uint32_t)env->exception.vaddress); + new_mode = ARM_CPU_MODE_ABT; + addr = 0x0c; + mask = CPSR_A | CPSR_I; + offset = 4; + break; + case EXCP_DATA_ABORT: + A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr); + A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress); + qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n", + env->exception.fsr, + (uint32_t)env->exception.vaddress); + new_mode = ARM_CPU_MODE_ABT; + addr = 0x10; + mask = CPSR_A | CPSR_I; + offset = 8; + break; + case EXCP_IRQ: + new_mode = ARM_CPU_MODE_IRQ; + addr = 0x18; + /* Disable IRQ and imprecise data aborts. */ + mask = CPSR_A | CPSR_I; + offset = 4; + if (env->cp15.scr_el3 & SCR_IRQ) { + /* IRQ routed to monitor mode */ + new_mode = ARM_CPU_MODE_MON; + mask |= CPSR_F; + } + break; + case EXCP_FIQ: + new_mode = ARM_CPU_MODE_FIQ; + addr = 0x1c; + /* Disable FIQ, IRQ and imprecise data aborts. */ + mask = CPSR_A | CPSR_I | CPSR_F; + if (env->cp15.scr_el3 & SCR_FIQ) { + /* FIQ routed to monitor mode */ + new_mode = ARM_CPU_MODE_MON; + } + offset = 4; + break; + case EXCP_VIRQ: + new_mode = ARM_CPU_MODE_IRQ; + addr = 0x18; + /* Disable IRQ and imprecise data aborts. */ + mask = CPSR_A | CPSR_I; + offset = 4; + break; + case EXCP_VFIQ: + new_mode = ARM_CPU_MODE_FIQ; + addr = 0x1c; + /* Disable FIQ, IRQ and imprecise data aborts. 
*/ + mask = CPSR_A | CPSR_I | CPSR_F; + offset = 4; + break; + case EXCP_SMC: + new_mode = ARM_CPU_MODE_MON; + addr = 0x08; + mask = CPSR_A | CPSR_I | CPSR_F; + offset = 0; + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + return; /* Never happens. Keep compiler happy. */ + } + + if (new_mode == ARM_CPU_MODE_MON) { + addr += env->cp15.mvbar; + } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) { + /* High vectors. When enabled, base address cannot be remapped. */ + addr += 0xffff0000; + } else { + /* + * ARM v7 architectures provide a vector base address register to remap + * the interrupt vector table. + * This register is only followed in non-monitor mode, and is banked. + * Note: only bits 31:5 are valid. + */ + addr += A32_BANKED_CURRENT_REG_GET(env, vbar); + } + + if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON) { + env->cp15.scr_el3 &= ~SCR_NS; + } + + take_aarch32_exception(env, new_mode, mask, offset, addr); +} + +void arm_log_exception(int idx) +{ + if (qemu_loglevel_mask(CPU_LOG_INT)) { + const char *exc = NULL; + static const char * const excnames[] = { + [EXCP_UDEF] = "Undefined Instruction", + [EXCP_SWI] = "SVC", + [EXCP_PREFETCH_ABORT] = "Prefetch Abort", + [EXCP_DATA_ABORT] = "Data Abort", + [EXCP_IRQ] = "IRQ", + [EXCP_FIQ] = "FIQ", + [EXCP_BKPT] = "Breakpoint", + [EXCP_EXCEPTION_EXIT] = "QEMU v7M exception exit", + [EXCP_KERNEL_TRAP] = "QEMU intercept of kernel commpage", + [EXCP_HVC] = "Hypervisor Call", + [EXCP_HYP_TRAP] = "Hypervisor Trap", + [EXCP_SMC] = "Secure Monitor Call", + [EXCP_VIRQ] = "Virtual IRQ", + [EXCP_VFIQ] = "Virtual FIQ", + [EXCP_SEMIHOST] = "Semihosting call", + [EXCP_NOCP] = "v7M NOCP UsageFault", + [EXCP_INVSTATE] = "v7M INVSTATE UsageFault", + [EXCP_STKOF] = "v8M STKOF UsageFault", + [EXCP_LAZYFP] = "v7M exception during lazy FP stacking", + [EXCP_LSERR] = "v8M LSERR UsageFault", + [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault", + }; + + if (idx >= 0 && idx < ARRAY_SIZE(excnames)) { + exc = excnames[idx]; + } + if (!exc) { + exc = "unknown"; + } + qemu_log_mask(CPU_LOG_INT, "Taking exception %d [%s]\n", idx, exc); + } +} + +/* + * Handle a CPU exception for A and R profile CPUs. + * Do any appropriate logging, handle PSCI calls, and then hand off + * to the AArch64-entry or AArch32-entry function depending on the + * target exception level's register width. + * + * Note: this is used for both TCG (as the do_interrupt tcg op), + * and KVM to re-inject guest debug exceptions, and to + * inject a Synchronous-External-Abort. + */ +void arm_cpu_do_interrupt(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + CPUARMState *env = &cpu->env; + unsigned int new_el = env->exception.target_el; + + assert(!arm_feature(env, ARM_FEATURE_M)); + + arm_log_exception(cs->exception_index); + qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(env), + new_el); + if (qemu_loglevel_mask(CPU_LOG_INT) + && !excp_is_internal(cs->exception_index)) { + qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n", + syn_get_ec(env->exception.syndrome), + env->exception.syndrome); + } + + if (tcg_enabled()) { + if (arm_is_psci_call(cpu, cs->exception_index)) { + arm_handle_psci_call(cpu); + qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); + return; + } + /* + * Semihosting semantics depend on the register width of the code + * that caused the exception, not the target exception level, so + * must be handled here. 
+ */ + if (cs->exception_index == EXCP_SEMIHOST) { + tcg_handle_semihosting(cs); + return; + } + } + /* + * Hooks may change global state so BQL should be held, also the + * BQL needs to be held for any modification of + * cs->interrupt_request. + */ + g_assert(qemu_mutex_iothread_locked()); + arm_call_pre_el_change_hook(cpu); + + assert(!excp_is_internal(cs->exception_index)); + if (arm_el_is_aa64(env, new_el)) { + arm_cpu_do_interrupt_aarch64(cs); + } else { + arm_cpu_do_interrupt_aarch32(cs); + } + + arm_call_el_change_hook(cpu); + + if (tcg_enabled()) { + cs->interrupt_request |= CPU_INTERRUPT_EXITTB; + } +} diff --git a/target/arm/cpu-sysemu.c b/target/arm/cpu-sysemu.c index fff55311f4..4bccf74996 100644 --- a/target/arm/cpu-sysemu.c +++ b/target/arm/cpu-sysemu.c @@ -19,13 +19,9 @@ */ #include "qemu/osdep.h" -#include "qemu/log.h" -#include "qemu/main-loop.h" #include "cpu.h" #include "internals.h" #include "sysemu/hw_accel.h" -#include "sysemu/tcg.h" -#include "tcg/tcg-cpu.h" #ifdef CONFIG_TCG #include "tcg/tcg-cpu.h" @@ -484,671 +480,3 @@ int fp_exception_el(CPUARMState *env, int cur_el) } return 0; } - -static void take_aarch32_exception(CPUARMState *env, int new_mode, - uint32_t mask, uint32_t offset, - uint32_t newpc) -{ - int new_el; - - /* Change the CPU state so as to actually take the exception. */ - switch_mode(env, new_mode); - - /* - * For exceptions taken to AArch32 we must clear the SS bit in both - * PSTATE and in the old-state value we save to SPSR_, so zero it now. - */ - env->pstate &= ~PSTATE_SS; - env->spsr = cpsr_read(env); - /* Clear IT bits. */ - env->condexec_bits = 0; - /* Switch to the new mode, and to the correct instruction set. */ - env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode; - - /* This must be after mode switching. */ - new_el = arm_current_el(env); - - /* Set new mode endianness */ - env->uncached_cpsr &= ~CPSR_E; - if (env->cp15.sctlr_el[new_el] & SCTLR_EE) { - env->uncached_cpsr |= CPSR_E; - } - /* J and IL must always be cleared for exception entry */ - env->uncached_cpsr &= ~(CPSR_IL | CPSR_J); - env->daif |= mask; - - if (new_mode == ARM_CPU_MODE_HYP) { - env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0; - env->elr_el[2] = env->regs[15]; - } else { - /* CPSR.PAN is normally preserved preserved unless... */ - if (cpu_isar_feature(aa32_pan, env_archcpu(env))) { - switch (new_el) { - case 3: - if (!arm_is_secure_below_el3(env)) { - /* ... the target is EL3, from non-secure state. */ - env->uncached_cpsr &= ~CPSR_PAN; - break; - } - /* ... the target is EL3, from secure state ... */ - /* fall through */ - case 1: - /* ... the target is EL1 and SCTLR.SPAN is 0. */ - if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPAN)) { - env->uncached_cpsr |= CPSR_PAN; - } - break; - } - } - /* - * this is a lie, as there was no c1_sys on V4T/V5, but who cares - * and we should just guard the thumb mode on V4 - */ - if (arm_feature(env, ARM_FEATURE_V4T)) { - env->thumb = - (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) != 0; - } - env->regs[14] = env->regs[15] + offset; - } - env->regs[15] = newpc; - if (tcg_enabled()) { - arm_rebuild_hflags(env); - } -} - -static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs) -{ - /* - * Handle exception entry to Hyp mode; this is sufficiently - * different to entry to other AArch32 modes that we handle it - * separately here. - * - * The vector table entry used is always the 0x14 Hyp mode entry point, - * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp. 
- * The offset applied to the preferred return address is always zero - * (see DDI0487C.a section G1.12.3). - * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values. - */ - uint32_t addr, mask; - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - - switch (cs->exception_index) { - case EXCP_UDEF: - addr = 0x04; - break; - case EXCP_SWI: - addr = 0x14; - break; - case EXCP_BKPT: - /* Fall through to prefetch abort. */ - case EXCP_PREFETCH_ABORT: - env->cp15.ifar_s = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n", - (uint32_t)env->exception.vaddress); - addr = 0x0c; - break; - case EXCP_DATA_ABORT: - env->cp15.dfar_s = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n", - (uint32_t)env->exception.vaddress); - addr = 0x10; - break; - case EXCP_IRQ: - addr = 0x18; - break; - case EXCP_FIQ: - addr = 0x1c; - break; - case EXCP_HVC: - addr = 0x08; - break; - case EXCP_HYP_TRAP: - addr = 0x14; - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - } - - if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) { - if (!arm_feature(env, ARM_FEATURE_V8)) { - /* - * QEMU syndrome values are v8-style. v7 has the IL bit - * UNK/SBZP for "field not valid" cases, where v8 uses RES1. - * If this is a v7 CPU, squash the IL bit in those cases. - */ - if (cs->exception_index == EXCP_PREFETCH_ABORT || - (cs->exception_index == EXCP_DATA_ABORT && - !(env->exception.syndrome & ARM_EL_ISV)) || - syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) { - env->exception.syndrome &= ~ARM_EL_IL; - } - } - env->cp15.esr_el[2] = env->exception.syndrome; - } - - if (arm_current_el(env) != 2 && addr < 0x14) { - addr = 0x14; - } - - mask = 0; - if (!(env->cp15.scr_el3 & SCR_EA)) { - mask |= CPSR_A; - } - if (!(env->cp15.scr_el3 & SCR_IRQ)) { - mask |= CPSR_I; - } - if (!(env->cp15.scr_el3 & SCR_FIQ)) { - mask |= CPSR_F; - } - - addr += env->cp15.hvbar; - - take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr); -} - -static void arm_cpu_do_interrupt_aarch32(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - uint32_t addr; - uint32_t mask; - int new_mode; - uint32_t offset; - uint32_t moe; - - /* If this is a debug exception we must update the DBGDSCR.MOE bits */ - switch (syn_get_ec(env->exception.syndrome)) { - case EC_BREAKPOINT: - case EC_BREAKPOINT_SAME_EL: - moe = 1; - break; - case EC_WATCHPOINT: - case EC_WATCHPOINT_SAME_EL: - moe = 10; - break; - case EC_AA32_BKPT: - moe = 3; - break; - case EC_VECTORCATCH: - moe = 5; - break; - default: - moe = 0; - break; - } - - if (moe) { - env->cp15.mdscr_el1 = deposit64(env->cp15.mdscr_el1, 2, 4, moe); - } - - if (env->exception.target_el == 2) { - arm_cpu_do_interrupt_aarch32_hyp(cs); - return; - } - - switch (cs->exception_index) { - case EXCP_UDEF: - new_mode = ARM_CPU_MODE_UND; - addr = 0x04; - mask = CPSR_I; - if (env->thumb) { - offset = 2; - } else { - offset = 4; - } - break; - case EXCP_SWI: - new_mode = ARM_CPU_MODE_SVC; - addr = 0x08; - mask = CPSR_I; - /* The PC already points to the next instruction. */ - offset = 0; - break; - case EXCP_BKPT: - /* Fall through to prefetch abort. 
*/ - case EXCP_PREFETCH_ABORT: - A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr); - A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress); - qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n", - env->exception.fsr, (uint32_t)env->exception.vaddress); - new_mode = ARM_CPU_MODE_ABT; - addr = 0x0c; - mask = CPSR_A | CPSR_I; - offset = 4; - break; - case EXCP_DATA_ABORT: - A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr); - A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress); - qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n", - env->exception.fsr, - (uint32_t)env->exception.vaddress); - new_mode = ARM_CPU_MODE_ABT; - addr = 0x10; - mask = CPSR_A | CPSR_I; - offset = 8; - break; - case EXCP_IRQ: - new_mode = ARM_CPU_MODE_IRQ; - addr = 0x18; - /* Disable IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I; - offset = 4; - if (env->cp15.scr_el3 & SCR_IRQ) { - /* IRQ routed to monitor mode */ - new_mode = ARM_CPU_MODE_MON; - mask |= CPSR_F; - } - break; - case EXCP_FIQ: - new_mode = ARM_CPU_MODE_FIQ; - addr = 0x1c; - /* Disable FIQ, IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I | CPSR_F; - if (env->cp15.scr_el3 & SCR_FIQ) { - /* FIQ routed to monitor mode */ - new_mode = ARM_CPU_MODE_MON; - } - offset = 4; - break; - case EXCP_VIRQ: - new_mode = ARM_CPU_MODE_IRQ; - addr = 0x18; - /* Disable IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I; - offset = 4; - break; - case EXCP_VFIQ: - new_mode = ARM_CPU_MODE_FIQ; - addr = 0x1c; - /* Disable FIQ, IRQ and imprecise data aborts. */ - mask = CPSR_A | CPSR_I | CPSR_F; - offset = 4; - break; - case EXCP_SMC: - new_mode = ARM_CPU_MODE_MON; - addr = 0x08; - mask = CPSR_A | CPSR_I | CPSR_F; - offset = 0; - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - return; /* Never happens. Keep compiler happy. */ - } - - if (new_mode == ARM_CPU_MODE_MON) { - addr += env->cp15.mvbar; - } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) { - /* High vectors. When enabled, base address cannot be remapped. */ - addr += 0xffff0000; - } else { - /* - * ARM v7 architectures provide a vector base address register to remap - * the interrupt vector table. - * This register is only followed in non-monitor mode, and is banked. - * Note: only bits 31:5 are valid. - */ - addr += A32_BANKED_CURRENT_REG_GET(env, vbar); - } - - if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON) { - env->cp15.scr_el3 &= ~SCR_NS; - } - - take_aarch32_exception(env, new_mode, mask, offset, addr); -} - -static int aarch64_regnum(CPUARMState *env, int aarch32_reg) -{ - /* - * Return the register number of the AArch64 view of the AArch32 - * register @aarch32_reg. The CPUARMState CPSR is assumed to still - * be that of the AArch32 mode the exception came from. - */ - int mode = env->uncached_cpsr & CPSR_M; - - switch (aarch32_reg) { - case 0 ... 7: - return aarch32_reg; - case 8 ... 12: - return mode == ARM_CPU_MODE_FIQ ? 
aarch32_reg + 16 : aarch32_reg; - case 13: - switch (mode) { - case ARM_CPU_MODE_USR: - case ARM_CPU_MODE_SYS: - return 13; - case ARM_CPU_MODE_HYP: - return 15; - case ARM_CPU_MODE_IRQ: - return 17; - case ARM_CPU_MODE_SVC: - return 19; - case ARM_CPU_MODE_ABT: - return 21; - case ARM_CPU_MODE_UND: - return 23; - case ARM_CPU_MODE_FIQ: - return 29; - default: - g_assert_not_reached(); - } - case 14: - switch (mode) { - case ARM_CPU_MODE_USR: - case ARM_CPU_MODE_SYS: - case ARM_CPU_MODE_HYP: - return 14; - case ARM_CPU_MODE_IRQ: - return 16; - case ARM_CPU_MODE_SVC: - return 18; - case ARM_CPU_MODE_ABT: - return 20; - case ARM_CPU_MODE_UND: - return 22; - case ARM_CPU_MODE_FIQ: - return 30; - default: - g_assert_not_reached(); - } - case 15: - return 31; - default: - g_assert_not_reached(); - } -} - -static uint32_t cpsr_read_for_spsr_elx(CPUARMState *env) -{ - uint32_t ret = cpsr_read(env); - - /* Move DIT to the correct location for SPSR_ELx */ - if (ret & CPSR_DIT) { - ret &= ~CPSR_DIT; - ret |= PSTATE_DIT; - } - /* Merge PSTATE.SS into SPSR_ELx */ - ret |= env->pstate & PSTATE_SS; - - return ret; -} - -/* Handle exception entry to a target EL which is using AArch64 */ -static void arm_cpu_do_interrupt_aarch64(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - unsigned int new_el = env->exception.target_el; - target_ulong addr = env->cp15.vbar_el[new_el]; - unsigned int new_mode = aarch64_pstate_mode(new_el, true); - unsigned int old_mode; - unsigned int cur_el = arm_current_el(env); - int rt; - - if (tcg_enabled()) { - /* - * Note that new_el can never be 0. If cur_el is 0, then - * el0_a64 is is_a64(), else el0_a64 is ignored. - */ - aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); - } - - if (cur_el < new_el) { - /* - * Entry vector offset depends on whether the implemented EL - * immediately lower than the target level is using AArch32 or AArch64 - */ - bool is_aa64; - uint64_t hcr; - - switch (new_el) { - case 3: - is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0; - break; - case 2: - hcr = arm_hcr_el2_eff(env); - if ((hcr & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) { - is_aa64 = (hcr & HCR_RW) != 0; - break; - } - /* fall through */ - case 1: - is_aa64 = is_a64(env); - break; - default: - g_assert_not_reached(); - } - - if (is_aa64) { - addr += 0x400; - } else { - addr += 0x600; - } - } else if (pstate_read(env) & PSTATE_SP) { - addr += 0x200; - } - - switch (cs->exception_index) { - case EXCP_PREFETCH_ABORT: - case EXCP_DATA_ABORT: - env->cp15.far_el[new_el] = env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n", - env->cp15.far_el[new_el]); - /* fall through */ - case EXCP_BKPT: - case EXCP_UDEF: - case EXCP_SWI: - case EXCP_HVC: - case EXCP_HYP_TRAP: - case EXCP_SMC: - switch (syn_get_ec(env->exception.syndrome)) { - case EC_ADVSIMDFPACCESSTRAP: - /* - * QEMU internal FP/SIMD syndromes from AArch32 include the - * TA and coproc fields which are only exposed if the exception - * is taken to AArch32 Hyp mode. Mask them out to get a valid - * AArch64 format syndrome. - */ - env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20); - break; - case EC_CP14RTTRAP: - case EC_CP15RTTRAP: - case EC_CP14DTTRAP: - /* - * For a trap on AArch32 MRC/MCR/LDC/STC the Rt field is currently - * the raw register field from the insn; when taking this to - * AArch64 we must convert it to the AArch64 view of the register - * number. Notice that we read a 4-bit AArch32 register number and - * write back a 5-bit AArch64 one. 
- */ - rt = extract32(env->exception.syndrome, 5, 4); - rt = aarch64_regnum(env, rt); - env->exception.syndrome = deposit32(env->exception.syndrome, - 5, 5, rt); - break; - case EC_CP15RRTTRAP: - case EC_CP14RRTTRAP: - /* Similarly for MRRC/MCRR traps for Rt and Rt2 fields */ - rt = extract32(env->exception.syndrome, 5, 4); - rt = aarch64_regnum(env, rt); - env->exception.syndrome = deposit32(env->exception.syndrome, - 5, 5, rt); - rt = extract32(env->exception.syndrome, 10, 4); - rt = aarch64_regnum(env, rt); - env->exception.syndrome = deposit32(env->exception.syndrome, - 10, 5, rt); - break; - } - env->cp15.esr_el[new_el] = env->exception.syndrome; - break; - case EXCP_IRQ: - case EXCP_VIRQ: - addr += 0x80; - break; - case EXCP_FIQ: - case EXCP_VFIQ: - addr += 0x100; - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - } - - if (is_a64(env)) { - old_mode = pstate_read(env); - aarch64_save_sp(env, arm_current_el(env)); - env->elr_el[new_el] = env->pc; - } else { - old_mode = cpsr_read_for_spsr_elx(env); - env->elr_el[new_el] = env->regs[15]; - - aarch64_sync_32_to_64(env); - - env->condexec_bits = 0; - } - env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode; - - qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n", - env->elr_el[new_el]); - - if (cpu_isar_feature(aa64_pan, cpu)) { - /* The value of PSTATE.PAN is normally preserved, except when ... */ - new_mode |= old_mode & PSTATE_PAN; - switch (new_el) { - case 2: - /* ... the target is EL2 with HCR_EL2.{E2H,TGE} == '11' ... */ - if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) - != (HCR_E2H | HCR_TGE)) { - break; - } - /* fall through */ - case 1: - /* ... the target is EL1 ... */ - /* ... and SCTLR_ELx.SPAN == 0, then set to 1. */ - if ((env->cp15.sctlr_el[new_el] & SCTLR_SPAN) == 0) { - new_mode |= PSTATE_PAN; - } - break; - } - } - if (cpu_isar_feature(aa64_mte, cpu)) { - new_mode |= PSTATE_TCO; - } - - pstate_write(env, PSTATE_DAIF | new_mode); - env->aarch64 = 1; - aarch64_restore_sp(env, new_el); - - if (tcg_enabled()) { - /* pstate already written, so we can use arm_rebuild_hflags here */ - arm_rebuild_hflags(env); - } - - env->pc = addr; - - qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n", - new_el, env->pc, pstate_read(env)); -} - -void arm_log_exception(int idx) -{ - if (qemu_loglevel_mask(CPU_LOG_INT)) { - const char *exc = NULL; - static const char * const excnames[] = { - [EXCP_UDEF] = "Undefined Instruction", - [EXCP_SWI] = "SVC", - [EXCP_PREFETCH_ABORT] = "Prefetch Abort", - [EXCP_DATA_ABORT] = "Data Abort", - [EXCP_IRQ] = "IRQ", - [EXCP_FIQ] = "FIQ", - [EXCP_BKPT] = "Breakpoint", - [EXCP_EXCEPTION_EXIT] = "QEMU v7M exception exit", - [EXCP_KERNEL_TRAP] = "QEMU intercept of kernel commpage", - [EXCP_HVC] = "Hypervisor Call", - [EXCP_HYP_TRAP] = "Hypervisor Trap", - [EXCP_SMC] = "Secure Monitor Call", - [EXCP_VIRQ] = "Virtual IRQ", - [EXCP_VFIQ] = "Virtual FIQ", - [EXCP_SEMIHOST] = "Semihosting call", - [EXCP_NOCP] = "v7M NOCP UsageFault", - [EXCP_INVSTATE] = "v7M INVSTATE UsageFault", - [EXCP_STKOF] = "v8M STKOF UsageFault", - [EXCP_LAZYFP] = "v7M exception during lazy FP stacking", - [EXCP_LSERR] = "v8M LSERR UsageFault", - [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault", - }; - - if (idx >= 0 && idx < ARRAY_SIZE(excnames)) { - exc = excnames[idx]; - } - if (!exc) { - exc = "unknown"; - } - qemu_log_mask(CPU_LOG_INT, "Taking exception %d [%s]\n", idx, exc); - } -} - -/* - * Handle a CPU exception for A and R profile CPUs. 
- * Do any appropriate logging, handle PSCI calls, and then hand off - * to the AArch64-entry or AArch32-entry function depending on the - * target exception level's register width. - * - * Note: this is used for both TCG (as the do_interrupt tcg op), - * and KVM to re-inject guest debug exceptions, and to - * inject a Synchronous-External-Abort. - */ -void arm_cpu_do_interrupt(CPUState *cs) -{ - ARMCPU *cpu = ARM_CPU(cs); - CPUARMState *env = &cpu->env; - unsigned int new_el = env->exception.target_el; - - assert(!arm_feature(env, ARM_FEATURE_M)); - - arm_log_exception(cs->exception_index); - qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(env), - new_el); - if (qemu_loglevel_mask(CPU_LOG_INT) - && !excp_is_internal(cs->exception_index)) { - qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n", - syn_get_ec(env->exception.syndrome), - env->exception.syndrome); - } - - if (tcg_enabled()) { - if (arm_is_psci_call(cpu, cs->exception_index)) { - arm_handle_psci_call(cpu); - qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); - return; - } - /* - * Semihosting semantics depend on the register width of the code - * that caused the exception, not the target exception level, so - * must be handled here. - */ - if (cs->exception_index == EXCP_SEMIHOST) { - tcg_handle_semihosting(cs); - return; - } - } - /* - * Hooks may change global state so BQL should be held, also the - * BQL needs to be held for any modification of - * cs->interrupt_request. - */ - g_assert(qemu_mutex_iothread_locked()); - arm_call_pre_el_change_hook(cpu); - - assert(!excp_is_internal(cs->exception_index)); - if (arm_el_is_aa64(env, new_el)) { - arm_cpu_do_interrupt_aarch64(cs); - } else { - arm_cpu_do_interrupt_aarch32(cs); - } - - arm_call_el_change_hook(cpu); - - if (tcg_enabled()) { - cs->interrupt_request |= CPU_INTERRUPT_EXITTB; - } -} diff --git a/target/arm/cpu-user.c b/target/arm/cpu-user.c index 6a1a1fa273..a8e6f28ec6 100644 --- a/target/arm/cpu-user.c +++ b/target/arm/cpu-user.c @@ -12,6 +12,7 @@ #include "qapi/qapi-commands-machine-target.h" #include "qapi/error.h" #include "cpu.h" +#include "cpu-exceptions-aa64.h" #include "internals.h" void switch_mode(CPUARMState *env, int mode) diff --git a/target/arm/meson.build b/target/arm/meson.build index bad5a659a7..8bcd394828 100644 --- a/target/arm/meson.build +++ b/target/arm/meson.build @@ -21,12 +21,17 @@ arm_softmmu_ss = ss.source_set() arm_softmmu_ss.add(files( 'arch_dump.c', 'arm-powerctl.c', + 'cpu-exceptions.c', 'cpu-mmu-sysemu.c', 'cpu-sysemu.c', 'machine.c', 'monitor.c', )) +arm_softmmu_ss.add(when: 'TARGET_AARCH64', if_true: files( + 'cpu-exceptions-aa64.c' +)) + arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files( 'psci.c', )) From patchwork Fri Jun 4 15:52:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454097 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp577477jae; Fri, 4 Jun 2021 09:43:39 -0700 (PDT) X-Google-Smtp-Source: ABdhPJznSaC1Ig4gmmRm6bStLvpLCwmN9PuB3GCUQY2U2A8shiHJYDKz7AqvVD+ZbdHX7CYKT8H8 X-Received: by 2002:a67:87ca:: with SMTP id j193mr3426207vsd.55.1622825019106; Fri, 04 Jun 2021 09:43:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825019; cv=none; d=google.com; s=arc-20160816; b=TxY6MWPx6OF23WYSMe6Jxa5xGBR35ilBs77JC4YXiSC4QekjnnFsXGiHxgL7RIsBPV F8cm4IvagaIazJJu5XEf602knZCaa05htUV5qC6sK58IXwdaspWLXjfAyETVUlUuDffv 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 79/99] target/arm: tcg: restrict ZCR cpregs to TARGET_AARCH64
Date: Fri, 4 Jun 2021 16:52:52 +0100
Message-Id: <20210604155312.15902-80-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>
Cc: Alex Bennée, qemu-arm@nongnu.org, Richard Henderson, Claudio Fontana, Peter Maydell

From: Claudio Fontana

restrict zcr_el1, zcr_el2, zcr_no_el2, zcr_el3 reginfo, and the related SVE functions to TARGET_AARCH64.
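The change below is mechanical: the ZCR reginfo definitions and their registration are bracketed by a TARGET_AARCH64 compile-time guard, so AArch32-only builds neither compile nor register them. As a rough, self-contained sketch of that guard-and-register pattern (the names register_sve_regs, register_cp_regs and have_sve are invented for this illustration and are not QEMU APIs):

#include <stdio.h>
#include <stdbool.h>

/*
 * Stand-in for the per-target configuration; QEMU defines this per
 * build target, here it is hard-coded so the example compiles alone.
 */
#define TARGET_AARCH64 1

#ifdef TARGET_AARCH64
/* Only AArch64-capable targets compile the SVE-style register hook. */
static void register_sve_regs(void)
{
    printf("ZCR-style registers registered\n");
}
#endif /* TARGET_AARCH64 */

static void register_cp_regs(bool have_sve)
{
    /* Registers common to every target would be registered here. */
#ifdef TARGET_AARCH64
    if (have_sve) {
        register_sve_regs();
    }
#else
    (void)have_sve; /* AArch32-only builds never reference the SVE code. */
#endif
}

int main(void)
{
    register_cp_regs(true);
    return 0;
}

In the patch itself the same idea is applied to the zcr_*_reginfo definitions and to the define_one_arm_cp_reg() calls in register_cp_regs_for_features(), as the diff shows.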
Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/tcg/cpregs.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/cpregs.c b/target/arm/tcg/cpregs.c index 8422da4335..56d56f7f81 100644 --- a/target/arm/tcg/cpregs.c +++ b/target/arm/tcg/cpregs.c @@ -5791,6 +5791,8 @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = { REGINFO_SENTINEL }; +#ifdef TARGET_AARCH64 + static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { @@ -5843,6 +5845,8 @@ static const ARMCPRegInfo zcr_el3_reginfo = { .writefn = zcr_write, .raw_writefn = raw_write }; +#endif /* TARGET_AARCH64 */ + static void dbgwvr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { @@ -7572,6 +7576,7 @@ void register_cp_regs_for_features(ARMCPU *cpu) define_arm_cp_regs(cpu, vhe_reginfo); } +#ifdef TARGET_AARCH64 if (cpu_isar_feature(aa64_sve, cpu)) { define_one_arm_cp_reg(cpu, &zcr_el1_reginfo); if (arm_feature(env, ARM_FEATURE_EL2)) { @@ -7584,7 +7589,6 @@ void register_cp_regs_for_features(ARMCPU *cpu) } } -#ifdef TARGET_AARCH64 if (cpu_isar_feature(aa64_pauth, cpu)) { define_arm_cp_regs(cpu, pauth_reginfo); } @@ -7614,7 +7618,7 @@ void register_cp_regs_for_features(ARMCPU *cpu) define_arm_cp_regs(cpu, mte_tco_ro_reginfo); define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo); } -#endif +#endif /* TARGET_AARCH64 */ if (cpu_isar_feature(any_predinv, cpu)) { define_arm_cp_regs(cpu, predinv_reginfo); From patchwork Fri Jun 4 15:52:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454056 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp538077jae; Fri, 4 Jun 2021 08:57:22 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyegCriW54qE/6cveKWvggJcYiIBAMN3nqOy6isZzf+fsbssimGh8fSGACKmCY0Ok0zYwXw X-Received: by 2002:ab0:710f:: with SMTP id x15mr4164718uan.74.1622822241956; Fri, 04 Jun 2021 08:57:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622822241; cv=none; d=google.com; s=arc-20160816; b=xAhBZsURtci/WQq+qVJEyJaLKNgt9YlCXWKqAR4q7jct5bWxX5X7RUeGmFA5HzphYQ 2iCZ/QP6+dovZ8+a8kG4zgfnIQZhtw3YejPzOUy8+a5y+R5BLGXyGeD48LYzutf2XyfG m0kd7rg2obeDIvZzo6kJBWdSHGCbjNxVT//gCyOpJNjptjbAZPgCIc9746JHoWLJqtGB ObQ/GqS0QNFBrB8YNMk88mJrupjzeS1WHGRarUpsFBuxn6+TlxELVtdEia3ReziH7aRT akKk3SlGLuJ6Ou2aP0An9ezKItFkrxJmX6xG8ROc8XViOjqUyPV0AE3qd6msSaVgpdlS sizQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=ZEqovDXWlP3I0jnR9ymxC8JazCNBkqL0mUNiGG3NMBI=; b=viVxVuSuZL8/8mCkljQRKPYVXUcFPwxVO/14+VTZnI2PtKvpySLlpGYKkuuo2nCILH xsTG9AHl9NyAB2xNu3G44YeSLt6Z13CExPMfVuulpJPKOiII+/kjVOCR5sGAnrjhyOyK eSlEsXHun+qpz4DTLOf6q8/UjTD/ud4iPnBh9NJkjWjdcOKL779gC4LiGxfIC8k9TRSR B89gaUKpj0qBkD4kRr4hIpVmgb2yjHWH3IFmbUbotwo5+IsYKFWBENK2vJZNucpuItFU IRe0ODLiG4WqNYwQEh27otEw6WExpgglRmITWYTBHItMo01xJBk6Gv44NLA+JK0HdWM7 zbsg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=lion6Dlw; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 80/99] target/arm: tcg-sve: import narrow_vq and change_el functions
Date: Fri, 4 Jun 2021
16:52:53 +0100 Message-Id: <20210604155312.15902-81-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Richard Henderson , Laurent Vivier , qemu-arm@nongnu.org, Claudio Fontana , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana aarch64_sve_narrow_vq and aarch64_sve_change_el are SVE-related functions only used for TCG, so we can put them in the tcg-sve.c module. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu.h | 7 --- target/arm/tcg/tcg-sve.h | 5 ++ linux-user/syscall.c | 4 ++ target/arm/cpu-exceptions-aa64.c | 1 + target/arm/tcg/cpregs.c | 4 ++ target/arm/tcg/helper-a64.c | 1 + target/arm/tcg/helper.c | 87 -------------------------------- target/arm/tcg/tcg-sve.c | 86 +++++++++++++++++++++++++++++++ 8 files changed, 101 insertions(+), 94 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 8614948543..3edf8bb4ec 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -1056,9 +1056,6 @@ int arm_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs, #ifdef TARGET_AARCH64 int aarch64_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg); int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg); -void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq); -void aarch64_sve_change_el(CPUARMState *env, int old_el, - int new_el, bool el0_a64); static inline bool is_a64(CPUARMState *env) { @@ -1090,10 +1087,6 @@ static inline uint64_t *sve_bswap64(uint64_t *dst, uint64_t *src, int nr) } #else -static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { } -static inline void aarch64_sve_change_el(CPUARMState *env, int o, - int n, bool a) -{ } #define is_a64(env) ((void)env, false) diff --git a/target/arm/tcg/tcg-sve.h b/target/arm/tcg/tcg-sve.h index 4bed809b9a..5855bb4289 100644 --- a/target/arm/tcg/tcg-sve.h +++ b/target/arm/tcg/tcg-sve.h @@ -21,4 +21,9 @@ uint32_t tcg_sve_disable_lens(unsigned long *sve_vq_map, bool tcg_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, Error **errp); +void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq); + +void aarch64_sve_change_el(CPUARMState *env, int old_el, + int new_el, bool el0_a64); + #endif /* TCG_SVE_H */ diff --git a/linux-user/syscall.c b/linux-user/syscall.c index c9f812091c..db4b7b1e46 100644 --- a/linux-user/syscall.c +++ b/linux-user/syscall.c @@ -134,6 +134,10 @@ #include "fd-trans.h" #include "tcg/tcg.h" +#ifdef TARGET_AARCH64 +#include "tcg/tcg-sve.h" +#endif /* TARGET_AARCH64 */ + #ifndef CLONE_IO #define CLONE_IO 0x80000000 /* Clone io context */ #endif diff --git a/target/arm/cpu-exceptions-aa64.c b/target/arm/cpu-exceptions-aa64.c index 7daaba0426..adaf3bab17 100644 --- 
a/target/arm/cpu-exceptions-aa64.c +++ b/target/arm/cpu-exceptions-aa64.c @@ -21,6 +21,7 @@ #include "qemu/osdep.h" #include "qemu/log.h" #include "cpu.h" +#include "tcg/tcg-sve.h" #include "internals.h" #include "sysemu/tcg.h" diff --git a/target/arm/tcg/cpregs.c b/target/arm/tcg/cpregs.c index 56d56f7f81..9d3c9ae841 100644 --- a/target/arm/tcg/cpregs.c +++ b/target/arm/tcg/cpregs.c @@ -16,6 +16,10 @@ #include "cpu-mmu.h" #include "cpregs.h" +#ifdef TARGET_AARCH64 +#include "tcg/tcg-sve.h" +#endif /* TARGET_AARCH64 */ + #define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */ #define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */ diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c index 9cc3b066e2..f261f13b2c 100644 --- a/target/arm/tcg/helper-a64.c +++ b/target/arm/tcg/helper-a64.c @@ -20,6 +20,7 @@ #include "qemu/osdep.h" #include "qemu/units.h" #include "cpu.h" +#include "tcg/tcg-sve.h" #include "exec/gdbstub.h" #include "exec/helper-proto.h" #include "qemu/host-utils.h" diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 7136c82795..edc4b4cb4e 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -1294,90 +1294,3 @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc, *pflags = flags.flags; *cs_base = flags.flags2; } - -#ifdef TARGET_AARCH64 -/* - * The manual says that when SVE is enabled and VQ is widened the - * implementation is allowed to zero the previously inaccessible - * portion of the registers. The corollary to that is that when - * SVE is enabled and VQ is narrowed we are also allowed to zero - * the now inaccessible portion of the registers. - * - * The intent of this is that no predicate bit beyond VQ is ever set. - * Which means that some operations on predicate registers themselves - * may operate on full uint64_t or even unrolled across the maximum - * uint64_t[4]. Performing 4 bits of host arithmetic unconditionally - * may well be cheaper than conditionals to restrict the operation - * to the relevant portion of a uint16_t[16]. - */ -void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) -{ - int i, j; - uint64_t pmask; - - assert(vq >= 1 && vq <= ARM_MAX_VQ); - assert(vq <= env_archcpu(env)->sve_max_vq); - - /* Zap the high bits of the zregs. */ - for (i = 0; i < 32; i++) { - memset(&env->vfp.zregs[i].d[2 * vq], 0, 16 * (ARM_MAX_VQ - vq)); - } - - /* Zap the high bits of the pregs and ffr. */ - pmask = 0; - if (vq & 3) { - pmask = ~(-1ULL << (16 * (vq & 3))); - } - for (j = vq / 4; j < ARM_MAX_VQ / 4; j++) { - for (i = 0; i < 17; ++i) { - env->vfp.pregs[i].p[j] &= pmask; - } - pmask = 0; - } -} - -/* - * Notice a change in SVE vector size when changing EL. - */ -void aarch64_sve_change_el(CPUARMState *env, int old_el, - int new_el, bool el0_a64) -{ - ARMCPU *cpu = env_archcpu(env); - int old_len, new_len; - bool old_a64, new_a64; - - /* Nothing to do if no SVE. */ - if (!cpu_isar_feature(aa64_sve, cpu)) { - return; - } - - /* Nothing to do if FP is disabled in either EL. */ - if (fp_exception_el(env, old_el) || fp_exception_el(env, new_el)) { - return; - } - - /* - * DDI0584A.d sec 3.2: "If SVE instructions are disabled or trapped - * at ELx, or not available because the EL is in AArch32 state, then - * for all purposes other than a direct read, the ZCR_ELx.LEN field - * has an effective value of 0". - * - * Consider EL2 (aa64, vq=4) -> EL0 (aa32) -> EL1 (aa64, vq=0). - * If we ignore aa32 state, we would fail to see the vq4->vq0 transition - * from EL2->EL1. 
Thus we go ahead and narrow when entering aa32 so that - * we already have the correct register contents when encountering the - * vq0->vq0 transition between EL0->EL1. - */ - old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64; - old_len = (old_a64 && !sve_exception_el(env, old_el) - ? sve_zcr_len_for_el(env, old_el) : 0); - new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64; - new_len = (new_a64 && !sve_exception_el(env, new_el) - ? sve_zcr_len_for_el(env, new_el) : 0); - - /* When changing vector length, clear inaccessible state. */ - if (new_len < old_len) { - aarch64_sve_narrow_vq(env, new_len + 1); - } -} -#endif diff --git a/target/arm/tcg/tcg-sve.c b/target/arm/tcg/tcg-sve.c index 99cfde1f41..908d2c2f2c 100644 --- a/target/arm/tcg/tcg-sve.c +++ b/target/arm/tcg/tcg-sve.c @@ -24,6 +24,7 @@ #include "sysemu/tcg.h" #include "cpu-sve.h" #include "tcg-sve.h" +#include "cpu-exceptions-aa64.h" void tcg_sve_enable_lens(unsigned long *sve_vq_map, unsigned long *sve_vq_init, uint32_t max_vq) @@ -79,3 +80,88 @@ bool tcg_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, } return true; } + +/* + * The manual says that when SVE is enabled and VQ is widened the + * implementation is allowed to zero the previously inaccessible + * portion of the registers. The corollary to that is that when + * SVE is enabled and VQ is narrowed we are also allowed to zero + * the now inaccessible portion of the registers. + * + * The intent of this is that no predicate bit beyond VQ is ever set. + * Which means that some operations on predicate registers themselves + * may operate on full uint64_t or even unrolled across the maximum + * uint64_t[4]. Performing 4 bits of host arithmetic unconditionally + * may well be cheaper than conditionals to restrict the operation + * to the relevant portion of a uint16_t[16]. + */ +void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) +{ + int i, j; + uint64_t pmask; + + assert(vq >= 1 && vq <= ARM_MAX_VQ); + assert(vq <= env_archcpu(env)->sve_max_vq); + + /* Zap the high bits of the zregs. */ + for (i = 0; i < 32; i++) { + memset(&env->vfp.zregs[i].d[2 * vq], 0, 16 * (ARM_MAX_VQ - vq)); + } + + /* Zap the high bits of the pregs and ffr. */ + pmask = 0; + if (vq & 3) { + pmask = ~(-1ULL << (16 * (vq & 3))); + } + for (j = vq / 4; j < ARM_MAX_VQ / 4; j++) { + for (i = 0; i < 17; ++i) { + env->vfp.pregs[i].p[j] &= pmask; + } + pmask = 0; + } +} + +/* + * Notice a change in SVE vector size when changing EL. + */ +void aarch64_sve_change_el(CPUARMState *env, int old_el, + int new_el, bool el0_a64) +{ + ARMCPU *cpu = env_archcpu(env); + int old_len, new_len; + bool old_a64, new_a64; + + /* Nothing to do if no SVE. */ + if (!cpu_isar_feature(aa64_sve, cpu)) { + return; + } + + /* Nothing to do if FP is disabled in either EL. */ + if (fp_exception_el(env, old_el) || fp_exception_el(env, new_el)) { + return; + } + + /* + * DDI0584A.d sec 3.2: "If SVE instructions are disabled or trapped + * at ELx, or not available because the EL is in AArch32 state, then + * for all purposes other than a direct read, the ZCR_ELx.LEN field + * has an effective value of 0". + * + * Consider EL2 (aa64, vq=4) -> EL0 (aa32) -> EL1 (aa64, vq=0). + * If we ignore aa32 state, we would fail to see the vq4->vq0 transition + * from EL2->EL1. Thus we go ahead and narrow when entering aa32 so that + * we already have the correct register contents when encountering the + * vq0->vq0 transition between EL0->EL1. + */ + old_a64 = old_el ? 
arm_el_is_aa64(env, old_el) : el0_a64; + old_len = (old_a64 && !sve_exception_el(env, old_el) + ? sve_zcr_len_for_el(env, old_el) : 0); + new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64; + new_len = (new_a64 && !sve_exception_el(env, new_el) + ? sve_zcr_len_for_el(env, new_el) : 0); + + /* When changing vector length, clear inaccessible state. */ + if (new_len < old_len) { + aarch64_sve_narrow_vq(env, new_len + 1); + } +} From patchwork Fri Jun 4 15:52:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454120 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp602506jae; Fri, 4 Jun 2021 10:14:06 -0700 (PDT) X-Google-Smtp-Source: ABdhPJx85P4rfZQyg4UBhT7w0ZStVDXS3Aq6JSvvHhyY0LHtNkEUJ3eVV06XTSWhueQTfkGNGFnD X-Received: by 2002:a67:1485:: with SMTP id 127mr3744779vsu.14.1622826846737; Fri, 04 Jun 2021 10:14:06 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826846; cv=none; d=google.com; s=arc-20160816; b=GtOgItlUBY1bL0I6bW94Tb8czgcII/WiRgAAISS0MWvuyzOCMEknQPnlTVScctmKMx aIkPFbUhbfch1M+dGk1yrwPZP0/wn6r28NhJu15VGnsajmKgBfVoyMpf7b6f6Dxfx8y2 F5bkh5zgEj3dXApLz2+e5NBwVcynhAGbNbo7bQQGhEejGcgBQN6bQSTvyXXePDTb+1Ah hm4Q1eluPbijR3VzL57CPaa2oUkQwNP9nFciDflk+GBxV75XvRCVZud1nCB/A4AjAil/ V5ReRgXEIPMGQHZkFhDX37r6/sBtxwrzASHRVMvUG+fFXkTfxCM8CVyiAyxLUld1Sgy/ ZXOQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=Q8h+n24TrWRUxnaytAqdbGvuFIoi1mcRJzQRuLb50YQ=; b=CHGv5llT3ldfFxLKc0Gmr6vIagJSajx0Bp7uGwtMDJdpY+tRNjFEXCiUiwHsgPZ9/f FRX37yjJyQwfvakY/Ss6tB0O5rmf4Le5DhVhG1dKI08YtnSwY872PjLwoVgfnaHblcpu Yv36W3Xho1dM6X6AbyvO9W1pdut1BvfcIQFjYJtWrTdqpm1mz3Zb+c+nF8BzLiAjKPPC MHod3f/BsELcMKsLR//H1BZoEtK6V8oGQ/5iaeKYjtM2u0XsqRZuE0vGnjJe6poMbStQ cr/jdVISRtvZTFFhdHlJzwwsjtR3OeMta7LI7a4lnsE9p9URfYNeZqKkMF3p2noBtXu9 cIwQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=OVAumHYk; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From patchwork Fri Jun 4 15:52:54 2021
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454120
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 81/99] target/arm: tcg-sve: rename the narrow_vq and change_el functions
Date: Fri, 4 Jun 2021 16:52:54 +0100
Message-Id: <20210604155312.15902-82-alex.bennee@linaro.org>
X-Mailer:
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42d; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Richard Henderson , Laurent Vivier , qemu-arm@nongnu.org, Claudio Fontana , =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana make them canonical for the module name. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/tcg/tcg-sve.h | 6 +++--- linux-user/syscall.c | 2 +- target/arm/cpu-exceptions-aa64.c | 2 +- target/arm/tcg/cpregs.c | 2 +- target/arm/tcg/helper-a64.c | 2 +- target/arm/tcg/tcg-sve.c | 6 +++--- 6 files changed, 10 insertions(+), 10 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/tcg-sve.h b/target/arm/tcg/tcg-sve.h index 5855bb4289..46e42d1139 100644 --- a/target/arm/tcg/tcg-sve.h +++ b/target/arm/tcg/tcg-sve.h @@ -21,9 +21,9 @@ uint32_t tcg_sve_disable_lens(unsigned long *sve_vq_map, bool tcg_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, Error **errp); -void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq); +void tcg_sve_narrow_vq(CPUARMState *env, unsigned vq); -void aarch64_sve_change_el(CPUARMState *env, int old_el, - int new_el, bool el0_a64); +void tcg_sve_change_el(CPUARMState *env, int old_el, + int new_el, bool el0_a64); #endif /* TCG_SVE_H */ diff --git a/linux-user/syscall.c b/linux-user/syscall.c index db4b7b1e46..4cfbe72b21 100644 --- a/linux-user/syscall.c +++ b/linux-user/syscall.c @@ -10877,7 +10877,7 @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1, vq = MIN(vq, cpu->sve_max_vq); if (vq < old_vq) { - aarch64_sve_narrow_vq(env, vq); + tcg_sve_narrow_vq(env, vq); } env->vfp.zcr_el[1] = vq - 1; arm_rebuild_hflags(env); diff --git a/target/arm/cpu-exceptions-aa64.c b/target/arm/cpu-exceptions-aa64.c index adaf3bab17..1a3e1d6458 100644 --- a/target/arm/cpu-exceptions-aa64.c +++ b/target/arm/cpu-exceptions-aa64.c @@ -119,7 +119,7 @@ void arm_cpu_do_interrupt_aarch64(CPUState *cs) * Note that new_el can never be 0. If cur_el is 0, then * el0_a64 is is_a64(), else el0_a64 is ignored. 
*/ - aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); + tcg_sve_change_el(env, cur_el, new_el, is_a64(env)); } if (cur_el < new_el) { diff --git a/target/arm/tcg/cpregs.c b/target/arm/tcg/cpregs.c index 9d3c9ae841..9d4ac66281 100644 --- a/target/arm/tcg/cpregs.c +++ b/target/arm/tcg/cpregs.c @@ -5814,7 +5814,7 @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri, */ new_len = sve_zcr_len_for_el(env, cur_el); if (new_len < old_len) { - aarch64_sve_narrow_vq(env, new_len + 1); + tcg_sve_narrow_vq(env, new_len + 1); } } diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c index f261f13b2c..e169c03c63 100644 --- a/target/arm/tcg/helper-a64.c +++ b/target/arm/tcg/helper-a64.c @@ -1042,7 +1042,7 @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc) * Note that cur_el can never be 0. If new_el is 0, then * el0_a64 is return_to_aa64, else el0_a64 is ignored. */ - aarch64_sve_change_el(env, cur_el, new_el, return_to_aa64); + tcg_sve_change_el(env, cur_el, new_el, return_to_aa64); qemu_mutex_lock_iothread(); arm_call_el_change_hook(env_archcpu(env)); diff --git a/target/arm/tcg/tcg-sve.c b/target/arm/tcg/tcg-sve.c index 908d2c2f2c..25d5a5867c 100644 --- a/target/arm/tcg/tcg-sve.c +++ b/target/arm/tcg/tcg-sve.c @@ -95,7 +95,7 @@ bool tcg_sve_validate_lens(unsigned long *sve_vq_map, uint32_t max_vq, * may well be cheaper than conditionals to restrict the operation * to the relevant portion of a uint16_t[16]. */ -void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) +void tcg_sve_narrow_vq(CPUARMState *env, unsigned vq) { int i, j; uint64_t pmask; @@ -124,7 +124,7 @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) /* * Notice a change in SVE vector size when changing EL. */ -void aarch64_sve_change_el(CPUARMState *env, int old_el, +void tcg_sve_change_el(CPUARMState *env, int old_el, int new_el, bool el0_a64) { ARMCPU *cpu = env_archcpu(env); @@ -162,6 +162,6 @@ void aarch64_sve_change_el(CPUARMState *env, int old_el, /* When changing vector length, clear inaccessible state. 
*/
     if (new_len < old_len) {
-        aarch64_sve_narrow_vq(env, new_len + 1);
+        tcg_sve_narrow_vq(env, new_len + 1);
     }
 }
From patchwork Fri Jun 4 15:52:55 2021
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454089
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 82/99] target/arm: move sve_zcr_len_for_el to TARGET_AARCH64-only cpu-sve
Date: Fri, 4 Jun 2021 16:52:55 +0100
Message-Id: <20210604155312.15902-83-alex.bennee@linaro.org>
X-Mailer:
git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::329; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x329.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana now that we handled the dependency between HELPER(), cpregs defs and functions in tcg/, we can make sve_zcr_len_for_el TARGET_AARCH64-only, and move it to the cpu-sve module. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-sve.h | 3 +++ target/arm/cpu.h | 4 ++-- target/arm/arch_dump.c | 1 + target/arm/cpu-common.c | 43 ----------------------------------------- target/arm/cpu-sve.c | 33 +++++++++++++++++++++++++++++++ target/arm/cpu.c | 4 ++++ target/arm/tcg/cpregs.c | 1 + target/arm/tcg/helper.c | 4 ++++ 8 files changed, 48 insertions(+), 45 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-sve.h b/target/arm/cpu-sve.h index 6ab74b1d8f..1512c56a6b 100644 --- a/target/arm/cpu-sve.h +++ b/target/arm/cpu-sve.h @@ -34,4 +34,7 @@ void cpu_sve_add_props(Object *obj); /* add the CPU SVE properties specific to the "MAX" CPU */ void cpu_sve_add_props_max(Object *obj); +/* return the vector length for EL */ +uint32_t sve_zcr_len_for_el(CPUARMState *env, int el); + #endif /* CPU_SVE_H */ diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 3edf8bb4ec..e9bfb6f575 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -223,7 +223,8 @@ typedef struct ARMPACKey { } ARMPACKey; #else static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { } -#endif + +#endif /* TARGET_AARCH64 */ /* See the commentary above the TBFLAG field definitions. */ typedef struct CPUARMTBFlags { @@ -1097,7 +1098,6 @@ void aarch64_sync_64_to_32(CPUARMState *env); int fp_exception_el(CPUARMState *env, int cur_el); int sve_exception_el(CPUARMState *env, int cur_el); -uint32_t sve_zcr_len_for_el(CPUARMState *env, int el); /* you can call this signal handler from your SIGBUS and SIGSEGV signal handlers to inform the virtual CPU of exceptions. 
non zero diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c index 9cc75a6fda..9b2e76f5a7 100644 --- a/target/arm/arch_dump.c +++ b/target/arm/arch_dump.c @@ -24,6 +24,7 @@ #include "sysemu/dump.h" #ifdef TARGET_AARCH64 +#include "cpu-sve.h" /* struct user_pt_regs from arch/arm64/include/uapi/asm/ptrace.h */ struct aarch64_user_regs { diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c index f4a3780e9e..b7a199a8d6 100644 --- a/target/arm/cpu-common.c +++ b/target/arm/cpu-common.c @@ -301,49 +301,6 @@ uint64_t arm_hcr_el2_eff(CPUARMState *env) return ret; } -/* - * these are AARCH64-only, but due to the chain of dependencies, - * between HELPER prototypes, hflags, cpreg definitions and functions in - * tcg/ etc, it becomes incredibly messy to add what should be here: - * - * #ifdef TARGET_AARCH64 - */ - -static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) -{ - uint32_t end_len; - - end_len = start_len &= 0xf; - if (!test_bit(start_len, cpu->sve_vq_map)) { - end_len = find_last_bit(cpu->sve_vq_map, start_len); - assert(end_len < start_len); - } - return end_len; -} - -/* - * Given that SVE is enabled, return the vector length for EL. - */ -uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) -{ - ARMCPU *cpu = env_archcpu(env); - uint32_t zcr_len = cpu->sve_max_vq - 1; - - if (el <= 1) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]); - } - if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]); - } - if (arm_feature(env, ARM_FEATURE_EL3)) { - zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]); - } - - return sve_zcr_get_valid_len(cpu, zcr_len); -} - -/* #endif TARGET_AARCH64 , see matching comment above */ - uint64_t arm_sctlr(CPUARMState *env, int el) { /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */ diff --git a/target/arm/cpu-sve.c b/target/arm/cpu-sve.c index 24bffbba8b..e8e817e110 100644 --- a/target/arm/cpu-sve.c +++ b/target/arm/cpu-sve.c @@ -288,3 +288,36 @@ void cpu_sve_add_props_max(Object *obj) { object_property_add(obj, "sve-max-vq", "uint32", get_prop_max_vq, set_prop_max_vq, NULL, NULL); } + +static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) +{ + uint32_t end_len; + + end_len = start_len &= 0xf; + if (!test_bit(start_len, cpu->sve_vq_map)) { + end_len = find_last_bit(cpu->sve_vq_map, start_len); + assert(end_len < start_len); + } + return end_len; +} + +/* + * Given that SVE is enabled, return the vector length for EL. 
+ */ +uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) +{ + ARMCPU *cpu = env_archcpu(env); + uint32_t zcr_len = cpu->sve_max_vq - 1; + + if (el <= 1) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]); + } + if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]); + } + if (arm_feature(env, ARM_FEATURE_EL3)) { + zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]); + } + + return sve_zcr_get_valid_len(cpu, zcr_len); +} diff --git a/target/arm/cpu.c b/target/arm/cpu.c index b297d0e6aa..0e41854b92 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -23,7 +23,11 @@ #include "target/arm/idau.h" #include "qapi/error.h" #include "cpu.h" + +#ifdef TARGET_AARCH64 #include "cpu-sve.h" +#endif /* TARGET_AARCH64 */ + #include "cpregs.h" #ifdef CONFIG_TCG diff --git a/target/arm/tcg/cpregs.c b/target/arm/tcg/cpregs.c index 9d4ac66281..c971dc6097 100644 --- a/target/arm/tcg/cpregs.c +++ b/target/arm/tcg/cpregs.c @@ -17,6 +17,7 @@ #include "cpregs.h" #ifdef TARGET_AARCH64 +#include "cpu-sve.h" #include "tcg/tcg-sve.h" #endif /* TARGET_AARCH64 */ diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index edc4b4cb4e..984dae7643 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -18,6 +18,10 @@ #include "cpregs.h" #include "tcg-cpu.h" +#ifdef TARGET_AARCH64 +#include "cpu-sve.h" +#endif /* TARGET_AARCH64 */ + static int vfp_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg) { ARMCPU *cpu = env_archcpu(env); From patchwork Fri Jun 4 15:52:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454114 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp597134jae; Fri, 4 Jun 2021 10:07:55 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxOPkUPtewVuFlUv4nh0ZNQ5hsRHP3EGnJnVmYpoRtTYd5XaJoSXe2erIwZfFkENRqttOrR X-Received: by 2002:aa7:ce03:: with SMTP id d3mr5669107edv.360.1622826475103; Fri, 04 Jun 2021 10:07:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826475; cv=none; d=google.com; s=arc-20160816; b=jPjNE3+shTDmFnBetlBMGLm2CCSFGnStb2XB5yYCkE9/tOh6tntnvGzvWtAg9RJS8R ExYAqPEeiDpAskce9u3MwO3Gqbab+qF63YI6lEULtdmytosiv60VWfjH1qZLiiKeIhjn SscSzhGen9Z+SJJbDmyJUM/x/OHqbn3EqNrRD5kXmXKEn1IOn4qlPmGF4iObKprbEnTB mYRT2BOPMmM6eUKx3gnO23uTabIKX5nCqmKIB1cAxOEpcLVnEBdA81iGHD6tDQxUU4t/ oEX9AdPRAzIxh6759pYKqDDm+mttqIgHsgsvczWBkHiIVTk0tLm43BjQo7pJk38PWTti eD2g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=jq4lq9iuiQPC9xlaM7lJ7USLdiB1vnToi4qLCV27AdQ=; b=v2nXq4YKHDIr8JcsNXsRuMkp0GaRdyx4QX4ddNSv3CKDWTghsEPab6/8M18qdR5atL cpLkDsRV00Y2YVgGJGf+ica58CoGY/Kem6IgvVQmzI6Xr3G/tXmqYpCDPFpCOAhc4jMm Tq2q7g1m/Ym4lHb7WvJna+MVoIfEOfs92/ac06yDmSeEzmATkEiuKOCQaNfHYR7FVEcr 4SD1TnHTuSb+XnW7PvaoyBHDU+7vOpIP9RBpeggQHOJKHOfE73MStlEqtNgI4XKv3JPJ Hw+PuAwiL0BqSKX/S29j8DoCl/asWCMTUxR7QVN2LouAf1JHGJqACk1SGiBVducr6jZI 4BfQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b="CCxB3m4/"; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail 
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 83/99] cpu-sve: rename sve_zcr_len_for_el to cpu_sve_get_zcr_len_for_el
Date:
Fri, 4 Jun 2021 16:52:56 +0100 Message-Id: <20210604155312.15902-84-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::432; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x432.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana use a canonical module prefix followed by the get_zcr_len_for_el() method name. Also rename the static internal auxiliary function, where the module prefix is not necessary. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/cpu-sve.h | 2 +- target/arm/arch_dump.c | 2 +- target/arm/cpu-sve.c | 6 +++--- target/arm/cpu64.c | 2 +- target/arm/tcg/cpregs.c | 4 ++-- target/arm/tcg/helper.c | 4 ++-- target/arm/tcg/tcg-sve.c | 4 ++-- 7 files changed, 12 insertions(+), 12 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-sve.h b/target/arm/cpu-sve.h index 1512c56a6b..c83508ea0a 100644 --- a/target/arm/cpu-sve.h +++ b/target/arm/cpu-sve.h @@ -35,6 +35,6 @@ void cpu_sve_add_props(Object *obj); void cpu_sve_add_props_max(Object *obj); /* return the vector length for EL */ -uint32_t sve_zcr_len_for_el(CPUARMState *env, int el); +uint32_t cpu_sve_get_zcr_len_for_el(CPUARMState *env, int el); #endif /* CPU_SVE_H */ diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c index 9b2e76f5a7..f192c8df97 100644 --- a/target/arm/arch_dump.c +++ b/target/arm/arch_dump.c @@ -168,7 +168,7 @@ static off_t sve_fpcr_offset(uint32_t vq) static uint32_t sve_current_vq(CPUARMState *env) { - return sve_zcr_len_for_el(env, arm_current_el(env)) + 1; + return cpu_sve_get_zcr_len_for_el(env, arm_current_el(env)) + 1; } static size_t sve_size_vq(uint32_t vq) diff --git a/target/arm/cpu-sve.c b/target/arm/cpu-sve.c index e8e817e110..1bc8c0bdb0 100644 --- a/target/arm/cpu-sve.c +++ b/target/arm/cpu-sve.c @@ -289,7 +289,7 @@ void cpu_sve_add_props_max(Object *obj) object_property_add(obj, "sve-max-vq", "uint32", get_prop_max_vq, set_prop_max_vq, NULL, NULL); } -static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) +static uint32_t get_valid_len(ARMCPU *cpu, uint32_t start_len) { uint32_t end_len; @@ -304,7 +304,7 @@ static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len) /* * Given that SVE is enabled, return the vector length for EL. 
*/ -uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) +uint32_t cpu_sve_get_zcr_len_for_el(CPUARMState *env, int el) { ARMCPU *cpu = env_archcpu(env); uint32_t zcr_len = cpu->sve_max_vq - 1; @@ -319,5 +319,5 @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el) zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]); } - return sve_zcr_get_valid_len(cpu, zcr_len); + return get_valid_len(cpu, zcr_len); } diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index 03ed637bdb..67b35feb17 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -549,7 +549,7 @@ static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags) vfp_get_fpcr(env), vfp_get_fpsr(env)); if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) { - int j, zcr_len = sve_zcr_len_for_el(env, el); + int j, zcr_len = cpu_sve_get_zcr_len_for_el(env, el); for (i = 0; i <= FFR_PRED_NUM; i++) { bool eol; diff --git a/target/arm/tcg/cpregs.c b/target/arm/tcg/cpregs.c index c971dc6097..9118f4347c 100644 --- a/target/arm/tcg/cpregs.c +++ b/target/arm/tcg/cpregs.c @@ -5802,7 +5802,7 @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) { int cur_el = arm_current_el(env); - int old_len = sve_zcr_len_for_el(env, cur_el); + int old_len = cpu_sve_get_zcr_len_for_el(env, cur_el); int new_len; /* Bits other than [3:0] are RAZ/WI. */ @@ -5813,7 +5813,7 @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri, * Because we arrived here, we know both FP and SVE are enabled; * otherwise we would have trapped access to the ZCR_ELn register. */ - new_len = sve_zcr_len_for_el(env, cur_el); + new_len = cpu_sve_get_zcr_len_for_el(env, cur_el); if (new_len < old_len) { tcg_sve_narrow_vq(env, new_len + 1); } diff --git a/target/arm/tcg/helper.c b/target/arm/tcg/helper.c index 984dae7643..fff185f422 100644 --- a/target/arm/tcg/helper.c +++ b/target/arm/tcg/helper.c @@ -186,7 +186,7 @@ static int arm_gdb_get_svereg(CPUARMState *env, GByteArray *buf, int reg) * We report in Vector Granules (VG) which is 64bit in a Z reg * while the ZCR works in Vector Quads (VQ) which is 128bit chunks. */ - int vq = sve_zcr_len_for_el(env, arm_current_el(env)) + 1; + int vq = cpu_sve_get_zcr_len_for_el(env, arm_current_el(env)) + 1; return gdb_get_reg64(buf, vq * 2); } default: @@ -1034,7 +1034,7 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el, if (sve_el != 0 && fp_el == 0) { zcr_len = 0; } else { - zcr_len = sve_zcr_len_for_el(env, el); + zcr_len = cpu_sve_get_zcr_len_for_el(env, el); } DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el); DP_TBFLAG_A64(flags, ZCR_LEN, zcr_len); diff --git a/target/arm/tcg/tcg-sve.c b/target/arm/tcg/tcg-sve.c index 25d5a5867c..80a37caf6e 100644 --- a/target/arm/tcg/tcg-sve.c +++ b/target/arm/tcg/tcg-sve.c @@ -155,10 +155,10 @@ void tcg_sve_change_el(CPUARMState *env, int old_el, */ old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64; old_len = (old_a64 && !sve_exception_el(env, old_el) - ? sve_zcr_len_for_el(env, old_el) : 0); + ? cpu_sve_get_zcr_len_for_el(env, old_el) : 0); new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64; new_len = (new_a64 && !sve_exception_el(env, new_el) - ? sve_zcr_len_for_el(env, new_el) : 0); + ? cpu_sve_get_zcr_len_for_el(env, new_el) : 0); /* When changing vector length, clear inaccessible state. 
*/
     if (new_len < old_len) {
From patchwork Fri Jun 4 15:52:57 2021
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454109
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 84/99] target/arm: cpu-common: wrap a64-only check with is_a64
Date: Fri, 4 Jun 2021 16:52:57 +0100
Message-Id: <20210604155312.15902-85-alex.bennee@linaro.org>
X-Mailer: git-send-email
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42b; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana now that is_a64() is just always false when !TARGET_AARCH64, we can just use that instead of introducing a new ifdef. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-common.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) -- 2.20.1 Acked-by: Richard Henderson diff --git a/target/arm/cpu-common.c b/target/arm/cpu-common.c index b7a199a8d6..585223350f 100644 --- a/target/arm/cpu-common.c +++ b/target/arm/cpu-common.c @@ -305,9 +305,13 @@ uint64_t arm_sctlr(CPUARMState *env, int el) { /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */ if (el == 0) { - ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0); - el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0) - ? 2 : 1; + if (is_a64(env)) { + ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0); + el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0) + ? 
2 : 1;
+        } else {
+            el = 1;
+        }
     }
     return env->cp15.sctlr_el[el];
 }
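To see why the new form needs no extra ifdef, here is a minimal standalone sketch (stub types and hypothetical names, not the actual QEMU definitions): when the helper reduces to a compile-time false in an AArch32-only build, the whole mmu_idx branch is dead code and EL0 simply falls back to SCTLR_EL1.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { bool aarch64; } SketchCPUState;   /* stand-in for CPUARMState */

    #ifdef SKETCH_TARGET_AARCH64
    static bool sketch_is_a64(SketchCPUState *env) { return env->aarch64; }
    #else
    static bool sketch_is_a64(SketchCPUState *env) { (void)env; return false; }
    #endif

    int main(void)
    {
        SketchCPUState env = { .aarch64 = false };

        if (sketch_is_a64(&env)) {
            puts("EL0: pick SCTLR_EL1 or SCTLR_EL2 from the MMU index");
        } else {
            puts("EL0: always SCTLR_EL1 (branch folds away when the helper is constant false)");
        }
        return 0;
    }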
From patchwork Fri Jun 4 15:52:58 2021
X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?=
X-Patchwork-Id: 454108
From: =?utf-8?q?Alex_Benn=C3=A9e?=
To: qemu-devel@nongnu.org
Subject: [PATCH v16 85/99] target/arm: cpu-pauth: new module for ARMv8.3 Pointer Authentication
Date: Fri, 4 Jun 2021 16:52:58 +0100
Message-Id:
<20210604155312.15902-86-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::331; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x331.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana Pointer Authentication is an AARCH64-only ARMv8.3 optional extension, whose cpu properties can be separated out in its own module. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu.h | 3 -- target/arm/tcg/cpu-pauth.h | 34 ++++++++++++++++++++ target/arm/cpu.c | 1 + target/arm/cpu64.c | 35 ++------------------- target/arm/tcg/cpu-pauth.c | 63 ++++++++++++++++++++++++++++++++++++++ target/arm/tcg/meson.build | 1 + 6 files changed, 101 insertions(+), 36 deletions(-) create mode 100644 target/arm/tcg/cpu-pauth.h create mode 100644 target/arm/tcg/cpu-pauth.c -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index e9bfb6f575..02e0fe5dbd 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -216,13 +216,10 @@ typedef struct ARMPredicateReg { uint64_t p[DIV_ROUND_UP(2 * ARM_MAX_VQ, 8)] QEMU_ALIGNED(16); } ARMPredicateReg; -void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp); /* In AArch32 mode, PAC keys do not exist at all. */ typedef struct ARMPACKey { uint64_t lo, hi; } ARMPACKey; -#else -static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { } #endif /* TARGET_AARCH64 */ diff --git a/target/arm/tcg/cpu-pauth.h b/target/arm/tcg/cpu-pauth.h new file mode 100644 index 0000000000..af127876fe --- /dev/null +++ b/target/arm/tcg/cpu-pauth.h @@ -0,0 +1,34 @@ +/* + * QEMU AArch64 Pointer Authentication Extensions + * + * Copyright (c) 2013 Linaro Ltd + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#ifndef CPU_PAUTH_H +#define CPU_PAUTH_H + +/* ARMv8.3 pauth is an AARCH64 option, only include this for TARGET_AARCH64 */ + +#include "cpu.h" + +/* called by arm_cpu_finalize_features in realizefn */ +void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp); + +/* add the CPU Pointer Authentication properties */ +void cpu_pauth_add_props(Object *obj); + +#endif /* CPU_PAUTH_H */ diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 0e41854b92..5359331bff 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -33,6 +33,7 @@ #ifdef CONFIG_TCG #include "tcg/tcg-cpu.h" #endif /* CONFIG_TCG */ +#include "tcg/cpu-pauth.h" #include "cpu32.h" #include "exec/exec-all.h" #include "hw/qdev-properties.h" diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index 67b35feb17..fefb6954fc 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -24,6 +24,7 @@ #include "cpu.h" #include "cpu32.h" #include "cpu-sve.h" +#include "tcg/cpu-pauth.h" #include "qemu/module.h" #include "sysemu/tcg.h" #include "sysemu/kvm.h" @@ -246,36 +247,6 @@ static void aarch64_a72_initfn(Object *obj) define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo); } -void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) -{ - int arch_val = 0, impdef_val = 0; - uint64_t t; - - /* TODO: Handle HaveEnhancedPAC, HaveEnhancedPAC2, HaveFPAC. */ - if (cpu->prop_pauth) { - if (cpu->prop_pauth_impdef) { - impdef_val = 1; - } else { - arch_val = 1; - } - } else if (cpu->prop_pauth_impdef) { - error_setg(errp, "cannot enable pauth-impdef without pauth"); - error_append_hint(errp, "Add pauth=on to the CPU property list.\n"); - } - - t = cpu->isar.id_aa64isar1; - t = FIELD_DP64(t, ID_AA64ISAR1, APA, arch_val); - t = FIELD_DP64(t, ID_AA64ISAR1, GPA, arch_val); - t = FIELD_DP64(t, ID_AA64ISAR1, API, impdef_val); - t = FIELD_DP64(t, ID_AA64ISAR1, GPI, impdef_val); - cpu->isar.id_aa64isar1 = t; -} - -static Property arm_cpu_pauth_property = - DEFINE_PROP_BOOL("pauth", ARMCPU, prop_pauth, true); -static Property arm_cpu_pauth_impdef_property = - DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false); - /* -cpu max: if KVM is enabled, like -cpu host (best possible with this host); * otherwise, a CPU with as many features enabled as our emulation supports. * The version of '-cpu max' for qemu-system-arm is defined in cpu.c; @@ -447,9 +418,7 @@ static void aarch64_max_initfn(Object *obj) cpu->dcz_blocksize = 7; /* 512 bytes */ #endif - /* Default to PAUTH on, with the architected algorithm. */ - qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_property); - qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_impdef_property); + cpu_pauth_add_props(obj); } cpu_sve_add_props(obj); diff --git a/target/arm/tcg/cpu-pauth.c b/target/arm/tcg/cpu-pauth.c new file mode 100644 index 0000000000..f821087b14 --- /dev/null +++ b/target/arm/tcg/cpu-pauth.c @@ -0,0 +1,63 @@ +/* + * QEMU AArch64 Pointer Authentication Extensions + * + * Copyright (c) 2012 SUSE LINUX Products GmbH + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see + * + */ + +#include "qemu/osdep.h" +#include "qapi/error.h" +#include "cpu.h" +#include "sysemu/tcg.h" +#include "tcg/cpu-pauth.h" +#include "hw/qdev-properties.h" + +void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) +{ + int arch_val = 0, impdef_val = 0; + uint64_t t; + + /* TODO: Handle HaveEnhancedPAC, HaveEnhancedPAC2, HaveFPAC. */ + if (cpu->prop_pauth) { + if (cpu->prop_pauth_impdef) { + impdef_val = 1; + } else { + arch_val = 1; + } + } else if (cpu->prop_pauth_impdef) { + error_setg(errp, "cannot enable pauth-impdef without pauth"); + error_append_hint(errp, "Add pauth=on to the CPU property list.\n"); + } + + t = cpu->isar.id_aa64isar1; + t = FIELD_DP64(t, ID_AA64ISAR1, APA, arch_val); + t = FIELD_DP64(t, ID_AA64ISAR1, GPA, arch_val); + t = FIELD_DP64(t, ID_AA64ISAR1, API, impdef_val); + t = FIELD_DP64(t, ID_AA64ISAR1, GPI, impdef_val); + cpu->isar.id_aa64isar1 = t; +} + +static Property arm_cpu_pauth_property = + DEFINE_PROP_BOOL("pauth", ARMCPU, prop_pauth, true); +static Property arm_cpu_pauth_impdef_property = + DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false); + +void cpu_pauth_add_props(Object *obj) +{ + /* Default to PAUTH on, with the architected algorithm. */ + qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_property); + qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_impdef_property); +} diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build index c289771e97..646bb5eb25 100644 --- a/target/arm/tcg/meson.build +++ b/target/arm/tcg/meson.build @@ -39,6 +39,7 @@ arm_ss.add(when: 'CONFIG_TCG', if_true: files( )) arm_ss.add(when: ['TARGET_AARCH64','CONFIG_TCG'], if_true: files( + 'cpu-pauth.c', 'translate-a64.c', 'translate-sve.c', 'helper-a64.c', From patchwork Fri Jun 4 15:52:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454071 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp556373jae; Fri, 4 Jun 2021 09:18:01 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyXRoaLN7YSsiqcJRAgoPSOZrZATm4JorT9cNc3EMTgMjqQJvNMfkD6jvF3FXRE4CLDuaru X-Received: by 2002:a17:906:e2d3:: with SMTP id gr19mr4835023ejb.525.1622823481853; Fri, 04 Jun 2021 09:18:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622823481; cv=none; d=google.com; s=arc-20160816; b=bP7pASfuzoWU80sqUDNgb22BC7lDBVmBm6A5qq5GnzWTcyd4Cq1SE23b29ijyHrarB OKuEHgdn8IHxw8DEbTIB7Em/CN2ek65lzebeOuudL/Vpkkj6mtI7rBvB3oWSmtTMwTtx 7Tf4dNq+pPzofmIjwIJO3hdpjmGJTMCNK163MfhKgxIhRZLVN9fR2ICNGJlqKEkNN0au x511nJ7Lm4aqFBOnGKGRrhPo3sjES9UWT+ct4ujazaIbsb28oGNz8DCFRq5K/dZyKXnr 7Bcnj1He8GRUXydl4CZ+VzLd0R8HAInN/UTU6wSOa0ZQs2SitVLr0PkSDZNKuTMsKmzu X0Nw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=OSMTOGHNpqVDrWKBS5w0Ojq5CsCdFRCX6CpyFDgYeFE=; b=e8DBgmzF4/HgXc5udE82YDpFGDvnQcwFk97C9W7d/htqbt+pY/3HjzxoBadZ6wQk6j qqxL7txfny4fb1iqst7xYHPZhCxMI0fGIW6KDVtLc4QVSVGZeOhhwg9VzxYUOGENbHUE tR9ehatu0QBrh5wPMabS84IFx9ItRVbUc16WaIlCLzQ8MPvhvbqHPhKV4zMQ0WBsRrai OVTfTIoJ1eL2mXjDGJZheGDqwC/1yJFT/+NAwY3g9RmyuvOV1Vu6IRIc/PPxwE1KNcQ0 
(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 08:53:31 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 5572F20006; Fri, 4 Jun 2021 16:53:23 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 86/99] target/arm: cpu-pauth: change arm_cpu_pauth_finalize name and sig Date: Fri, 4 Jun 2021 16:52:59 +0100 Message-Id: <20210604155312.15902-87-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::430; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x430.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana make arm_cpu_pauth_finalize return a bool, and make the name canonical for the module (cpu_pauth_finalize). Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/tcg/cpu-pauth.h | 2 +- target/arm/cpu.c | 3 +-- target/arm/tcg/cpu-pauth.c | 5 ++++- 3 files changed, 6 insertions(+), 4 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/tcg/cpu-pauth.h b/target/arm/tcg/cpu-pauth.h index af127876fe..a0ef74dc77 100644 --- a/target/arm/tcg/cpu-pauth.h +++ b/target/arm/tcg/cpu-pauth.h @@ -26,7 +26,7 @@ #include "cpu.h" /* called by arm_cpu_finalize_features in realizefn */ -void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp); +bool cpu_pauth_finalize(ARMCPU *cpu, Error **errp); /* add the CPU Pointer Authentication properties */ void cpu_pauth_add_props(Object *obj); diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 5359331bff..8709c11784 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -837,8 +837,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) * is in use, so the user will not be able to set them. 
*/ if (tcg_enabled()) { - arm_cpu_pauth_finalize(cpu, &local_err); - if (local_err != NULL) { + if (!cpu_pauth_finalize(cpu, &local_err)) { error_propagate(errp, local_err); return; } diff --git a/target/arm/tcg/cpu-pauth.c b/target/arm/tcg/cpu-pauth.c index f821087b14..4f087923ac 100644 --- a/target/arm/tcg/cpu-pauth.c +++ b/target/arm/tcg/cpu-pauth.c @@ -25,8 +25,9 @@ #include "tcg/cpu-pauth.h" #include "hw/qdev-properties.h" -void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) +bool cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { + bool result = true; int arch_val = 0, impdef_val = 0; uint64_t t; @@ -40,6 +41,7 @@ void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) } else if (cpu->prop_pauth_impdef) { error_setg(errp, "cannot enable pauth-impdef without pauth"); error_append_hint(errp, "Add pauth=on to the CPU property list.\n"); + result = false; } t = cpu->isar.id_aa64isar1; @@ -48,6 +50,7 @@ void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) t = FIELD_DP64(t, ID_AA64ISAR1, API, impdef_val); t = FIELD_DP64(t, ID_AA64ISAR1, GPI, impdef_val); cpu->isar.id_aa64isar1 = t; + return result; } static Property arm_cpu_pauth_property = From patchwork Fri Jun 4 15:53:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454060 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp541381jae; Fri, 4 Jun 2021 09:01:42 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwCWI+1aM95pTBLa+0MH5ZPIm4IYAmB+GpO6gSag3/whWRkZ8AgMMPvUZR3Fo078BI8iPp+ X-Received: by 2002:aa7:cad3:: with SMTP id l19mr5510606edt.289.1622822502713; Fri, 04 Jun 2021 09:01:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622822502; cv=none; d=google.com; s=arc-20160816; b=pbPXpV7n6WdNgFLNehDonymJtdHXxaJvcSX9mua7v/2HtVSswQU3ya/EDO1xmB/g4k Ul07CGNk6sBWprEDXMcESafRrfIAmwm+6ak8KoqXJHrKq8+p8biPaUKI4P2UW9ohdmwZ a3c2oLQIhdZKfqPNt7jzy6CQyXWgZ22Hb96vEg91xEA1bFpi7GPplT6pMe8NelMCLnVo hLg/+zq3edFox4J16RSaaoKsuWWuQl45t3yVOpk41HFptqUI3lzwy2QMS9UgKSr+81Mv XcXce9lCguRi5MdYkbD1JrMcq+hub7aqKR/h2YniWU/LXVSyBbam1w08UTek0S8gOAZr L7DA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=kkP+MYbBmAzVDrfIZPpeRjBWZ6tsRXqrVrjsv5uYDeI=; b=UMvIG9cVbLCkIZYKoF0U074r1PRJnbM2rMPPY2L79Lpw3C6zHNYcaM4gbl31RpaQkm KwtYKf+UeGs6k9LBb2kOI3XNlQoF1U749I44kEB47pF4DpUkLSNM47COONuZJum3dYCA wRIoW1Lt1qDYyTxaE7VnqRKdUFk/TF7k2xQLuj8CrgLiaIUHmUdG43PO0ShDrinDrCqw zyVl6gNcy3M/DB38kZddgtxS/naN0cTim1P8bm+UoRbhsJoPFyXIGVUq7yhnOxix95P0 XuvxqUdhZJUtAOjlHFof0y3pgLg/SUi9yB9rys1jr83cmu5m3Y7SZlfnW+wPWXKikizr aRPg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=iJNOkhUH; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
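The signature change is easiest to see from the caller's side: returning a bool lets arm_cpu_finalize_features() branch on the result instead of inspecting the Error pointer, in line with the usual QEMU pattern for functions taking an Error **. A minimal before/after sketch of the caller, reconstructed from the cpu.c hunk above (illustrative, not a verbatim copy of the tree):

    /* before: the caller had to test the Error object explicitly */
    arm_cpu_pauth_finalize(cpu, &local_err);
    if (local_err != NULL) {
        error_propagate(errp, local_err);
        return;
    }

    /* after: the boolean result is the success/failure indication */
    if (!cpu_pauth_finalize(cpu, &local_err)) {
        error_propagate(errp, local_err);
        return;
    }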
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 87/99] target/arm: move arm_cpu_finalize_features into cpu64
Date: Fri, 4 Jun 2021 16:53:00 +0100
Message-Id:
<20210604155312.15902-88-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32d; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x32d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana all the features in arm_cpu_finalize_features are actually TARGET_AARCH64-only now, since KVM is now only supported on 64bit. Therefore move the function to cpu64. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu.c | 36 ++---------------------------------- target/arm/cpu64.c | 34 ++++++++++++++++++++++++++++++++++ target/arm/monitor.c | 4 ++++ 3 files changed, 40 insertions(+), 34 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 8709c11784..0adbf36347 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -820,40 +820,6 @@ static void arm_cpu_finalizefn(Object *obj) #endif } -void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) -{ - Error *local_err = NULL; - -#ifdef TARGET_AARCH64 - if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { - if (!cpu_sve_finalize_features(cpu, &local_err)) { - error_propagate(errp, local_err); - return; - } - - /* - * KVM does not support modifications to this feature. - * We have not registered the cpu properties when KVM - * is in use, so the user will not be able to set them. - */ - if (tcg_enabled()) { - if (!cpu_pauth_finalize(cpu, &local_err)) { - error_propagate(errp, local_err); - return; - } - } - } -#endif /* TARGET_AARCH64 */ - - if (kvm_enabled()) { - kvm_arm_steal_time_finalize(cpu, &local_err); - if (local_err != NULL) { - error_propagate(errp, local_err); - return; - } - } -} - static void arm_cpu_realizefn(DeviceState *dev, Error **errp) { CPUState *cs = CPU(dev); @@ -876,6 +842,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) return; } +#ifdef TARGET_AARCH64 arm_cpu_finalize_features(cpu, &local_err); if (local_err != NULL) { error_propagate(errp, local_err); @@ -892,6 +859,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) "AArch64 CPUs must have both VFP and Neon or neither"); return; } +#endif /* TARGET_AARCH64 */ if (!cpu->has_vfp) { uint64_t t; diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index fefb6954fc..c762f3f07a 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -469,6 +469,40 @@ static gchar *aarch64_gdb_arch_name(CPUState *cs) return g_strdup("aarch64"); } +void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) +{ + Error *local_err = NULL; + +#ifdef TARGET_AARCH64 + if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { + if (!cpu_sve_finalize_features(cpu, &local_err)) { + error_propagate(errp, local_err); + return; + } + + /* + * KVM does not support modifications to this feature. 
+ * We have not registered the cpu properties when KVM + * is in use, so the user will not be able to set them. + */ + if (tcg_enabled()) { + if (!cpu_pauth_finalize(cpu, &local_err)) { + error_propagate(errp, local_err); + return; + } + } + } +#endif /* TARGET_AARCH64 */ + + if (kvm_enabled()) { + kvm_arm_steal_time_finalize(cpu, &local_err); + if (local_err != NULL) { + error_propagate(errp, local_err); + return; + } + } +} + static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags) { ARMCPU *cpu = ARM_CPU(cs); diff --git a/target/arm/monitor.c b/target/arm/monitor.c index 0c72bf7c31..95c1e72cd1 100644 --- a/target/arm/monitor.c +++ b/target/arm/monitor.c @@ -184,9 +184,11 @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type, if (!err) { visit_check_struct(visitor, &err); } +#ifdef TARGET_AARCH64 if (!err) { arm_cpu_finalize_features(ARM_CPU(obj), &err); } +#endif /* TARGET_AARCH64 */ visit_end_struct(visitor, NULL); visit_free(visitor); if (err) { @@ -195,7 +197,9 @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type, return NULL; } } else { +#ifdef TARGET_AARCH64 arm_cpu_finalize_features(ARM_CPU(obj), &error_abort); +#endif /* TARGET_AARCH64 */ } expansion_info = g_new0(CpuModelExpansionInfo, 1); From patchwork Fri Jun 4 15:53:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454084 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp567564jae; Fri, 4 Jun 2021 09:31:20 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxqirCpUvfqwHSwlmGm3j1GOWWOMMqYvuoRZp5hE1pGh9uljNrqOPGaZ5v8AGYKs1XWSyLW X-Received: by 2002:a05:6402:1111:: with SMTP id u17mr5659533edv.87.1622824280227; Fri, 04 Jun 2021 09:31:20 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622824280; cv=none; d=google.com; s=arc-20160816; b=t99EGPli4iFgWGJ/GuDUsdvXjGGDbPHsdW320Xq4Pqjscn8fOKyIK609UblzTwQCDQ HWArTtv9sn7MpeP0vQNuClYOxxKeJsn6acXWcRcq7J8MlOlBSptxN11fP6EbZBgA3sO0 9A4fNSO39k4iGiV444rQHoTDzHYcgmxEVrQXZxrXjT4uwdO4t09xzCKvQM07LptOE5ny DbPDmAX5zfxzUoSIuMrJ1rLkMTJd0gk7sCZ9kE+wfridA217Coih13JcdsTJtuge/ICO VP9OyHn6eP9OMr1aqP2ugkHdsacUFh83+gxb4dFJZfgUeHWne9jWX9t6WBJiSQPx3sxE gZKw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=KFXdupVdeAnSfunCtZQNKO6+//sRzqaD3i4QyYWExWo=; b=OKqfLfTnOxD5W+877tB3I2nI14WCdSQEcd5KGr/7lEESSIveYBBTOdaxsdxiNZcBp1 mhn+GQjsWBaZShoy0+w/3YddlvdwgoAz88JpLkLSnMTJn0XUTNp/c084XWHEbOdWt6PC TqQ1RPNo3K7fTkZ8P8hnOpwa7pjhOpVyx0zCt/v13XwCCtrrfTYP/u2oBfY2z80RRR4N jR629W3a4h7vKrrsRrd0rU/Bi5UJo16BLBxC9L39in/R3Qm2TkFzmiixb9Tc2+Mz0nm0 YfzXbD4w08ZzpCpP7Oz0AV8WJbL2YTE7bxE0S6z+MVvo5pQpH4M0K5xvT3GMy4bG0cku 39bQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=EWIBS1im; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
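One consumer of this code path is the query-cpu-model-expansion QMP command handled in monitor.c, which finalizes the features of a scratch CPU before reporting them. An illustrative request (assumed shape, for an aarch64 binary; the reported property set depends on the accelerator in use):

    { "execute": "query-cpu-model-expansion",
      "arguments": { "type": "full",
                     "model": { "name": "max", "props": { "pauth": true } } } }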
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 88/99] target/arm: cpu64: rename arm_cpu_finalize_features
Date: Fri, 4 Jun 2021 16:53:01 +0100
Message-Id: <20210604155312.15902-89-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::435; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x435.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana also remove the now useless ifdef TARGET_AARCH64 from the function Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-sve.h | 2 +- target/arm/cpu.h | 2 +- target/arm/tcg/cpu-pauth.h | 2 +- target/arm/cpu.c | 2 +- target/arm/cpu64.c | 4 +--- target/arm/monitor.c | 4 ++-- 6 files changed, 7 insertions(+), 9 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu-sve.h b/target/arm/cpu-sve.h index c83508ea0a..85078550bb 100644 --- a/target/arm/cpu-sve.h +++ b/target/arm/cpu-sve.h @@ -25,7 +25,7 @@ #include "cpu.h" -/* called by arm_cpu_finalize_features in realizefn */ +/* called by aarch64_cpu_finalize_features in realizefn */ bool cpu_sve_finalize_features(ARMCPU *cpu, Error **errp); /* add the CPU SVE properties */ diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 02e0fe5dbd..847d3628e9 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -2127,7 +2127,7 @@ static inline int arm_feature(CPUARMState *env, int feature) return (env->features & (1ULL << feature)) != 0; } -void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp); +void aarch64_cpu_finalize_features(ARMCPU *cpu, Error **errp); #if !defined(CONFIG_USER_ONLY) /* Return true if exception levels below EL3 are in secure state, diff --git a/target/arm/tcg/cpu-pauth.h b/target/arm/tcg/cpu-pauth.h index a0ef74dc77..b106b9cefc 100644 --- a/target/arm/tcg/cpu-pauth.h +++ b/target/arm/tcg/cpu-pauth.h @@ -25,7 +25,7 @@ #include "cpu.h" -/* called by arm_cpu_finalize_features in realizefn */ +/* called by aarch64_cpu_finalize_features in realizefn */ bool cpu_pauth_finalize(ARMCPU *cpu, Error **errp); /* add the CPU Pointer Authentication properties */ diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 0adbf36347..fb04d768b5 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -843,7 +843,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) } #ifdef TARGET_AARCH64 - arm_cpu_finalize_features(cpu, &local_err); + aarch64_cpu_finalize_features(cpu, &local_err); if (local_err != NULL) { error_propagate(errp, local_err); return; diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index c762f3f07a..3058e2c273 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -469,11 +469,10 @@ static gchar *aarch64_gdb_arch_name(CPUState *cs) return g_strdup("aarch64"); } -void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) +void aarch64_cpu_finalize_features(ARMCPU *cpu, Error **errp) { Error *local_err = NULL; -#ifdef TARGET_AARCH64 if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { if (!cpu_sve_finalize_features(cpu, &local_err)) { error_propagate(errp, local_err); @@ -492,7 +491,6 @@ void 
arm_cpu_finalize_features(ARMCPU *cpu, Error **errp) } } } -#endif /* TARGET_AARCH64 */ if (kvm_enabled()) { kvm_arm_steal_time_finalize(cpu, &local_err); diff --git a/target/arm/monitor.c b/target/arm/monitor.c index 95c1e72cd1..8a31c4dd04 100644 --- a/target/arm/monitor.c +++ b/target/arm/monitor.c @@ -186,7 +186,7 @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type, } #ifdef TARGET_AARCH64 if (!err) { - arm_cpu_finalize_features(ARM_CPU(obj), &err); + aarch64_cpu_finalize_features(ARM_CPU(obj), &err); } #endif /* TARGET_AARCH64 */ visit_end_struct(visitor, NULL); @@ -198,7 +198,7 @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type, } } else { #ifdef TARGET_AARCH64 - arm_cpu_finalize_features(ARM_CPU(obj), &error_abort); + aarch64_cpu_finalize_features(ARM_CPU(obj), &error_abort); #endif /* TARGET_AARCH64 */ } From patchwork Fri Jun 4 15:53:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454118 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp600810jae; Fri, 4 Jun 2021 10:12:15 -0700 (PDT) X-Google-Smtp-Source: ABdhPJz2EvBWiAIriRN1LbTYk5sg2tmqfkmjkBbOWCQ0iX3Z1hyihVGEO83NNVvaTMm1zPTRwzSw X-Received: by 2002:a05:6122:104f:: with SMTP id z15mr3369319vkn.14.1622826735425; Fri, 04 Jun 2021 10:12:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622826735; cv=none; d=google.com; s=arc-20160816; b=OBq7zieCPxD5EJdGnWuMqeX+hIf92tpC6kK8KIuHakV8XYaiLm5F8sD9ZD8iRVa9Se ifVKVciDCQ7u8aHHwag+5sbtRUUfkCdKyR/wzknfXyYWzbHMYXXkB80xf0VO2v7HSPmL /xh/Nv8Y09Y5EIemJEcgbfvmmIKWTifX5y1knexzMjylp+Aql83sf22SrjEQkh0LqKlW ak5FysXNigqrc+ZDWw010V5VyJwvW0rQ9PC/YSPiZ3N4LBt4zW7NFd88i9UPM7A9XSCx +qcGi40XmsoQqClX3J5DCimMoU88i13bRU7zY0Q19rXZv+IynCZJPy28m7Ln9IX6IqmS 0BMw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=QruW2yuHWroD/WpqnI/5LDTpQWpXeY+42xPYVRJQ+80=; b=hk3GMMszhV/ip+ixNH73ykbNYvRN3fqT0XVm7ghggbkkwZsdNyJxRj/hrPZfd0dNDr zs0BvpTf/rCF/PjlgnlSj1KK+xXzHBP8BW0z9c7c7a8uT0svgwdZYPWKCQ18g2Q/o9PV gdArDqY5T+/UC42oTVUSm63q5ZERlN6OZNUEK2uds8rr+Nrsd/AzCAu0XHfnrQvluYjR y3uWCO0zjBPMxluyCsc8FPWYUBo62l63JkPlCF1m+znESHz6zJtpioYnA0cfSxc4a2RU aDOtpdxL1HVkgL792lw5aoMcsqj58A4NdYkNRJlwktYUq06NNtRi+5Rj3HWJIJIN2gVH OuAg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=vFqXM8X5; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 89/99] target/arm: cpu64: some final cleanup on aarch64_cpu_finalize_features
Date: Fri, 4 Jun 2021 16:53:02 +0100
Message-Id: <20210604155312.15902-90-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::330; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x330.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana bail out immediately if ARM_FEATURE_AARCH64 is not set, and add an else statement when checking for accelerators. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu64.c | 33 ++++++++++++++++----------------- 1 file changed, 16 insertions(+), 17 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index 3058e2c273..ecce8c4308 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -473,26 +473,25 @@ void aarch64_cpu_finalize_features(ARMCPU *cpu, Error **errp) { Error *local_err = NULL; - if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { - if (!cpu_sve_finalize_features(cpu, &local_err)) { + if (!arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { + return; + } + if (!cpu_sve_finalize_features(cpu, &local_err)) { + error_propagate(errp, local_err); + return; + } + + /* + * KVM does not support modifications to this feature. + * We have not registered the cpu properties when KVM + * is in use, so the user will not be able to set them. + */ + if (tcg_enabled()) { + if (!cpu_pauth_finalize(cpu, &local_err)) { error_propagate(errp, local_err); return; } - - /* - * KVM does not support modifications to this feature. - * We have not registered the cpu properties when KVM - * is in use, so the user will not be able to set them. 
- */ - if (tcg_enabled()) { - if (!cpu_pauth_finalize(cpu, &local_err)) { - error_propagate(errp, local_err); - return; - } - } - } - - if (kvm_enabled()) { + } else if (kvm_enabled()) { kvm_arm_steal_time_finalize(cpu, &local_err); if (local_err != NULL) { error_propagate(errp, local_err); From patchwork Fri Jun 4 15:53:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454131 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp608628jae; Fri, 4 Jun 2021 10:22:33 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzgzftEaEmHi/+D55mqmNf5+0B5/onmuuG/T67Go6NJaphXYDtBhYQYuErg+gvnGglqshZR X-Received: by 2002:a1f:9505:: with SMTP id x5mr3362589vkd.6.1622827352499; Fri, 04 Jun 2021 10:22:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622827352; cv=none; d=google.com; s=arc-20160816; b=s3SFDmK6Za/dPApEELgITSXpy2k3dz6PB4PNKBy54S+kCh3nz3Ih58vc7LLhbLvzG0 RDVM1f3g2LysrIMrjk5mdgMUIz7CrV1P9ReWGOQUeCiQbvoA6rPcnemRLPLlv//Fyt3K 1V9RG/IsFKft49NLb5xcyFnfkbNaqJ517GrlQHVPjtX7dAuECL1XwSqlz2v4F2qMNSyT YrZhd/+5359RAfhoI16bJSc+iUHl0boWwnkOcZNBnFDYTIG2iwVJHnhswjIQWGP13A58 oyHun2p5r7ZmPJrJq2efTROl1uNWcKmOTp/c/A8jY7Hq63ogC/AH8H7vj7OAFccoLksN 3JgA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=DAHUANG/vqqFP+x4dLlbQOPag/TtMMrlsBRBzfaoKto=; b=G/W94ft2gG5TRbcX+WBF+GnDP+r4rdNEmCy0BBuPDnua66rtFCv/xBFofBoWbgF6OM Hpme7giXM2867nHblav4MgQR2EMCqFViPo1ssVu2Jt7L6dmhifsl5uOESv57ig4clF4c 0igznyvpBAqJwgmDkUGA4uT1mbL/4Dx2si58b5qopiEELG/VK1sIPIhjUmytWWIuL+69 xNG8hytBsiY9tBvFw/rqWyXivMk0MOT7Vq00l6sEjWqQ0q5le1E0i0P7p/vV576DXoHE 8JW1ObCC4i8taPj/aFXdIzStlaKNUyGzbUjKtU+8DmKRw6c2TvqqIdY2BI5oBTQvjLFg A+kg== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=t3Bo5nS5; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
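Taken together with the two previous patches, the finalize hook now lives in cpu64.c and is considerably flatter. A sketch of the resulting function, reconstructed from the hunks above (illustrative, not a verbatim copy of the tree):

    void aarch64_cpu_finalize_features(ARMCPU *cpu, Error **errp)
    {
        Error *local_err = NULL;

        /* bail out early for CPUs that are not AArch64-capable */
        if (!arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
            return;
        }
        if (!cpu_sve_finalize_features(cpu, &local_err)) {
            error_propagate(errp, local_err);
            return;
        }
        /*
         * KVM does not support modifications to this feature; the
         * properties are not registered when KVM is in use, so the
         * user cannot set them.
         */
        if (tcg_enabled()) {
            if (!cpu_pauth_finalize(cpu, &local_err)) {
                error_propagate(errp, local_err);
                return;
            }
        } else if (kvm_enabled()) {
            kvm_arm_steal_time_finalize(cpu, &local_err);
            if (local_err != NULL) {
                error_propagate(errp, local_err);
                return;
            }
        }
    }

This matches the diff: bail out early for non-AArch64 CPUs, finalize SVE, then finalize pauth under TCG or steal-time under KVM.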
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 90/99] XXX target/arm: experiment refactoring cpu "max"
Date: Fri, 4 Jun 2021 16:53:03 +0100
Message-Id: <20210604155312.15902-91-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana XXX Someone who really understands which properties should be added where should review this attentively. What goes into cpu leaf class initialization? What goes into arm_post_init / accel_cpu? What goes into arm_cpu_finalize_features / aarch64_cpu_finalize_features? Should there be shift of more code into finalize_features? Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu.h | 3 + target/arm/cpu64.c | 175 ++---------------------- target/arm/kvm/kvm-cpu.c | 4 +- target/arm/tcg/tcg-cpu-models.c | 63 +-------- target/arm/tcg/tcg-cpu.c | 228 +++++++++++++++++++++++++++++++- 5 files changed, 241 insertions(+), 232 deletions(-) -- 2.20.1 diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 847d3628e9..daa3e5f8d0 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -1015,6 +1015,9 @@ struct ARMCPU { /* Generic timer counter frequency, in Hz */ uint64_t gt_cntfrq_hz; + + /* MAX features requested via cpu="max" */ + bool max_features; }; unsigned int gt_cntfrq_period_ns(ARMCPU *cpu); diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c index ecce8c4308..9595587ee0 100644 --- a/target/arm/cpu64.c +++ b/target/arm/cpu64.c @@ -247,10 +247,15 @@ static void aarch64_a72_initfn(Object *obj) define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo); } -/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host); - * otherwise, a CPU with as many features enabled as our emulation supports. - * The version of '-cpu max' for qemu-system-arm is defined in cpu.c; - * this only needs to handle 64 bits. +/* + * -cpu max: if KVM is enabled, like -cpu host (best possible with this host), + * plus some "max"-only properties, see f.e. cpu_sve_add_props_max(). + * + * if TCG is enabled, a CPU with as many features enabled as our + * emulation supports. + * + * The version of '-cpu max' for qemu-system-arm is defined in + * tcg/tcg-cpu-models.c, while this version only handles 64bit. */ static void aarch64_max_initfn(Object *obj) { @@ -259,170 +264,12 @@ static void aarch64_max_initfn(Object *obj) if (kvm_enabled()) { kvm_arm_set_cpu_features_from_host(cpu); } else if (tcg_enabled()) { - uint64_t t; - uint32_t u; aarch64_a57_initfn(obj); - - /* - * Reset MIDR so the guest doesn't mistake our 'max' CPU type for a real - * one and try to apply errata workarounds or use impdef features we - * don't provide. 
- * An IMPLEMENTER field of 0 means "reserved for software use"; - * ARCHITECTURE must be 0xf indicating "v7 or later, check ID registers - * to see which features are present"; - * the VARIANT, PARTNUM and REVISION fields are all implementation - * defined and we choose to define PARTNUM just in case guest - * code needs to distinguish this QEMU CPU from other software - * implementations, though this shouldn't be needed. - */ - t = FIELD_DP64(0, MIDR_EL1, IMPLEMENTER, 0); - t = FIELD_DP64(t, MIDR_EL1, ARCHITECTURE, 0xf); - t = FIELD_DP64(t, MIDR_EL1, PARTNUM, 'Q'); - t = FIELD_DP64(t, MIDR_EL1, VARIANT, 0); - t = FIELD_DP64(t, MIDR_EL1, REVISION, 0); - cpu->midr = t; - - t = cpu->isar.id_aa64isar0; - t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */ - t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */ - t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); - t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1); - t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */ - t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */ - t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1); - cpu->isar.id_aa64isar0 = t; - - t = cpu->isar.id_aa64isar1; - t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2); - t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1); - t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1); - t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); - t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); - t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1); - t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */ - t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); - cpu->isar.id_aa64isar1 = t; - - t = cpu->isar.id_aa64pfr0; - t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1); - t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); - t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); - t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); - t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); - cpu->isar.id_aa64pfr0 = t; - - t = cpu->isar.id_aa64pfr1; - t = FIELD_DP64(t, ID_AA64PFR1, BT, 1); - t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2); - /* - * Begin with full support for MTE. This will be downgraded to MTE=0 - * during realize if the board provides no tag memory, much like - * we do for EL2 with the virtualization=on property. 
- */ - t = FIELD_DP64(t, ID_AA64PFR1, MTE, 2); - cpu->isar.id_aa64pfr1 = t; - - t = cpu->isar.id_aa64mmfr0; - t = FIELD_DP64(t, ID_AA64MMFR0, PARANGE, 5); /* PARange: 48 bits */ - cpu->isar.id_aa64mmfr0 = t; - - t = cpu->isar.id_aa64mmfr1; - t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */ - t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); - t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); - t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */ - t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */ - t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */ - cpu->isar.id_aa64mmfr1 = t; - - t = cpu->isar.id_aa64mmfr2; - t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); - t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */ - t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */ - cpu->isar.id_aa64mmfr2 = t; - - t = cpu->isar.id_aa64zfr0; - t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1); - t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */ - t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1); - t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1); - t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1); - t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1); - t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1); - t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1); - cpu->isar.id_aa64zfr0 = t; - - /* Replicate the same data to the 32-bit id registers. */ - u = cpu->isar.id_isar5; - u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */ - u = FIELD_DP32(u, ID_ISAR5, SHA1, 1); - u = FIELD_DP32(u, ID_ISAR5, SHA2, 1); - u = FIELD_DP32(u, ID_ISAR5, CRC32, 1); - u = FIELD_DP32(u, ID_ISAR5, RDM, 1); - u = FIELD_DP32(u, ID_ISAR5, VCMA, 1); - cpu->isar.id_isar5 = u; - - u = cpu->isar.id_isar6; - u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1); - u = FIELD_DP32(u, ID_ISAR6, DP, 1); - u = FIELD_DP32(u, ID_ISAR6, FHM, 1); - u = FIELD_DP32(u, ID_ISAR6, SB, 1); - u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1); - u = FIELD_DP32(u, ID_ISAR6, I8MM, 1); - cpu->isar.id_isar6 = u; - - u = cpu->isar.id_pfr0; - u = FIELD_DP32(u, ID_PFR0, DIT, 1); - cpu->isar.id_pfr0 = u; - - u = cpu->isar.id_pfr2; - u = FIELD_DP32(u, ID_PFR2, SSBS, 1); - cpu->isar.id_pfr2 = u; - - u = cpu->isar.id_mmfr3; - u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */ - cpu->isar.id_mmfr3 = u; - - u = cpu->isar.id_mmfr4; - u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */ - u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */ - u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */ - u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */ - cpu->isar.id_mmfr4 = u; - - t = cpu->isar.id_aa64dfr0; - t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */ - cpu->isar.id_aa64dfr0 = t; - - u = cpu->isar.id_dfr0; - u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */ - cpu->isar.id_dfr0 = u; - - u = cpu->isar.mvfr1; - u = FIELD_DP32(u, MVFR1, FPHP, 3); /* v8.2-FP16 */ - u = FIELD_DP32(u, MVFR1, SIMDHP, 2); /* v8.2-FP16 */ - cpu->isar.mvfr1 = u; - -#ifdef CONFIG_USER_ONLY - /* For usermode -cpu max we can use a larger and more efficient DCZ - * blocksize since we don't have to follow what the hardware does. 
- */ - cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */ - cpu->dcz_blocksize = 7; /* 512 bytes */ -#endif - - cpu_pauth_add_props(obj); } - cpu_sve_add_props(obj); cpu_sve_add_props_max(obj); + + cpu->max_features = true; } static const ARMCPUInfo aarch64_cpus[] = { diff --git a/target/arm/kvm/kvm-cpu.c b/target/arm/kvm/kvm-cpu.c index 09aede9319..1157888f85 100644 --- a/target/arm/kvm/kvm-cpu.c +++ b/target/arm/kvm/kvm-cpu.c @@ -88,9 +88,7 @@ static void host_cpu_instance_init(Object *obj) ARMCPU *cpu = ARM_CPU(obj); kvm_arm_set_cpu_features_from_host(cpu); - if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { - cpu_sve_add_props(obj); - } + cpu_sve_add_props(obj); arm_cpu_post_init(obj); } diff --git a/target/arm/tcg/tcg-cpu-models.c b/target/arm/tcg/tcg-cpu-models.c index 975869f276..1be953ad1a 100644 --- a/target/arm/tcg/tcg-cpu-models.c +++ b/target/arm/tcg/tcg-cpu-models.c @@ -872,68 +872,7 @@ static void arm_max_initfn(Object *obj) ARMCPU *cpu = ARM_CPU(obj); cortex_a15_initfn(obj); - - /* old-style VFP short-vector support */ - cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1); - -#ifdef CONFIG_USER_ONLY - /* - * We don't set these in system emulation mode for the moment, - * since we don't correctly set (all of) the ID registers to - * advertise them. - */ - set_feature(&cpu->env, ARM_FEATURE_V8); - { - uint32_t t; - - t = cpu->isar.id_isar5; - t = FIELD_DP32(t, ID_ISAR5, AES, 2); - t = FIELD_DP32(t, ID_ISAR5, SHA1, 1); - t = FIELD_DP32(t, ID_ISAR5, SHA2, 1); - t = FIELD_DP32(t, ID_ISAR5, CRC32, 1); - t = FIELD_DP32(t, ID_ISAR5, RDM, 1); - t = FIELD_DP32(t, ID_ISAR5, VCMA, 1); - cpu->isar.id_isar5 = t; - - t = cpu->isar.id_isar6; - t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1); - t = FIELD_DP32(t, ID_ISAR6, DP, 1); - t = FIELD_DP32(t, ID_ISAR6, FHM, 1); - t = FIELD_DP32(t, ID_ISAR6, SB, 1); - t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1); - t = FIELD_DP32(t, ID_ISAR6, I8MM, 1); - cpu->isar.id_isar6 = t; - - t = cpu->isar.mvfr1; - t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */ - t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */ - cpu->isar.mvfr1 = t; - - t = cpu->isar.mvfr2; - t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */ - t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */ - cpu->isar.mvfr2 = t; - - t = cpu->isar.id_mmfr3; - t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */ - cpu->isar.id_mmfr3 = t; - - t = cpu->isar.id_mmfr4; - t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */ - t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */ - t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */ - t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */ - cpu->isar.id_mmfr4 = t; - - t = cpu->isar.id_pfr0; - t = FIELD_DP32(t, ID_PFR0, DIT, 1); - cpu->isar.id_pfr0 = t; - - t = cpu->isar.id_pfr2; - t = FIELD_DP32(t, ID_PFR2, SSBS, 1); - cpu->isar.id_pfr2 = t; - } -#endif /* CONFIG_USER_ONLY */ + cpu->max_features = true; } #endif /* !TARGET_AARCH64 */ diff --git a/target/arm/tcg/tcg-cpu.c b/target/arm/tcg/tcg-cpu.c index db677bc71c..675f36be27 100644 --- a/target/arm/tcg/tcg-cpu.c +++ b/target/arm/tcg/tcg-cpu.c @@ -26,6 +26,10 @@ #include "internals.h" #include "exec/exec-all.h" +#ifdef TARGET_AARCH64 +#include "tcg/cpu-pauth.h" +#endif + void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb) { @@ -228,16 +232,234 @@ static struct TCGCPUOps arm_tcg_ops = { #endif /* !CONFIG_USER_ONLY */ }; -static void tcg_cpu_instance_init(CPUState *cs) +#ifdef TARGET_AARCH64 +static void tcg_cpu_max_instance_init(CPUState *cs) { + uint64_t t; + 
uint32_t u; + Object *obj = OBJECT(cs); ARMCPU *cpu = ARM_CPU(cs); /* - * this would be the place to move TCG-specific props - * in future refactoring of cpu properties. + * Reset MIDR so the guest doesn't mistake our 'max' CPU type for a real + * one and try to apply errata workarounds or use impdef features we + * don't provide. + * An IMPLEMENTER field of 0 means "reserved for software use"; + * ARCHITECTURE must be 0xf indicating "v7 or later, check ID registers + * to see which features are present"; + * the VARIANT, PARTNUM and REVISION fields are all implementation + * defined and we choose to define PARTNUM just in case guest + * code needs to distinguish this QEMU CPU from other software + * implementations, though this shouldn't be needed. + */ + t = FIELD_DP64(0, MIDR_EL1, IMPLEMENTER, 0); + t = FIELD_DP64(t, MIDR_EL1, ARCHITECTURE, 0xf); + t = FIELD_DP64(t, MIDR_EL1, PARTNUM, 'Q'); + t = FIELD_DP64(t, MIDR_EL1, VARIANT, 0); + t = FIELD_DP64(t, MIDR_EL1, REVISION, 0); + cpu->midr = t; + + t = cpu->isar.id_aa64isar0; + t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */ + t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */ + t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); + t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1); + t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */ + t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1); + cpu->isar.id_aa64isar0 = t; + + t = cpu->isar.id_aa64isar1; + t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2); + t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1); + t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1); + t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); + t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); + t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1); + t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */ + cpu->isar.id_aa64isar1 = t; + + t = cpu->isar.id_aa64pfr0; + t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1); + t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); + t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); + t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); + t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); + cpu->isar.id_aa64pfr0 = t; + + t = cpu->isar.id_aa64pfr1; + t = FIELD_DP64(t, ID_AA64PFR1, BT, 1); + t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2); + /* + * Begin with full support for MTE. This will be downgraded to MTE=0 + * during realize if the board provides no tag memory, much like + * we do for EL2 with the virtualization=on property. + */ + t = FIELD_DP64(t, ID_AA64PFR1, MTE, 2); + cpu->isar.id_aa64pfr1 = t; + + t = cpu->isar.id_aa64mmfr0; + t = FIELD_DP64(t, ID_AA64MMFR0, PARANGE, 5); /* PARange: 48 bits */ + cpu->isar.id_aa64mmfr0 = t; + + t = cpu->isar.id_aa64mmfr1; + t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */ + t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); + t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); + t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */ + t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */ + t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */ + cpu->isar.id_aa64mmfr1 = t; + + t = cpu->isar.id_aa64mmfr2; + t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); + t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */ + t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */ + cpu->isar.id_aa64mmfr2 = t; + + /* Replicate the same data to the 32-bit id registers. 
*/ + u = cpu->isar.id_isar5; + u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */ + u = FIELD_DP32(u, ID_ISAR5, SHA1, 1); + u = FIELD_DP32(u, ID_ISAR5, SHA2, 1); + u = FIELD_DP32(u, ID_ISAR5, CRC32, 1); + u = FIELD_DP32(u, ID_ISAR5, RDM, 1); + u = FIELD_DP32(u, ID_ISAR5, VCMA, 1); + cpu->isar.id_isar5 = u; + + u = cpu->isar.id_isar6; + u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1); + u = FIELD_DP32(u, ID_ISAR6, DP, 1); + u = FIELD_DP32(u, ID_ISAR6, FHM, 1); + u = FIELD_DP32(u, ID_ISAR6, SB, 1); + u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1); + cpu->isar.id_isar6 = u; + + u = cpu->isar.id_pfr0; + u = FIELD_DP32(u, ID_PFR0, DIT, 1); + cpu->isar.id_pfr0 = u; + + u = cpu->isar.id_pfr2; + u = FIELD_DP32(u, ID_PFR2, SSBS, 1); + cpu->isar.id_pfr2 = u; + + u = cpu->isar.id_mmfr3; + u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */ + cpu->isar.id_mmfr3 = u; + + u = cpu->isar.id_mmfr4; + u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */ + u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */ + u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */ + u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */ + cpu->isar.id_mmfr4 = u; + + t = cpu->isar.id_aa64dfr0; + t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */ + cpu->isar.id_aa64dfr0 = t; + + u = cpu->isar.id_dfr0; + u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */ + cpu->isar.id_dfr0 = u; + + u = cpu->isar.mvfr1; + u = FIELD_DP32(u, MVFR1, FPHP, 3); /* v8.2-FP16 */ + u = FIELD_DP32(u, MVFR1, SIMDHP, 2); /* v8.2-FP16 */ + cpu->isar.mvfr1 = u; + +#ifdef CONFIG_USER_ONLY + /* + * For usermode -cpu max we can use a larger and more efficient DCZ + * blocksize since we don't have to follow what the hardware does. */ + cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */ + cpu->dcz_blocksize = 7; /* 512 bytes */ +#endif + cpu_pauth_add_props(obj); +} + +#else /* !TARGET_AARCH64 */ +static void tcg_cpu_max_instance_init(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); + + /* old-style VFP short-vector support */ + cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1); + +#ifdef CONFIG_USER_ONLY + /* + * We don't set these in system emulation mode for the moment, + * since we don't correctly set (all of) the ID registers to + * advertise them. 
+ */ + set_feature(&cpu->env, ARM_FEATURE_V8); + { + uint32_t t; + + t = cpu->isar.id_isar5; + t = FIELD_DP32(t, ID_ISAR5, AES, 2); + t = FIELD_DP32(t, ID_ISAR5, SHA1, 1); + t = FIELD_DP32(t, ID_ISAR5, SHA2, 1); + t = FIELD_DP32(t, ID_ISAR5, CRC32, 1); + t = FIELD_DP32(t, ID_ISAR5, RDM, 1); + t = FIELD_DP32(t, ID_ISAR5, VCMA, 1); + cpu->isar.id_isar5 = t; + + t = cpu->isar.id_isar6; + t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1); + t = FIELD_DP32(t, ID_ISAR6, DP, 1); + t = FIELD_DP32(t, ID_ISAR6, FHM, 1); + t = FIELD_DP32(t, ID_ISAR6, SB, 1); + t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1); + cpu->isar.id_isar6 = t; + + t = cpu->isar.mvfr1; + t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */ + t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */ + cpu->isar.mvfr1 = t; + + t = cpu->isar.mvfr2; + t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */ + t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */ + cpu->isar.mvfr2 = t; + + t = cpu->isar.id_mmfr3; + t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */ + cpu->isar.id_mmfr3 = t; + + t = cpu->isar.id_mmfr4; + t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */ + t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */ + t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */ + t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */ + cpu->isar.id_mmfr4 = t; + + t = cpu->isar.id_pfr0; + t = FIELD_DP32(t, ID_PFR0, DIT, 1); + cpu->isar.id_pfr0 = t; + + t = cpu->isar.id_pfr2; + t = FIELD_DP32(t, ID_PFR2, SSBS, 1); + cpu->isar.id_pfr2 = t; + } +#endif /* CONFIG_USER_ONLY */ +} +#endif /* TARGET_AARCH64 */ + +static void tcg_cpu_instance_init(CPUState *cs) +{ + ARMCPU *cpu = ARM_CPU(cs); cpu->psci_version = 2; /* TCG implements PSCI 0.2 */ + if (cpu->max_features) { + tcg_cpu_max_instance_init(cs); + } } static void tcg_cpu_reset(CPUState *cs) From patchwork Fri Jun 4 15:53:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454134 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp611462jae; Fri, 4 Jun 2021 10:26:42 -0700 (PDT) X-Google-Smtp-Source: ABdhPJw+h7kwESdW982aigNeXwwz/NU0z046S+WMWiGxzFyNaVPzBP8DptXDRD0FWuSASXA8+jEl X-Received: by 2002:a02:b808:: with SMTP id o8mr5045018jam.1.1622827602176; Fri, 04 Jun 2021 10:26:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622827602; cv=none; d=google.com; s=arc-20160816; b=THJcfrgF1CVRPBWZoxE3FgeyRte8gTlkIXcDBlB81HEvg+anQJJsXHyxdvJnh4odF2 5tIVNl8M/kxjD7wXyjYr9qJWB5UFpg7mhh4sO1b7ih7O4J3FJrr+RFCbPrRaT6mmqUud RmzYIykRIGiGBDZFCWfJQUOdZ3TI+a9pFDWtQtDFHOTGKs1b4Je9q2xat61VPg/9GfsM kuEaMJ8Rhy8BwJYgZQeuV/OOk+vLiWKvkWt2jvBE5Stb/6Y9M4KrlQ0e06oyuqcvY7lu nSoJbmPItG/n529mcnWSHuyH6Y9ptxmp0EgEVzSnjappcI9hHWqh3gGjzrsCsPA5hvxi +KHQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=xfsi0lTAgR9azxMfpWXQUQ//2Zf5ZUMfbhMWxCScq+0=; b=WKPBV3RFo0mKSBA+OiLC2YREZTUPxJjSa5wyy/DwztYo7OPE51jZCkOVErbkm6Wjzc onJssH+P+UKEtgiTcdcFwN52SkCfyCyhhcCDii0e8UlxPZkiD3Wy9+rb20uttzqlXIM1 n4GyrREoCfL2vpZoHPF7KmRTkOvSH2j7bNor7i9TwWe5vWZiEH+875m2FfM1hd0Y2Brf qrC3NcpmuE4DqQXOpBcepVbBEpD4el68BYBGDoXJ0wewUd3W1EjOmWGL69RKxdGSYsOm 8e7g9XAKyTWS0XtTNyBE7MwreajyEh2ZGUek7WjNy6YiNmJStANqlEA5Jdu5sO4GQf+Z Yyog== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail 
[127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id E371E20014; Fri, 4 Jun 2021 16:53:23 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 91/99] target/arm: tcg: remove superfluous CONFIG_TCG check Date: Fri, 4 Jun 2021 16:53:04 +0100 Message-Id: <20210604155312.15902-92-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::333; envelope-from=alex.bennee@linaro.org; helo=mail-wm1-x333.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Alex_Benn=C3=A9e?= , qemu-arm@nongnu.org, Richard Henderson , Claudio Fontana , Peter Maydell Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana modules under tcg/ are only built for CONFIG_TCG anyway. Signed-off-by: Claudio Fontana Reviewed-by: Richard Henderson Signed-off-by: Alex Bennée --- target/arm/tcg/vfp_helper.c | 6 ------ 1 file changed, 6 deletions(-) -- 2.20.1 diff --git a/target/arm/tcg/vfp_helper.c b/target/arm/tcg/vfp_helper.c index 521719f327..0cc6c85270 100644 --- a/target/arm/tcg/vfp_helper.c +++ b/target/arm/tcg/vfp_helper.c @@ -21,10 +21,8 @@ #include "cpu.h" #include "exec/helper-proto.h" #include "internals.h" -#ifdef CONFIG_TCG #include "qemu/log.h" #include "fpu/softfloat.h" -#endif /* VFP support. 
We follow the convention used for VFP instructions: Single precision routines have a "s" suffix, double precision a @@ -40,8 +38,6 @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val) vfp_set_fpscr(env, val); } -#ifdef CONFIG_TCG - #define VFP_HELPER(name, p) HELPER(glue(glue(vfp_,name),p)) #define VFP_BINOP(name) \ @@ -1110,5 +1106,3 @@ void HELPER(check_hcr_el2_trap)(CPUARMState *env, uint32_t rt, uint32_t reg) raise_exception(env, EXCP_HYP_TRAP, syndrome, 2); } - -#endif From patchwork Fri Jun 4 15:53:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454102 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp582857jae; Fri, 4 Jun 2021 09:50:20 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwRZCsd4G1XZ50Iv43gpaZFsektg5J6+wMwBsgj7fu14KS9tWddKVNLlElEvaLwTZmNj0sb X-Received: by 2002:a05:6102:392:: with SMTP id m18mr3758353vsq.40.1622825419946; Fri, 04 Jun 2021 09:50:19 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622825419; cv=none; d=google.com; s=arc-20160816; b=gSrMb4YAuUC/Sv04JguY8igqQwnHAIcc0ll0yACv+Oiyj8pf32D+o0IQ3NEFycW/L4 D9LsPAIzWsdAZ8aKAZZN7Moag1ba5nNXlVFx4HwUj4Vvr/1tEBmx7RUOvryT0aNoJJCv LEj77yV1OgcvOJwmhIbT8p2Rh6f/a5Uy5NicfoDZ+Yx2bXHKoSqvZT41Mjrhtda892Dp qSH8cUYrcL1MuH2WH2Qf0taXnBghYVmYHhGZGGVx3d4mhHgL2EDudKgjN51HawRDZVU5 09UbunKNh8cxBCs434MM1Ig6irDI4FVs8xRmq0/tIiysPRnOe8Jb1faLJaW3je0HJErT ZnIg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=n32frv1K2Ssc6wYL5ATggtwpjBZGV8bN4E1z0WJz+l0=; b=mO9E3pi8lcGzkUuaCukOYmaUKA+VH5s/iWHMRTc44VDw45ZmSPpv4jbV/HCsFzMv2y UdUD6iDmZPwnnDmOAPvKwcB9/IP5tqNEvP96wE1aOiGb4tDgvdsUga3919vng+G2HUnT FpsRXSHqtVuYX/RHhpP+cO5KNtN4L1ZtFvn46j88gdLV0ygAkctc4Q0+OL2g0PEFu14W f/cVk7vN3xWk4aFraMzmkKKozJPrvrIWOnmsUoaUElN3Orib4nUEG02rpCHZurH5V138 F3r1jBL7GYI2VZNURx6OSu8ipYBNtsGwbkZ8TuukfELZrKWrm1QPm9iKqNxpQIO9hJpw dZPQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=L2rxQn4z; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 92/99] target/arm: remove v7m stub function for !CONFIG_TCG
Date: Fri, 4 Jun 2021 16:53:05 +0100
Message-Id: <20210604155312.15902-93-alex.bennee@linaro.org>
2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42b; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= , Claudio Fontana Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" From: Claudio Fontana it is needed just once, so just move the CONFIG_TCG check in place. Signed-off-by: Claudio Fontana Signed-off-by: Alex Bennée --- target/arm/cpu-mmu.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu-mmu.c b/target/arm/cpu-mmu.c index c6ac90a61e..e1bebbf73e 100644 --- a/target/arm/cpu-mmu.c +++ b/target/arm/cpu-mmu.c @@ -19,6 +19,7 @@ */ #include "qemu/osdep.h" +#include "sysemu/tcg.h" #include "cpu-mmu.h" int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx) @@ -155,20 +156,15 @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) } } -#ifndef CONFIG_TCG -ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate) -{ - g_assert_not_reached(); -} -#endif - ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el) { ARMMMUIdx idx; uint64_t hcr; - if (arm_feature(env, ARM_FEATURE_M)) { - return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure); + if (tcg_enabled()) { + if (arm_feature(env, ARM_FEATURE_M)) { + return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure); + } } /* See ARM pseudo-function ELIsInHost. 
 */
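To make the control flow of the new code explicit, here is an annotated sketch of the relevant part of arm_mmu_idx_el() after this change (a sketch only, assuming, as the patch does, that M-profile guests are only ever run under the TCG accelerator):

    ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
    {
        /* Runtime accelerator check replaces the old #ifdef CONFIG_TCG stub. */
        if (tcg_enabled()) {
            if (arm_feature(env, ARM_FEATURE_M)) {
                /* arm_v7m_mmu_idx_for_secstate() only exists in TCG builds;
                 * KVM never runs M-profile guests, so this branch is
                 * unreachable there and the g_assert_not_reached() stub for
                 * !CONFIG_TCG is no longer needed. */
                return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);
            }
        }
        /* ... the A/R-profile handling below is unchanged ... */
    }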
From patchwork Fri Jun 4 15:53:06 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 93/99] meson: Introduce target-specific Kconfig
Date: Fri, 4 Jun 2021 16:53:06 +0100
Message-Id: <20210604155312.15902-94-alex.bennee@linaro.org>
From: Philippe Mathieu-Daudé

Add a target-specific Kconfig. Target foo now has CONFIG_FOO defined.

Two architectures, ARM and MIPS, have a particularity: their 64-bit versions include their 32-bit subsets.

Signed-off-by: Philippe Mathieu-Daudé
Message-Id: <20210131111316.232778-6-f4bug@amsat.org>
---
 meson.build | 3 ++-
 Kconfig | 1 +
 target/Kconfig | 23 +++++++++++++++++++++++
 target/alpha/Kconfig | 2 ++
 target/arm/Kconfig | 6 ++++++
 target/avr/Kconfig | 2 ++
 target/cris/Kconfig | 2 ++
 target/hppa/Kconfig | 2 ++
 target/i386/Kconfig | 5 +++++
 target/lm32/Kconfig | 2 ++
 target/m68k/Kconfig | 2 ++
 target/microblaze/Kconfig | 2 ++
 target/mips/Kconfig | 6 ++++++
 target/moxie/Kconfig | 2 ++
 target/nios2/Kconfig | 2 ++
 target/openrisc/Kconfig | 2 ++
 target/ppc/Kconfig | 5 +++++
 target/riscv/Kconfig | 5 +++++
 target/rx/Kconfig | 2 ++
 target/s390x/Kconfig | 2 ++
 target/sh4/Kconfig | 2 ++
 target/sparc/Kconfig | 5 +++++
 target/tilegx/Kconfig | 2 ++
 target/tricore/Kconfig | 2 ++
 target/unicore32/Kconfig | 2 ++
 target/xtensa/Kconfig | 2 ++
 26 files changed, 92 insertions(+), 1 deletion(-)
 create mode 100644 target/Kconfig
 create mode 100644 target/alpha/Kconfig
 create mode 100644 target/arm/Kconfig
 create mode 100644 target/avr/Kconfig
 create mode 100644 target/cris/Kconfig
 create mode 100644 target/hppa/Kconfig
 create mode 100644 target/i386/Kconfig
 create mode 100644 target/lm32/Kconfig
 create mode 100644 target/m68k/Kconfig
 create mode 100644 target/microblaze/Kconfig
 create mode 100644 target/mips/Kconfig
 create mode 100644 target/moxie/Kconfig
 create mode 100644 target/nios2/Kconfig
 create mode 100644 target/openrisc/Kconfig
 create mode 100644 target/ppc/Kconfig
 create mode 100644 target/riscv/Kconfig
 create mode 100644 target/rx/Kconfig
 create mode 100644 target/s390x/Kconfig
 create mode 100644 target/sh4/Kconfig
 create mode 100644 target/sparc/Kconfig
 create mode 100644 target/tilegx/Kconfig
 create mode 100644 target/tricore/Kconfig
 create mode 100644 target/unicore32/Kconfig
 create mode 100644 target/xtensa/Kconfig
--
2.20.1

diff --git a/meson.build b/meson.build
index e2a22984b8..09c7809d6b 100644
--- a/meson.build
+++ b/meson.build
@@ -1359,7 +1359,8 @@ foreach
target : target_dirs command: [minikconf, get_option('default_devices') ? '--defconfig' : '--allnoconfig', config_devices_mak, '@DEPFILE@', '@INPUT@', - host_kconfig, accel_kconfig]) + host_kconfig, accel_kconfig, + 'CONFIG_' + config_target['TARGET_ARCH'].to_upper() + '=y']) config_devices_data = configuration_data() config_devices = keyval.load(config_devices_mak) diff --git a/Kconfig b/Kconfig index d52ebd839b..fb6a24a2de 100644 --- a/Kconfig +++ b/Kconfig @@ -1,5 +1,6 @@ source Kconfig.host source backends/Kconfig source accel/Kconfig +source target/Kconfig source hw/Kconfig source semihosting/Kconfig diff --git a/target/Kconfig b/target/Kconfig new file mode 100644 index 0000000000..a6f719f223 --- /dev/null +++ b/target/Kconfig @@ -0,0 +1,23 @@ +source alpha/Kconfig +source arm/Kconfig +source avr/Kconfig +source cris/Kconfig +source hppa/Kconfig +source i386/Kconfig +source lm32/Kconfig +source m68k/Kconfig +source microblaze/Kconfig +source mips/Kconfig +source moxie/Kconfig +source nios2/Kconfig +source openrisc/Kconfig +source ppc/Kconfig +source riscv/Kconfig +source rx/Kconfig +source s390x/Kconfig +source sh4/Kconfig +source sparc/Kconfig +source tilegx/Kconfig +source tricore/Kconfig +source unicore32/Kconfig +source xtensa/Kconfig diff --git a/target/alpha/Kconfig b/target/alpha/Kconfig new file mode 100644 index 0000000000..267222c05b --- /dev/null +++ b/target/alpha/Kconfig @@ -0,0 +1,2 @@ +config ALPHA + bool diff --git a/target/arm/Kconfig b/target/arm/Kconfig new file mode 100644 index 0000000000..3f3394a22b --- /dev/null +++ b/target/arm/Kconfig @@ -0,0 +1,6 @@ +config ARM + bool + +config AARCH64 + bool + select ARM diff --git a/target/avr/Kconfig b/target/avr/Kconfig new file mode 100644 index 0000000000..155592d353 --- /dev/null +++ b/target/avr/Kconfig @@ -0,0 +1,2 @@ +config AVR + bool diff --git a/target/cris/Kconfig b/target/cris/Kconfig new file mode 100644 index 0000000000..3fdc309fbb --- /dev/null +++ b/target/cris/Kconfig @@ -0,0 +1,2 @@ +config CRIS + bool diff --git a/target/hppa/Kconfig b/target/hppa/Kconfig new file mode 100644 index 0000000000..395a35d799 --- /dev/null +++ b/target/hppa/Kconfig @@ -0,0 +1,2 @@ +config HPPA + bool diff --git a/target/i386/Kconfig b/target/i386/Kconfig new file mode 100644 index 0000000000..ce6968906e --- /dev/null +++ b/target/i386/Kconfig @@ -0,0 +1,5 @@ +config I386 + bool + +config X86_64 + bool diff --git a/target/lm32/Kconfig b/target/lm32/Kconfig new file mode 100644 index 0000000000..09de5b703a --- /dev/null +++ b/target/lm32/Kconfig @@ -0,0 +1,2 @@ +config LM32 + bool diff --git a/target/m68k/Kconfig b/target/m68k/Kconfig new file mode 100644 index 0000000000..23debad519 --- /dev/null +++ b/target/m68k/Kconfig @@ -0,0 +1,2 @@ +config M68K + bool diff --git a/target/microblaze/Kconfig b/target/microblaze/Kconfig new file mode 100644 index 0000000000..a5410d9218 --- /dev/null +++ b/target/microblaze/Kconfig @@ -0,0 +1,2 @@ +config MICROBLAZE + bool diff --git a/target/mips/Kconfig b/target/mips/Kconfig new file mode 100644 index 0000000000..6adf145354 --- /dev/null +++ b/target/mips/Kconfig @@ -0,0 +1,6 @@ +config MIPS + bool + +config MIPS64 + bool + select MIPS diff --git a/target/moxie/Kconfig b/target/moxie/Kconfig new file mode 100644 index 0000000000..52391bbd28 --- /dev/null +++ b/target/moxie/Kconfig @@ -0,0 +1,2 @@ +config MOXIE + bool diff --git a/target/nios2/Kconfig b/target/nios2/Kconfig new file mode 100644 index 0000000000..1529ab8950 --- /dev/null +++ b/target/nios2/Kconfig @@ -0,0 +1,2 @@ +config 
NIOS2 + bool diff --git a/target/openrisc/Kconfig b/target/openrisc/Kconfig new file mode 100644 index 0000000000..e0da4ac1df --- /dev/null +++ b/target/openrisc/Kconfig @@ -0,0 +1,2 @@ +config OPENRISC + bool diff --git a/target/ppc/Kconfig b/target/ppc/Kconfig new file mode 100644 index 0000000000..3ff152051a --- /dev/null +++ b/target/ppc/Kconfig @@ -0,0 +1,5 @@ +config PPC + bool + +config PPC64 + bool diff --git a/target/riscv/Kconfig b/target/riscv/Kconfig new file mode 100644 index 0000000000..b9e5932f13 --- /dev/null +++ b/target/riscv/Kconfig @@ -0,0 +1,5 @@ +config RISCV32 + bool + +config RISCV64 + bool diff --git a/target/rx/Kconfig b/target/rx/Kconfig new file mode 100644 index 0000000000..aceb5ed28f --- /dev/null +++ b/target/rx/Kconfig @@ -0,0 +1,2 @@ +config RX + bool diff --git a/target/s390x/Kconfig b/target/s390x/Kconfig new file mode 100644 index 0000000000..72da48136c --- /dev/null +++ b/target/s390x/Kconfig @@ -0,0 +1,2 @@ +config S390X + bool diff --git a/target/sh4/Kconfig b/target/sh4/Kconfig new file mode 100644 index 0000000000..2397c86028 --- /dev/null +++ b/target/sh4/Kconfig @@ -0,0 +1,2 @@ +config SH4 + bool diff --git a/target/sparc/Kconfig b/target/sparc/Kconfig new file mode 100644 index 0000000000..70cc0f3a21 --- /dev/null +++ b/target/sparc/Kconfig @@ -0,0 +1,5 @@ +config SPARC + bool + +config SPARC64 + bool diff --git a/target/tilegx/Kconfig b/target/tilegx/Kconfig new file mode 100644 index 0000000000..aad882826a --- /dev/null +++ b/target/tilegx/Kconfig @@ -0,0 +1,2 @@ +config TILEGX + bool diff --git a/target/tricore/Kconfig b/target/tricore/Kconfig new file mode 100644 index 0000000000..9313409309 --- /dev/null +++ b/target/tricore/Kconfig @@ -0,0 +1,2 @@ +config TRICORE + bool diff --git a/target/unicore32/Kconfig b/target/unicore32/Kconfig new file mode 100644 index 0000000000..62c9d10b38 --- /dev/null +++ b/target/unicore32/Kconfig @@ -0,0 +1,2 @@ +config UNICORE32 + bool diff --git a/target/xtensa/Kconfig b/target/xtensa/Kconfig new file mode 100644 index 0000000000..a3c8dc7f6d --- /dev/null +++ b/target/xtensa/Kconfig @@ -0,0 +1,2 @@ +config XTENSA + bool From patchwork Fri Jun 4 15:53:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454072 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp556481jae; Fri, 4 Jun 2021 09:18:09 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxf8VulCJyijXal0ty+mHwx9Vv3mZ+wRjEmDETZIFcevgqgRQpbMWA3bBNxeWt3Dg6siguz X-Received: by 2002:a17:906:2d51:: with SMTP id e17mr4859173eji.500.1622823488960; Fri, 04 Jun 2021 09:18:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622823488; cv=none; d=google.com; s=arc-20160816; b=RfSAn9N13ALezalN7/0nYnH4CmT7Ab2GjynAH8D9MMLdkgI+D8I5T0B3AMI6y5LvTz NS55dl7Wozfgok0JbAGQgj26obRck3laq4gCuiix9JImbvoRgwi+ohsCT2UsNb6MaeUy tzMTBnMGr4WuxF0QDk6vrO7vfAqZAKcFGEa0jr/keM4CCPz6G+CAbU28+6stxissOefi Y2xmRu/sa1c8FZ4qaN6sCvfnY0xtmBDJ10Sy52FvePrPdUwI4jVsQ5Gwoxj1qcpLvAhG rEjoulYwBTvyXmC/0JEP2Chn0G0a35XwKBQ2XRDVKPzMVNKrglpNh3WxAd/+6SiIBqvo k7FQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=/YG66hyD1VOf/fOI4qw/ZTnuURBGgVxAEUhf5rfH4Y0=; 
WXW1Cm9Pe10Vdc2c+VnyuuZ8FA== X-Received: by 2002:adf:f98a:: with SMTP id f10mr4644621wrr.143.1622822572137; Fri, 04 Jun 2021 09:02:52 -0700 (PDT) Received: from zen.linaroharston ([51.148.130.216]) by smtp.gmail.com with ESMTPSA id l2sm6843694wrp.21.2021.06.04.09.02.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Jun 2021 09:02:43 -0700 (PDT) Received: from zen.lan (localhost [127.0.0.1]) by zen.linaroharston (Postfix) with ESMTP id 6C56E20021; Fri, 4 Jun 2021 16:53:24 +0100 (BST) From: =?utf-8?q?Alex_Benn=C3=A9e?= To: qemu-devel@nongnu.org Subject: [PATCH v16 94/99] target/arm: move CONFIG_V7M out of default-devices Date: Fri, 4 Jun 2021 16:53:07 +0100 Message-Id: <20210604155312.15902-95-alex.bennee@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::436; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x436.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" We currently select CONFIG_V7M for a bunch of our m-profile devices. The last sticking point is translate.c which cannot be compiled without expecting v7m support. Express this dependency in Kconfig rather than in default devices as a stepping stone to a fully configurable translate.c. While we are at it we also need to select ARM_COMPATIBLE_SEMIHOSTING as that is implied for M profile machines. Signed-off-by: Alex Bennée --- default-configs/devices/arm-softmmu.mak | 3 --- hw/arm/Kconfig | 3 +++ target/arm/tcg/sysemu/meson.build | 5 ++++- 3 files changed, 7 insertions(+), 4 deletions(-) -- 2.20.1 Acked-by: Richard Henderson diff --git a/default-configs/devices/arm-softmmu.mak b/default-configs/devices/arm-softmmu.mak index 0500156a0c..4114aa9e35 100644 --- a/default-configs/devices/arm-softmmu.mak +++ b/default-configs/devices/arm-softmmu.mak @@ -1,8 +1,5 @@ # Default configuration for arm-softmmu -# TODO: ARM_V7M is currently always required - make this more flexible! 
-CONFIG_ARM_V7M=y - # CONFIG_PCI_DEVICES=n # CONFIG_TEST_DEVICES=n diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig index 67723d9ea6..afaf807c92 100644 --- a/hw/arm/Kconfig +++ b/hw/arm/Kconfig @@ -296,7 +296,10 @@ config ZYNQ config ARM_V7M bool + # currently v7M must be included in a TCG build due to translate.c + default y if TCG && (ARM || AARCH64) select PTIMER + select ARM_COMPATIBLE_SEMIHOSTING config ALLWINNER_A10 bool diff --git a/target/arm/tcg/sysemu/meson.build b/target/arm/tcg/sysemu/meson.build index 56e4b5ccea..520f305deb 100644 --- a/target/arm/tcg/sysemu/meson.build +++ b/target/arm/tcg/sysemu/meson.build @@ -1,7 +1,10 @@ arm_softmmu_ss.add(when: 'CONFIG_TCG', if_true: files( 'debug_helper.c', - 'm_helper.c', 'mte_helper.c', 'tcg-cpu.c', 'tlb_helper.c', )) + +arm_softmmu_ss.add(when: 'CONFIG_ARM_V7M', if_true: files( + 'm_helper.c', +)) From patchwork Fri Jun 4 15:53:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454139 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp620458jae; Fri, 4 Jun 2021 10:38:56 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxppyeXYBoo4nOq5GRgAw0fy8S0JlJlDPRZKIqGm4On5U0Ipj5Pc1P+OAWRzDULsxv+2vue X-Received: by 2002:a67:f7cb:: with SMTP id a11mr3845532vsp.47.1622828336647; Fri, 04 Jun 2021 10:38:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622828336; cv=none; d=google.com; s=arc-20160816; b=HwvE+aV2s4TrIYZyCw4A5X5B7qICE/6T4oaaA/jHMwFO89RC9xYrqGhHN+wsYZCtrW KnbsFPMbgeDNlqQMs6whUu5YwsbSzXsYtPzp+esHl5B9XGixTbBD/vy5jKik90wkNlPY Ee7iCnC38oa2dRt54zRWCmw30a0DVDlBNauNiCNGCNNytaC6oVbVCL1UQEKqI/CbvBpH Ufed4rpx5yENIPii4x/V5Sg5/wvpPE+ChwiT00zxEkAcakGXgaDf+X/t9306HxdY0MmE t8I2+OkqMCVQXkiQfkjOqJhiZbT7IWs36Vm23YwnKxZeRLPTMlchpqjucXuf6Jux4+3m zNXw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=YHX7W7pdovA8sEaHbdvBxPdYJmSQGO3xjxfiDb+VvUU=; b=AF19DaeF7uUj9uGyU090w3VeoIgsYeSQwJFOpVyJjyt2NALVTvXz2ZTZNjbjiXEmNs lsoxMD6dQTk2TrjK5wlvWgZd9HMHnBQzS4ce2bbDizzVf1ZDg3Sq/8aCL8jc+LWff3uV WVbOS4vjRzGz1XdKq+MojJxchMiw+e04oDShVR23/Du9pakgt9lZ/VPpnow7T15ASKT2 TxHjYkn9A1pc6yT6wkM6yADdUKF7JHeUJnv5FB1QnZseXSHNyTyagxOcmduA91g0gGmn gTSnDm2T78EIlInFNXrgUXkfG0pJA3viYn98Q5MPRj3lzlcyoM8ceCpSDkth+oug03It rlAQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=rs7ezZV7; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 95/99] hw/arm: add dependency on OR_IRQ for XLNX_VERSAL
Date: Fri, 4 Jun 2021 16:53:08 +0100
Message-Id: <20210604155312.15902-96-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org> References: <20210604155312.15902-1-alex.bennee@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=alex.bennee@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, =?utf-8?q?Alex_Benn=C3=A9e?= Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: "Qemu-devel" We need this functionality due to: /* XRAM IRQs get ORed into a single line. */ object_initialize_child(OBJECT(s), "xram-irq-orgate", &s->lpd.xram.irq_orgate, TYPE_OR_IRQ); Signed-off-by: Alex Bennée --- hw/arm/Kconfig | 1 + 1 file changed, 1 insertion(+) -- 2.20.1 Reviewed-by: Richard Henderson diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig index afaf807c92..02962c0987 100644 --- a/hw/arm/Kconfig +++ b/hw/arm/Kconfig @@ -371,6 +371,7 @@ config XLNX_VERSAL select UNIMP select XLNX_ZDMA select XLNX_ZYNQMP + select OR_IRQ config NPCM7XX bool From patchwork Fri Jun 4 15:53:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Alex_Benn=C3=A9e?= X-Patchwork-Id: 454073 Delivered-To: patch@linaro.org Received: by 2002:a02:735a:0:0:0:0:0 with SMTP id a26csp558218jae; Fri, 4 Jun 2021 09:20:15 -0700 (PDT) X-Google-Smtp-Source: ABdhPJx3N7lOqeuNJ61k2FUzZKRuRzOWtg4ITdZ0jlH0oHRuPw6/2LPSQ6ECssGLCTa3jTBkzvYA X-Received: by 2002:a1f:1385:: with SMTP id 127mr2141148vkt.8.1622823614936; Fri, 04 Jun 2021 09:20:14 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1622823614; cv=none; d=google.com; s=arc-20160816; b=hxTy1F4hxzhgWqqHt4gKmqga0HshVYTSsa1X70Bs7dHGxRYR9KBeYoCKcpvmAeE6FQ LCZ/qQX8N1jPstsapNU4cFDk/1UYpgmdMYgyMvPZEacPWgL8e2blaTyjLvISQrSEGsl+ NZJgS5ZtzBUXzJwqwZgbsknl2UFK2dUW6Zhcx+SeWXzYjLMgSqIkbvelHai5JeNP4ZMK e5jd/abINX4mYAbaGZW248w8g9E7kH3dEwYKTjPhFquCn7G3W9bqztUlX2SYy3clGlfJ DP2Ok+yNsRizXQ9pg8JrJ8s7FIqWyLyz+2zInFoP7S7rsrqPrYyTlJl9q3q+DlZlF5DL W4rQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:cc:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=dsyKHQJNOhAXErDDhKEWsE1reIuTyEMwEV0R3vqvPqo=; b=evYsSxgNf6hv85ZTgCaoM5+yY5/G16KsnHToaDFMiShbz1MoLzTvLt+ZIT7+7fZOZy PC6bjbpw0u7q9ilpCRQMxQYGhNqjLUFYFz3Ud4YbPcvmPN9BPgfQb3g7om2HtMqqRjlT pgeKEhIFHxVJzG6MTOp4TiwvJobeBU/SH+ySQw6WGde7mGj1STZRBKeI7+1NvBrLhK43 EaQyqT2Q4Ah9SdIvSPB3y75MmBNRlDaWiwjfPhg8XT/QimjdjXnpiq8OWDdwB1Lc++u+ pIG1QxHujdhb1ROHsbDlq8O5R/+39+AgcSzqOJ50NxhmqR+BL5eKu3jyKXE5qQEJqepZ WYFQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@linaro.org header.s=google header.b=ylkJNz4G; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 96/99] tests/qtest: split the cdrom-test into arm/aarch64
Date: Fri, 4 Jun 2021 16:53:09 +0100
Message-Id: <20210604155312.15902-97-alex.bennee@linaro.org>
The assumption that the qemu-system-aarch64 image can run all 32-bit machines is about to be broken and, besides, it is unlikely to improve our coverage by much. Test the "virt" machine for both arm and aarch64 as it can be used by either architecture.

Signed-off-by: Alex Bennée
---
 tests/qtest/cdrom-test.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
--
2.20.1

diff --git a/tests/qtest/cdrom-test.c b/tests/qtest/cdrom-test.c
index 5af944a5fb..1e74354624 100644
--- a/tests/qtest/cdrom-test.c
+++ b/tests/qtest/cdrom-test.c
@@ -220,13 +220,16 @@ int main(int argc, char **argv)
             "magnum", "malta", "pica61", NULL };
         add_cdrom_param_tests(mips64machines);
-    } else if (g_str_equal(arch, "arm") || g_str_equal(arch, "aarch64")) {
+    } else if (g_str_equal(arch, "arm")) {
         const char *armmachines[] = {
             "realview-eb", "realview-eb-mpcore", "realview-pb-a8",
             "realview-pbx-a9", "versatileab", "versatilepb", "vexpress-a15",
             "vexpress-a9", "virt", NULL };
         add_cdrom_param_tests(armmachines);
+    } else if (g_str_equal(arch, "aarch64")) {
+        const char *aarch64machines[] = { "virt", NULL };
+        add_cdrom_param_tests(aarch64machines);
     } else {
         const char *nonemachine[] = { "none", NULL };
         add_cdrom_param_tests(nonemachine);
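For context on how these per-architecture lists are consumed (an illustrative sketch based on the hunk above, not additional code from the patch): the qtest harness picks a list at run time from the target suffix of the binary named in QTEST_QEMU_BINARY, which qtest_get_arch() returns.

    const char *arch = qtest_get_arch();   /* "arm", "aarch64", ... */

    if (g_str_equal(arch, "arm")) {
        /* 32-bit boards plus "virt", run against qemu-system-arm. */
        add_cdrom_param_tests(armmachines);
    } else if (g_str_equal(arch, "aarch64")) {
        /* Only "virt" is assumed to exist in every qemu-system-aarch64 build. */
        add_cdrom_param_tests(aarch64machines);
    }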
From patchwork Fri Jun 4 15:53:10 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 97/99] tests/qtest: make xlnx-can-test conditional on being configured
Date: Fri, 4 Jun 2021 16:53:10 +0100
Message-Id: <20210604155312.15902-98-alex.bennee@linaro.org>

It will soon be possible to build a qemu-system-aarch64 binary that doesn't have this device.

Signed-off-by: Alex Bennée
---
 tests/qtest/meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--
2.20.1

Reviewed-by: Richard Henderson

diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 2c7415d616..772e62920c 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -179,11 +179,11 @@ qtests_arm = \
 qtests_aarch64 = \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) + \
+  (config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test'] : []) + \
   ['arm-cpu-features',
    'numa-test',
    'boot-serial-test',
    'bios-tables-test',
-   'xlnx-can-test',
    'migration-test']
 
 qtests_s390x = \
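As a quick illustration of the effect of this change (not part of the patch
itself; the path and the 64bit-only.mak config are borrowed from patch 98/99
of this series), a device config that omits CONFIG_XLNX_ZYNQMP_ARM should
simply cause meson to leave xlnx-can-test out of the aarch64 qtest list:

    # Hypothetical invocation from an out-of-tree build directory; the
    # 64bit-only.mak file is introduced later in this series.
    ../configure --target-list=aarch64-softmmu \
        --with-devices-aarch64=$(pwd)/../configs/aarch64-softmmu/64bit-only.mak
    make check-qtest-aarch64   # xlnx-can-test is no longer built or run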
From patchwork Fri Jun 4 15:53:11 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Subject: [PATCH v16 98/99] configure: allow the overriding of default-config in the build
Date: Fri, 4 Jun 2021 16:53:11 +0100
Message-Id: <20210604155312.15902-99-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>
Cc: Philippe Mathieu-Daudé, qemu-arm@nongnu.org, Alex Bennée, Paolo Bonzini

While the default config works well enough, it ends up enabling a lot
of devices. For more minimal builds we can pass a slimmed-down list of
devices and let Kconfig work out what we need. For example:

  ../../configure --without-default-features \
    --target-list=arm-softmmu,aarch64-softmmu \
    --with-devices-aarch64=$(pwd)/../../configs/aarch64-softmmu/64bit-only.mak

will override the aarch64-softmmu default devices with one of our own
choosing. Currently two configs are provided:

  - 64bit-only, to build without any 32 bit boards at all
  - virt-only, an even more minimal set for --disable-tcg builds

Signed-off-by: Alex Bennée
Reviewed-by: Philippe Mathieu-Daudé
Cc: Paolo Bonzini
Message-Id: <20210528163116.31902-1-alex.bennee@linaro.org>

---
v2
  - remove extraneous cc
  - dropped pathname from config
  - add virt.mak config
  - drop ZYNQMP from the 64bit only build
  - test -f the --with-devices-FOO file
---
 configure                              | 20 ++++++++++++++++++++
 configs/aarch64-softmmu/64bit-only.mak | 10 ++++++++++
 configs/aarch64-softmmu/virt-only.mak  |  8 ++++++++
 meson.build                            |  3 ++-
 4 files changed, 40 insertions(+), 1 deletion(-)
 create mode 100644 configs/aarch64-softmmu/64bit-only.mak
 create mode 100644 configs/aarch64-softmmu/virt-only.mak

diff --git a/configure b/configure
index f0c8629dc6..5bf2f56ac6 100755
--- a/configure
+++ b/configure
@@ -920,6 +920,16 @@ for opt do
   ;;
   --without-default-devices) default_devices="false"
   ;;
+  --with-devices-*[!a-zA-Z0-9_-]*=*) error_exit "Passed bad --with-devices-FOO option"
+  ;;
+  --with-devices-*) device_arch=${opt#--with-devices-}; device_arch=${device_arch%%=*}
+      if test -f "$optarg"; then
+          device_archs="$device_archs $device_arch"
+          eval "devices_${device_arch}=\$optarg"
+      else
+          error_exit "File $optarg does not exist"
+      fi
+  ;;
   --without-default-features) # processed above
   ;;
   --enable-gprof) gprof="yes"
@@ -1766,6 +1776,7 @@ Advanced options (experts only):
   --without-default-devices do not include any device that is not needed to
                            start the emulator (only use if you are including
                            desired devices in default-configs/devices/)
+  --with-devices-ARCH=PATH override default-configs/devices with your own file
   --enable-debug           enable common debug build options
   --enable-sanitizers      enable default sanitizers
   --enable-tsan            enable thread sanitizer
@@ -6343,6 +6354,15 @@ if test "$skip_meson" = no; then
   echo "# Automatically generated by configure - do not modify" > $cross
   echo "[properties]" >> $cross
+
+  # unroll any custom device configs
+  if test -n "$device_archs"; then
+      for a in $device_archs; do
+          eval "c=\$devices_${a}"
+          echo "${a}-softmmu = [ '$c' ]" >> $cross
+      done
+  fi
+
   test -z "$cxx" && echo "link_language = 'c'" >> $cross
   echo "[built-in options]" >> $cross
   echo "c_args = [${CFLAGS:+$(meson_quote $CFLAGS)}]" >> $cross
diff --git a/configs/aarch64-softmmu/64bit-only.mak b/configs/aarch64-softmmu/64bit-only.mak
new file mode 100644
index 0000000000..19638a56cf
--- /dev/null
+++ b/configs/aarch64-softmmu/64bit-only.mak
@@ -0,0 +1,10 @@
+#
+# A version of the config that only supports 64bits and their devices.
+# This doesn't quite eliminate all 32 bit devices as some boards like
+# "virt" support both. The CONFIG_XLNX_ZYNQMP_ARM isn't included as it
+# also requires 32 bit support for the R5s
+#
+
+CONFIG_ARM_VIRT=y
+CONFIG_XLNX_VERSAL=y
+CONFIG_SBSA_REF=y
diff --git a/configs/aarch64-softmmu/virt-only.mak b/configs/aarch64-softmmu/virt-only.mak
new file mode 100644
index 0000000000..cadacf3e89
--- /dev/null
+++ b/configs/aarch64-softmmu/virt-only.mak
@@ -0,0 +1,8 @@
+#
+# A version of the config that only supports virtual machines. This is
+# intended to be combined with options like --disable-tcg for a
+# minimal build supporting only machines we can virtualise with a
+# hypervisor.
+#
+
+CONFIG_ARM_VIRT=y
diff --git a/meson.build b/meson.build
index 09c7809d6b..9d25906219 100644
--- a/meson.build
+++ b/meson.build
@@ -1350,9 +1350,10 @@ foreach target : target_dirs
                                                 configuration: config_target_data)}
 
   if target.endswith('-softmmu')
+    config_input = meson.get_external_property(target, 'default-configs/devices' / target + '.mak')
     config_devices_mak = target + '-config-devices.mak'
     config_devices_mak = configure_file(
-      input: ['default-configs/devices' / target + '.mak', 'Kconfig'],
+      input: [config_input, 'Kconfig'],
       output: config_devices_mak,
       depfile: config_devices_mak + '.d',
       capture: true,
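To sketch how the pieces above fit together (the build-directory layout and
file paths here are assumptions for illustration, not part of the patch):
configure records each --with-devices-ARCH file as a per-target property in
the generated machine file, and meson.build then reads it back with
meson.get_external_property(), falling back to the in-tree
default-configs/devices/<target>.mak when no override was given. A minimal
virt-only, KVM-oriented build might look like:

    # Hypothetical out-of-tree build; paths are illustrative only.
    mkdir -p build && cd build
    ../configure --target-list=aarch64-softmmu --disable-tcg \
        --with-devices-aarch64=$(pwd)/../configs/aarch64-softmmu/virt-only.mak
    # Per the hunk above, configure appends something like this under the
    # [properties] section of the generated machine file ($cross):
    #   aarch64-softmmu = [ '/abs/path/to/configs/aarch64-softmmu/virt-only.mak' ]
    # meson.get_external_property() then resolves that path when generating
    # the aarch64-softmmu config-devices.mak.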
From patchwork Fri Jun 4 15:53:12 2021
From: Alex Bennée
To: qemu-devel@nongnu.org
Cc: Thomas Huth, Philippe Mathieu-Daudé, Wainer dos Santos Moschetta, Willian Rampazzo, qemu-arm@nongnu.org, Alex Bennée
Subject: [PATCH v16 99/99] gitlab: defend the new stripped down arm64 configs
Date: Fri, 4 Jun 2021 16:53:12 +0100
Message-Id: <20210604155312.15902-100-alex.bennee@linaro.org>
In-Reply-To: <20210604155312.15902-1-alex.bennee@linaro.org>
References: <20210604155312.15902-1-alex.bennee@linaro.org>

We can now build a KVM-only aarch64-softmmu image, which we need to
cross-build. We can also build a version that only supports a limited
set of 64 bit machines.

Signed-off-by: Alex Bennée
---
 .gitlab-ci.d/buildtest.yml   | 10 ++++++++++
 .gitlab-ci.d/crossbuilds.yml |  9 +++++++++
 2 files changed, 19 insertions(+)

diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index b72c57e4df..a48e723efe 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -645,6 +645,16 @@ build-without-default-features:
       --target-list-exclude=arm-softmmu,i386-softmmu,mipsel-softmmu,mips64-softmmu,ppc-softmmu
     MAKE_CHECK_ARGS: check-unit
 
+build-64bit-only-aarch64-softmmu:
+  extends: .native_build_job_template
+  needs:
+    job: amd64-debian-container
+  variables:
+    IMAGE: debian-amd64
+    TARGETS: aarch64-softmmu
+    CONFIGURE_ARGS: --with-devices-aarch64=../configs/aarch64-softmmu/64bit-only.mak
+    MAKE_CHECK_ARGS: check
+
 build-libvhost-user:
   stage: build
   image: $CI_REGISTRY_IMAGE/qemu/fedora:latest
diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 6b3865c9e8..a118aa3052 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -36,6 +36,15 @@ cross-arm64-system:
   variables:
     IMAGE: debian-arm64-cross
 
+cross-arm64-kvm-only-system:
+  extends: .cross_accel_build_job
+  needs:
+    job: arm64-debian-cross-container
+  variables:
+    IMAGE: debian-arm64-cross
+    ACCEL: kvm
+    EXTRA_CONFIGURE_OPTS: --disable-tcg
+
 cross-arm64-user:
   extends: .cross_user_build_job
   needs: