From patchwork Fri Feb 22 02:41:04 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 158968
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, cota@braap.org, alex.bennee@linaro.org
Date: Thu, 21 Feb 2019 18:41:04 -0800
Message-Id: <20190222024106.9167-2-richard.henderson@linaro.org>
In-Reply-To: <20190222024106.9167-1-richard.henderson@linaro.org>
References: <20190222024106.9167-1-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
Subject: [Qemu-devel] [PATCH v3 1/3] target/arm: Split out recompute_hflags et al

We will use these to minimize the computation for every call
to cpu_get_tb_cpu_state.  For now, the env->hflags variable is
not used.

Reviewed-by: Alex Bennée
Signed-off-by: Richard Henderson
---
v3: Do not cache VECLEN, VECSTRIDE, VFPEN.
    Move HANDLER and STACKCHECK to rebuild_hflags_a32.
---
 target/arm/cpu.h       |  28 +++--
 target/arm/helper.h    |   3 +
 target/arm/internals.h |   3 +
 target/arm/helper.c    | 254 ++++++++++++++++++++++++-----------------
 4 files changed, 175 insertions(+), 113 deletions(-)

-- 
2.17.2

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 84ae6849c2..30532bf53e 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -240,6 +240,9 @@ typedef struct CPUARMState {
     uint32_t pstate;
     uint32_t aarch64; /* 1 if CPU is in aarch64 state; inverse of PSTATE.nRW */
 
+    /* Cached TBFLAGS state.  See below for which bits are included.  */
+    uint32_t hflags;
+
     /* Frequently accessed CPSR bits are stored separately for efficiency.
        This contains all the other bits.  Use cpsr_{read,write} to access
        the whole CPSR.  */
@@ -3065,25 +3068,28 @@ static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
 
 #include "exec/cpu-all.h"
 
-/* Bit usage in the TB flags field: bit 31 indicates whether we are
+/*
+ * Bit usage in the TB flags field: bit 31 indicates whether we are
  * in 32 or 64 bit mode. The meaning of the other bits depends on that.
  * We put flags which are shared between 32 and 64 bit mode at the top
  * of the word, and flags which apply to only one mode at the bottom.
+ *
+ * Unless otherwise noted, these bits are cached in env->hflags.
  */
 FIELD(TBFLAG_ANY, AARCH64_STATE, 31, 1)
 FIELD(TBFLAG_ANY, MMUIDX, 28, 3)
 FIELD(TBFLAG_ANY, SS_ACTIVE, 27, 1)
-FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1)
+FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1)     /* Not cached. */
 /* Target EL if we take a floating-point-disabled exception */
 FIELD(TBFLAG_ANY, FPEXC_EL, 24, 2)
 FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
 
 /* Bit usage when in AArch32 state: */
-FIELD(TBFLAG_A32, THUMB, 0, 1)
-FIELD(TBFLAG_A32, VECLEN, 1, 3)
-FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
-FIELD(TBFLAG_A32, VFPEN, 7, 1)
-FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
+FIELD(TBFLAG_A32, THUMB, 0, 1)          /* Not cached. */
+FIELD(TBFLAG_A32, VECLEN, 1, 3)         /* Not cached. */
+FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)      /* Not cached. */
+FIELD(TBFLAG_A32, VFPEN, 7, 1)          /* Not cached. */
+FIELD(TBFLAG_A32, CONDEXEC, 8, 8)       /* Not cached. */
 FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
 /* We store the bottom two bits of the CPAR as TB flags and handle
  * checks on the other bits at runtime
@@ -3105,7 +3111,7 @@ FIELD(TBFLAG_A64, SVEEXC_EL, 2, 2)
 FIELD(TBFLAG_A64, ZCR_LEN, 4, 4)
 FIELD(TBFLAG_A64, PAUTH_ACTIVE, 8, 1)
 FIELD(TBFLAG_A64, BT, 9, 1)
-FIELD(TBFLAG_A64, BTYPE, 10, 2)
+FIELD(TBFLAG_A64, BTYPE, 10, 2)         /* Not cached. */
 FIELD(TBFLAG_A64, TBID, 12, 2)
 
 static inline bool bswap_code(bool sctlr_b)
@@ -3190,6 +3196,12 @@ void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
 void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
                                  void *opaque);
 
+/**
+ * arm_rebuild_hflags:
+ * Rebuild the cached TBFLAGS for arbitrary changed processor state.
+ */
+void arm_rebuild_hflags(CPUARMState *env);
+
 /**
  * aa32_vfp_dreg:
  * Return a pointer to the Dn register within env in 32-bit mode.
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 923e8e1525..bbc1a48089 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -89,6 +89,9 @@ DEF_HELPER_4(msr_banked, void, env, i32, i32, i32)
 DEF_HELPER_2(get_user_reg, i32, env, i32)
 DEF_HELPER_3(set_user_reg, void, env, i32, i32)
 
+DEF_HELPER_FLAGS_2(rebuild_hflags_a32, TCG_CALL_NO_RWG, void, env, i32)
+DEF_HELPER_FLAGS_2(rebuild_hflags_a64, TCG_CALL_NO_RWG, void, env, i32)
+
 DEF_HELPER_1(vfp_get_fpscr, i32, env)
 DEF_HELPER_2(vfp_set_fpscr, void, env, i32)
 
diff --git a/target/arm/internals.h b/target/arm/internals.h
index a4bd1becb7..8c1b813364 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -968,4 +968,7 @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
 ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
                                    ARMMMUIdx mmu_idx, bool data);
 
+uint32_t rebuild_hflags_a32(CPUARMState *env, int el);
+uint32_t rebuild_hflags_a64(CPUARMState *env, int el);
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index a018eb23fe..29486a09f6 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -13886,139 +13886,183 @@ ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 }
 #endif
 
+static uint32_t common_hflags(CPUARMState *env, int el, ARMMMUIdx mmu_idx,
+                              int fp_el, uint32_t flags)
+{
+    flags = FIELD_DP32(flags, TBFLAG_ANY, FPEXC_EL, fp_el);
+    flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX,
+                       arm_to_core_mmu_idx(mmu_idx));
+    if (arm_cpu_data_is_big_endian(env)) {
+        flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
+    }
+    if (arm_singlestep_active(env)) {
+        flags = FIELD_DP32(flags, TBFLAG_ANY, SS_ACTIVE, 1);
+    }
+    return flags;
+}
+
+uint32_t rebuild_hflags_a32(CPUARMState *env, int el)
+{
+    uint32_t flags = 0;
+    ARMMMUIdx mmu_idx;
+    int fp_el;
+
+    flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
+    flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
+    flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
+
+    if (arm_v7m_is_handler_mode(env)) {
+        flags = FIELD_DP32(flags, TBFLAG_A32, HANDLER, 1);
+    }
+
+    mmu_idx = arm_mmu_idx(env);
+
+    /* v8M always applies stack limit checks unless CCR.STKOFHFNMIGN is
+     * suppressing them because the requested execution priority is less than 0.
+     */
+    if (arm_feature(env, ARM_FEATURE_V8) &&
+        arm_feature(env, ARM_FEATURE_M) &&
+        !((mmu_idx & ARM_MMU_IDX_M_NEGPRI) &&
+          (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
+        flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
+    }
+
+    fp_el = fp_exception_el(env, el);
+    return common_hflags(env, el, mmu_idx, fp_el, flags);
+}
+
+uint32_t rebuild_hflags_a64(CPUARMState *env, int el)
+{
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
+    ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
+    ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
+    int fp_el = fp_exception_el(env, el);
+    uint32_t flags = 0;
+    uint64_t sctlr;
+    int tbii, tbid;
+
+    flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
+
+    /* Get control bits for tagged addresses.  */
+    /* FIXME: ARMv8.1-VHE S2 translation regime.  */
+    if (regime_el(env, stage1) < 2) {
+        ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
+        tbid = (p1.tbi << 1) | p0.tbi;
+        tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
+    } else {
+        tbid = p0.tbi;
+        tbii = tbid & !p0.tbid;
+    }
+
+    flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
+    flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
+
+    if (cpu_isar_feature(aa64_sve, cpu)) {
+        int sve_el = sve_exception_el(env, el);
+        uint32_t zcr_len;
+
+        /* If SVE is disabled, but FP is enabled,
+         * then the effective len is 0.
+         */
+        if (sve_el != 0 && fp_el == 0) {
+            zcr_len = 0;
+        } else {
+            zcr_len = sve_zcr_len_for_el(env, el);
+        }
+        flags = FIELD_DP32(flags, TBFLAG_A64, SVEEXC_EL, sve_el);
+        flags = FIELD_DP32(flags, TBFLAG_A64, ZCR_LEN, zcr_len);
+    }
+
+    if (el == 0) {
+        /* FIXME: ARMv8.1-VHE S2 translation regime.  */
+        sctlr = env->cp15.sctlr_el[1];
+    } else {
+        sctlr = env->cp15.sctlr_el[el];
+    }
+    if (cpu_isar_feature(aa64_pauth, cpu)) {
+        /*
+         * In order to save space in flags, we record only whether
+         * pauth is "inactive", meaning all insns are implemented as
+         * a nop, or "active" when some action must be performed.
+         * The decision of which action to take is left to a helper.
+         */
+        if (sctlr & (SCTLR_EnIA | SCTLR_EnIB | SCTLR_EnDA | SCTLR_EnDB)) {
+            flags = FIELD_DP32(flags, TBFLAG_A64, PAUTH_ACTIVE, 1);
+        }
+    }
+
+    if (cpu_isar_feature(aa64_bti, cpu)) {
+        /* Note that SCTLR_EL[23].BT == SCTLR_BT1.  */
+        if (sctlr & (el == 0 ? SCTLR_BT0 : SCTLR_BT1)) {
+            flags = FIELD_DP32(flags, TBFLAG_A64, BT, 1);
+        }
+    }
+
+    return common_hflags(env, el, mmu_idx, fp_el, flags);
+}
+
+void arm_rebuild_hflags(CPUARMState *env)
+{
+    int el = arm_current_el(env);
+    env->hflags = (is_a64(env)
+                   ? rebuild_hflags_a64(env, el)
+                   : rebuild_hflags_a32(env, el));
+}
+
+void HELPER(rebuild_hflags_a32)(CPUARMState *env, uint32_t el)
+{
+    tcg_debug_assert(!is_a64(env));
+    env->hflags = rebuild_hflags_a32(env, el);
+}
+
+void HELPER(rebuild_hflags_a64)(CPUARMState *env, uint32_t el)
+{
+    tcg_debug_assert(is_a64(env));
+    env->hflags = rebuild_hflags_a64(env, el);
+}
+
 void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
                           target_ulong *cs_base, uint32_t *pflags)
 {
-    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
     int current_el = arm_current_el(env);
-    int fp_el = fp_exception_el(env, current_el);
-    uint32_t flags = 0;
+    uint32_t flags;
+    uint32_t pstate_for_ss;
 
+    *cs_base = 0;
     if (is_a64(env)) {
-        ARMCPU *cpu = arm_env_get_cpu(env);
-        uint64_t sctlr;
-
         *pc = env->pc;
-        flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
-
-        /* Get control bits for tagged addresses.  */
-        {
-            ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
-            ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
-            int tbii, tbid;
-
-            /* FIXME: ARMv8.1-VHE S2 translation regime.  */
-            if (regime_el(env, stage1) < 2) {
-                ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
-                tbid = (p1.tbi << 1) | p0.tbi;
-                tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
-            } else {
-                tbid = p0.tbi;
-                tbii = tbid & !p0.tbid;
-            }
-
-            flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
-            flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
-        }
-
-        if (cpu_isar_feature(aa64_sve, cpu)) {
-            int sve_el = sve_exception_el(env, current_el);
-            uint32_t zcr_len;
-
-            /* If SVE is disabled, but FP is enabled,
-             * then the effective len is 0.
-             */
-            if (sve_el != 0 && fp_el == 0) {
-                zcr_len = 0;
-            } else {
-                zcr_len = sve_zcr_len_for_el(env, current_el);
-            }
-            flags = FIELD_DP32(flags, TBFLAG_A64, SVEEXC_EL, sve_el);
-            flags = FIELD_DP32(flags, TBFLAG_A64, ZCR_LEN, zcr_len);
-        }
-
-        if (current_el == 0) {
-            /* FIXME: ARMv8.1-VHE S2 translation regime.  */
-            sctlr = env->cp15.sctlr_el[1];
-        } else {
-            sctlr = env->cp15.sctlr_el[current_el];
-        }
-        if (cpu_isar_feature(aa64_pauth, cpu)) {
-            /*
-             * In order to save space in flags, we record only whether
-             * pauth is "inactive", meaning all insns are implemented as
-             * a nop, or "active" when some action must be performed.
-             * The decision of which action to take is left to a helper.
-             */
-            if (sctlr & (SCTLR_EnIA | SCTLR_EnIB | SCTLR_EnDA | SCTLR_EnDB)) {
-                flags = FIELD_DP32(flags, TBFLAG_A64, PAUTH_ACTIVE, 1);
-            }
-        }
-
-        if (cpu_isar_feature(aa64_bti, cpu)) {
-            /* Note that SCTLR_EL[23].BT == SCTLR_BT1.  */
-            if (sctlr & (current_el == 0 ? SCTLR_BT0 : SCTLR_BT1)) {
-                flags = FIELD_DP32(flags, TBFLAG_A64, BT, 1);
-            }
-            flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
-        }
+        flags = rebuild_hflags_a64(env, current_el);
+        flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
+        pstate_for_ss = env->pstate;
     } else {
         *pc = env->regs[15];
+        flags = rebuild_hflags_a32(env, current_el);
         flags = FIELD_DP32(flags, TBFLAG_A32, THUMB, env->thumb);
+        flags = FIELD_DP32(flags, TBFLAG_A32, CONDEXEC, env->condexec_bits);
        flags = FIELD_DP32(flags, TBFLAG_A32, VECLEN, env->vfp.vec_len);
         flags = FIELD_DP32(flags, TBFLAG_A32, VECSTRIDE, env->vfp.vec_stride);
-        flags = FIELD_DP32(flags, TBFLAG_A32, CONDEXEC, env->condexec_bits);
-        flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
-        flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
         if (env->vfp.xregs[ARM_VFP_FPEXC] & (1 << 30)
             || arm_el_is_aa64(env, 1)) {
             flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
         }
-        flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
+        pstate_for_ss = env->uncached_cpsr;
     }
 
-    flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
-
     /* The SS_ACTIVE and PSTATE_SS bits correspond to the state machine
      * states defined in the ARM ARM for software singlestep:
      *  SS_ACTIVE   PSTATE.SS   State
      *     0            x       Inactive (the TB flag for SS is always 0)
      *     1            0       Active-pending
      *     1            1       Active-not-pending
+     * SS_ACTIVE is set in hflags; PSTATE_SS is computed every TB.
      */
-    if (arm_singlestep_active(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, SS_ACTIVE, 1);
-        if (is_a64(env)) {
-            if (env->pstate & PSTATE_SS) {
-                flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
-            }
-        } else {
-            if (env->uncached_cpsr & PSTATE_SS) {
-                flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
-            }
-        }
-    }
-    if (arm_cpu_data_is_big_endian(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
-    }
-    flags = FIELD_DP32(flags, TBFLAG_ANY, FPEXC_EL, fp_el);
-
-    if (arm_v7m_is_handler_mode(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, HANDLER, 1);
-    }
-
-    /* v8M always applies stack limit checks unless CCR.STKOFHFNMIGN is
-     * suppressing them because the requested execution priority is less than 0.
-     */
-    if (arm_feature(env, ARM_FEATURE_V8) &&
-        arm_feature(env, ARM_FEATURE_M) &&
-        !((mmu_idx & ARM_MMU_IDX_M_NEGPRI) &&
-          (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
+    if (FIELD_EX32(flags, TBFLAG_ANY, SS_ACTIVE)
+        && (pstate_for_ss & PSTATE_SS)) {
+        flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
     }
 
     *pflags = flags;
-    *cs_base = 0;
 }
 
 #ifdef TARGET_AARCH64
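
For readers less familiar with the hflags idea, the sketch below is a minimal,
self-contained illustration of the caching pattern this patch moves towards
(toy names only, not the QEMU API): slow-changing flag bits are rebuilt once
when the relevant CPU state changes, and the per-TB path merely ORs in the
bits that must be recomputed every time, mirroring the SS_ACTIVE (cached)
versus PSTATE_SS (not cached) split above.  In QEMU proper the rebuild would
be triggered from places like exception entry/return and system-register
writes; the toy program just calls it by hand.

/* Standalone illustration; hypothetical names, not QEMU code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FLAG_BIG_ENDIAN   (1u << 0)   /* cached: changes rarely */
#define FLAG_SS_ACTIVE    (1u << 1)   /* cached: changes rarely */
#define FLAG_PSTATE_SS    (1u << 2)   /* not cached: recomputed per TB */

typedef struct ToyCPUState {
    bool big_endian;
    bool singlestep;
    bool pstate_ss;
    uint32_t hflags;                  /* cached slow-changing bits */
} ToyCPUState;

/* Called whenever big_endian or singlestep changes. */
static void toy_rebuild_hflags(ToyCPUState *env)
{
    uint32_t flags = 0;
    if (env->big_endian) {
        flags |= FLAG_BIG_ENDIAN;
    }
    if (env->singlestep) {
        flags |= FLAG_SS_ACTIVE;
    }
    env->hflags = flags;
}

/* Hot path: start from the cache, add only the per-TB bits. */
static uint32_t toy_get_tb_flags(const ToyCPUState *env)
{
    uint32_t flags = env->hflags;
    if ((flags & FLAG_SS_ACTIVE) && env->pstate_ss) {
        flags |= FLAG_PSTATE_SS;
    }
    return flags;
}

int main(void)
{
    ToyCPUState env = { .big_endian = true, .singlestep = true };
    toy_rebuild_hflags(&env);         /* once per state change */
    env.pstate_ss = true;             /* per-TB input */
    printf("tb flags = 0x%x\n", toy_get_tb_flags(&env));
    return 0;
}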