From patchwork Mon Oct 9 13:48:31 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 115231
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: Richard Henderson, patches@linaro.org
Subject: [PATCH 1/9] target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
Date: Mon, 9 Oct 2017 14:48:31 +0100
Message-Id: <1507556919-24992-2-git-send-email-peter.maydell@linaro.org>
In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>
References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>

Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly rather than
asserting at translate time.
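For context, a minimal sketch (not part of the patch; the load helper and
the tmp/addr temporaries named here are only indicative, following the
conventions used elsewhere in translate.c) of how the unprivileged
load/store path consumes this helper. The point of the change is that the
lookup no longer reaches g_assert_not_reached() when the CPU is running in
one of the secure M profile regimes:

    /* Sketch only: LDRT/STRT ask for the "user" MMU index of the current
     * translation regime and perform the access with it.  With this patch
     * the secure M profile regimes map as:
     *   ARMMMUIdx_MSUser / MSPriv / MSNegPri  ->  ARMMMUIdx_MSUser
     * mirroring the existing non-secure MUser/MPriv/MNegPri cases.
     */
    int memidx = get_a32_user_mem_index(s);
    gen_aa32_ld_i32(s, tmp, addr, memidx, MO_UL | s->be_data);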
Signed-off-by: Peter Maydell --- target/arm/translate.c | 4 ++++ 1 file changed, 4 insertions(+) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/translate.c b/target/arm/translate.c index ab1a12a..e1b83b7 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -165,6 +165,10 @@ static inline int get_a32_user_mem_index(DisasContext *s) case ARMMMUIdx_MPriv: case ARMMMUIdx_MNegPri: return arm_to_core_mmu_idx(ARMMMUIdx_MUser); + case ARMMMUIdx_MSUser: + case ARMMMUIdx_MSPriv: + case ARMMMUIdx_MSNegPri: + return arm_to_core_mmu_idx(ARMMMUIdx_MSUser); case ARMMMUIdx_S2NS: default: g_assert_not_reached(); From patchwork Mon Oct 9 13:48:32 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 115232 Delivered-To: patches@linaro.org Received: by 10.80.163.170 with SMTP id s39csp2401575edb; Mon, 9 Oct 2017 06:48:40 -0700 (PDT) X-Google-Smtp-Source: AOwi7QBAjflPT39KHxFRzOOktdAnjXWL3hG/NipedrFgjpq3HbeGW0WfvIS92+F0roOjoT3gF+PB X-Received: by 10.46.48.20 with SMTP id w20mr3829816ljw.51.1507556920830; Mon, 09 Oct 2017 06:48:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1507556920; cv=none; d=google.com; s=arc-20160816; b=L4wfvH3smYhB49eVjCS+io931HuFaYJR5r4lSRBYnWtdB1HgvHolwDh3qgccWEkw7g WCeehHK/kp1WpUsVMvOtan7Z2sdext6WQIckqdyEGBigOMGY3wDkilbPOFoaJb+c1Lvh ZpPc6UTLcR1DzF6b08AyarxuP4eUo53FcGnEeZABJI2huJrdFBnmXkD05u7LoYxEfNKb C5f1x0ml8gbd9pQ6rIvKQNDAJx3IoOmaoF59mvVUZosrnPsBupZume4mEY+d70TGNmwK h+Lslo2MHE8fYx6d+JBD7Wan5VmZIRKDD/hprCpJ0Ecn4wLfYDxJZIgu7QYupXQRC0Mv xy1g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=Hc4zuKkgkLVP4IfGau02IUw9tL5QARxuGyoYExgV++4=; b=JPDZC08EgSfnd+Pt6q1MWU/5KCrOZ3beI60Pxrw6wuL6bkfHn/jJIWvbpO1cjVDl6Q s4hELvnOvMVe2OkXUWOc5jpxsrz/yisHcipiikGhtCrVNlzvuLOL8LlYzFs1GgNr53dm e8DgqPKZ+Hh/apJqYjiKp/XdzCfb1FbwpReUHRcJhFC6wtanExvfQJ+GggpgTpo1Z8YV QMsgHQ894E+bwlWWZSjW+YR3pZBNibGzWYxN0ILMPMV9WQtlCI/OZQYO+9C3StXLGncJ peVpYlvuuSGh0BHHk0ISoA68ON5hofBLUwloBWMaNuDWcBi32m1+BccmaTroePRp9orq yu8Q== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id f23si4206748lja.200.2017.10.09.06.48.40 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 09 Oct 2017 06:48:40 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1e1YQB-0004Wh-HR; Mon, 09 Oct 2017 14:48:39 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: Richard Henderson , patches@linaro.org Subject: [PATCH 2/9] target/arm: Implement SG instruction Date: Mon, 9 Oct 2017 14:48:32 +0100 Message-Id: <1507556919-24992-3-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> Implement the SG instruction, which we emulate 'by hand' in the exception handling code path. Signed-off-by: Peter Maydell --- target/arm/helper.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 127 insertions(+), 5 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/helper.c b/target/arm/helper.c index 1d689f0..9cc881e 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -41,6 +41,10 @@ typedef struct V8M_SAttributes { bool irvalid; } V8M_SAttributes; +static void v8m_security_lookup(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + V8M_SAttributes *sattrs); + /* Definitions for the PMCCNTR and PMCR registers */ #define PMCRD 0x8 #define PMCRC 0x4 @@ -6736,6 +6740,126 @@ static void arm_log_exception(int idx) } } +static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, + uint32_t addr, uint16_t *insn) +{ + /* Load a 16-bit portion of a v7M instruction, returning true on success, + * or false on failure (in which case we will have pended the appropriate + * exception). + * We need to do the instruction fetch's MPU and SAU checks + * like this because there is no MMU index that would allow + * doing the load with a single function call. Instead we must + * first check that the security attributes permit the load + * and that they don't mismatch on the two halves of the instruction, + * and then we do the load as a secure load (ie using the security + * attributes of the address, not the CPU, as architecturally required). + */ + CPUState *cs = CPU(cpu); + CPUARMState *env = &cpu->env; + V8M_SAttributes sattrs = {}; + MemTxAttrs attrs = {}; + ARMMMUFaultInfo fi = {}; + MemTxResult txres; + target_ulong page_size; + hwaddr physaddr; + int prot; + uint32_t fsr; + + v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs); + if (!sattrs.nsc || sattrs.ns) { + /* This must be the second half of the insn, and it straddles a + * region boundary with the second half not being S&NSC. 
+ */ + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; + } + if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx, + &physaddr, &attrs, &prot, &page_size, &fsr, &fi)) { + /* the MPU lookup failed */ + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n"); + return false; + } + *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr, + attrs, &txres); + if (txres != MEMTX_OK) { + env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); + qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n"); + return false; + } + return true; +} + +static bool v7m_handle_execute_nsc(ARMCPU *cpu) +{ + /* Check whether this attempt to execute code in a Secure & NS-Callable + * memory region is for an SG instruction; if so, then emulate the + * effect of the SG instruction and return true. Otherwise pend + * the correct kind of exception and return false. + */ + CPUARMState *env = &cpu->env; + ARMMMUIdx mmu_idx; + uint16_t insn; + + /* We should never get here unless get_phys_addr_pmsav8() caused + * an exception for NS executing in S&NSC memory. + */ + assert(!env->v7m.secure); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + + /* We want to do the MPU lookup as secure; work out what mmu_idx that is */ + mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); + + if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) { + return false; + } + + if (!env->thumb) { + goto gen_invep; + } + + if (insn != 0xe97f) { + /* Not an SG instruction first half (we choose the IMPDEF + * early-SG-check option). + */ + goto gen_invep; + } + + if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) { + return false; + } + + if (insn != 0xe97f) { + /* Not an SG instruction second half (yes, both halves of the SG + * insn have the same hex value) + */ + goto gen_invep; + } + + /* OK, we have confirmed that we really have an SG instruction. + * We know we're NS in S memory so don't need to repeat those checks. + */ + qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32 + ", executing it\n", env->regs[15]); + env->regs[14] &= ~1; + switch_v7m_security_state(env, true); + xpsr_write(env, 0, XPSR_IT); + env->regs[15] += 4; + return true; + +gen_invep: + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; +} + void arm_v7m_cpu_do_interrupt(CPUState *cs) { ARMCPU *cpu = ARM_CPU(cs); @@ -6778,12 +6902,10 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) * the SG instruction have the same security attributes.) * Everything else must generate an INVEP SecureFault, so we * emulate the SG instruction here. - * TODO: actually emulate SG. 
*/ - env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, - "...really SecureFault with SFSR.INVEP\n"); + if (v7m_handle_execute_nsc(cpu)) { + return; + } break; case M_FAKE_FSR_SFAULT: /* Various flavours of SecureFault for attempts to execute or
From patchwork Mon Oct 9 13:48:33 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 115233
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: Richard Henderson, patches@linaro.org
Subject: [PATCH 3/9] target/arm: Implement BLXNS
Date: Mon, 9 Oct 2017 14:48:33 +0100
Message-Id: <1507556919-24992-4-git-send-email-peter.maydell@linaro.org>
In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>
References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>

Implement the BLXNS instruction, which allows secure code to call non-secure code.
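In outline, the behaviour added by the new helper (a descriptive sketch of
the code in the diff below, not an extra change to apply) is:

    /* BLXNS <Rm>, with bit 0 of the target address giving the target
     * security state:
     *   bit 0 == 1 : stay Secure; behaves like a normal BLX except that
     *                bit 0 no longer selects Thumb/ARM state;
     *   bit 0 == 0 : push { return address, partial xPSR } onto the
     *                Secure stack, set LR to the FNC_RETURN magic value
     *                0xfeffffff, write a dummy exception number to IPSR
     *                if in Handler mode (so the Secure exception number
     *                is not leaked), switch to Non-secure and branch.
     * A later BX to the FNC_RETURN value unwinds this frame again; that
     * part is added by the following "secure function return" patch.
     */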
Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/helper.h | 1 + target/arm/internals.h | 1 + target/arm/helper.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++ target/arm/translate.c | 17 +++++++++++++-- 4 files changed, 76 insertions(+), 2 deletions(-) -- 2.7.4 diff --git a/target/arm/helper.h b/target/arm/helper.h index 64afbac..2cf6f74 100644 --- a/target/arm/helper.h +++ b/target/arm/helper.h @@ -64,6 +64,7 @@ DEF_HELPER_3(v7m_msr, void, env, i32, i32) DEF_HELPER_2(v7m_mrs, i32, env, i32) DEF_HELPER_2(v7m_bxns, void, env, i32) +DEF_HELPER_2(v7m_blxns, void, env, i32) DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32) DEF_HELPER_3(set_cp_reg, void, env, ptr, i32) diff --git a/target/arm/internals.h b/target/arm/internals.h index fd9a7e8..1746737 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -60,6 +60,7 @@ static inline bool excp_is_internal(int excp) FIELD(V7M_CONTROL, NPRIV, 0, 1) FIELD(V7M_CONTROL, SPSEL, 1, 1) FIELD(V7M_CONTROL, FPCA, 2, 1) +FIELD(V7M_CONTROL, SFPA, 3, 1) /* Bit definitions for v7M exception return payload */ FIELD(V7M_EXCRET, ES, 0, 1) diff --git a/target/arm/helper.c b/target/arm/helper.c index 9cc881e..47c5767 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -5897,6 +5897,12 @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) g_assert_not_reached(); } +void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) +{ + /* translate.c should never generate calls here in user-only mode */ + g_assert_not_reached(); +} + void switch_mode(CPUARMState *env, int mode) { ARMCPU *cpu = arm_env_get_cpu(env); @@ -6189,6 +6195,59 @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) env->regs[15] = dest & ~1; } +void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) +{ + /* Handle v7M BLXNS: + * - bit 0 of the destination address is the target security state + */ + + /* At this point regs[15] is the address just after the BLXNS */ + uint32_t nextinst = env->regs[15] | 1; + uint32_t sp = env->regs[13] - 8; + uint32_t saved_psr; + + /* translate.c will have made BLXNS UNDEF unless we're secure */ + assert(env->v7m.secure); + + if (dest & 1) { + /* target is Secure, so this is just a normal BLX, + * except that the low bit doesn't indicate Thumb/not. + */ + env->regs[14] = nextinst; + env->thumb = 1; + env->regs[15] = dest & ~1; + return; + } + + /* Target is non-secure: first push a stack frame */ + if (!QEMU_IS_ALIGNED(sp, 8)) { + qemu_log_mask(LOG_GUEST_ERROR, + "BLXNS with misaligned SP is UNPREDICTABLE\n"); + } + + saved_psr = env->v7m.exception; + if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) { + saved_psr |= XPSR_SFPA; + } + + /* Note that these stores can throw exceptions on MPU faults */ + cpu_stl_data(env, sp, nextinst); + cpu_stl_data(env, sp + 4, saved_psr); + + env->regs[13] = sp; + env->regs[14] = 0xfeffffff; + if (arm_v7m_is_handler_mode(env)) { + /* Write a dummy value to IPSR, to avoid leaking the current secure + * exception number to non-secure code. This is guaranteed not + * to cause write_v7m_exception() to actually change stacks. 
+ */ + write_v7m_exception(env, 1); + } + switch_v7m_security_state(env, 0); + env->thumb = 1; + env->regs[15] = dest; +} + static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, bool spsel) { diff --git a/target/arm/translate.c b/target/arm/translate.c index e1b83b7..933a52f 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -1017,6 +1017,20 @@ static inline void gen_bxns(DisasContext *s, int rm) s->base.is_jmp = DISAS_EXIT; } +static inline void gen_blxns(DisasContext *s, int rm) +{ + TCGv_i32 var = load_reg(s, rm); + + /* We don't need to sync condexec state, for the same reason as bxns. + * We do however need to set the PC, because the blxns helper reads it. + * The blxns helper may throw an exception. + */ + gen_set_pc_im(s, s->pc); + gen_helper_v7m_blxns(cpu_env, var); + tcg_temp_free_i32(var); + s->base.is_jmp = DISAS_EXIT; +} + /* Variant of store_reg which uses branch&exchange logic when storing to r15 in ARM architecture v7 and above. The source must be a temporary and will be marked as dead. */ @@ -11225,8 +11239,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) goto undef; } if (link) { - /* BLXNS: not yet implemented */ - goto undef; + gen_blxns(s, rm); } else { gen_bxns(s, rm); } From patchwork Mon Oct 9 13:48:34 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 115235 Delivered-To: patches@linaro.org Received: by 10.80.163.170 with SMTP id s39csp2401579edb; Mon, 9 Oct 2017 06:48:41 -0700 (PDT) X-Google-Smtp-Source: AOwi7QAXQ37Udln7g5XgKO5UA8+FO6RxursmIljFH9Ky0QMSw5vFricIVeZIZkfEx3/7fUEI/b3o X-Received: by 10.28.34.3 with SMTP id i3mr9277378wmi.94.1507556921362; Mon, 09 Oct 2017 06:48:41 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1507556921; cv=none; d=google.com; s=arc-20160816; b=BZs/JE/rSo2MfmNRdLq6LHNlOrJxADApXoX1i0pMSX11ZFluzao9xGRTF+gYnKdgrL 5ZWlWQ2+6la9z+pts9Ajhd+rVWyG07D1YTMEGmooeDZFMYhluJQDGOSkCrE4iSzy5iw3 GOdMK6ti36Vjnezg09YU2Kb+pI6lZVaMt1ptlziJ1bC0i23X6M7G+gxIa5Jk3JVtN9Bt s3D6X01GNMwNswf+TifrmfBx1dTKcDm1AsaSrwjxpGFpYdPfh2G/6K4198cGXrFjhEkx kMvefXDl+08idhZfSbbPUfeW6/gwz6RO1I6ei5xjACwR62gbFi+IidWYNOb4Bo4IKKpQ q1Kg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:arc-authentication-results; bh=Ltn1xSOSWK0PIYfYewNYnY+wyTw2NDsM2ASRuRl31M8=; b=fiMFA432xi94gvVwUErfTIg0gAssdcSybrLTTcGNdkSJc/qwQbaEGGuIMDuL4Ff284 As1JCNw6q3kIKV0+zZ3pefye0l0fTKCWxIFgG/SZ5lAcUS0sw8jVzbWz95ageucaJ0vf X4zilToGEab1GuEnbf9xQNdegWijMrG6irYHXrM/xEr44ow2LGgg2Ua3XnqTDUI1eX+4 mt2YBcw2kUvlugzk+QwwVkGOipbcO9a5/GKsnk74KDAMpt5V4e/Tm3oP+OXvEBXvhGPk 7g889JimVpDhiPM8/66a5XwYNxyGBeTUD3PzRc2ol4R+sWQsDPrRskSn/4nNYS4c6n8O IWgw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id t83si5302560wmb.38.2017.10.09.06.48.41 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 09 Oct 2017 06:48:41 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1e1YQC-0004XX-Oz; Mon, 09 Oct 2017 14:48:40 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: Richard Henderson , patches@linaro.org Subject: [PATCH 4/9] target/arm: Implement secure function return Date: Mon, 9 Oct 2017 14:48:34 +0100 Message-Id: <1507556919-24992-5-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> MIME-Version: 1.0 Secure function return happens when a non-secure function has been called using BLXNS and so has a particular magic LR value (either 0xfefffffe or 0xfeffffff). The function return via BX behaves specially when the new PC value is this magic value, in the same way that exception returns are handled. Adjust our BX excret guards so that they recognize the function return magic number as well, and perform the function-return unstacking in do_v7m_exception_exit(). Signed-off-by: Peter Maydell Acked-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/arm/internals.h | 7 +++ target/arm/helper.c | 115 +++++++++++++++++++++++++++++++++++++++++++++---- target/arm/translate.c | 14 +++++- 3 files changed, 126 insertions(+), 10 deletions(-) -- 2.7.4 diff --git a/target/arm/internals.h b/target/arm/internals.h index 1746737..43106a2 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -72,6 +72,13 @@ FIELD(V7M_EXCRET, DCRS, 5, 1) FIELD(V7M_EXCRET, S, 6, 1) FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */ +/* Minimum value which is a magic number for exception return */ +#define EXC_RETURN_MIN_MAGIC 0xff000000 +/* Minimum number which is a magic number for function or exception return + * when using v8M security extension + */ +#define FNC_RETURN_MIN_MAGIC 0xfefffffe + /* We use a few fake FSR values for internal purposes in M profile. * M profile cores don't have A/R format FSRs, but currently our * get_phys_addr() code assumes A/R profile and reports failures via diff --git a/target/arm/helper.c b/target/arm/helper.c index 47c5767..96113fe 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6174,7 +6174,17 @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) * - if the return value is a magic value, do exception return (like BX) * - otherwise bit 0 of the return value is the target security state */ - if (dest >= 0xff000000) { + uint32_t min_magic; + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + /* Covers FNC_RETURN and EXC_RETURN magic */ + min_magic = FNC_RETURN_MIN_MAGIC; + } else { + /* EXC_RETURN magic only */ + min_magic = EXC_RETURN_MIN_MAGIC; + } + + if (dest >= min_magic) { /* This is an exception return magic value; put it where * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT. 
* Note that if we ever add gen_ss_advance() singlestep support to @@ -6470,12 +6480,19 @@ static void do_v7m_exception_exit(ARMCPU *cpu) bool exc_secure = false; bool return_to_secure; - /* We can only get here from an EXCP_EXCEPTION_EXIT, and - * gen_bx_excret() enforces the architectural rule - * that jumps to magic addresses don't have magic behaviour unless - * we're in Handler mode (compare pseudocode BXWritePC()). + /* If we're not in Handler mode then jumps to magic exception-exit + * addresses don't have magic behaviour. However for the v8M + * security extensions the magic secure-function-return has to + * work in thread mode too, so to avoid doing an extra check in + * the generated code we allow exception-exit magic to also cause the + * internal exception and bring us here in thread mode. Correct code + * will never try to do this (the following insn fetch will always + * fault) so we the overhead of having taken an unnecessary exception + * doesn't matter. */ - assert(arm_v7m_is_handler_mode(env)); + if (!arm_v7m_is_handler_mode(env)) { + return; + } /* In the spec pseudocode ExceptionReturn() is called directly * from BXWritePC() and gets the full target PC value including @@ -6765,6 +6782,78 @@ static void do_v7m_exception_exit(ARMCPU *cpu) qemu_log_mask(CPU_LOG_INT, "...successful exception return\n"); } +static bool do_v7m_function_return(ARMCPU *cpu) +{ + /* v8M security extensions magic function return. + * We may either: + * (1) throw an exception (longjump) + * (2) return true if we successfully handled the function return + * (3) return false if we failed a consistency check and have + * pended a UsageFault that needs to be taken now + * + * At this point the magic return value is split between env->regs[15] + * and env->thumb. We don't bother to reconstitute it because we don't + * need it (all values are handled the same way). + */ + CPUARMState *env = &cpu->env; + uint32_t newpc, newpsr, newpsr_exc; + + qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n"); + + { + bool threadmode, spsel; + TCGMemOpIdx oi; + ARMMMUIdx mmu_idx; + uint32_t *frame_sp_p; + uint32_t frameptr; + + /* Pull the return address and IPSR from the Secure stack */ + threadmode = !arm_v7m_is_handler_mode(env); + spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK; + + frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel); + frameptr = *frame_sp_p; + + /* These loads may throw an exception (for MPU faults). We want to + * do them as secure, so work out what MMU index that is. 
+ */ + mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); + oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx)); + newpc = helper_le_ldul_mmu(env, frameptr, oi, 0); + newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0); + + /* Consistency checks on new IPSR */ + newpsr_exc = newpsr & XPSR_EXCP; + if (!((env->v7m.exception == 0 && newpsr_exc == 0) || + (env->v7m.exception == 1 && newpsr_exc != 0))) { + /* Pend the fault and tell our caller to take it */ + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, + "...taking INVPC UsageFault: " + "IPSR consistency check failed\n"); + return false; + } + + *frame_sp_p = frameptr + 8; + } + + /* This invalidates frame_sp_p */ + switch_v7m_security_state(env, true); + env->v7m.exception = newpsr_exc; + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; + if (newpsr & XPSR_SFPA) { + env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK; + } + xpsr_write(env, 0, XPSR_IT); + env->thumb = newpc & 1; + env->regs[15] = newpc & ~1; + + qemu_log_mask(CPU_LOG_INT, "...function return successful\n"); + return true; +} + static void arm_log_exception(int idx) { if (qemu_loglevel_mask(CPU_LOG_INT)) { @@ -7049,8 +7138,18 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) case EXCP_IRQ: break; case EXCP_EXCEPTION_EXIT: - do_v7m_exception_exit(cpu); - return; + if (env->regs[15] < EXC_RETURN_MIN_MAGIC) { + /* Must be v8M security extension function return */ + assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + if (do_v7m_function_return(cpu)) { + return; + } + } else { + do_v7m_exception_exit(cpu); + return; + } + break; default: cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); return; /* Never happens. Keep compiler happy. */ diff --git a/target/arm/translate.c b/target/arm/translate.c index 933a52f..58d706c 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -964,7 +964,8 @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var) * s->base.is_jmp that we need to do the rest of the work later. */ gen_bx(s, var); - if (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M)) { + if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY) || + (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M))) { s->base.is_jmp = DISAS_BX_EXCRET; } } @@ -973,9 +974,18 @@ static inline void gen_bx_excret_final_code(DisasContext *s) { /* Generate the code to finish possible exception return and end the TB */ TCGLabel *excret_label = gen_new_label(); + uint32_t min_magic; + + if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) { + /* Covers FNC_RETURN and EXC_RETURN magic */ + min_magic = FNC_RETURN_MIN_MAGIC; + } else { + /* EXC_RETURN magic only */ + min_magic = EXC_RETURN_MIN_MAGIC; + } /* Is the new PC value in the magic range indicating exception return? 
*/ - tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], 0xff000000, excret_label); + tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label); /* No: end the TB as we would for a DISAS_JMP */ if (is_singlestepping(s)) { gen_singlestep_exception(s);
From patchwork Mon Oct 9 13:48:35 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 115234
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: Richard Henderson, patches@linaro.org
Subject: [PATCH 5/9] target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
Date: Mon, 9 Oct 2017 14:48:35 +0100
Message-Id: <1507556919-24992-6-git-send-email-peter.maydell@linaro.org>
In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>
References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>

The code which implements the Thumb1 split BL/BLX instructions is guarded by a check on "not M or THUMB2".
All we really need to check here is "not THUMB2" (and we assume that elsewhere too, eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns). This doesn't change behaviour because all M profile cores have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2. (v6M implements a very restricted subset of Thumb2, but we can cross that bridge when we get to it with appropriate feature bits.) Signed-off-by: Peter Maydell --- target/arm/translate.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/translate.c b/target/arm/translate.c index 58d706c..f5ca87f 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -9722,8 +9722,7 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw int conds; int logic_cc; - if (!(arm_dc_feature(s, ARM_FEATURE_THUMB2) - || arm_dc_feature(s, ARM_FEATURE_M))) { + if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) { /* Thumb-1 cores may need to treat bl and blx as a pair of 16-bit instructions to get correct prefetch abort behavior. */ insn = insn_hw1; From patchwork Mon Oct 9 13:48:36 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 115236 Delivered-To: patches@linaro.org Received: by 10.80.163.170 with SMTP id s39csp2401609edb; Mon, 9 Oct 2017 06:48:42 -0700 (PDT) X-Google-Smtp-Source: AOwi7QBCe2pYEzfugv0YquqeDYZmjtdyHHSd/1WhylM5alx7XHWd1iegP5fBwoxmrtmcDkcPdCev X-Received: by 10.223.165.65 with SMTP id j1mr3124356wrb.206.1507556922727; Mon, 09 Oct 2017 06:48:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1507556922; cv=none; d=google.com; s=arc-20160816; b=GJOQCcWUAz3LbEjqwUDbcRU91jv8EueS4BFYvmzK6JadEKF6LMTiciHhLGHkU2c+9Z u5Yh3IKj+j95dTqVoi4PB/t4J4m/GBA3RGg0HbdJrp34fq/nZRJzOh9RzyR8CigatDh3 7CGe91o6nTqL01Kalc3a2WOJiQ3AGDrZyth2PHKS3XMn9+dR0RD8lqSYZPTIFAzmK82H BN428Px0lJ3OcwzLKKkqy8vLDQjcpZrPRYSnqqOrZbZvtyogE7q+SRkGmh6oSRt4DWqm lqVcjnybb6++a27h+zQvDPLmXXddLvj/Vj77kCYOCpSpOGNxoaLqZxXQfEyYKCEJG482 Tf+w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=vkZ/5cqPi88upaUh4IdVsehnid4e3jomyHY/bFZ9SJw=; b=C5s8Fw/e2Rdu1iFAIFiQLJ5uKwKbi+9eFIvTMA8z/guxGs6VFYwn2syRVo+BgGrqS2 1KGyObfo1oijhxn3DPm3zeSSWgxGDjf3/EdUjrrmA80Xi3yl/Y+8f157chpo8Go94/kU WAqs6Id2+5+gw/YWUJpPuRJUcyon7L5Wg0KstxwTftfzhcgYqpSzL5r2K9CDOcK7YWH3 Usg5+o5LW182uMfplRFnEzlnsANKSqXefDxCTkkAosPlmt12ojhkDCv3x2xt20JWrtTA 1CpOQJ94zc2ZnvRpDttSQ3jVbkydA0cI/zDSJkp7qp3mv7wmlulh1+QmJmDke8MjJ1eI dx8Q== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id 17si7079498wmf.236.2017.10.09.06.48.42 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 09 Oct 2017 06:48:42 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1e1YQE-0004Z0-6G; Mon, 09 Oct 2017 14:48:42 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: Richard Henderson , patches@linaro.org Subject: [PATCH 6/9] target/arm: Pull Thumb insn word loads up to top level Date: Mon, 9 Oct 2017 14:48:36 +0100 Message-Id: <1507556919-24992-7-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> Refactor the Thumb decode to do the loads of the instruction words at the top level rather than only loading the second half of a 32-bit Thumb insn in the middle of the decode. This is simple apart from the awkward case of Thumb1, where the BL/BLX prefix and suffix instructions live in what in Thumb2 is the 32-bit insn space. To handle these we decode enough to identify whether we're looking at a prefix/suffix that we handle as a 16 bit insn, or a prefix that we're going to merge with the following suffix to consider as a 32 bit insn. The translation of the 16 bit cases then moves from disas_thumb2_insn() to disas_thumb_insn(). The refactoring has the benefit that we don't need to pass the CPUARMState* down into the decoder code any more, but the major reason for doing this is that some Thumb instructions must be always unconditional regardless of the IT state bits, so we need to know the whole insn before we emit the "skip this insn if the IT bits and cond state tell us to" code. (The always unconditional insns are BKPT, HLT and SG; the last of these is 32 bits.) Signed-off-by: Peter Maydell --- target/arm/translate.c | 178 ++++++++++++++++++++++++++++++------------------- 1 file changed, 108 insertions(+), 70 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/translate.c b/target/arm/translate.c index f5ca87f..8d3203e 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -9623,6 +9623,44 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn) } } +static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn) +{ + /* Return true if this is a 16 bit instruction. We must be precise + * about this (matching the decode). We assume that s->pc still + * points to the first 16 bits of the insn. + */ + if ((insn >> 11) < 0x1d) { + /* Definitely a 16-bit instruction */ + return true; + } + + /* Top five bits 0b11101 / 0b11110 / 0b11111 : this is the + * first half of a 32-bit Thumb insn. Thumb-1 cores might + * end up actually treating this as two 16-bit insns, though, + * if it's half of a bl/blx pair that might span a page boundary. + */ + if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) { + /* Thumb2 cores (including all M profile ones) always treat + * 32-bit insns as 32-bit. 
+ */ + return false; + } + + if ((insn >> 11) == 0x1e && (s->pc < s->next_page_start - 3)) { + /* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix, and the suffix + * is not on the next page; we merge this into a 32-bit + * insn. + */ + return false; + } + /* 0b1110_1xxx_xxxx_xxxx : BLX suffix (or UNDEF); + * 0b1111_1xxx_xxxx_xxxx : BL suffix; + * 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix on the end of a page + * -- handle as single 16 bit insn + */ + return true; +} + /* Return true if this is a Thumb-2 logical op. */ static int thumb2_logic_op(int op) @@ -9708,9 +9746,9 @@ gen_thumb2_data_op(DisasContext *s, int op, int conds, uint32_t shifter_out, /* Translate a 32-bit thumb instruction. Returns nonzero if the instruction is not legal. */ -static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw1) +static int disas_thumb2_insn(DisasContext *s, uint32_t insn) { - uint32_t insn, imm, shift, offset; + uint32_t imm, shift, offset; uint32_t rd, rn, rm, rs; TCGv_i32 tmp; TCGv_i32 tmp2; @@ -9722,51 +9760,9 @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw int conds; int logic_cc; - if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) { - /* Thumb-1 cores may need to treat bl and blx as a pair of - 16-bit instructions to get correct prefetch abort behavior. */ - insn = insn_hw1; - if ((insn & (1 << 12)) == 0) { - ARCH(5); - /* Second half of blx. */ - offset = ((insn & 0x7ff) << 1); - tmp = load_reg(s, 14); - tcg_gen_addi_i32(tmp, tmp, offset); - tcg_gen_andi_i32(tmp, tmp, 0xfffffffc); - - tmp2 = tcg_temp_new_i32(); - tcg_gen_movi_i32(tmp2, s->pc | 1); - store_reg(s, 14, tmp2); - gen_bx(s, tmp); - return 0; - } - if (insn & (1 << 11)) { - /* Second half of bl. */ - offset = ((insn & 0x7ff) << 1) | 1; - tmp = load_reg(s, 14); - tcg_gen_addi_i32(tmp, tmp, offset); - - tmp2 = tcg_temp_new_i32(); - tcg_gen_movi_i32(tmp2, s->pc | 1); - store_reg(s, 14, tmp2); - gen_bx(s, tmp); - return 0; - } - if ((s->pc & ~TARGET_PAGE_MASK) == 0) { - /* Instruction spans a page boundary. Implement it as two - 16-bit instructions in case the second half causes an - prefetch abort. */ - offset = ((int32_t)insn << 21) >> 9; - tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + offset); - return 0; - } - /* Fall through to 32-bit decode. */ - } - - insn = arm_lduw_code(env, s->pc, s->sctlr_b); - s->pc += 2; - insn |= (uint32_t)insn_hw1 << 16; - + /* The only 32 bit insn that's allowed for Thumb1 is the combined + * BL/BLX prefix and suffix. + */ if ((insn & 0xf800e800) != 0xf000e800) { ARCH(6T2); } @@ -11081,27 +11077,15 @@ illegal_op: return 1; } -static void disas_thumb_insn(CPUARMState *env, DisasContext *s) +static void disas_thumb_insn(DisasContext *s, uint32_t insn) { - uint32_t val, insn, op, rm, rn, rd, shift, cond; + uint32_t val, op, rm, rn, rd, shift, cond; int32_t offset; int i; TCGv_i32 tmp; TCGv_i32 tmp2; TCGv_i32 addr; - if (s->condexec_mask) { - cond = s->condexec_cond; - if (cond != 0x0e) { /* Skip conditional when condition is AL. 
*/ - s->condlabel = gen_new_label(); - arm_gen_test_cc(cond ^ 1, s->condlabel); - s->condjmp = 1; - } - } - - insn = arm_lduw_code(env, s->pc, s->sctlr_b); - s->pc += 2; - switch (insn >> 12) { case 0: case 1: @@ -11832,8 +11816,21 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) case 14: if (insn & (1 << 11)) { - if (disas_thumb2_insn(env, s, insn)) - goto undef32; + /* thumb_insn_is_16bit() ensures we can't get here for + * a Thumb2 CPU, so this must be a thumb1 split BL/BLX: + * 0b1110_1xxx_xxxx_xxxx : BLX suffix (or UNDEF) + */ + assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2)); + ARCH(5); + offset = ((insn & 0x7ff) << 1); + tmp = load_reg(s, 14); + tcg_gen_addi_i32(tmp, tmp, offset); + tcg_gen_andi_i32(tmp, tmp, 0xfffffffc); + + tmp2 = tcg_temp_new_i32(); + tcg_gen_movi_i32(tmp2, s->pc | 1); + store_reg(s, 14, tmp2); + gen_bx(s, tmp); break; } /* unconditional branch */ @@ -11844,15 +11841,30 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) break; case 15: - if (disas_thumb2_insn(env, s, insn)) - goto undef32; + /* thumb_insn_is_16bit() ensures we can't get here for + * a Thumb2 CPU, so this must be a thumb1 split BL/BLX. + */ + assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2)); + + if (insn & (1 << 11)) { + /* 0b1111_1xxx_xxxx_xxxx : BL suffix */ + offset = ((insn & 0x7ff) << 1) | 1; + tmp = load_reg(s, 14); + tcg_gen_addi_i32(tmp, tmp, offset); + + tmp2 = tcg_temp_new_i32(); + tcg_gen_movi_i32(tmp2, s->pc | 1); + store_reg(s, 14, tmp2); + gen_bx(s, tmp); + } else { + /* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix */ + uint32_t uoffset = ((int32_t)insn << 21) >> 9; + + tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + uoffset); + } break; } return; -undef32: - gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(), - default_exception_el(s)); - return; illegal_op: undef: gen_exception_insn(s, 2, EXCP_UDEF, syn_uncategorized(), @@ -12122,12 +12134,38 @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu) { DisasContext *dc = container_of(dcbase, DisasContext, base); CPUARMState *env = cpu->env_ptr; + uint32_t insn; + bool is_16bit; if (arm_pre_translate_insn(dc)) { return; } - disas_thumb_insn(env, dc); + insn = arm_lduw_code(env, dc->pc, dc->sctlr_b); + is_16bit = thumb_insn_is_16bit(dc, insn); + dc->pc += 2; + if (!is_16bit) { + uint32_t insn2 = arm_lduw_code(env, dc->pc, dc->sctlr_b); + + insn = insn << 16 | insn2; + dc->pc += 2; + } + + if (dc->condexec_mask) { + uint32_t cond = dc->condexec_cond; + + if (cond != 0x0e) { /* Skip conditional when condition is AL. */ + dc->condlabel = gen_new_label(); + arm_gen_test_cc(cond ^ 1, dc->condlabel); + dc->condjmp = 1; + } + } + + if (is_16bit) { + disas_thumb_insn(dc, insn); + } else { + disas_thumb2_insn(dc, insn); + } /* Advance the Thumb condexec condition. 
*/ if (dc->condexec_mask) { From patchwork Mon Oct 9 13:48:37 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 115240 Delivered-To: patches@linaro.org Received: by 10.80.163.170 with SMTP id s39csp2401684edb; Mon, 9 Oct 2017 06:48:47 -0700 (PDT) X-Google-Smtp-Source: AOwi7QAs5lEfIdySp+3HAzutafsgqHVb25llCe9Foi/sOZaOhN6qr+l1UIUtodoi10nnAL5LOO4q X-Received: by 10.99.147.69 with SMTP id w5mr6277449pgm.401.1507556927592; Mon, 09 Oct 2017 06:48:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1507556927; cv=none; d=google.com; s=arc-20160816; b=WEU11FJUx9ZFhMprKjc8Jvd9CsUU8ydKsjQbazLMFqNIlPyIhaO+I8rucOAqPOizWg hMZxahJVUn/2Nza9t/OTS4+nT4UC/hePf0cTV7eIC3/Dg9AiHwfSFc8wTpSsyMG4yjB9 5JIsXmNCjtE+kP7io6/K3d/CkueTkFEAJZlkpaJiQy4Iwq1vsawhXdq0wiKlRmUQmYJh ZSFlvTQqKbc4Ivsd7WmvNrYVEewNNqNey+UvFx2ctNoBlDAXZKv9AdWWOip2BmWaMrRs ewoCOqPEi1yiEDXXIG7jhpX8Akm5kSBjKAriqYueNfg2W41UpAJDiiVXVG7dIG7DFNeY VNXw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=HfkvtX/5ex7btrwBCHOLOHxshPqOEyQgHN6qs1CgxiQ=; b=eXR/NbKKy5Q4Up/bUXjoJMIqS9Uqgg4OMSNdYWKzz9BjsaakSULL5AXXxPNsS3mgpB hc9VN+511o2cMi99cD5MyDBiMOgJB4AVOOCOwIJg644OOHqzBRJCi9ytCuZZVrCDDYMI e5FiUS/TiN08qevs3NX8kd7fqVX+DAHqfnkChi4xeu577l74CO4OBdCU8vdnG6QrV9sE LaIVyB1TaYgLMs+ztWSh2VWyygGWdWCNE52WD+0oyM46MmvdHVNZb2+1QY9dXKDqLgJ1 KvbIdVcI3MYI/9DYEpfuUPZrudfv4UkHf8Dhwn/xK0WYK4YGQdkAeUOvr8qQ5tMnIC+Q 1N8Q== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id 201si6292061pga.453.2017.10.09.06.48.46 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 09 Oct 2017 06:48:47 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1e1YQE-0004ZG-UP; Mon, 09 Oct 2017 14:48:42 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: Richard Henderson , patches@linaro.org Subject: [PATCH 7/9] target-arm: Simplify insn_crosses_page() Date: Mon, 9 Oct 2017 14:48:37 +0100 Message-Id: <1507556919-24992-8-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org> Recent changes have left insn_crosses_page() more complicated than it needed to be: * it's only called from thumb_tr_translate_insn() so we know for certain that we're looking at a Thumb insn * the caller's check for dc->pc >= dc->next_page_start - 3 means that dc->pc can't possibly be 4 aligned, so there's no need to check that (the check was partly there to ensure that we didn't treat an ARM insn as Thumb, I think) * we now have thumb_insn_is_16bit() which lets us do a precise check of the length of the next insn, rather than opencoding an inaccurate check Simplify it down to just loading the first half of the insn and calling thumb_insn_is_16bit() on it. Signed-off-by: Peter Maydell --- target/arm/translate.c | 27 ++++++--------------------- 1 file changed, 6 insertions(+), 21 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/translate.c b/target/arm/translate.c index 8d3203e..5838e67 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -11875,29 +11875,14 @@ static bool insn_crosses_page(CPUARMState *env, DisasContext *s) { /* Return true if the insn at dc->pc might cross a page boundary. * (False positives are OK, false negatives are not.) + * We know this is a Thumb insn, and our caller ensures we are + * only called if dc->pc is less than 4 bytes from the page + * boundary, so we cross the page if the first 16 bits indicate + * that this is a 32 bit insn. */ - uint16_t insn; + uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b); - if ((s->pc & 3) == 0) { - /* At a 4-aligned address we can't be crossing a page */ - return false; - } - - /* This must be a Thumb insn */ - insn = arm_lduw_code(env, s->pc, s->sctlr_b); - - if ((insn >> 11) >= 0x1d) { - /* Top five bits 0b11101 / 0b11110 / 0b11111 : this is the - * First half of a 32-bit Thumb insn. Thumb-1 cores might - * end up actually treating this as two 16-bit insns (see the - * code at the start of disas_thumb2_insn()) but we don't bother - * to check for that as it is unlikely, and false positives here - * are harmless. - */ - return true; - } - /* Definitely a 16-bit insn, can't be crossing a page. 
*/ - return false; + return !thumb_insn_is_16bit(s, insn); } static int arm_tr_init_disas_context
From patchwork Mon Oct 9 13:48:38 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 115238
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: Richard Henderson, patches@linaro.org
Subject: [PATCH 8/9] target/arm: Support some Thumb insns being always unconditional
Date: Mon, 9 Oct 2017 14:48:38 +0100
Message-Id: <1507556919-24992-9-git-send-email-peter.maydell@linaro.org>
In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>
References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>

A few Thumb instructions are always unconditional even inside an IT block (as opposed to being UNPREDICTABLE if used inside an IT block): BKPT, the v8M SG instruction, and the A profile HLT (debug halt) instruction.
This means we need to suppress the jump-over-instruction-on-condfail code generation (though the IT state still advances as usual and subsequent insns in the IT block may be conditional). Signed-off-by: Peter Maydell --- target/arm/translate.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 47 insertions(+), 1 deletion(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/translate.c b/target/arm/translate.c index 5838e67..9d16760 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -12115,6 +12115,52 @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu) in init_disas_context by adjusting max_insns. */ } +static bool thumb_insn_is_unconditional(DisasContext *s, uint32_t insn) +{ + /* Return true if this Thumb insn is always unconditional, + * even inside an IT block. This is true of only a very few + * instructions: BKPT, HLT, and SG. + * + * A larger class of instructions are UNPREDICTABLE if used + * inside an IT block; we do not need to detect those here, because + * what we do by default (perform the cc check and update the IT + * bits state machine) is a permitted CONSTRAINED UNPREDICTABLE + * choice for those situations. + * + * insn is either a 16-bit or a 32-bit instruction; the two are + * distinguishable because for the 16-bit case the top 16 bits + * are zeroes, and that isn't a valid 32-bit encoding. + */ + if ((insn & 0xffffff00) == 0xbe00) { + /* BKPT */ + return true; + } + + if ((insn & 0xffffffc0) == 0xba80 && arm_dc_feature(s, ARM_FEATURE_V8) && + !arm_dc_feature(s, ARM_FEATURE_M)) { + /* HLT: v8A only. This is unconditional even when it is going to + * UNDEF; see the v8A ARM ARM DDI0487B.a H3.3. + * For v7 cores this was a plain old undefined encoding and so + * honours its cc check. (We might be using the encoding as + * a semihosting trap, but we don't change the cc check behaviour + * on that account, because a debugger connected to a real v7A + * core and emulating semihosting traps by catching the UNDEF + * exception would also only see cases where the cc check passed. + * No guest code should be trying to do a HLT semihosting trap + * in an IT block anyway. + */ + return true; + } + + if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_V8) && + arm_dc_feature(s, ARM_FEATURE_M)) { + /* SG: v8M only */ + return true; + } + + return false; +} + static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu) { DisasContext *dc = container_of(dcbase, DisasContext, base); @@ -12136,7 +12182,7 @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu) dc->pc += 2; } - if (dc->condexec_mask) { + if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) { uint32_t cond = dc->condexec_cond; if (cond != 0x0e) { /* Skip conditional when condition is AL. 
*/
From patchwork Mon Oct 9 13:48:39 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 115239
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: Richard Henderson, patches@linaro.org
Subject: [PATCH 9/9] target/arm: Implement SG instruction corner cases
Date: Mon, 9 Oct 2017 14:48:39 +0100
Message-Id: <1507556919-24992-10-git-send-email-peter.maydell@linaro.org>
In-Reply-To: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>
References: <1507556919-24992-1-git-send-email-peter.maydell@linaro.org>

The common situation of the SG instruction is that it is executed from S&NSC memory by a CPU in NS state. That case is handled by v7m_handle_execute_nsc().
However the instruction also has defined behaviour in a couple of other cases: * SG instruction in NS memory (behaves as a NOP) * SG in S memory but CPU already secure (clears IT bits and does nothing else) * SG instruction in v8M without Security Extension (NOP) These can be implemented in translate.c. Signed-off-by: Peter Maydell --- target/arm/translate.c | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/translate.c b/target/arm/translate.c index 9d16760..3db6d73 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -9781,7 +9781,28 @@ static int disas_thumb2_insn(DisasContext *s, uint32_t insn) * - load/store doubleword, load/store exclusive, ldacq/strel, * table branch. */ - if (insn & 0x01200000) { + if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_M) && + arm_dc_feature(s, ARM_FEATURE_V8)) { + /* 0b1110_1001_0111_1111_1110_1001_0111_111 + * - SG (v8M only) + * The bulk of the behaviour for this instruction is implemented + * in v7m_handle_execute_nsc(), which deals with the insn when + * it is executed by a CPU in non-secure state from memory + * which is Secure & NonSecure-Callable. + * Here we only need to handle the remaining cases: + * * in NS memory (including the "security extension not + * implemented" case) : NOP + * * in S memory but CPU already secure (clear IT bits) + * We know that the attribute for the memory this insn is + * in must match the current CPU state, because otherwise + * get_phys_addr_pmsav8 would have generated an exception. + */ + if (s->v8m_secure) { + /* Like the IT insn, we don't need to generate any code */ + s->condexec_cond = 0; + s->condexec_mask = 0; + } + } else if (insn & 0x01200000) { /* 0b1110_1000_x11x_xxxx_xxxx_xxxx_xxxx_xxxx * - load/store dual (post-indexed) * 0b1111_1001_x10x_xxxx_xxxx_xxxx_xxxx_xxxx