From patchwork Fri Dec 14 05:23:52 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 153759
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Date: Thu, 13 Dec 2018 23:23:52 -0600
Message-Id: <20181214052410.11863-10-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181214052410.11863-1-richard.henderson@linaro.org>
References: <20181214052410.11863-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PATCH v2 09/27] target/arm: Move helper_exception_return to helper-a64.c
Cc: peter.maydell@linaro.org

This function is only used by AArch64. Code movement only.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-a64.h |   2 +
 target/arm/helper.h     |   1 -
 target/arm/helper-a64.c | 155 ++++++++++++++++++++++++++++++++++++++++
 target/arm/op_helper.c  | 155 ----------------------------------------
 4 files changed, 157 insertions(+), 156 deletions(-)

-- 
2.17.2

diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
index 28aa0af69d..55299896c4 100644
--- a/target/arm/helper-a64.h
+++ b/target/arm/helper-a64.h
@@ -86,6 +86,8 @@ DEF_HELPER_2(advsimd_f16tosinth, i32, f16, ptr)
 DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
 DEF_HELPER_2(sqrt_f16, f16, f16, ptr)
 
+DEF_HELPER_1(exception_return, void, env)
+
 DEF_HELPER_FLAGS_3(pacia, TCG_CALL_NO_WG, i64, env, i64, i64)
 DEF_HELPER_FLAGS_3(pacib, TCG_CALL_NO_WG, i64, env, i64, i64)
 DEF_HELPER_FLAGS_3(pacda, TCG_CALL_NO_WG, i64, env, i64, i64)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 8c9590091b..53a38188c6 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -79,7 +79,6 @@ DEF_HELPER_2(get_cp_reg64, i64, env, ptr)
 
 DEF_HELPER_3(msr_i_pstate, void, env, i32, i32)
 DEF_HELPER_1(clear_pstate_ss, void, env)
-DEF_HELPER_1(exception_return, void, env)
 
 DEF_HELPER_2(get_r13_banked, i32, env, i32)
 DEF_HELPER_3(set_r13_banked, void, env, i32, i32)
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index bb64700e10..f70c8d9818 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -887,6 +887,161 @@ uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
     return float16_to_uint16(a, fpst);
 }
 
+static int el_from_spsr(uint32_t spsr)
+{
+    /* Return the exception level that this SPSR is requesting a return to,
+     * or -1 if it is invalid (an illegal return)
+     */
+    if (spsr & PSTATE_nRW) {
+        switch (spsr & CPSR_M) {
+        case ARM_CPU_MODE_USR:
+            return 0;
+        case ARM_CPU_MODE_HYP:
+            return 2;
+        case ARM_CPU_MODE_FIQ:
+        case ARM_CPU_MODE_IRQ:
+        case ARM_CPU_MODE_SVC:
+        case ARM_CPU_MODE_ABT:
+        case ARM_CPU_MODE_UND:
+        case ARM_CPU_MODE_SYS:
+            return 1;
+        case ARM_CPU_MODE_MON:
+            /* Returning to Mon from AArch64 is never possible,
+             * so this is an illegal return.
+             */
+        default:
+            return -1;
+        }
+    } else {
+        if (extract32(spsr, 1, 1)) {
+            /* Return with reserved M[1] bit set */
+            return -1;
+        }
+        if (extract32(spsr, 0, 4) == 1) {
+            /* return to EL0 with M[0] bit set */
+            return -1;
+        }
+        return extract32(spsr, 2, 2);
+    }
+}
+
+void HELPER(exception_return)(CPUARMState *env)
+{
+    int cur_el = arm_current_el(env);
+    unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
+    uint32_t spsr = env->banked_spsr[spsr_idx];
+    int new_el;
+    bool return_to_aa64 = (spsr & PSTATE_nRW) == 0;
+
+    aarch64_save_sp(env, cur_el);
+
+    arm_clear_exclusive(env);
+
+    /* We must squash the PSTATE.SS bit to zero unless both of the
+     * following hold:
+     *  1. debug exceptions are currently disabled
+     *  2. singlestep will be active in the EL we return to
+     * We check 1 here and 2 after we've done the pstate/cpsr write() to
+     * transition to the EL we're going to.
+     */
+    if (arm_generate_debug_exceptions(env)) {
+        spsr &= ~PSTATE_SS;
+    }
+
+    new_el = el_from_spsr(spsr);
+    if (new_el == -1) {
+        goto illegal_return;
+    }
+    if (new_el > cur_el
+        || (new_el == 2 && !arm_feature(env, ARM_FEATURE_EL2))) {
+        /* Disallow return to an EL which is unimplemented or higher
+         * than the current one.
+         */
+        goto illegal_return;
+    }
+
+    if (new_el != 0 && arm_el_is_aa64(env, new_el) != return_to_aa64) {
+        /* Return to an EL which is configured for a different register width */
+        goto illegal_return;
+    }
+
+    if (new_el == 2 && arm_is_secure_below_el3(env)) {
+        /* Return to the non-existent secure-EL2 */
+        goto illegal_return;
+    }
+
+    if (new_el == 1 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
+        goto illegal_return;
+    }
+
+    qemu_mutex_lock_iothread();
+    arm_call_pre_el_change_hook(arm_env_get_cpu(env));
+    qemu_mutex_unlock_iothread();
+
+    if (!return_to_aa64) {
+        env->aarch64 = 0;
+        /* We do a raw CPSR write because aarch64_sync_64_to_32()
+         * will sort the register banks out for us, and we've already
+         * caught all the bad-mode cases in el_from_spsr().
+         */
+        cpsr_write(env, spsr, ~0, CPSRWriteRaw);
+        if (!arm_singlestep_active(env)) {
+            env->uncached_cpsr &= ~PSTATE_SS;
+        }
+        aarch64_sync_64_to_32(env);
+
+        if (spsr & CPSR_T) {
+            env->regs[15] = env->elr_el[cur_el] & ~0x1;
+        } else {
+            env->regs[15] = env->elr_el[cur_el] & ~0x3;
+        }
+        qemu_log_mask(CPU_LOG_INT, "Exception return from AArch64 EL%d to "
+                      "AArch32 EL%d PC 0x%" PRIx32 "\n",
+                      cur_el, new_el, env->regs[15]);
+    } else {
+        env->aarch64 = 1;
+        pstate_write(env, spsr);
+        if (!arm_singlestep_active(env)) {
+            env->pstate &= ~PSTATE_SS;
+        }
+        aarch64_restore_sp(env, new_el);
+        env->pc = env->elr_el[cur_el];
+        qemu_log_mask(CPU_LOG_INT, "Exception return from AArch64 EL%d to "
+                      "AArch64 EL%d PC 0x%" PRIx64 "\n",
+                      cur_el, new_el, env->pc);
+    }
+    /*
+     * Note that cur_el can never be 0. If new_el is 0, then
+     * el0_a64 is return_to_aa64, else el0_a64 is ignored.
+     */
+    aarch64_sve_change_el(env, cur_el, new_el, return_to_aa64);
+
+    qemu_mutex_lock_iothread();
+    arm_call_el_change_hook(arm_env_get_cpu(env));
+    qemu_mutex_unlock_iothread();
+
+    return;
+
+illegal_return:
+    /* Illegal return events of various kinds have architecturally
+     * mandated behaviour:
+     *  restore NZCV and DAIF from SPSR_ELx
+     *  set PSTATE.IL
+     *  restore PC from ELR_ELx
+     *  no change to exception level, execution state or stack pointer
+     */
+    env->pstate |= PSTATE_IL;
+    env->pc = env->elr_el[cur_el];
+    spsr &= PSTATE_NZCV | PSTATE_DAIF;
+    spsr |= pstate_read(env) & ~(PSTATE_NZCV | PSTATE_DAIF);
+    pstate_write(env, spsr);
+    if (!arm_singlestep_active(env)) {
+        env->pstate &= ~PSTATE_SS;
+    }
+    qemu_log_mask(LOG_GUEST_ERROR, "Illegal exception return at EL%d: "
+                  "resuming execution at 0x%" PRIx64 "\n", cur_el, env->pc);
+}
+
 /*
  * Square Root and Reciprocal square root
  */
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index ef72361a36..24229981cd 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -1014,161 +1014,6 @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
     }
 }
 
-static int el_from_spsr(uint32_t spsr)
-{
-    /* Return the exception level that this SPSR is requesting a return to,
-     * or -1 if it is invalid (an illegal return)
-     */
-    if (spsr & PSTATE_nRW) {
-        switch (spsr & CPSR_M) {
-        case ARM_CPU_MODE_USR:
-            return 0;
-        case ARM_CPU_MODE_HYP:
-            return 2;
-        case ARM_CPU_MODE_FIQ:
-        case ARM_CPU_MODE_IRQ:
-        case ARM_CPU_MODE_SVC:
-        case ARM_CPU_MODE_ABT:
-        case ARM_CPU_MODE_UND:
-        case ARM_CPU_MODE_SYS:
-            return 1;
-        case ARM_CPU_MODE_MON:
-            /* Returning to Mon from AArch64 is never possible,
-             * so this is an illegal return.
-             */
-        default:
-            return -1;
-        }
-    } else {
-        if (extract32(spsr, 1, 1)) {
-            /* Return with reserved M[1] bit set */
-            return -1;
-        }
-        if (extract32(spsr, 0, 4) == 1) {
-            /* return to EL0 with M[0] bit set */
-            return -1;
-        }
-        return extract32(spsr, 2, 2);
-    }
-}
-
-void HELPER(exception_return)(CPUARMState *env)
-{
-    int cur_el = arm_current_el(env);
-    unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
-    uint32_t spsr = env->banked_spsr[spsr_idx];
-    int new_el;
-    bool return_to_aa64 = (spsr & PSTATE_nRW) == 0;
-
-    aarch64_save_sp(env, cur_el);
-
-    arm_clear_exclusive(env);
-
-    /* We must squash the PSTATE.SS bit to zero unless both of the
-     * following hold:
-     *  1. debug exceptions are currently disabled
-     *  2. singlestep will be active in the EL we return to
-     * We check 1 here and 2 after we've done the pstate/cpsr write() to
-     * transition to the EL we're going to.
-     */
-    if (arm_generate_debug_exceptions(env)) {
-        spsr &= ~PSTATE_SS;
-    }
-
-    new_el = el_from_spsr(spsr);
-    if (new_el == -1) {
-        goto illegal_return;
-    }
-    if (new_el > cur_el
-        || (new_el == 2 && !arm_feature(env, ARM_FEATURE_EL2))) {
-        /* Disallow return to an EL which is unimplemented or higher
-         * than the current one.
-         */
-        goto illegal_return;
-    }
-
-    if (new_el != 0 && arm_el_is_aa64(env, new_el) != return_to_aa64) {
-        /* Return to an EL which is configured for a different register width */
-        goto illegal_return;
-    }
-
-    if (new_el == 2 && arm_is_secure_below_el3(env)) {
-        /* Return to the non-existent secure-EL2 */
-        goto illegal_return;
-    }
-
-    if (new_el == 1 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
-        goto illegal_return;
-    }
-
-    qemu_mutex_lock_iothread();
-    arm_call_pre_el_change_hook(arm_env_get_cpu(env));
-    qemu_mutex_unlock_iothread();
-
-    if (!return_to_aa64) {
-        env->aarch64 = 0;
-        /* We do a raw CPSR write because aarch64_sync_64_to_32()
-         * will sort the register banks out for us, and we've already
-         * caught all the bad-mode cases in el_from_spsr().
-         */
-        cpsr_write(env, spsr, ~0, CPSRWriteRaw);
-        if (!arm_singlestep_active(env)) {
-            env->uncached_cpsr &= ~PSTATE_SS;
-        }
-        aarch64_sync_64_to_32(env);
-
-        if (spsr & CPSR_T) {
-            env->regs[15] = env->elr_el[cur_el] & ~0x1;
-        } else {
-            env->regs[15] = env->elr_el[cur_el] & ~0x3;
-        }
-        qemu_log_mask(CPU_LOG_INT, "Exception return from AArch64 EL%d to "
-                      "AArch32 EL%d PC 0x%" PRIx32 "\n",
-                      cur_el, new_el, env->regs[15]);
-    } else {
-        env->aarch64 = 1;
-        pstate_write(env, spsr);
-        if (!arm_singlestep_active(env)) {
-            env->pstate &= ~PSTATE_SS;
-        }
-        aarch64_restore_sp(env, new_el);
-        env->pc = env->elr_el[cur_el];
-        qemu_log_mask(CPU_LOG_INT, "Exception return from AArch64 EL%d to "
-                      "AArch64 EL%d PC 0x%" PRIx64 "\n",
-                      cur_el, new_el, env->pc);
-    }
-    /*
-     * Note that cur_el can never be 0. If new_el is 0, then
-     * el0_a64 is return_to_aa64, else el0_a64 is ignored.
-     */
-    aarch64_sve_change_el(env, cur_el, new_el, return_to_aa64);
-
-    qemu_mutex_lock_iothread();
-    arm_call_el_change_hook(arm_env_get_cpu(env));
-    qemu_mutex_unlock_iothread();
-
-    return;
-
-illegal_return:
-    /* Illegal return events of various kinds have architecturally
-     * mandated behaviour:
-     *  restore NZCV and DAIF from SPSR_ELx
-     *  set PSTATE.IL
-     *  restore PC from ELR_ELx
-     *  no change to exception level, execution state or stack pointer
-     */
-    env->pstate |= PSTATE_IL;
-    env->pc = env->elr_el[cur_el];
-    spsr &= PSTATE_NZCV | PSTATE_DAIF;
-    spsr |= pstate_read(env) & ~(PSTATE_NZCV | PSTATE_DAIF);
-    pstate_write(env, spsr);
-    if (!arm_singlestep_active(env)) {
-        env->pstate &= ~PSTATE_SS;
-    }
-    qemu_log_mask(LOG_GUEST_ERROR, "Illegal exception return at EL%d: "
-                  "resuming execution at 0x%" PRIx64 "\n", cur_el, env->pc);
-}
-
 /* Return true if the linked breakpoint entry lbn passes its checks */
 static bool linked_bp_matches(ARMCPU *cpu, int lbn)
 {