From patchwork Fri Sep 22 14:59:48 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 114035
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: patches@linaro.org
Subject: [PATCH 01/20] nvic: Clear the vector arrays and prigroup on reset
Date: Fri, 22 Sep 2017 15:59:48 +0100
Message-Id: <1506092407-26985-2-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>
References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>

Reset for devices does not include an automatic clear of the device state (unlike CPU state, where most of the state structure is cleared to zero). Add some missing initialization of NVIC state that meant that the device was left in the wrong state if the guest did a warm reset.
(In particular, since we were resetting the computed state like s->exception_prio but not all the state it was computed from like s->vectors[x].active, the NVIC wound up in an inconsistent state that could later trigger assertion failures.)

Signed-off-by: Peter Maydell
--- hw/intc/armv7m_nvic.c | 5 +++++ 1 file changed, 5 insertions(+) -- 2.7.4
Reviewed-by: Richard Henderson
Reviewed-by: Philippe Mathieu-Daudé

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c index d90d8d0..bc7b66d 100644 --- a/hw/intc/armv7m_nvic.c +++ b/hw/intc/armv7m_nvic.c @@ -1782,6 +1782,11 @@ static void armv7m_nvic_reset(DeviceState *dev) int resetprio; NVICState *s = NVIC(dev); + memset(s->vectors, 0, sizeof(s->vectors)); + memset(s->sec_vectors, 0, sizeof(s->sec_vectors)); + s->prigroup[M_REG_NS] = 0; + s->prigroup[M_REG_S] = 0; + s->vectors[ARMV7M_EXCP_NMI].enabled = 1; /* MEM, BUS, and USAGE are enabled through * the System Handler Control register

From patchwork Fri Sep 22 14:59:49 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 114054
Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk.
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id 67si15183plf.391.2017.09.22.07.59.34 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:35 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQR-00078E-6k; Fri, 22 Sep 2017 15:59:31 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 02/20] target/arm: Don't switch to target stack early in v7M exception return Date: Fri, 22 Sep 2017 15:59:49 +0100 Message-Id: <1506092407-26985-3-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> Currently our M profile exception return code switches to the target stack pointer relatively early in the process, before it tries to pop the exception frame off the stack. This is awkward for v8M for two reasons: * in v8M the process vs main stack pointer is not selected purely by the value of CONTROL.SPSEL, so updating SPSEL and relying on that to switch to the right stack pointer won't work * the stack we should be reading the stack frame from and the stack we will eventually switch to might not be the same if the guest is doing strange things Change our exception return code to use a 'frame pointer' to read the exception frame rather than assuming that we can switch the live stack pointer this early. Signed-off-by: Peter Maydell --- target/arm/helper.c | 127 +++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 95 insertions(+), 32 deletions(-) -- 2.7.4 Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/target/arm/helper.c b/target/arm/helper.c index 8be78ea..f13b99d 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6040,16 +6040,6 @@ static void v7m_push(CPUARMState *env, uint32_t val) stl_phys(cs->as, env->regs[13], val); } -static uint32_t v7m_pop(CPUARMState *env) -{ - CPUState *cs = CPU(arm_env_get_cpu(env)); - uint32_t val; - - val = ldl_phys(cs->as, env->regs[13]); - env->regs[13] += 4; - return val; -} - /* Return true if we're using the process stack pointer (not the MSP) */ static bool v7m_using_psp(CPUARMState *env) { @@ -6141,6 +6131,40 @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) env->regs[15] = dest & ~1; } +static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, + bool spsel) +{ + /* Return a pointer to the location where we currently store the + * stack pointer for the requested security state and thread mode. + * This pointer will become invalid if the CPU state is updated + * such that the stack pointers are switched around (eg changing + * the SPSEL control bit). + * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode(). 
+ * Unlike that pseudocode, we require the caller to pass us in the + * SPSEL control bit value; this is because we also use this + * function in handling of pushing of the callee-saves registers + * part of the v8M stack frame, and in that case the SPSEL bit + * comes from the exception return magic LR value. + */ + bool want_psp = threadmode && spsel; + + if (secure == env->v7m.secure) { + /* Currently switch_v7m_sp switches SP as it updates SPSEL, + * so the SP we want is always in regs[13]. + * When we decouple SPSEL from the actually selected SP + * we need to check want_psp against v7m_using_psp() + * to see whether we need regs[13] or v7m.other_sp. + */ + return &env->regs[13]; + } else { + if (want_psp) { + return &env->v7m.other_ss_psp; + } else { + return &env->v7m.other_ss_msp; + } + } +} + static uint32_t arm_v7m_load_vector(ARMCPU *cpu) { CPUState *cs = CPU(cpu); @@ -6212,6 +6236,7 @@ static void v7m_push_stack(ARMCPU *cpu) static void do_v7m_exception_exit(ARMCPU *cpu) { CPUARMState *env = &cpu->env; + CPUState *cs = CPU(cpu); uint32_t excret; uint32_t xpsr; bool ufault = false; @@ -6219,6 +6244,7 @@ static void do_v7m_exception_exit(ARMCPU *cpu) bool return_to_handler = false; bool rettobase = false; bool exc_secure = false; + bool return_to_secure; /* We can only get here from an EXCP_EXCEPTION_EXIT, and * gen_bx_excret() enforces the architectural rule @@ -6286,6 +6312,9 @@ static void do_v7m_exception_exit(ARMCPU *cpu) g_assert_not_reached(); } + return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && + (excret & R_V7M_EXCRET_S_MASK); + switch (excret & 0xf) { case 1: /* Return to Handler */ return_to_handler = true; @@ -6315,32 +6344,66 @@ static void do_v7m_exception_exit(ARMCPU *cpu) return; } - /* Switch to the target stack. */ + /* Set CONTROL.SPSEL from excret.SPSEL. For QEMU this currently + * causes us to switch the active SP, but we will change this + * later to not do that so we can support v8M. + */ switch_v7m_sp(env, return_to_sp_process); - /* Pop registers. */ - env->regs[0] = v7m_pop(env); - env->regs[1] = v7m_pop(env); - env->regs[2] = v7m_pop(env); - env->regs[3] = v7m_pop(env); - env->regs[12] = v7m_pop(env); - env->regs[14] = v7m_pop(env); - env->regs[15] = v7m_pop(env); - if (env->regs[15] & 1) { - qemu_log_mask(LOG_GUEST_ERROR, - "M profile return from interrupt with misaligned " - "PC is UNPREDICTABLE\n"); - /* Actual hardware seems to ignore the lsbit, and there are several - * RTOSes out there which incorrectly assume the r15 in the stack - * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value. + + { + /* The stack pointer we should be reading the exception frame from + * depends on bits in the magic exception return type value (and + * for v8M isn't necessarily the stack pointer we will eventually + * end up resuming execution with). Get a pointer to the location + * in the CPU state struct where the SP we need is currently being + * stored; we will use and modify it in place. + * We use this limited C variable scope so we don't accidentally + * use 'frame_sp_p' after we do something that makes it invalid. + */ + uint32_t *frame_sp_p = get_v7m_sp_ptr(env, + return_to_secure, + !return_to_handler, + return_to_sp_process); + uint32_t frameptr = *frame_sp_p; + + /* Pop registers. TODO: make these accesses use the correct + * attributes and address space (S/NS, priv/unpriv) and handle + * memory transaction failures. 
*/ - env->regs[15] &= ~1U; + env->regs[0] = ldl_phys(cs->as, frameptr); + env->regs[1] = ldl_phys(cs->as, frameptr + 0x4); + env->regs[2] = ldl_phys(cs->as, frameptr + 0x8); + env->regs[3] = ldl_phys(cs->as, frameptr + 0xc); + env->regs[12] = ldl_phys(cs->as, frameptr + 0x10); + env->regs[14] = ldl_phys(cs->as, frameptr + 0x14); + env->regs[15] = ldl_phys(cs->as, frameptr + 0x18); + if (env->regs[15] & 1) { + qemu_log_mask(LOG_GUEST_ERROR, + "M profile return from interrupt with misaligned " + "PC is UNPREDICTABLE\n"); + /* Actual hardware seems to ignore the lsbit, and there are several + * RTOSes out there which incorrectly assume the r15 in the stack + * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value. + */ + env->regs[15] &= ~1U; + } + xpsr = ldl_phys(cs->as, frameptr + 0x1c); + + /* Commit to consuming the stack frame */ + frameptr += 0x20; + /* Undo stack alignment (the SPREALIGN bit indicates that the original + * pre-exception SP was not 8-aligned and we added a padding word to + * align it, so we undo this by ORing in the bit that increases it + * from the current 8-aligned value to the 8-unaligned value. (Adding 4 + * would work too but a logical OR is how the pseudocode specifies it.) + */ + if (xpsr & XPSR_SPREALIGN) { + frameptr |= 4; + } + *frame_sp_p = frameptr; } - xpsr = v7m_pop(env); + /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */ xpsr_write(env, xpsr, ~XPSR_SPREALIGN); - /* Undo stack alignment. */ - if (xpsr & XPSR_SPREALIGN) { - env->regs[13] |= 4; - } /* The restored xPSR exception field will be zero if we're * resuming in Thread mode. If that doesn't match what the From patchwork Fri Sep 22 14:59:50 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114036 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394797qgf; Fri, 22 Sep 2017 07:59:32 -0700 (PDT) X-Google-Smtp-Source: AOwi7QCRcexxHqAqzSwMjoWeJHm+3OX1W1NAMRA/667bofwBQG4WekfzwsC6JSNN+HNTE7VpjVIS X-Received: by 10.223.146.197 with SMTP id 63mr4837223wrn.180.1506092372463; Fri, 22 Sep 2017 07:59:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092372; cv=none; d=google.com; s=arc-20160816; b=mN92a3VMZTc+xrIMvfVjxOBCWPTuWYVWUUcOU0yaRVuVd0gtzx+9YsJAXUKMUlbNDg N0z9x+HMNEcXesEHFv+IoQXIz+pWeCDVqXUbg1kliR303Xd4SDUY5IOIWTJ8bSbq+92u +h4DAbOYxJE4DdDmJKX5mjtsvVDpg7xzU0V1/la2w2Y6FDj5faypeqRP1PtusTet9dnU fiwd8mYNT1WvgkSGOxoOK0L29YaqgJYCZkjdYATJ+LClfRErJSlmSuZ4NRjfQrkPHNQn 6nMH936jkoQcCPSRgcOL4g3/yCbZMTOQcuBux49z9daCT7RUwJVV97Pc0cWHgNvU3WoF wzRg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=sPxUO8QkNQEaFr91yw3IroTAzLpq/Yayy5FMDNAo8hQ=; b=MD6uJvF8qdBWlFOywKegFXFNiepRv4G2kSxecaEv+MF/nVmYXCHbNYbJ4s0Ci9VqB+ qnfa+U8zfU7CNMNTmQx+85ZsmJf57Kt7cIYOFDpu3h/3DPvuGyWpiKJhUlP2mI1H1XVR fATtbXxY9OY/wTbQgk8sX/QYgb7MPOJViHz2jkb3YA3ew09FNWdIqzT0XasWEwEExO25 EFHQNrVsgxDg4F+yDj0h2faUg1cbfl7t2RPrHIXbq+dY3wplZvzER8C4aVMdfIOtmKxs M98t6SrrWwKkjAlA7saUz06oEP6nIqtqns+85Cp8HuciUmk9O1iDvju8q+HufLuPDjA4 PwPA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id r12si27541wrr.97.2017.09.22.07.59.32 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:32 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQR-00078T-S7; Fri, 22 Sep 2017 15:59:31 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 03/20] target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode Date: Fri, 22 Sep 2017 15:59:50 +0100 Message-Id: <1506092407-26985-4-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> In the v7M architecture, there is an invariant that if the CPU is in Handler mode then the CONTROL.SPSEL bit cannot be nonzero. This in turn means that the current stack pointer is always indicated by CONTROL.SPSEL, even though Handler mode always uses the Main stack pointer. In v8M, this invariant is removed, and CONTROL.SPSEL may now be nonzero in Handler mode (though Handler mode still always uses the Main stack pointer). In preparation for this change, change how we handle this bit: rename switch_v7m_sp() to the now more accurate write_v7m_control_spsel(), and make it check both the handler mode state and the SPSEL bit. Note that this implicitly changes the point at which we switch active SP on exception exit from before we pop the exception frame to after it. Signed-off-by: Peter Maydell --- target/arm/cpu.h | 8 ++++++- hw/intc/armv7m_nvic.c | 2 +- target/arm/helper.c | 65 ++++++++++++++++++++++++++++++++++----------------- 3 files changed, 51 insertions(+), 24 deletions(-) -- 2.7.4 Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 8afceca..ad6eff4 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -991,6 +991,11 @@ void pmccntr_sync(CPUARMState *env); #define PSTATE_MODE_EL1t 4 #define PSTATE_MODE_EL0t 0 +/* Write a new value to v7m.exception, thus transitioning into or out + * of Handler mode; this may result in a change of active stack pointer. + */ +void write_v7m_exception(CPUARMState *env, uint32_t new_exc); + /* Map EL and handler into a PSTATE_MODE. 
*/ static inline unsigned int aarch64_pstate_mode(unsigned int el, bool handler) { @@ -1071,7 +1076,8 @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask) env->condexec_bits |= (val >> 8) & 0xfc; } if (mask & XPSR_EXCP) { - env->v7m.exception = val & XPSR_EXCP; + /* Note that this only happens on exception exit */ + write_v7m_exception(env, val & XPSR_EXCP); } } diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c index bc7b66d..a1041c2 100644 --- a/hw/intc/armv7m_nvic.c +++ b/hw/intc/armv7m_nvic.c @@ -616,7 +616,7 @@ bool armv7m_nvic_acknowledge_irq(void *opaque) vec->active = 1; vec->pending = 0; - env->v7m.exception = s->vectpending; + write_v7m_exception(env, s->vectpending); nvic_irq_update(s); diff --git a/target/arm/helper.c b/target/arm/helper.c index f13b99d..509a1aa 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6052,21 +6052,44 @@ static bool v7m_using_psp(CPUARMState *env) env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK; } -/* Switch to V7M main or process stack pointer. */ -static void switch_v7m_sp(CPUARMState *env, bool new_spsel) +/* Write to v7M CONTROL.SPSEL bit. This may change the current + * stack pointer between Main and Process stack pointers. + */ +static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel) { uint32_t tmp; - uint32_t old_control = env->v7m.control[env->v7m.secure]; - bool old_spsel = old_control & R_V7M_CONTROL_SPSEL_MASK; + bool new_is_psp, old_is_psp = v7m_using_psp(env); + + env->v7m.control[env->v7m.secure] = + deposit32(env->v7m.control[env->v7m.secure], + R_V7M_CONTROL_SPSEL_SHIFT, + R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); + + new_is_psp = v7m_using_psp(env); - if (old_spsel != new_spsel) { + if (old_is_psp != new_is_psp) { tmp = env->v7m.other_sp; env->v7m.other_sp = env->regs[13]; env->regs[13] = tmp; + } +} + +void write_v7m_exception(CPUARMState *env, uint32_t new_exc) +{ + /* Write a new value to v7m.exception, thus transitioning into or out + * of Handler mode; this may result in a change of active stack pointer. + */ + bool new_is_psp, old_is_psp = v7m_using_psp(env); + uint32_t tmp; - env->v7m.control[env->v7m.secure] = deposit32(old_control, - R_V7M_CONTROL_SPSEL_SHIFT, - R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); + env->v7m.exception = new_exc; + + new_is_psp = v7m_using_psp(env); + + if (old_is_psp != new_is_psp) { + tmp = env->v7m.other_sp; + env->v7m.other_sp = env->regs[13]; + env->regs[13] = tmp; } } @@ -6149,13 +6172,11 @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, bool want_psp = threadmode && spsel; if (secure == env->v7m.secure) { - /* Currently switch_v7m_sp switches SP as it updates SPSEL, - * so the SP we want is always in regs[13]. - * When we decouple SPSEL from the actually selected SP - * we need to check want_psp against v7m_using_psp() - * to see whether we need regs[13] or v7m.other_sp. - */ - return &env->regs[13]; + if (want_psp == v7m_using_psp(env)) { + return &env->regs[13]; + } else { + return &env->v7m.other_sp; + } } else { if (want_psp) { return &env->v7m.other_ss_psp; @@ -6198,7 +6219,7 @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr) uint32_t addr; armv7m_nvic_acknowledge_irq(env->nvic); - switch_v7m_sp(env, 0); + write_v7m_control_spsel(env, 0); arm_clear_exclusive(env); /* Clear IT bits */ env->condexec_bits = 0; @@ -6344,11 +6365,11 @@ static void do_v7m_exception_exit(ARMCPU *cpu) return; } - /* Set CONTROL.SPSEL from excret.SPSEL. 
For QEMU this currently - * causes us to switch the active SP, but we will change this - * later to not do that so we can support v8M. + /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in + * Handler mode (and will be until we write the new XPSR.Interrupt + * field) this does not switch around the current stack pointer. */ - switch_v7m_sp(env, return_to_sp_process); + write_v7m_control_spsel(env, return_to_sp_process); { /* The stack pointer we should be reading the exception frame from @@ -9163,11 +9184,11 @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val) case 20: /* CONTROL */ /* Writing to the SPSEL bit only has an effect if we are in * thread mode; other bits can be updated by any privileged code. - * switch_v7m_sp() deals with updating the SPSEL bit in + * write_v7m_control_spsel() deals with updating the SPSEL bit in * env->v7m.control, so we only need update the others. */ if (!arm_v7m_is_handler_mode(env)) { - switch_v7m_sp(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0); + write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0); } env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK; env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK; From patchwork Fri Sep 22 14:59:51 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114037 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394802qgf; Fri, 22 Sep 2017 07:59:33 -0700 (PDT) X-Google-Smtp-Source: AOwi7QCsTCD0oUQcqP73OHwV7qTECNSWejCzIgTYZURYZoPKwpSiuB03SrR5ms/PKVMXn+8fgHQX X-Received: by 10.223.195.37 with SMTP id n34mr4939737wrf.219.1506092373210; Fri, 22 Sep 2017 07:59:33 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092373; cv=none; d=google.com; s=arc-20160816; b=QilRfuuVseE2/QDjE/c8AQPKu7uRZ4w/J0N+vHyYY6NxW5jchY+EhQDDFIAsqsQx/T Vpgth+XKU/+H57drE+lvlwLR7YhApa5X/1gqw5gcTp5j5TrtuDY8Lx+BvfhRN0XEqIxo z/TJGvC7uz4ug6eMFdW1ya8Y5GOyp3xvD8jtP4/pmwLTo+ErkcJWI9O8XQ+TTJKAMRce Ml6HuUl3v7rS590BOBaWFldkdB2hzQVTXRQabnZUNn0siRx0U3QkezUQFA7qFigHi4Fl qXpMORx3yhEp2ucuGEjfC4/7LDcmXE09N6MRfeK9EajB8kgMgSFz1Kud1SI2L1oXIUrH CwUA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=WciqT5eXNhVuhRnPYJEgWwPg+vb1oPeS/CAmqHTXfhA=; b=O6PZBGz6Omdim5fAz0hUdLnrY+/8Foc9Wb8GzPkAsH7k4dSxl96m6NvAqOHp7tg0bs ilzdFxC9najoYHF4jq3HXVxPGST+SqdgzRm58ZkOjbmbBbhBSjip5V4JeH0rzHye/GoZ 6MGQtxK67Y8Wx0CvDe43T2kG0UAZPRKVTju5BgOQU9PaHLXniLr2dihPg/cRYF6RoWGr colH950AOxgW0SOAJhYA3WKxypig1iOipr5E6SXKKRBbb1wwTQ2J3VI7MdrGVCFKtN+3 NOKavCE1GFCZ0eIYOW5GsjBt5balvOvmhz93hK6oCCd9tcfeja1yujB/YXn1Wy/HzUFj YwXw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: patches@linaro.org
Subject: [PATCH 04/20] target/arm: Restore security state on exception return
Date: Fri, 22 Sep 2017 15:59:51 +0100
Message-Id: <1506092407-26985-5-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>
References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>

Now that we can handle the CONTROL.SPSEL bit not necessarily being in sync with the current stack pointer, we can restore the correct security state on exception return. This happens before we start to read registers off the stack frame, but after we have taken possible usage faults for bad exception return magic values and updated CONTROL.SPSEL.

Signed-off-by: Peter Maydell
--- target/arm/helper.c | 2 ++ 1 file changed, 2 insertions(+) -- 2.7.4
Reviewed-by: Richard Henderson
diff --git a/target/arm/helper.c b/target/arm/helper.c index 509a1aa..a3c63c3 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6371,6 +6371,8 @@ static void do_v7m_exception_exit(ARMCPU *cpu) */ write_v7m_control_spsel(env, return_to_sp_process); + switch_v7m_security_state(env, return_to_secure); + { /* The stack pointer we should be reading the exception frame from * depends on bits in the magic exception return type value (and

From patchwork Fri Sep 22 14:59:52 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 114038
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: patches@linaro.org
Subject: [PATCH 05/20] target/arm: Restore SPSEL to correct CONTROL register on exception return
Date: Fri, 22 Sep 2017 15:59:52 +0100
Message-Id: <1506092407-26985-6-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>
References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>

On exception return for v8M, the SPSEL bit in the EXC_RETURN magic value should be restored to the SPSEL bit in the CONTROL register banked specified by the EXC_RETURN.ES bit. Add write_v7m_control_spsel_for_secstate() which behaves like write_v7m_control_spsel() but allows the caller to specify which CONTROL bank to use, reimplement write_v7m_control_spsel() in terms of it, and use it in exception return.

Signed-off-by: Peter Maydell
--- target/arm/helper.c | 40 +++++++++++++++++++++++++++------------- 1 file changed, 27 insertions(+), 13 deletions(-) -- 2.7.4
Reviewed-by: Richard Henderson
diff --git a/target/arm/helper.c b/target/arm/helper.c index a3c63c3..4444d04 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6052,28 +6052,42 @@ static bool v7m_using_psp(CPUARMState *env) env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK; } -/* Write to v7M CONTROL.SPSEL bit. This may change the current - * stack pointer between Main and Process stack pointers. +/* Write to v7M CONTROL.SPSEL bit for the specified security bank. + * This may change the current stack pointer between Main and Process + * stack pointers if it is done for the CONTROL register for the current + * security state.
*/ -static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel) +static void write_v7m_control_spsel_for_secstate(CPUARMState *env, + bool new_spsel, + bool secstate) { - uint32_t tmp; - bool new_is_psp, old_is_psp = v7m_using_psp(env); + bool old_is_psp = v7m_using_psp(env); - env->v7m.control[env->v7m.secure] = - deposit32(env->v7m.control[env->v7m.secure], + env->v7m.control[secstate] = + deposit32(env->v7m.control[secstate], R_V7M_CONTROL_SPSEL_SHIFT, R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); - new_is_psp = v7m_using_psp(env); + if (secstate == env->v7m.secure) { + bool new_is_psp = v7m_using_psp(env); + uint32_t tmp; - if (old_is_psp != new_is_psp) { - tmp = env->v7m.other_sp; - env->v7m.other_sp = env->regs[13]; - env->regs[13] = tmp; + if (old_is_psp != new_is_psp) { + tmp = env->v7m.other_sp; + env->v7m.other_sp = env->regs[13]; + env->regs[13] = tmp; + } } } +/* Write to v7M CONTROL.SPSEL bit. This may change the current + * stack pointer between Main and Process stack pointers. + */ +static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel) +{ + write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure); +} + void write_v7m_exception(CPUARMState *env, uint32_t new_exc) { /* Write a new value to v7m.exception, thus transitioning into or out @@ -6369,7 +6383,7 @@ static void do_v7m_exception_exit(ARMCPU *cpu) * Handler mode (and will be until we write the new XPSR.Interrupt * field) this does not switch around the current stack pointer. */ - write_v7m_control_spsel(env, return_to_sp_process); + write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure); switch_v7m_security_state(env, return_to_secure); From patchwork Fri Sep 22 14:59:53 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114039 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394831qgf; Fri, 22 Sep 2017 07:59:34 -0700 (PDT) X-Google-Smtp-Source: AOwi7QCmwFvH9bzeKGWNKy5y343j/b7uvbSA6sj4YwfWnGsqh8eRtv52dakum6Y27VX/+xEGeyRN X-Received: by 10.28.66.65 with SMTP id p62mr4196091wma.159.1506092374517; Fri, 22 Sep 2017 07:59:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092374; cv=none; d=google.com; s=arc-20160816; b=zkKcNNfExVtzkj0CjBOn0kS/BnqzvIDW+oOY8xL7srrlzda9b/T2w5OSBsyKTf4rNy Jn3Soiirsr8RXAU8n2odO1r0KN/gtZczf/a+fjACYiQaWqICJ8m1xkfsXRkq2e6+H0Nq 4Gi3W/BfjJA3e7UntgYUrR+UBVZCSRFaIUEidxI2n5vhzgZAgDq5vrJd86dDtV9ukORd Y6yWxrzJeS5/YsfFCDfjZx8eAn9iJURuX89dJZOGEwCY2j9q4sdSUqF9z5dLw9rf0bWt xNhxmzV6YhJGbJnFvmAkRk5kHLGzPVpFareru0N904jY9q6ayh86O7VMEQmLlIv/lRRp PRWA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=6HmA2U4klQOsgHF1rsyUvRRab2OhshniWamNmBWDaNA=; b=UMwi1KbYBSKRJXNbr+aEKExT1dGkC42EuapDM3pe5tH0zbMDtP3qlMDQq59kcUanoc wTkKKwpC0oWU4X94yFNuk34SwlEUH55Cfa3w/mWAZxhGajj/faFnIk8DlhaLfVXeABhk uPcHWQ9k12dRIkCrIQJj7s81+B/HkI48sU4iIVv5dn8dR8chCMpi41EXOc7sMeqgCV+O zuRZctThSy84jSbXmz0xduihz6qlkqXHxU3zKxL5SGvjQTWevZaw5iQMc9hkdtghUcUj Y+dQC6aVr+q8VK8LzUfEJDUB4vsWEFBmdyIkzXVHBZe7BNGyOm1J/RfwYxULyOdWfsef +bcg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk 
(orth.archaic.org.uk. [2001:8b0:1d0::2]) by mx.google.com with ESMTPS id l128si14532wmb.229.2017.09.22.07.59.34 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:34 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQT-00079n-Rq; Fri, 22 Sep 2017 15:59:33 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 06/20] target/arm: Check for xPSR mismatch usage faults earlier for v8M Date: Fri, 22 Sep 2017 15:59:53 +0100 Message-Id: <1506092407-26985-7-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> ARM v8M specifies that the INVPC usage fault for mismatched xPSR exception field and handler mode bit should be checked before updating the PSR and SP, so that the fault is taken with the existing stack frame rather than by pushing a new one. Perform this check in the right place for v8M. Since v7M specifies in its pseudocode that this usage fault check should happen later, we have to retain the original code for that check rather than being able to merge the two. (The distinction is architecturally visible but only in very obscure corner cases like attempting an invalid exception return with an exception frame in read only memory.) Signed-off-by: Peter Maydell --- target/arm/helper.c | 30 +++++++++++++++++++++++++++--- 1 file changed, 27 insertions(+), 3 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/helper.c b/target/arm/helper.c index 4444d04..a2e46fb 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6426,6 +6426,29 @@ static void do_v7m_exception_exit(ARMCPU *cpu) } xpsr = ldl_phys(cs->as, frameptr + 0x1c); + if (arm_feature(env, ARM_FEATURE_V8)) { + /* For v8M we have to check whether the xPSR exception field + * matches the EXCRET value for return to handler/thread + * before we commit to changing the SP and xPSR. + */ + bool will_be_handler = (xpsr & XPSR_EXCP) != 0; + if (return_to_handler != will_be_handler) { + /* Take an INVPC UsageFault on the current stack. + * By this point we will have switched to the security state + * for the background state, so this UsageFault will target + * that state. + */ + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; + v7m_exception_taken(cpu, excret); + qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " + "stackframe: failed exception return integrity " + "check\n"); + return; + } + } + /* Commit to consuming the stack frame */ frameptr += 0x20; /* Undo stack alignment (the SPREALIGN bit indicates that the original @@ -6445,12 +6468,13 @@ static void do_v7m_exception_exit(ARMCPU *cpu) /* The restored xPSR exception field will be zero if we're * resuming in Thread mode. If that doesn't match what the * exception return excret specified then this is a UsageFault. 
+ * v7M requires we make this check here; v8M did it earlier. */ if (return_to_handler != arm_v7m_is_handler_mode(env)) { - /* Take an INVPC UsageFault by pushing the stack again. - * TODO: the v8M version of this code should target the - * background state for this exception. + /* Take an INVPC UsageFault by pushing the stack again; + * we know we're v7M so this is never a Secure UsageFault. */ + assert(!arm_feature(env, ARM_FEATURE_V8)); armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false); env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; v7m_push_stack(cpu); From patchwork Fri Sep 22 14:59:54 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114040 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394842qgf; Fri, 22 Sep 2017 07:59:35 -0700 (PDT) X-Google-Smtp-Source: AOwi7QDLXP5r/Xtq4GBUJXpXfPyBHc4u04c6nKpYzS8St3irAiLp/6EhTOrcrnEiN5xE4PYJbXhJ X-Received: by 10.28.105.12 with SMTP id e12mr3956957wmc.29.1506092375136; Fri, 22 Sep 2017 07:59:35 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092375; cv=none; d=google.com; s=arc-20160816; b=cLfTmukmy002xMdiyhJLl8B/rjo9NgC8HSZo5CCRAmDewiaUAjPKVgndgmpBLolsO/ 8RRONZb8LJvx3RefIDVO00hkyONVTyQv8cx0ULfd4uY2qwrbZlhh+b583azojuT2c9FJ p+ul1BNSCas55lRbTXwD/CYf9001OEF8IswlNJSUj1CpARMSN0YphKOm6FJEG1C5cISa UzMii4GyXpIGd2wEzhlpFQwx+GT0Hm7GCqMm3kSrjk0ZnMAsSJ4YQEfhiAHmWdwawHJ9 ntTo1hRdoUEz2uir4Dg5Alaw7eZxeok0VpXjP6+4Is1MvQN5DQNIjWgCVfFX+trlX6XS Tgdg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=1XB2bZM6McpwLcdJ/J1b0+QQ1qmOnpqQ8EyA3JE5xOA=; b=nelU92PpHph3kIFSOo4ADwlNuLI5vQRtZqGjEW50gv0P0JpNYSn/Qwh0olco4NITlD K1L0Cc2HTTf57OfJLYTsc1nSXVZHDmrEYEI2JNQD0s92oPU8mb1tMtYbu/Dg4TCLQCos Tm+V46lwOYGwEK2M+8ThfVKSns4H7lQs13lgP5OPnB/9oUc9JvqgGFeN3Qpxbg3Rx/79 usNvk25xM/Xfz6IMm/YdvxmPhgUbUC0xMV8QCw4f4oHBT8+JSxPBO7C+ZIFS5u+5SEal WfeeTltf+w2kJTxQKpgXteClitqNmYSakdGqV4r6V4N9EL0FwrBnUPZsx9J2jiZHOI1D wteg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: patches@linaro.org
Subject: [PATCH 07/20] target/arm: Warn about restoring to unaligned stack
Date: Fri, 22 Sep 2017 15:59:54 +0100
Message-Id: <1506092407-26985-8-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>
References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>

Attempting to do an exception return with an exception frame that is not 8-aligned is UNPREDICTABLE in v8M; warn about this. (It is not UNPREDICTABLE in v7M, and our implementation can handle the merely-4-aligned case fine, so we don't need to do anything except warn.)

Signed-off-by: Peter Maydell
--- target/arm/helper.c | 7 +++++++ 1 file changed, 7 insertions(+) -- 2.7.4
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Richard Henderson
diff --git a/target/arm/helper.c b/target/arm/helper.c index a2e46fb..979129e 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6403,6 +6403,13 @@ static void do_v7m_exception_exit(ARMCPU *cpu) return_to_sp_process); uint32_t frameptr = *frame_sp_p; + if (!QEMU_IS_ALIGNED(frameptr, 8) && + arm_feature(env, ARM_FEATURE_V8)) { + qemu_log_mask(LOG_GUEST_ERROR, + "M profile exception return with non-8-aligned SP " + "for destination state is UNPREDICTABLE\n"); + } + /* Pop registers. TODO: make these accesses use the correct * attributes and address space (S/NS, priv/unpriv) and handle * memory transaction failures.
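As a rough standalone illustration of the 8-alignment rule that the PATCH 07 hunk above warns about (this is not QEMU code: the helper name frame_is_8_aligned() and the sample frame pointer values are invented for this sketch; it only mirrors the effect of the QEMU_IS_ALIGNED(frameptr, 8) test used in the hunk):

/* Minimal sketch, not QEMU code: a frame pointer is "8-aligned" when its
 * three low bits are zero. The hunk above only warns (it does not fault)
 * when a v8M exception return sees an unaligned frame pointer.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static bool frame_is_8_aligned(uint32_t frameptr)
{
    return (frameptr & 0x7) == 0;   /* same effect as frameptr % 8 == 0 */
}

int main(void)
{
    const uint32_t examples[] = { 0x20001000u, 0x20001004u, 0x2000100cu };

    for (size_t i = 0; i < sizeof(examples) / sizeof(examples[0]); i++) {
        printf("frameptr 0x%08" PRIx32 ": %s\n", examples[i],
               frame_is_8_aligned(examples[i])
                   ? "8-aligned"
                   : "not 8-aligned (UNPREDICTABLE for v8M exception return)");
    }
    return 0;
}

Only the v8M case warrants the warning, because, as the commit message above notes, v7M permits a merely 4-aligned frame and the implementation handles that fine.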
From patchwork Fri Sep 22 14:59:55 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 114041
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: patches@linaro.org
Subject: [PATCH 08/20] target/arm: Don't warn about exception return with PC low bit set for v8M
Date: Fri, 22 Sep 2017 15:59:55 +0100
Message-Id: <1506092407-26985-9-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>
References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>

In the v8M architecture, return from an exception to a PC which has bit 0 set is not UNPREDICTABLE; it is defined that bit 0 is discarded [R_HRJH]. Restrict our complaint about this to v7M.
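To make the behaviour change concrete before the diff below, here is a hedged standalone sketch. It is not the QEMU implementation: restore_return_pc() and its log wording are invented for illustration; in the actual patch the existing qemu_log_mask() warning is simply guarded by !arm_feature(env, ARM_FEATURE_V8), and the low bit of the popped PC is still cleared unconditionally.

/* Illustrative sketch (not the QEMU code): on exception return, bit 0 of
 * the stacked PC is always discarded; only v7M additionally logs a
 * guest-error warning, because v7M calls this UNPREDICTABLE while v8M
 * defines the bit as ignored.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t restore_return_pc(uint32_t stacked_pc, bool is_v8m)
{
    if ((stacked_pc & 1) && !is_v8m) {
        fprintf(stderr, "guest error: M profile return from interrupt with "
                        "misaligned PC is UNPREDICTABLE on v7M\n");
    }
    return stacked_pc & ~1u;   /* both v7M and v8M continue with bit 0 clear */
}

int main(void)
{
    printf("v7M: 0x%08" PRIx32 "\n", restore_return_pc(0x08000101u, false));
    printf("v8M: 0x%08" PRIx32 "\n", restore_return_pc(0x08000101u, true));
    return 0;
}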
Signed-off-by: Peter Maydell --- target/arm/helper.c | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-) -- 2.7.4 Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/target/arm/helper.c b/target/arm/helper.c index 979129e..59a07d2 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6421,16 +6421,22 @@ static void do_v7m_exception_exit(ARMCPU *cpu) env->regs[12] = ldl_phys(cs->as, frameptr + 0x10); env->regs[14] = ldl_phys(cs->as, frameptr + 0x14); env->regs[15] = ldl_phys(cs->as, frameptr + 0x18); - if (env->regs[15] & 1) { + + /* Returning from an exception with a PC with bit 0 set is defined + * behaviour on v8M (bit 0 is ignored), but for v7M it was specified + * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore + * the lsbit, and there are several RTOSes out there which incorrectly + * assume the r15 in the stack frame should be a Thumb-style "lsbit + * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but + * complain about the badly behaved guest. + */ + if ((env->regs[15] & 1) && !arm_feature(env, ARM_FEATURE_V8)) { qemu_log_mask(LOG_GUEST_ERROR, "M profile return from interrupt with misaligned " - "PC is UNPREDICTABLE\n"); - /* Actual hardware seems to ignore the lsbit, and there are several - * RTOSes out there which incorrectly assume the r15 in the stack - * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value. - */ - env->regs[15] &= ~1U; + "PC is UNPREDICTABLE on v7M\n"); } + env->regs[15] &= ~1U; + xpsr = ldl_phys(cs->as, frameptr + 0x1c); if (arm_feature(env, ARM_FEATURE_V8)) { From patchwork Fri Sep 22 14:59:56 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114042 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394875qgf; Fri, 22 Sep 2017 07:59:36 -0700 (PDT) X-Google-Smtp-Source: AOwi7QB0jlSSuo0hzTTr4o0MUXoZG7jUyuwkk3QINDJdQySZ8pQTBrHNeLiruljEJg3MG6nz3yo3 X-Received: by 10.28.30.139 with SMTP id e133mr3875400wme.8.1506092376694; Fri, 22 Sep 2017 07:59:36 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092376; cv=none; d=google.com; s=arc-20160816; b=cerBaObsaE5u2urFsLYuaXPgEYWr2Aeu7F6W6aKeXRe1NhY90DODy9tuuJfJtC0bsC uLMKnOoaIQd4gt+I9Hkiy2awzLFR0VSkIrQG6JdfoO33AXjizBfaSpEZYeVMyXR/G0ZB /mbUgpNIdzZQl6FzGe93NC1bbschBr7ilnjbwXnPwVhYlQWTRQyw01LN2L0DH6pg6SOn bYLFlbUcXrZliaiH25lHS+6wA+MlNb/szPqHFlB2xw9RLJsQtH5fS+EE7lXT4tMi45MX MvT/MZYzm/mIf5KNSB4MdUkPqZ0fp5a20SWfyH8vSBipwWpi78klInz8nIhaIegy6FED U/cg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=tv0vi7XK/sBxrkfE7IXmSnAisFluoy5I3Hs9Y7O6DXw=; b=ugMkzSCyBPkDuF+alD4N1FiGhWLOY3ntyudnBBcEABIJf2zV5M8YG+ahVm0v5H3rdJ 3i79FamjDh3Cmy3iBt8W/hiy6GKooEWsr8j8cD6Dq2+9tz8e45d4RRRVsVlsC4oqCFTl gZHQui7lAubEEBb0+mBd/zQjbSmxqim0uynkKnVauAt0Phlu7QsdmI/DFW9PJ8AQ/m2T jq2Jnv/dd0bPFcu9hGH1vzNYxuwEIB8qdzfe5exB4lNyolDzrv6/0kCaUkb31vcYHgxq JOUwgSfwP8gH7dTmTNALRqboXGE4vm1b24lw9Em82bHkXbgDzG0Ysdjdj6lbYqJsNixo wsRw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id c24si20432wre.230.2017.09.22.07.59.36 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:36 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQV-0007Bi-WB; Fri, 22 Sep 2017 15:59:36 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 09/20] target/arm: Add new-in-v8M SFSR and SFAR Date: Fri, 22 Sep 2017 15:59:56 +0100 Message-Id: <1506092407-26985-10-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> Add the new M profile Secure Fault Status Register and Secure Fault Address Register. Signed-off-by: Peter Maydell --- target/arm/cpu.h | 12 ++++++++++++ hw/intc/armv7m_nvic.c | 34 ++++++++++++++++++++++++++++++++++ target/arm/machine.c | 2 ++ 3 files changed, 48 insertions(+) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index ad6eff4..9e3a16d 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -443,8 +443,10 @@ typedef struct CPUARMState { uint32_t cfsr[M_REG_NUM_BANKS]; /* Configurable Fault Status */ uint32_t hfsr; /* HardFault Status */ uint32_t dfsr; /* Debug Fault Status Register */ + uint32_t sfsr; /* Secure Fault Status Register */ uint32_t mmfar[M_REG_NUM_BANKS]; /* MemManage Fault Address */ uint32_t bfar; /* BusFault Address */ + uint32_t sfar; /* Secure Fault Address Register */ unsigned mpu_ctrl[M_REG_NUM_BANKS]; /* MPU_CTRL */ int exception; uint32_t primask[M_REG_NUM_BANKS]; @@ -1260,6 +1262,16 @@ FIELD(V7M_DFSR, DWTTRAP, 2, 1) FIELD(V7M_DFSR, VCATCH, 3, 1) FIELD(V7M_DFSR, EXTERNAL, 4, 1) +/* V7M SFSR bits */ +FIELD(V7M_SFSR, INVEP, 0, 1) +FIELD(V7M_SFSR, INVIS, 1, 1) +FIELD(V7M_SFSR, INVER, 2, 1) +FIELD(V7M_SFSR, AUVIOL, 3, 1) +FIELD(V7M_SFSR, INVTRAN, 4, 1) +FIELD(V7M_SFSR, LSPERR, 5, 1) +FIELD(V7M_SFSR, SFARVALID, 6, 1) +FIELD(V7M_SFSR, LSERR, 7, 1) + /* v7M MPU_CTRL bits */ FIELD(V7M_MPU_CTRL, ENABLE, 0, 1) FIELD(V7M_MPU_CTRL, HFNMIENA, 1, 1) diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c index a1041c2..deea637 100644 --- a/hw/intc/armv7m_nvic.c +++ b/hw/intc/armv7m_nvic.c @@ -1017,6 +1017,22 @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs) goto bad_offset; } return cpu->env.pmsav8.mair1[attrs.secure]; + case 0xde4: /* SFSR */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return 0; + } + return cpu->env.v7m.sfsr; + case 0xde8: /* SFAR */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return 0; + } + return cpu->env.v7m.sfar; default: bad_offset: qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset); @@ -1368,6 +1384,24 @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value, * only affect cacheability, and we don't implement caching. 
*/ break; + case 0xde4: /* SFSR */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return; + } + cpu->env.v7m.sfsr &= ~value; /* W1C */ + break; + case 0xde8: /* SFAR */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return; + } + cpu->env.v7m.sfsr = value; + break; case 0xf00: /* Software Triggered Interrupt Register */ { int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ; diff --git a/target/arm/machine.c b/target/arm/machine.c index e5fe083..d4b3baf 100644 --- a/target/arm/machine.c +++ b/target/arm/machine.c @@ -276,6 +276,8 @@ static const VMStateDescription vmstate_m_security = { VMSTATE_UINT32(env.v7m.ccr[M_REG_S], ARMCPU), VMSTATE_UINT32(env.v7m.mmfar[M_REG_S], ARMCPU), VMSTATE_UINT32(env.v7m.cfsr[M_REG_S], ARMCPU), + VMSTATE_UINT32(env.v7m.sfsr, ARMCPU), + VMSTATE_UINT32(env.v7m.sfar, ARMCPU), VMSTATE_END_OF_LIST() } }; From patchwork Fri Sep 22 14:59:57 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114043 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394882qgf; Fri, 22 Sep 2017 07:59:37 -0700 (PDT) X-Google-Smtp-Source: AOwi7QDuJVgSo8n3N3oIDcUGas5ib+rZOd7GDlNu2v0nbsCtAJLN4pgdLHwpmDHvomWeB8PcerxT X-Received: by 10.28.30.84 with SMTP id e81mr4396458wme.39.1506092377262; Fri, 22 Sep 2017 07:59:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092377; cv=none; d=google.com; s=arc-20160816; b=xgZCvDsZUk6zCwkDfeqmnBCDME4tsMckSnF32E5CuIVrMu30esEROKItfMUrn1bApo 749Uru6pAxcdlj7Y4Th0m8APG/sxoINnUl0QG6Gtmx40ZFcN5qORoy85+6Cv2DYaPYsb yJjfvGj8d6ytKTQ5Me7wG1/blmrvdKEk3aqn0FaZVy2TDnifGu0kJsLBGtxPjtgmAOvC hNapI0ocSw5GKZBVdzhTGUny8mvwFhVVyofs1V3LPqWlN3RMc1Rn2F3lB1SRPY7PQ/a5 nLN/2JmyDB4uoza7zJXUCFRYZBBg+wvqpS+yTsoKMmQjhFdNpHjmCAE5ZySSaM0qBP9V qYeQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=fa6mdZIut2A2NQf1izgJ1b4XVtoS/B4w08kdnPJjIP4=; b=t7N+Hit5dR8ppwZUwPqi1l9mXFI2JgTnrceX04cU7ju0oNm2LWq6kLthrVG0IF+gUs P5hrPnzOaOyWLw4z9s5oSb+y2NvKlmQfyKJaqTYpWm/gQMJN9fXUHjvfjmxB+CfTLLog IvwlIi2nrDdTOOA4UaDtwUu1UsksRbb0uGSYr6sCLHmA+jaUnah0iSWtmsnsNHdx3wXO 8oGFOsXciSLDBBDgzO4IwBwmJ5ferHxrMrSiArJncC18wj/HsJdj64tJXraG0JtHfFWQ 24wgtQz4ff6P91dyPVluamBDA4m/QWfy0X1Kwbqgagr7rjvCc3EAAiKK0BuHso4Uo0hR FxCg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id q124si22149wma.146.2017.09.22.07.59.37 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:37 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQW-0007CA-L7; Fri, 22 Sep 2017 15:59:36 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 10/20] target/arm: Update excret sanity checks for v8M Date: Fri, 22 Sep 2017 15:59:57 +0100 Message-Id: <1506092407-26985-11-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> In v8M, more bits are defined in the exception-return magic values; update the code that checks these so we accept the v8M values when the CPU permits them. Signed-off-by: Peter Maydell --- target/arm/helper.c | 73 ++++++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 58 insertions(+), 15 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/helper.c b/target/arm/helper.c index 59a07d2..da3a36e 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6275,8 +6275,9 @@ static void do_v7m_exception_exit(ARMCPU *cpu) uint32_t excret; uint32_t xpsr; bool ufault = false; - bool return_to_sp_process = false; - bool return_to_handler = false; + bool sfault = false; + bool return_to_sp_process; + bool return_to_handler; bool rettobase = false; bool exc_secure = false; bool return_to_secure; @@ -6310,6 +6311,19 @@ static void do_v7m_exception_exit(ARMCPU *cpu) excret); } + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + /* EXC_RETURN.ES validation check (R_SMFL). We must do this before + * we pick which FAULTMASK to clear. + */ + if (!env->v7m.secure && + ((excret & R_V7M_EXCRET_ES_MASK) || + !(excret & R_V7M_EXCRET_DCRS_MASK))) { + sfault = 1; + /* For all other purposes, treat ES as 0 (R_HXSR) */ + excret &= ~R_V7M_EXCRET_ES_MASK; + } + } + if (env->v7m.exception != ARMV7M_EXCP_NMI) { /* Auto-clear FAULTMASK on return from other than NMI. * If the security extension is implemented then this only @@ -6347,24 +6361,53 @@ static void do_v7m_exception_exit(ARMCPU *cpu) g_assert_not_reached(); } + return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK); + return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK; return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && (excret & R_V7M_EXCRET_S_MASK); - switch (excret & 0xf) { - case 1: /* Return to Handler */ - return_to_handler = true; - break; - case 13: /* Return to Thread using Process stack */ - return_to_sp_process = true; - /* fall through */ - case 9: /* Return to Thread using Main stack */ - if (!rettobase && - !(env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_NONBASETHRDENA_MASK)) { + if (arm_feature(env, ARM_FEATURE_V8)) { + if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) { + /* UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP); + * we choose to take the UsageFault. 
+ */ + if ((excret & R_V7M_EXCRET_S_MASK) || + (excret & R_V7M_EXCRET_ES_MASK) || + !(excret & R_V7M_EXCRET_DCRS_MASK)) { + ufault = true; + } + } + if (excret & R_V7M_EXCRET_RES0_MASK) { ufault = true; } - break; - default: - ufault = true; + } else { + /* For v7M we only recognize certain combinations of the low bits */ + switch (excret & 0xf) { + case 1: /* Return to Handler */ + break; + case 13: /* Return to Thread using Process stack */ + case 9: /* Return to Thread using Main stack */ + /* We only need to check NONBASETHRDENA for v7M, because in + * v8M this bit does not exist (it is RES1). + */ + if (!rettobase && + !(env->v7m.ccr[env->v7m.secure] & + R_V7M_CCR_NONBASETHRDENA_MASK)) { + ufault = true; + } + break; + default: + ufault = true; + } + } + + if (sfault) { + env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + v7m_exception_taken(cpu, excret); + qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " + "stackframe: failed EXC_RETURN.ES validity check\n"); + return; } if (ufault) { From patchwork Fri Sep 22 14:59:58 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114044 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394889qgf; Fri, 22 Sep 2017 07:59:38 -0700 (PDT) X-Google-Smtp-Source: AOwi7QBYqKnKWUmeNr6CKlDm1VyJVVB3rtEopWGT8yZMUN/EFg9f7YJDXbjqlO/jXWju0qEFDT85 X-Received: by 10.28.214.197 with SMTP id n188mr174485wmg.1.1506092377885; Fri, 22 Sep 2017 07:59:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092377; cv=none; d=google.com; s=arc-20160816; b=rFVynH1FHGY22vRz8KvPv2rthW9n6aYkN+hK618xdWCbLqmf5GGnM+wf+PSOCavHOj PGZsSucyj4eZXBrAk6z6q//1SfGBQI+CLrDrUxZ60ZgIlJc7Ucu0JXHL2xy7iQTRnKig ai1UAkIV2Xh3Op/Cv+2IifWpiWMEoNB36jivvvkuFp79quojM2DtGUs8S14IQOokZVkG FbC/EAsQTZB/qXbM/yGOPWP2kxCb32TL84bEAJBkIUF2qmKRhvrfGh6GtPzobTqjDHKA MflgFGQ0iVfgB7ZvMOUa9oDzM5VLZgnpBvfD8z2oJXUSX9CR1rNy8z2fRKYFH7ecK2MJ KmEw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=Fsh4oIJ3B3CaidDWd4z+MOGxN18AGoi0ep7C3Z8eiWY=; b=b8x3xBoOGvVUWXaRGV2i09KdHLpoqn+dsG/0yAp7pURkQoNXenW47SR8Hl8Ev5Z3sd cGPo2HeY0flUTncPqIqnayLbczfCw3Rrl/lpo0Aacw5PZF5ZBqwuNg4LCWddn4Ve6HAc rusDAjtUTJCiB9NqM3DGkeNlOykOI9yaKvt02ieiJyV1xWrAk5lFPUJMtzJ53qj1Vn7C h5pG5SGUL+zQOuu8ycBiqfKbspcGLz78KMmr/fltveHzGyvh5maocsvoltOa0hOzofad pV4rhB5fIkJc87+qzBMLKiocg07FpLSYQSfzWNOjxreEulH6hd+1Z5ClrJtiVGObUn2f ZMlg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id 93si12154wrb.408.2017.09.22.07.59.37 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:37 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQX-0007Cf-AG; Fri, 22 Sep 2017 15:59:37 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 11/20] target/arm: Add support for restoring v8M additional state context Date: Fri, 22 Sep 2017 15:59:58 +0100 Message-Id: <1506092407-26985-12-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> For v8M, exceptions from Secure to Non-Secure state will save callee-saved registers to the exception frame as well as the caller-saved registers. Add support for unstacking these registers in exception exit when necessary. Signed-off-by: Peter Maydell --- target/arm/helper.c | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/helper.c b/target/arm/helper.c index da3a36e..25f5675 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6453,6 +6453,36 @@ static void do_v7m_exception_exit(ARMCPU *cpu) "for destination state is UNPREDICTABLE\n"); } + /* Do we need to pop callee-saved registers? */ + if (return_to_secure && + ((excret & R_V7M_EXCRET_ES_MASK) == 0 || + (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) { + uint32_t expected_sig = 0xfefa125b; + uint32_t actual_sig = ldl_phys(cs->as, frameptr); + + if (expected_sig != actual_sig) { + /* Take a SecureFault on the current stack */ + env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + v7m_exception_taken(cpu, excret); + qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " + "stackframe: failed exception return integrity " + "signature check\n"); + return; + } + + env->regs[4] = ldl_phys(cs->as, frameptr + 0x8); + env->regs[5] = ldl_phys(cs->as, frameptr + 0xc); + env->regs[6] = ldl_phys(cs->as, frameptr + 0x10); + env->regs[7] = ldl_phys(cs->as, frameptr + 0x14); + env->regs[8] = ldl_phys(cs->as, frameptr + 0x18); + env->regs[9] = ldl_phys(cs->as, frameptr + 0x1c); + env->regs[10] = ldl_phys(cs->as, frameptr + 0x20); + env->regs[11] = ldl_phys(cs->as, frameptr + 0x24); + + frameptr += 0x28; + } + /* Pop registers. TODO: make these accesses use the correct * attributes and address space (S/NS, priv/unpriv) and handle * memory transaction failures. 
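For readers following the stack layout: the callee-saves extension that this hunk unstacks occupies 0x28 bytes at the bottom of the exception frame. A minimal standalone sketch of that layout and of the integrity-signature check (illustrative types only; the read_word callback stands in for the ldl_phys() calls above, and this is not the QEMU code itself):

/* v8M callee-saves part of the exception frame, offsets from frameptr:
 *   0x00 integrity signature (0xfefa125b)   0x04 reserved
 *   0x08 r4   0x0c r5   0x10 r6   0x14 r7
 *   0x18 r8   0x1c r9   0x20 r10  0x24 r11
 * Total size 0x28 bytes, matching the "frameptr += 0x28" above.
 */
#include <stdint.h>
#include <stdbool.h>

#define V8M_INTEGRITY_SIG 0xfefa125bu

typedef uint32_t (*read_word_fn)(uint32_t addr);

static bool unstack_callee_saves(read_word_fn read_word, uint32_t frameptr,
                                 uint32_t regs[16], uint32_t *new_frameptr)
{
    if (read_word(frameptr) != V8M_INTEGRITY_SIG) {
        return false;               /* caller raises SecureFault with INVIS */
    }
    for (int i = 0; i < 8; i++) {
        regs[4 + i] = read_word(frameptr + 0x8 + 4 * i);
    }
    *new_frameptr = frameptr + 0x28;
    return true;
}

The matching push side is added by the next patch as v7m_push_callee_stack().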
From patchwork Fri Sep 22 14:59:59 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114045 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 12/20] target/arm: Add v8M support to exception entry code Date: Fri, 22 Sep 2017 15:59:59 +0100 Message-Id: <1506092407-26985-13-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> Add support for v8M and in particular the security extension to the exception entry code.
This requires changes to: * calculation of the exception-return magic LR value * push the callee-saves registers in certain cases * clear registers when taking non-secure exceptions to avoid leaking information from the interrupted secure code * switch to the correct security state on entry * use the vector table for the security state we're targeting Signed-off-by: Peter Maydell --- target/arm/helper.c | 165 +++++++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 145 insertions(+), 20 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/helper.c b/target/arm/helper.c index 25f5675..7511566 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6200,12 +6200,12 @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, } } -static uint32_t arm_v7m_load_vector(ARMCPU *cpu) +static uint32_t arm_v7m_load_vector(ARMCPU *cpu, bool targets_secure) { CPUState *cs = CPU(cpu); CPUARMState *env = &cpu->env; MemTxResult result; - hwaddr vec = env->v7m.vecbase[env->v7m.secure] + env->v7m.exception * 4; + hwaddr vec = env->v7m.vecbase[targets_secure] + env->v7m.exception * 4; uint32_t addr; addr = address_space_ldl(cs->as, vec, @@ -6217,13 +6217,48 @@ static uint32_t arm_v7m_load_vector(ARMCPU *cpu) * Since we don't model Lockup, we just report this guest error * via cpu_abort(). */ - cpu_abort(cs, "Failed to read from exception vector table " - "entry %08x\n", (unsigned)vec); + cpu_abort(cs, "Failed to read from %s exception vector table " + "entry %08x\n", targets_secure ? "secure" : "nonsecure", + (unsigned)vec); } return addr; } -static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr) +static void v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain) +{ + /* For v8M, push the callee-saves register part of the stack frame. + * Compare the v8M pseudocode PushCalleeStack(). + * In the tailchaining case this may not be the current stack. + */ + CPUARMState *env = &cpu->env; + CPUState *cs = CPU(cpu); + uint32_t *frame_sp_p; + uint32_t frameptr; + + if (dotailchain) { + frame_sp_p = get_v7m_sp_ptr(env, true, + lr & R_V7M_EXCRET_MODE_MASK, + lr & R_V7M_EXCRET_SPSEL_MASK); + } else { + frame_sp_p = &env->regs[13]; + } + + frameptr = *frame_sp_p - 0x28; + + stl_phys(cs->as, frameptr, 0xfefa125b); + stl_phys(cs->as, frameptr + 0x8, env->regs[4]); + stl_phys(cs->as, frameptr + 0xc, env->regs[5]); + stl_phys(cs->as, frameptr + 0x10, env->regs[6]); + stl_phys(cs->as, frameptr + 0x14, env->regs[7]); + stl_phys(cs->as, frameptr + 0x18, env->regs[8]); + stl_phys(cs->as, frameptr + 0x1c, env->regs[9]); + stl_phys(cs->as, frameptr + 0x20, env->regs[10]); + stl_phys(cs->as, frameptr + 0x24, env->regs[11]); + + *frame_sp_p = frameptr; +} + +static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain) { /* Do the "take the exception" parts of exception entry, * but not the pushing of state to the stack. This is @@ -6231,14 +6266,84 @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr) */ CPUARMState *env = &cpu->env; uint32_t addr; + bool targets_secure; + + targets_secure = armv7m_nvic_acknowledge_irq(env->nvic); - armv7m_nvic_acknowledge_irq(env->nvic); + if (arm_feature(env, ARM_FEATURE_V8)) { + if (arm_feature(env, ARM_FEATURE_M_SECURITY) && + (lr & R_V7M_EXCRET_S_MASK)) { + /* The background code (the owner of the registers in the + * exception frame) is Secure. This means it may either already + * have or now needs to push callee-saves registers. 
+ */ + if (targets_secure) { + if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) { + /* We took an exception from Secure to NonSecure + * (which means the callee-saved registers got stacked) + * and are now tailchaining to a Secure exception. + * Clear DCRS so eventual return from this Secure + * exception unstacks the callee-saved registers. + */ + lr &= ~R_V7M_EXCRET_DCRS_MASK; + } + } else { + /* We're going to a non-secure exception; push the + * callee-saves registers to the stack now, if they're + * not already saved. + */ + if (lr & R_V7M_EXCRET_DCRS_MASK && + !(dotailchain && (lr & R_V7M_EXCRET_ES_MASK))) { + v7m_push_callee_stack(cpu, lr, dotailchain); + } + lr |= R_V7M_EXCRET_DCRS_MASK; + } + } + + lr &= ~R_V7M_EXCRET_ES_MASK; + if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) { + lr |= R_V7M_EXCRET_ES_MASK; + } + lr &= ~R_V7M_EXCRET_SPSEL_MASK; + if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) { + lr |= R_V7M_EXCRET_SPSEL_MASK; + } + + /* Clear registers if necessary to prevent non-secure exception + * code being able to see register values from secure code. + * Where register values become architecturally UNKNOWN we leave + * them with their previous values. + */ + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + if (!targets_secure) { + /* Always clear the caller-saved registers (they have been + * pushed to the stack earlier in v7m_push_stack()). + * Clear callee-saved registers if the background code is + * Secure (in which case these regs were saved in + * v7m_push_callee_stack()). + */ + int i; + + for (i = 0; i < 13; i++) { + /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */ + if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) { + env->regs[i] = 0; + } + } + /* Clear EAPSR */ + xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT); + } + } + } + + /* Switch to target security state -- must do this before writing SPSEL */ + switch_v7m_security_state(env, targets_secure); write_v7m_control_spsel(env, 0); arm_clear_exclusive(env); /* Clear IT bits */ env->condexec_bits = 0; env->regs[14] = lr; - addr = arm_v7m_load_vector(cpu); + addr = arm_v7m_load_vector(cpu, targets_secure); env->regs[15] = addr & 0xfffffffe; env->thumb = addr & 1; } @@ -6404,7 +6509,7 @@ static void do_v7m_exception_exit(ARMCPU *cpu) if (sfault) { env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK; armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - v7m_exception_taken(cpu, excret); + v7m_exception_taken(cpu, excret, true); qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " "stackframe: failed EXC_RETURN.ES validity check\n"); return; @@ -6416,7 +6521,7 @@ static void do_v7m_exception_exit(ARMCPU *cpu) */ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); - v7m_exception_taken(cpu, excret); + v7m_exception_taken(cpu, excret, true); qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " "stackframe: failed exception return integrity check\n"); return; @@ -6464,7 +6569,7 @@ static void do_v7m_exception_exit(ARMCPU *cpu) /* Take a SecureFault on the current stack */ env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK; armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - v7m_exception_taken(cpu, excret); + v7m_exception_taken(cpu, excret, true); qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " "stackframe: failed exception return integrity " "signature check\n"); @@ -6527,7 +6632,7 @@ static void do_v7m_exception_exit(ARMCPU *cpu) 
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; - v7m_exception_taken(cpu, excret); + v7m_exception_taken(cpu, excret, true); qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " "stackframe: failed exception return integrity " "check\n"); @@ -6564,7 +6669,7 @@ static void do_v7m_exception_exit(ARMCPU *cpu) armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false); env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; v7m_push_stack(cpu); - v7m_exception_taken(cpu, excret); + v7m_exception_taken(cpu, excret, false); qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: " "failed exception return integrity check\n"); return; @@ -6708,20 +6813,40 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) return; /* Never happens. Keep compiler happy. */ } - lr = R_V7M_EXCRET_RES1_MASK | - R_V7M_EXCRET_S_MASK | - R_V7M_EXCRET_DCRS_MASK | - R_V7M_EXCRET_FTYPE_MASK | - R_V7M_EXCRET_ES_MASK; - if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) { - lr |= R_V7M_EXCRET_SPSEL_MASK; + if (arm_feature(env, ARM_FEATURE_V8)) { + lr = R_V7M_EXCRET_RES1_MASK | + R_V7M_EXCRET_DCRS_MASK | + R_V7M_EXCRET_FTYPE_MASK; + /* The S bit indicates whether we should return to Secure + * or NonSecure (ie our current state). + * The ES bit indicates whether we're taking this exception + * to Secure or NonSecure (ie our target state). We set it + * later, in v7m_exception_taken(). + * The SPSEL bit is also set in v7m_exception_taken() for v8M. + * This corresponds to the ARM ARM pseudocode for v8M setting + * some LR bits in PushStack() and some in ExceptionTaken(); + * the distinction matters for the tailchain cases where we + * can take an exception without pushing the stack. + */ + if (env->v7m.secure) { + lr |= R_V7M_EXCRET_S_MASK; + } + } else { + lr = R_V7M_EXCRET_RES1_MASK | + R_V7M_EXCRET_S_MASK | + R_V7M_EXCRET_DCRS_MASK | + R_V7M_EXCRET_FTYPE_MASK | + R_V7M_EXCRET_ES_MASK; + if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) { + lr |= R_V7M_EXCRET_SPSEL_MASK; + } } if (!arm_v7m_is_handler_mode(env)) { lr |= R_V7M_EXCRET_MODE_MASK; } v7m_push_stack(cpu); - v7m_exception_taken(cpu, lr); + v7m_exception_taken(cpu, lr, false); qemu_log_mask(CPU_LOG_INT, "... 
as %d\n", env->v7m.exception); } From patchwork Fri Sep 22 15:00:00 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114046 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394908qgf; Fri, 22 Sep 2017 07:59:39 -0700 (PDT) X-Google-Smtp-Source: AOwi7QDmBIjN8wNpsrHUEBY8ypzTjr5vbskAsKhZBpwH5OLdyt6hGi2E8T3Qx2u7EtgUjdHpmjWu X-Received: by 10.223.157.198 with SMTP id q6mr5077827wre.102.1506092379307; Fri, 22 Sep 2017 07:59:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092379; cv=none; d=google.com; s=arc-20160816; b=V1wd26brToLxaMj+RZb+dwKK/TuncUHO6ZLxTCn1XvpvcB7XatEE/JqugHbRzWz773 F7YsUXJg1LJExJRKSjTel0PPvFIHxNNVIMD1gPEKZm6nUEx29iFGmt08YWJS2tpsEypm oILSkq1GskpxrMPkKBn7KS4IgbMyJ6pCBS8L5qXOm4bjlGcNhqRT2L3XYU6pBcjtvMK8 fXym0h/f8u8oE1LcOFVxfImWjMJ+WuQbM/ttu0DlX5mpne/UP+wNaljq7GIEn65DEJIm wgBiIogadKuvXkU1pC/7j4kDi6tBq6pu72prS2pNB5mriuuLki+rkV2cze9Dyf9vCSro tyfQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=1AJmFOH8KELGK1m3+QtL7eqLncqwhDjEiC98TIY0/NU=; b=phImNiqGONqLB/30LH7/g78UNfCHdH7adIczQcVv1fdW5MEx+Y18svy6iC6WvIIwQ3 cH7fzveT3+aYz9Z1vlCpUG6t34vDvLFukBIl9wZAdVi/kY6RQaHlRQsXjy72EvtxgkU1 h8Wb5CBQ/JWFwMWkwGUCLU26WYz4QQGR79BNyc33jFtCTpgzvAy78aIemnbx4zZvTtv1 E3h2DwVLpRwROw3LY79QTuhPvmwlmTtI1rJqyUZ39ovz5UJRRPfRwJokBSnqsM9ZiK4y z1FhA9cqy6uqNTMQLDR3KJiJ+W+p50dsiygRH12qEtK/3xFpBOmhqyv774kRKV9XDUUA DzMA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. [2001:8b0:1d0::2]) by mx.google.com with ESMTPS id r13si10299wrg.462.2017.09.22.07.59.39 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:39 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQY-0007DU-Ln; Fri, 22 Sep 2017 15:59:38 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 13/20] nvic: Implement Security Attribution Unit registers Date: Fri, 22 Sep 2017 16:00:00 +0100 Message-Id: <1506092407-26985-14-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> Implement the register interface for the SAU: SAU_CTRL, SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the actual behaviour is implemented here; registers just read back as written. When the CPU definition for Cortex-M33 is eventually added, its initfn will set cpu->sau_sregion, in the same way that we currently set cpu->pmsav7_dregion for the M3 and M4. 
Number of SAU regions is typically a configurable CPU parameter, but this patch doesn't provide a QEMU CPU property for it. We can easily add one when we have a board that requires it. Signed-off-by: Peter Maydell --- target/arm/cpu.h | 10 +++++ hw/intc/armv7m_nvic.c | 116 ++++++++++++++++++++++++++++++++++++++++++++++++++ target/arm/cpu.c | 27 ++++++++++++ target/arm/machine.c | 14 ++++++ 4 files changed, 167 insertions(+) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 9e3a16d..441e584 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -568,6 +568,14 @@ typedef struct CPUARMState { uint32_t mair1[M_REG_NUM_BANKS]; } pmsav8; + /* v8M SAU */ + struct { + uint32_t *rbar; + uint32_t *rlar; + uint32_t rnr; + uint32_t ctrl; + } sau; + void *nvic; const struct arm_boot_info *boot_info; /* Store GICv3CPUState to access from this struct */ @@ -663,6 +671,8 @@ struct ARMCPU { bool has_mpu; /* PMSAv7 MPU number of supported regions */ uint32_t pmsav7_dregion; + /* v8M SAU number of supported regions */ + uint32_t sau_sregion; /* PSCI conduit used to invoke PSCI methods * 0 - disabled, 1 - smc, 2 - hvc diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c index deea637..bd1d5d3 100644 --- a/hw/intc/armv7m_nvic.c +++ b/hw/intc/armv7m_nvic.c @@ -1017,6 +1017,60 @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs) goto bad_offset; } return cpu->env.pmsav8.mair1[attrs.secure]; + case 0xdd0: /* SAU_CTRL */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return 0; + } + return cpu->env.sau.ctrl; + case 0xdd4: /* SAU_TYPE */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return 0; + } + return cpu->sau_sregion; + case 0xdd8: /* SAU_RNR */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return 0; + } + return cpu->env.sau.rnr; + case 0xddc: /* SAU_RBAR */ + { + int region = cpu->env.sau.rnr; + + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return 0; + } + if (region >= cpu->sau_sregion) { + return 0; + } + return cpu->env.sau.rbar[region]; + } + case 0xde0: /* SAU_RLAR */ + { + int region = cpu->env.sau.rnr; + + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return 0; + } + if (region >= cpu->sau_sregion) { + return 0; + } + return cpu->env.sau.rlar[region]; + } case 0xde4: /* SFSR */ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { goto bad_offset; @@ -1384,6 +1438,68 @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value, * only affect cacheability, and we don't implement caching. 
*/ break; + case 0xdd0: /* SAU_CTRL */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return; + } + cpu->env.sau.ctrl = value & 3; + case 0xdd4: /* SAU_TYPE */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + break; + case 0xdd8: /* SAU_RNR */ + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return; + } + if (value >= cpu->sau_sregion) { + qemu_log_mask(LOG_GUEST_ERROR, "SAU region out of range %" + PRIu32 "/%" PRIu32 "\n", + value, cpu->sau_sregion); + } else { + cpu->env.sau.rnr = value; + } + break; + case 0xddc: /* SAU_RBAR */ + { + int region = cpu->env.sau.rnr; + + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return; + } + if (region >= cpu->sau_sregion) { + return; + } + cpu->env.sau.rbar[region] = value & ~0x1f; + tlb_flush(CPU(cpu)); + break; + } + case 0xde0: /* SAU_RLAR */ + { + int region = cpu->env.sau.rnr; + + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { + goto bad_offset; + } + if (!attrs.secure) { + return; + } + if (region >= cpu->sau_sregion) { + return; + } + cpu->env.sau.rlar[region] = value & ~0x1c; + tlb_flush(CPU(cpu)); + break; + } case 0xde4: /* SFSR */ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { goto bad_offset; diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 3344979..1627836 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -285,6 +285,18 @@ static void arm_cpu_reset(CPUState *s) env->pmsav8.mair1[M_REG_S] = 0; } + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + if (cpu->sau_sregion > 0) { + memset(env->sau.rbar, 0, sizeof(*env->sau.rbar) * cpu->sau_sregion); + memset(env->sau.rlar, 0, sizeof(*env->sau.rlar) * cpu->sau_sregion); + } + env->sau.rnr = 0; + /* SAU_CTRL reset value is IMPDEF; we choose 0, which is what + * the Cortex-M33 does. 
+ */ + env->sau.ctrl = 0; + } + set_flush_to_zero(1, &env->vfp.standard_fp_status); set_flush_inputs_to_zero(1, &env->vfp.standard_fp_status); set_default_nan_mode(1, &env->vfp.standard_fp_status); @@ -870,6 +882,20 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) } } + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + uint32_t nr = cpu->sau_sregion; + + if (nr > 0xff) { + error_setg(errp, "v8M SAU #regions invalid %" PRIu32, nr); + return; + } + + if (nr) { + env->sau.rbar = g_new0(uint32_t, nr); + env->sau.rlar = g_new0(uint32_t, nr); + } + } + if (arm_feature(env, ARM_FEATURE_EL3)) { set_feature(env, ARM_FEATURE_VBAR); } @@ -1141,6 +1167,7 @@ static void cortex_m4_initfn(Object *obj) cpu->midr = 0x410fc240; /* r0p0 */ cpu->pmsav7_dregion = 8; } + static void arm_v7m_class_init(ObjectClass *oc, void *data) { CPUClass *cc = CPU_CLASS(oc); diff --git a/target/arm/machine.c b/target/arm/machine.c index d4b3baf..a52d0f9 100644 --- a/target/arm/machine.c +++ b/target/arm/machine.c @@ -242,6 +242,13 @@ static bool s_rnr_vmstate_validate(void *opaque, int version_id) return cpu->env.pmsav7.rnr[M_REG_S] < cpu->pmsav7_dregion; } +static bool sau_rnr_vmstate_validate(void *opaque, int version_id) +{ + ARMCPU *cpu = opaque; + + return cpu->env.sau.rnr < cpu->sau_sregion; +} + static bool m_security_needed(void *opaque) { ARMCPU *cpu = opaque; @@ -278,6 +285,13 @@ static const VMStateDescription vmstate_m_security = { VMSTATE_UINT32(env.v7m.cfsr[M_REG_S], ARMCPU), VMSTATE_UINT32(env.v7m.sfsr, ARMCPU), VMSTATE_UINT32(env.v7m.sfar, ARMCPU), + VMSTATE_VARRAY_UINT32(env.sau.rbar, ARMCPU, sau_sregion, 0, + vmstate_info_uint32, uint32_t), + VMSTATE_VARRAY_UINT32(env.sau.rlar, ARMCPU, sau_sregion, 0, + vmstate_info_uint32, uint32_t), + VMSTATE_UINT32(env.sau.rnr, ARMCPU), + VMSTATE_VALIDATE("SAU_RNR is valid", sau_rnr_vmstate_validate), + VMSTATE_UINT32(env.sau.ctrl, ARMCPU), VMSTATE_END_OF_LIST() } }; From patchwork Fri Sep 22 15:00:01 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114047 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394937qgf; Fri, 22 Sep 2017 07:59:40 -0700 (PDT) X-Google-Smtp-Source: AOwi7QC5/lmre3zGXFGFXnX1blbaKTMG++W29VXrt6efCgSxRgXevScurQHh40k+4xPJTY9ANeTg X-Received: by 10.223.157.11 with SMTP id k11mr5439886wre.252.1506092380255; Fri, 22 Sep 2017 07:59:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092380; cv=none; d=google.com; s=arc-20160816; b=e0CKK+0w6J/ijuAHqRNnPG62x9Z8Ju2Wqc4Jvt+3a1P4L4EmJ2dMEgMibUSFpS//GZ DOpDBqocuzpAeCYr4eqDd6PBLWqUu0UyE5F8BcWWVf5uxTrtLM32sdzqQmpfVQrwMU5N Lqtn76hUgCVodlSllsBOTKBuQ20EMuByeyMK4QiWwlIo1h70UnmTlyzEoU1kKddurDv8 TQI/zRr+jCjd079zq7ULZHx3m46RUKqFrcAFpIhWtQ5iNVdwvnOjndcaYWXYKf5ij+BV d8GGwQwQ+Eo/0bnG4cnhjrP00lU7EE8MFnWHOE8eeEZncsxooOtaoH7tHNpLm9JIdEKH 993g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=iw4WQTW2yjc9Fux9OS7jGMwL2ZR6nxnzwpVq3EQY5ZM=; b=KZ694pmqWUEojcAOucEYkkqU0K+HzL+Q2wd1G6PtJcFaDbw4ncHDQmWpqt71IbxTXh UBUwhc3esGHRRdGgdG3KfXmK3waL7MHheTeKzphXOrKj9J7YCiXi1YX0C08v5U3A8vQ7 CeMQRrWMKcGe60k0yyAnulFg1M5jHndn349N+tQVEkpPwNf3mjlpp4A0wY3R5H3v3Gqa g8K5S0JiBHI83Wq7I5kqSVseTrI7PbtjOwhlnZpWfwn6z6zA+QB+k8vADFJUcWd9ZHkD 3MwxuOiQNH5tTeAJpDyzFdeMXcAvetQGL8Iy20w+mgwHBDB6tmcB66T0ALaZcn03gm+n 0X6A== ARC-Authentication-Results: i=1; mx.google.com; spf=pass 
(google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. [2001:8b0:1d0::2]) by mx.google.com with ESMTPS id x191si15226wmf.218.2017.09.22.07.59.39 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:40 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQZ-0007EA-Ju; Fri, 22 Sep 2017 15:59:39 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 14/20] target/arm: Implement security attribute lookups for memory accesses Date: Fri, 22 Sep 2017 16:00:01 +0100 Message-Id: <1506092407-26985-15-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> Implement the security attribute lookups for memory accesses in the get_phys_addr() functions, causing these to generate various kinds of SecureFault for bad accesses. The major subtlety in this code relates to handling of the case when the security attributes the SAU assigns to the address don't match the current security state of the CPU. In the ARM ARM pseudocode for validating instruction accesses, the security attributes of the address determine whether the Secure or NonSecure MPU state is used. At face value, handling this would require us to encode the relevant bits of state into mmu_idx for both S and NS at once, which would result in our needing 16 mmu indexes. Fortunately we don't actually need to do this because a mismatch between address attributes and CPU state means either: * some kind of fault (usually a SecureFault, but in theory perhaps a UserFault for unaligned access to Device memory) * execution of the SG instruction in NS state from a Secure & NonSecure code region The purpose of SG is simply to flip the CPU into Secure state, so we can handle it by emulating execution of that instruction directly in arm_v7m_cpu_do_interrupt(), which means we can treat all the mismatch cases as "throw an exception" and we don't need to encode the state of the other MPU bank into our mmu_idx values. This commit doesn't include the actual emulation of SG; it also doesn't include implementation of the IDAU, which is a per-board way to specify hard-coded memory attributes for addresses, which override the CPU-internal SAU if they specify a more secure setting than the SAU is programmed to. 
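The region-matching rules implemented in v8m_security_lookup() can be summarised by the following standalone sketch. It deliberately omits the SAU-exempt address ranges, the IDAU hook and the always-Secure treatment of instruction fetches from 0xf0000000 upwards, and the type and field names are illustrative rather than the QEMU ones.

#include <stdint.h>
#include <stdbool.h>

struct sau_attrs { bool ns; bool nsc; };

/* Simplified SAU match for 'addr', given SAU_CTRL and 'n' regions whose
 * RBAR/RLAR values are in rbar[]/rlar[] (RLAR bit 0 = enable, bit 1 = NSC).
 */
static struct sau_attrs sau_lookup(uint32_t ctrl, int n, const uint32_t *rbar,
                                   const uint32_t *rlar, uint32_t addr)
{
    struct sau_attrs r = { .ns = false, .nsc = false };   /* default: Secure */
    bool matched = false;

    if (!(ctrl & 1)) {              /* SAU disabled: ALLNS decides */
        r.ns = (ctrl & 2) != 0;
        return r;
    }
    for (int i = 0; i < n; i++) {
        if (!(rlar[i] & 1)) {
            continue;               /* region not enabled */
        }
        uint32_t base = rbar[i] & ~0x1fu;
        uint32_t limit = rlar[i] | 0x1fu;
        if (addr >= base && addr <= limit) {
            if (matched) {          /* overlapping hit: Secure, not NSC */
                r.ns = false;
                r.nsc = false;
                break;
            }
            matched = true;
            if (rlar[i] & 2) {
                r.nsc = true;       /* Secure and Non-secure-callable */
            } else {
                r.ns = true;        /* Non-secure */
            }
        }
    }
    return r;
}

An address that hits no enabled region therefore stays Secure, and an address that hits more than one region is forced to Secure and non-NSC, which is what drives the SecureFault cases described above.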
Signed-off-by: Peter Maydell --- target/arm/internals.h | 15 ++++ target/arm/helper.c | 182 ++++++++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 195 insertions(+), 2 deletions(-) -- 2.7.4 Reviewed-by: Richard Henderson diff --git a/target/arm/internals.h b/target/arm/internals.h index 18be370..fd9a7e8 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -71,6 +71,21 @@ FIELD(V7M_EXCRET, DCRS, 5, 1) FIELD(V7M_EXCRET, S, 6, 1) FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */ +/* We use a few fake FSR values for internal purposes in M profile. + * M profile cores don't have A/R format FSRs, but currently our + * get_phys_addr() code assumes A/R profile and reports failures via + * an A/R format FSR value. We then translate that into the proper + * M profile exception and FSR status bit in arm_v7m_cpu_do_interrupt(). + * Mostly the FSR values we use for this are those defined for v7PMSA, + * since we share some of that codepath. A few kinds of fault are + * only for M profile and have no A/R equivalent, though, so we have + * to pick a value from the reserved range (which we never otherwise + * generate) to use for these. + * These values will never be visible to the guest. + */ +#define M_FAKE_FSR_NSC_EXEC 0xf /* NS executing in S&NSC memory */ +#define M_FAKE_FSR_SFAULT 0xe /* SecureFault INVTRAN, INVEP or AUVIOL */ + /* * For AArch64, map a given EL to an index in the banked_spsr array. * Note that this mapping and the AArch32 mapping defined in bank_number() diff --git a/target/arm/helper.c b/target/arm/helper.c index 7511566..b1ecb66 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -31,6 +31,16 @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address, target_ulong *page_size_ptr, uint32_t *fsr, ARMMMUFaultInfo *fi); +/* Security attributes for an address, as returned by v8m_security_lookup. */ +typedef struct V8M_SAttributes { + bool ns; + bool nsc; + uint8_t sregion; + bool srvalid; + uint8_t iregion; + bool irvalid; +} V8M_SAttributes; + /* Definitions for the PMCCNTR and PMCR registers */ #define PMCRD 0x8 #define PMCRC 0x4 @@ -6748,6 +6758,46 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) * raises the fault, in the A profile short-descriptor format. */ switch (env->exception.fsr & 0xf) { + case M_FAKE_FSR_NSC_EXEC: + /* Exception generated when we try to execute code at an address + * which is marked as Secure & Non-Secure Callable and the CPU + * is in the Non-Secure state. The only instruction which can + * be executed like this is SG (and that only if both halves of + * the SG instruction have the same security attributes.) + * Everything else must generate an INVEP SecureFault, so we + * emulate the SG instruction here. + * TODO: actually emulate SG. + */ + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + break; + case M_FAKE_FSR_SFAULT: + /* Various flavours of SecureFault for attempts to execute or + * access data in the wrong security state. 
+ */ + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + if (env->v7m.secure) { + env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVTRAN\n"); + } else { + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + } + break; + case EXCP_DATA_ABORT: + /* This must be an NS access to S memory */ + env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.AUVIOL\n"); + break; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + break; case 0x8: /* External Abort */ switch (cs->exception_index) { case EXCP_PREFETCH_ABORT: @@ -8834,9 +8884,89 @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, return !(*prot & (1 << access_type)); } +static bool v8m_is_sau_exempt(CPUARMState *env, + uint32_t address, MMUAccessType access_type) +{ + /* The architecture specifies that certain address ranges are + * exempt from v8M SAU/IDAU checks. + */ + return + (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) || + (address >= 0xe0000000 && address <= 0xe0002fff) || + (address >= 0xe000e000 && address <= 0xe000efff) || + (address >= 0xe002e000 && address <= 0xe002efff) || + (address >= 0xe0040000 && address <= 0xe0041fff) || + (address >= 0xe00ff000 && address <= 0xe00fffff); +} + +static void v8m_security_lookup(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + V8M_SAttributes *sattrs) +{ + /* Look up the security attributes for this address. Compare the + * pseudocode SecurityCheck() function. + * We assume the caller has zero-initialized *sattrs. + */ + ARMCPU *cpu = arm_env_get_cpu(env); + int r; + + /* TODO: implement IDAU */ + + if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) { + /* 0xf0000000..0xffffffff is always S for insn fetches */ + return; + } + + if (v8m_is_sau_exempt(env, address, access_type)) { + sattrs->ns = !regime_is_secure(env, mmu_idx); + return; + } + + switch (env->sau.ctrl & 3) { + case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */ + break; + case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */ + sattrs->ns = true; + break; + default: /* SAU.ENABLE == 1 */ + for (r = 0; r < cpu->sau_sregion; r++) { + if (env->sau.rlar[r] & 1) { + uint32_t base = env->sau.rbar[r] & ~0x1f; + uint32_t limit = env->sau.rlar[r] | 0x1f; + + if (base <= address && limit >= address) { + if (sattrs->srvalid) { + /* If we hit in more than one region then we must report + * as Secure, not NS-Callable, with no valid region + * number info. 
+ */ + sattrs->ns = false; + sattrs->nsc = false; + sattrs->sregion = 0; + sattrs->srvalid = false; + break; + } else { + if (env->sau.rlar[r] & 2) { + sattrs->nsc = true; + } else { + sattrs->ns = true; + } + sattrs->srvalid = true; + sattrs->sregion = r; + } + } + } + } + + /* TODO when we support the IDAU then it may override the result here */ + break; + } +} + static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, MMUAccessType access_type, ARMMMUIdx mmu_idx, - hwaddr *phys_ptr, int *prot, uint32_t *fsr) + hwaddr *phys_ptr, MemTxAttrs *txattrs, + int *prot, uint32_t *fsr) { ARMCPU *cpu = arm_env_get_cpu(env); bool is_user = regime_is_user(env, mmu_idx); @@ -8844,10 +8974,58 @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, int n; int matchregion = -1; bool hit = false; + V8M_SAttributes sattrs = {}; *phys_ptr = address; *prot = 0; + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs); + if (access_type == MMU_INST_FETCH) { + /* Instruction fetches always use the MMU bank and the + * transaction attribute determined by the fetch address, + * regardless of CPU state. This is painful for QEMU + * to handle, because it would mean we need to encode + * into the mmu_idx not just the (user, negpri) information + * for the current security state but also that for the + * other security state, which would balloon the number + * of mmu_idx values needed alarmingly. + * Fortunately we can avoid this because it's not actually + * possible to arbitrarily execute code from memory with + * the wrong security attribute: it will always generate + * an exception of some kind or another, apart from the + * special case of an NS CPU executing an SG instruction + * in S&NSC memory. So we always just fail the translation + * here and sort things out in the exception handler + * (including possibly emulating an SG instruction). + */ + if (sattrs.ns != !secure) { + *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT; + return true; + } + } else { + /* For data accesses we always use the MMU bank indicated + * by the current CPU state, but the security attributes + * might downgrade a secure access to nonsecure. + */ + if (sattrs.ns) { + txattrs->secure = false; + } else if (!secure) { + /* NS access to S memory must fault. + * Architecturally we should first check whether the + * MPU information for this address indicates that we + * are doing an unaligned access to Device memory, which + * should generate a UsageFault instead. QEMU does not + * currently check for that kind of unaligned access though. + * If we added it we would need to do so as a special case + * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt(). 
+ */ + *fsr = M_FAKE_FSR_SFAULT; + return true; + } + } + } + /* Unlike the ARM ARM pseudocode, we don't need to check whether this * was an exception vector read from the vector table (which is always * done using the default system address map), because those accesses @@ -9112,7 +9290,7 @@ static bool get_phys_addr(CPUARMState *env, target_ulong address, if (arm_feature(env, ARM_FEATURE_V8)) { /* PMSAv8 */ ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx, - phys_ptr, prot, fsr); + phys_ptr, attrs, prot, fsr); } else if (arm_feature(env, ARM_FEATURE_V7)) { /* PMSAv7 */ ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx, From patchwork Fri Sep 22 15:00:02 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114048 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394946qgf; Fri, 22 Sep 2017 07:59:41 -0700 (PDT) X-Google-Smtp-Source: AOwi7QBBGEbSH4ofOyK0DPGYw6aQ3vY6MAyFE5XRCgLILFptsIXvglRnhcIJyVdeT3kaLiG4Cbwb X-Received: by 10.28.131.210 with SMTP id f201mr4494308wmd.71.1506092380863; Fri, 22 Sep 2017 07:59:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092380; cv=none; d=google.com; s=arc-20160816; b=jl0uA2A9Yp/1OOM3PKgC6awv+u1DTi2gSulesMKn/WH1j6heX7OxWWRuo0lTsw8V8J 80gRs/kWPKB9kprpTtNt/oO40feZQydjCNNL3nn/d1VulK/dmJPLARH6ITfnP2x35brq LboC1Vvngx43P7CErQ+L8KaedDNDfmpzFhzwMKM7RPel5xSihooqaUIDw8LhORTtzXm3 +9P9NNz+AMj1UvzpB9juS56DlozCBiA2qlD+Hw5+WaKyyakiw9CZ9IpkrEG1sXBLXWuJ QfrtBAnJFoPmMZwtGvH91ScSY8IPh4y6iZuJONrjLtH0ir0fh1yXHK2X9BhHyikdoJXk 71rA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=nZJTm1jYWSJXDE0BXZ/1HmYMdMyn1LUtURgi85avWko=; b=I1MVz1RbKTiVBsknqIFAKIBN5KSyEQa6YlWNZO/UOSB7A0GovqjSLgp7lnypKrK3Aq eOP/mAdH0rqR6AN5ATfDXAFROfK3UvjcbC3XS0Z8BxuYrbG0c6lCHy3RuRv0PtBdVkKn EimpbcAln/okOIkjdcxcFKKX6DeDtjUr39OIEyJQJTciJvbHIAc+msb+diNtSRmec29b 7zX6GS1Ybz3H68loAF9F24GC5vPH8KwDEDPZ1dZgxg51QbSJGLtQxDdWHPZ5jBc+oVlZ RhfC/VRWwmSMKSjZpLGHbn0WfBuav9+yc0cYtVa8RB2nMPnSAYqCBwhmxj/KxAYtA5M4 WTPA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id o43si22862wrb.207.2017.09.22.07.59.40 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:40 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQa-0007Ef-8q; Fri, 22 Sep 2017 15:59:40 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 15/20] target/arm: Fix calculation of secure mm_idx values Date: Fri, 22 Sep 2017 16:00:02 +0100 Message-Id: <1506092407-26985-16-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> In cpu_mmu_index() we try to do this: if (env->v7m.secure) { mmu_idx += ARMMMUIdx_MSUser; } but it will give the wrong answer, because ARMMMUIdx_MSUser includes the 0x40 ARM_MMU_IDX_M field, and so does the mmu_idx we're adding to, and we'll end up with 0x8n rather than 0x4n. This error is then nullified by the call to arm_to_core_mmu_idx() which masks out the high part, but we're about to factor out the code that calculates the ARMMMUIdx values so it can be used without passing it through arm_to_core_mmu_idx(), so fix this bug first. Signed-off-by: Peter Maydell --- target/arm/cpu.h | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) -- 2.7.4 Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 441e584..70c1f85 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -2335,14 +2335,16 @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch) int el = arm_current_el(env); if (arm_feature(env, ARM_FEATURE_M)) { - ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv; + ARMMMUIdx mmu_idx; - if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) { - mmu_idx = ARMMMUIdx_MNegPri; + if (el == 0) { + mmu_idx = env->v7m.secure ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser; + } else { + mmu_idx = env->v7m.secure ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv; } - if (env->v7m.secure) { - mmu_idx += ARMMMUIdx_MSUser; + if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) { + mmu_idx = env->v7m.secure ? 
ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri; } return arm_to_core_mmu_idx(mmu_idx); From patchwork Fri Sep 22 15:00:03 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114049 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394953qgf; Fri, 22 Sep 2017 07:59:41 -0700 (PDT) X-Google-Smtp-Source: AOwi7QDWSUKejhfuM24i7fynDXqO5/vmCAMtYZOHcCfZvPAAdm8omHbdfeHbM93Tc5sJCPO6zZzS X-Received: by 10.223.196.238 with SMTP id o43mr5426187wrf.276.1506092381484; Fri, 22 Sep 2017 07:59:41 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092381; cv=none; d=google.com; s=arc-20160816; b=Mqla7gulFVhhKUPUpQ2eR6XUEWDdGnV1H+CeDkNAsOEU8T453wlITEGOKFC3oPAmn0 NdtPK9vEBQo7h3WHfqUs0U+ELL3BxnCVd6+6Khsxc6vbizou/6A2xJ6RoJMEjBUjJG/x O6OlslHBATL0RkenqBDhFbzyt0LTFIS8G0gLH/xxvzBmdmjPHyLLb6LZAOMPVUmn9o8Q onsIlII017edSv4QoVu5t4kCR3EvsnzwR5BsTSz34D96WDzo574aPKGQ2fJnl1/RYMNr YtqUI9LywqVi3PO0C0JHnCcIA+n6Pu91NynlcQ46swJUcvU4M+cZGbBKFxJ6bRg/c8gW PVKg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=KiiJP6oAzDtBo6/IBM+/6QYagpA5xijRaGXqBgwBKz4=; b=x3h5djF5Rm99I62OEBjJpv7TOjoVtzAtUuhMrEskBwTZui7QdJyZR9IYuta+lVekhx JFJwsJ9taUVCdu4iOiXKe7JAz0G9KoMl5MlLG4XhuPOLfpz7Qwdd9Da3OrBXzaghaLZf rG31IInZyKPU08QMwzoeHBOKBfS4ije1LKdgrF+dHTl5zMFGcGt8QQn4S0t/jwzKb+EN 1Vf5EL5hqQ0GTePcJhhTE0npxHZ3Vu85sP+PMamWD5aVV3gKF0sLK2vYxCh9aM0G1FMV KABhf80wRvS0jO+JW8TPZ6U9+MR/kXI3krgqIi2wae5N5TQiLsuSZl72nzujzcwN0aWI gpvQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. [2001:8b0:1d0::2]) by mx.google.com with ESMTPS id i71si30486wme.59.2017.09.22.07.59.41 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:41 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQa-0007F6-Tc; Fri, 22 Sep 2017 15:59:40 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 16/20] target/arm: Factor out "get mmuidx for specified security state" Date: Fri, 22 Sep 2017 16:00:03 +0100 Message-Id: <1506092407-26985-17-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> For the SG instruction and secure function return we are going to want to do memory accesses using the MMU index of the CPU in secure state, even though the CPU is currently in non-secure state. Write arm_v7m_mmu_idx_for_secstate() to do this job, and use it in cpu_mmu_index(). 
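The mapping the new helper encodes can be modelled in isolation as below; the enum names are made up for the sketch, and the real ARMMMUIdx values additionally carry the 0x40 ARM_MMU_IDX_M flag discussed in the previous patch.

#include <stdbool.h>

typedef enum {
    MMU_IDX_M_USER, MMU_IDX_M_PRIV, MMU_IDX_M_NEGPRI,
    MMU_IDX_MS_USER, MMU_IDX_MS_PRIV, MMU_IDX_MS_NEGPRI,
} MiniMmuIdx;

/* secstate: which bank of MPU/register state to use;
 * priv: handler mode or privileged thread mode;
 * negpri: running at "negative priority" (HardFault, NMI or FAULTMASK raised),
 * which gets its own index.
 */
static MiniMmuIdx m_mmu_idx_for_secstate(bool secstate, bool priv, bool negpri)
{
    if (negpri) {
        return secstate ? MMU_IDX_MS_NEGPRI : MMU_IDX_M_NEGPRI;
    }
    return secstate ? (priv ? MMU_IDX_MS_PRIV : MMU_IDX_MS_USER)
                    : (priv ? MMU_IDX_M_PRIV : MMU_IDX_M_USER);
}

The reason secstate is an explicit parameter rather than being read from env->v7m.secure is that the SG emulation added later in the series needs the Secure-bank index while the CPU is still executing in Non-secure state.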
Signed-off-by: Peter Maydell --- target/arm/cpu.h | 32 +++++++++++++++++++++----------- 1 file changed, 21 insertions(+), 11 deletions(-) -- 2.7.4 Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 70c1f85..89d49cd 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -2329,23 +2329,33 @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) } } +/* Return the MMU index for a v7M CPU in the specified security state */ +static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, + bool secstate) +{ + int el = arm_current_el(env); + ARMMMUIdx mmu_idx; + + if (el == 0) { + mmu_idx = secstate ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser; + } else { + mmu_idx = secstate ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv; + } + + if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) { + mmu_idx = secstate ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri; + } + + return mmu_idx; +} + /* Determine the current mmu_idx to use for normal loads/stores */ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch) { int el = arm_current_el(env); if (arm_feature(env, ARM_FEATURE_M)) { - ARMMMUIdx mmu_idx; - - if (el == 0) { - mmu_idx = env->v7m.secure ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser; - } else { - mmu_idx = env->v7m.secure ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv; - } - - if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) { - mmu_idx = env->v7m.secure ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri; - } + ARMMMUIdx mmu_idx = arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure); return arm_to_core_mmu_idx(mmu_idx); } From patchwork Fri Sep 22 15:00:04 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114050 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394961qgf; Fri, 22 Sep 2017 07:59:42 -0700 (PDT) X-Google-Smtp-Source: AOwi7QCunlwyCvfCDYstomeOPwYOCpkxckKXrWARJMcnbbOQuq89pV3N+uoi7mRx7QNEesUMERo8 X-Received: by 10.223.136.170 with SMTP id f39mr5197606wrf.164.1506092382162; Fri, 22 Sep 2017 07:59:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092382; cv=none; d=google.com; s=arc-20160816; b=VfxTgxztagYohM5hUsYDFBwiSWVyuwx1SMiVIZsUuzUTW3Ns/3iWYhUD03Ju06c5Hk Y/OpThFkIcTycJD83Ev2J40jrzLgut1h/6ShhUTOxjUUXNtsSlfKDRB2b+VzHPqjEykL 6iuDz/rVgo4s7wxNcaGedvcodEjXqfN63tkHtezAhUK4IipV2SrRpCIwZ9wB5BkS/EsO p5annjkyxI1xyiJbttDfLWSyT/w/lwLHT46FvfOXG9AN7Ssw0W1PuXg0i2zMEfGyZfa0 gtoDe4BzSQG1AoLZ6Kp8fzF0nurmRfBtW8rSPOgzHQqs3PL++3qETSKBVIslpNIIavQs Q1lQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=BOoCOfC+1qbvuvyfjQ9poLtH5D60YCc2sPGh1InuxKE=; b=xbukoxdVW5/RS8QSziNrJ19IPDh9E32/aW7F2AbLUIufC6XFvVtOJQS54eDSwIkE8h WIqyA05VEN7dbNHfQtoJt8HlMbRr09knGAxxs/f17OlA7kK3p12u9WXfEnZIZ2tBT7Bk 5orDnMKxq6JVvPhXnS+TqLQXN57aQlGMQJjGAotr/la+IbmKdiIWZgDpfRaQXsha+ezH BPKxv1238QkbgdOnZORMkq9K+tYoXddB+6tScVQU2KoFhNQ8vmKBO9mG8ucdhmAA0ZEZ c0trWU4VnuhH9NIZFrEVpOuNLkZn926gTa+tOY7+nZ1PZum7cF1IgXlUhnWppiA16FJU ySYw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id z2si30278wrg.40.2017.09.22.07.59.41 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:42 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQb-0007Fb-Hk; Fri, 22 Sep 2017 15:59:41 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 17/20] target/arm: Implement SG instruction Date: Fri, 22 Sep 2017 16:00:04 +0100 Message-Id: <1506092407-26985-18-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> Implement the SG instruction, which we emulate 'by hand' in the exception handling code path. Signed-off-by: Peter Maydell --- target/arm/helper.c | 129 ++++++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 124 insertions(+), 5 deletions(-) -- 2.7.4 diff --git a/target/arm/helper.c b/target/arm/helper.c index b1ecb66..8df819d 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -41,6 +41,10 @@ typedef struct V8M_SAttributes { bool irvalid; } V8M_SAttributes; +static void v8m_security_lookup(CPUARMState *env, uint32_t address, + MMUAccessType access_type, ARMMMUIdx mmu_idx, + V8M_SAttributes *sattrs); + /* Definitions for the PMCCNTR and PMCR registers */ #define PMCRD 0x8 #define PMCRC 0x4 @@ -6724,6 +6728,123 @@ static void arm_log_exception(int idx) } } +static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, uint16_t *insn) +{ + /* Load a 16-bit portion of a v7M instruction, returning true on success, + * or false on failure (in which case we will have pended the appropriate + * exception). + * We need to do the instruction fetch's MPU and SAU checks + * like this because there is no MMU index that would allow + * doing the load with a single function call. Instead we must + * first check that the security attributes permit the load + * and that they don't mismatch on the two halves of the instruction, + * and then we do the load as a secure load (ie using the security + * attributes of the address, not the CPU, as architecturally required). + */ + CPUState *cs = CPU(cpu); + CPUARMState *env = &cpu->env; + V8M_SAttributes sattrs = {}; + MemTxAttrs attrs = {}; + ARMMMUFaultInfo fi = {}; + MemTxResult txres; + target_ulong page_size; + hwaddr physaddr; + int prot; + uint32_t fsr; + + v8m_security_lookup(env, env->regs[15], MMU_INST_FETCH, mmu_idx, &sattrs); + if (!sattrs.nsc || sattrs.ns) { + /* This must be the second half of the insn, and it straddles a + * region boundary with the second half not being S&NSC. 
+ */ + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; + } + if (get_phys_addr(env, env->regs[15], MMU_INST_FETCH, mmu_idx, + &physaddr, &attrs, &prot, &page_size, &fsr, &fi)) { + /* the MPU lookup failed */ + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n"); + return false; + } + *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr, + attrs, &txres); + if (txres != MEMTX_OK) { + env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); + qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n"); + return false; + } + return true; +} + +static bool v7m_handle_execute_nsc(ARMCPU *cpu) +{ + /* Check whether this attempt to execute code in a Secure & NS-Callable + * memory region is for an SG instruction; if so, then emulate the + * effect of the SG instruction and return true. Otherwise pend + * the correct kind of exception and return false. + */ + CPUARMState *env = &cpu->env; + ARMMMUIdx mmu_idx; + uint16_t insn; + + /* We should never get here unless get_phys_addr_pmsav8() caused + * an exception for NS executing in S&NSC memory. + */ + assert(!env->v7m.secure); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + + /* We want to do the MPU lookup as secure; work out what mmu_idx that is */ + mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); + + if (!v7m_read_half_insn(cpu, mmu_idx, &insn)) { + return false; + } + + if (!env->thumb) { + goto gen_invep; + } + + if (insn != 0xe97f) { + /* Not an SG instruction first half (we choose the IMPDEF + * early-SG-check option). + */ + goto gen_invep; + } + + if (!v7m_read_half_insn(cpu, mmu_idx, &insn)) { + return false; + } + + if (insn != 0xe97f) { + /* Not an SG instruction second half */ + goto gen_invep; + } + + /* OK, we have confirmed that we really have an SG instruction. + * We know we're NS in S memory so don't need to repeat those checks. + */ + qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32 + ", executing it\n", env->regs[15]); + env->regs[14] &= ~1; + switch_v7m_security_state(env, true); + xpsr_write(env, 0, XPSR_IT); + env->regs[15] += 4; + return true; + +gen_invep: + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; +} + void arm_v7m_cpu_do_interrupt(CPUState *cs) { ARMCPU *cpu = ARM_CPU(cs); @@ -6766,12 +6887,10 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) * the SG instruction have the same security attributes.) * Everything else must generate an INVEP SecureFault, so we * emulate the SG instruction here. - * TODO: actually emulate SG. 
*/ - env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, - "...really SecureFault with SFSR.INVEP\n"); + if (v7m_handle_execute_nsc(cpu)) { + return; + } break; case M_FAKE_FSR_SFAULT: /* Various flavours of SecureFault for attempts to execute or
From patchwork Fri Sep 22 15:00:05 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114051
From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 18/20] target/arm: Implement BLXNS Date: Fri, 22 Sep 2017 16:00:05 +0100 Message-Id: <1506092407-26985-19-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>
Implement the BLXNS instruction, which allows secure code to call non-secure code.
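For context, and purely as an illustration rather than something in the patch: BLXNS leaves the FNC_RETURN magic value in LR, so a later BX from the Non-secure callee has to be told apart both from a normal branch and from an exception return. The sketch below shows that three-way check, using the magic-value constants the next patch adds to target/arm/internals.h; the enum and function name are invented for this example.

#include <stdbool.h>
#include <stdint.h>

/* Values as added to target/arm/internals.h by the following patch */
#define EXC_RETURN_MIN_MAGIC 0xff000000
#define FNC_RETURN_MIN_MAGIC 0xfefffffe

typedef enum { BX_BRANCH, BX_FUNCTION_RETURN, BX_EXCEPTION_RETURN } BxKind;

/* Sketch only: classify the target of a BX/BXNS the way this series does */
static BxKind classify_bx_target(uint32_t dest, bool have_security_ext)
{
    if (dest >= EXC_RETURN_MIN_MAGIC) {
        return BX_EXCEPTION_RETURN;   /* handled by do_v7m_exception_exit() */
    }
    if (have_security_ext && dest >= FNC_RETURN_MIN_MAGIC) {
        /* 0xfefffffe or 0xfeffffff: the return leg of a BLXNS call */
        return BX_FUNCTION_RETURN;
    }
    return BX_BRANCH;   /* for BXNS, bit 0 then selects the security state */
}

In the generated code this comparison collapses to a single tcg_gen_brcondi_i32() against min_magic, as the gen_bx_excret_final_code() change in patch 19 shows.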
Signed-off-by: Peter Maydell --- target/arm/helper.h | 1 + target/arm/internals.h | 1 + target/arm/helper.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++ target/arm/translate.c | 17 +++++++++++++-- 4 files changed, 76 insertions(+), 2 deletions(-) -- 2.7.4 Acked-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/target/arm/helper.h b/target/arm/helper.h index 64afbac..2cf6f74 100644 --- a/target/arm/helper.h +++ b/target/arm/helper.h @@ -64,6 +64,7 @@ DEF_HELPER_3(v7m_msr, void, env, i32, i32) DEF_HELPER_2(v7m_mrs, i32, env, i32) DEF_HELPER_2(v7m_bxns, void, env, i32) +DEF_HELPER_2(v7m_blxns, void, env, i32) DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32) DEF_HELPER_3(set_cp_reg, void, env, ptr, i32) diff --git a/target/arm/internals.h b/target/arm/internals.h index fd9a7e8..1746737 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -60,6 +60,7 @@ static inline bool excp_is_internal(int excp) FIELD(V7M_CONTROL, NPRIV, 0, 1) FIELD(V7M_CONTROL, SPSEL, 1, 1) FIELD(V7M_CONTROL, FPCA, 2, 1) +FIELD(V7M_CONTROL, SFPA, 3, 1) /* Bit definitions for v7M exception return payload */ FIELD(V7M_EXCRET, ES, 0, 1) diff --git a/target/arm/helper.c b/target/arm/helper.c index 8df819d..30dc2a9 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -5890,6 +5890,12 @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) g_assert_not_reached(); } +void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) +{ + /* translate.c should never generate calls here in user-only mode */ + g_assert_not_reached(); +} + void switch_mode(CPUARMState *env, int mode) { ARMCPU *cpu = arm_env_get_cpu(env); @@ -6182,6 +6188,59 @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) env->regs[15] = dest & ~1; } +void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) +{ + /* Handle v7M BLXNS: + * - bit 0 of the destination address is the target security state + */ + + /* At this point regs[15] is the address just after the BLXNS */ + uint32_t nextinst = env->regs[15] | 1; + uint32_t sp = env->regs[13] - 8; + uint32_t saved_psr; + + /* translate.c will have made BLXNS UNDEF unless we're secure */ + assert(env->v7m.secure); + + if (dest & 1) { + /* target is Secure, so this is just a normal BLX, + * except that the low bit doesn't indicate Thumb/not. + */ + env->regs[14] = nextinst; + env->thumb = 1; + env->regs[15] = dest & ~1; + return; + } + + /* Target is non-secure: first push a stack frame */ + if (!QEMU_IS_ALIGNED(sp, 8)) { + qemu_log_mask(LOG_GUEST_ERROR, + "BLXNS with misaligned SP is UNPREDICTABLE\n"); + } + + saved_psr = env->v7m.exception; + if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) { + saved_psr |= XPSR_SFPA; + } + + /* Note that these stores can throw exceptions on MPU faults */ + cpu_stl_data(env, sp, nextinst); + cpu_stl_data(env, sp + 4, saved_psr); + + env->regs[13] = sp; + env->regs[14] = 0xfeffffff; + if (arm_v7m_is_handler_mode(env)) { + /* Write a dummy value to IPSR, to avoid leaking the current secure + * exception number to non-secure code. This is guaranteed not + * to cause write_v7m_exception() to actually change stacks. 
+ */ + write_v7m_exception(env, 1); + } + switch_v7m_security_state(env, dest & 1); + env->thumb = 1; + env->regs[15] = dest & ~1; +} + static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, bool spsel) { diff --git a/target/arm/translate.c b/target/arm/translate.c index ab1a12a..53694bb 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -1013,6 +1013,20 @@ static inline void gen_bxns(DisasContext *s, int rm) s->base.is_jmp = DISAS_EXIT; } +static inline void gen_blxns(DisasContext *s, int rm) +{ + TCGv_i32 var = load_reg(s, rm); + + /* We don't need to sync condexec state, for the same reason as blxns. + * We do however need to set the PC, because the blxns helper reads it. + * The blxns helper may throw an exception. + */ + gen_set_pc_im(s, s->pc); + gen_helper_v7m_blxns(cpu_env, var); + tcg_temp_free_i32(var); + s->base.is_jmp = DISAS_EXIT; +} + /* Variant of store_reg which uses branch&exchange logic when storing to r15 in ARM architecture v7 and above. The source must be a temporary and will be marked as dead. */ @@ -11221,8 +11235,7 @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s) goto undef; } if (link) { - /* BLXNS: not yet implemented */ - goto undef; + gen_blxns(s, rm); } else { gen_bxns(s, rm); } From patchwork Fri Sep 22 15:00:06 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114052 Delivered-To: patches@linaro.org Received: by 10.140.106.117 with SMTP id d108csp3394993qgf; Fri, 22 Sep 2017 07:59:43 -0700 (PDT) X-Google-Smtp-Source: AOwi7QDoDtjoJopgUR6jqaeDEc+oKI54eyxTDXbEXPJnigN+3zX1nNe1rSVwEw4CTK4h4GSGTweE X-Received: by 10.28.214.206 with SMTP id n197mr3964079wmg.21.1506092383695; Fri, 22 Sep 2017 07:59:43 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1506092383; cv=none; d=google.com; s=arc-20160816; b=VOFb44/BqLrufyyKhqdH1XBt2DEj7/Zu1vJgSTKpel9YoD7/7sHDkhfwYzCvqIj2oG Gyh5cVoIblgXdLiR8bBrHX6BUVaHUc5oWKCsti2mi1DYmB+bkYUZeEVlJ8CAI6HwQ/t7 Wd5iOCU+LEWjI+hM/NyhDax1/0LDHvP316vtX52mr7rCwGCMLGS310JHdcRLMst41b8l VynhxyHsgcEWBNCZlYbEy6czJcJyj2kx7hHxaq4IlKDmhdfuBc6nyAKKgz7Req2d3sna XWKxy08DMcwxor/pAnwwiPMNVTPCuUq/FGWxrwG1k+1iaGg5QABU0h1V9InBQYJ1LJkf hhJA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :arc-authentication-results; bh=6F2k0RvDuEQkRymXVYMoOqP2IjBcU1gUE6d/4U3ThTE=; b=ql9z485IyrJ2iHRJ8gnXSwksf7YpLCmXz4zv/XRrRoA05r4DVEOV2iuh1DH0Uhy1X+ UQDNzzfEpBPVAtTnWwS/8fwUC7HjRWuHpiW4siDz+5riNDbRsoLgRuHJ1hhqsVMGeTkE 1gnqO+OINTQmf2HsE1g45cktzWGkMu1VKJKyaQP0woz+zonFH/FbYSjpT4Q1834yujzQ BElthCVTI0rGm9qtSE/xZFdJcNSOyqm+LHwWQtFoKqUbel/8F6AzRVeiskeLuNfFuaFE Tp4YC/V0GE60qPA8KK/u7EwdCJZVPBoX9ndYCU5w2ny4SF8gd+BfMKCbWqhr4R9neykL ttcg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by mx.google.com with ESMTPS id i96si17120wri.295.2017.09.22.07.59.43 for (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Fri, 22 Sep 2017 07:59:43 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) client-ip=2001:8b0:1d0::2; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of pm215@archaic.org.uk designates 2001:8b0:1d0::2 as permitted sender) smtp.mailfrom=pm215@archaic.org.uk; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from pm215 by orth.archaic.org.uk with local (Exim 4.89) (envelope-from ) id 1dvPQd-0007GK-1I; Fri, 22 Sep 2017 15:59:43 +0100 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 19/20] target/arm: Implement secure function return Date: Fri, 22 Sep 2017 16:00:06 +0100 Message-Id: <1506092407-26985-20-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> Secure function return happens when a non-secure function has been called using BLXNS and so has a particular magic LR value (either 0xfefffffe or 0xfeffffff). The function return via BX behaves specially when the new PC value is this magic value, in the same way that exception returns are handled. Adjust our BX excret guards so that they recognize the function return magic number as well, and perform the function-return unstacking in do_v7m_exception_exit(). Signed-off-by: Peter Maydell --- target/arm/internals.h | 7 +++ target/arm/helper.c | 115 +++++++++++++++++++++++++++++++++++++++++++++---- target/arm/translate.c | 14 +++++- 3 files changed, 126 insertions(+), 10 deletions(-) -- 2.7.4 Acked-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/target/arm/internals.h b/target/arm/internals.h index 1746737..43106a2 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -72,6 +72,13 @@ FIELD(V7M_EXCRET, DCRS, 5, 1) FIELD(V7M_EXCRET, S, 6, 1) FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */ +/* Minimum value which is a magic number for exception return */ +#define EXC_RETURN_MIN_MAGIC 0xff000000 +/* Minimum number which is a magic number for function or exception return + * when using v8M security extension + */ +#define FNC_RETURN_MIN_MAGIC 0xfefffffe + /* We use a few fake FSR values for internal purposes in M profile. * M profile cores don't have A/R format FSRs, but currently our * get_phys_addr() code assumes A/R profile and reports failures via diff --git a/target/arm/helper.c b/target/arm/helper.c index 30dc2a9..888fe0a 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6167,7 +6167,17 @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) * - if the return value is a magic value, do exception return (like BX) * - otherwise bit 0 of the return value is the target security state */ - if (dest >= 0xff000000) { + uint32_t min_magic; + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + /* Covers FNC_RETURN and EXC_RETURN magic */ + min_magic = FNC_RETURN_MIN_MAGIC; + } else { + /* EXC_RETURN magic only */ + min_magic = EXC_RETURN_MIN_MAGIC; + } + + if (dest >= min_magic) { /* This is an exception return magic value; put it where * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT. 
* Note that if we ever add gen_ss_advance() singlestep support to @@ -6460,12 +6470,19 @@ static void do_v7m_exception_exit(ARMCPU *cpu) bool exc_secure = false; bool return_to_secure; - /* We can only get here from an EXCP_EXCEPTION_EXIT, and - * gen_bx_excret() enforces the architectural rule - * that jumps to magic addresses don't have magic behaviour unless - * we're in Handler mode (compare pseudocode BXWritePC()). + /* If we're not in Handler mode then jumps to magic exception-exit + * addresses don't have magic behaviour. However for the v8M + * security extensions the magic secure-function-return has to + * work in thread mode too, so to avoid doing an extra check in + * the generated code we allow exception-exit magic to also cause the + * internal exception and bring us here in thread mode. Correct code + * will never try to do this (the following insn fetch will always + * fault) so we the overhead of having taken an unnecessary exception + * doesn't matter. */ - assert(arm_v7m_is_handler_mode(env)); + if (!arm_v7m_is_handler_mode(env)) { + return; + } /* In the spec pseudocode ExceptionReturn() is called directly * from BXWritePC() and gets the full target PC value including @@ -6753,6 +6770,78 @@ static void do_v7m_exception_exit(ARMCPU *cpu) qemu_log_mask(CPU_LOG_INT, "...successful exception return\n"); } +static bool do_v7m_function_return(ARMCPU *cpu) +{ + /* v8M security extensions magic function return. + * We may either: + * (1) throw an exception (longjump) + * (2) return true if we successfully handled the function return + * (3) return false if we failed a consistency check and have + * pended a UsageFault that needs to be taken now + * + * At this point the magic return value is split between env->regs[15] + * and env->thumb. We don't bother to reconstitute it because we don't + * need it (all values are handled the same way). + */ + CPUARMState *env = &cpu->env; + uint32_t newpc, newpsr, newpsr_exc; + + qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n"); + + { + bool threadmode, spsel; + TCGMemOpIdx oi; + ARMMMUIdx mmu_idx; + uint32_t *frame_sp_p; + uint32_t frameptr; + + /* Pull the return address and IPSR from the Secure stack */ + threadmode = !arm_v7m_is_handler_mode(env); + spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK; + + frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel); + frameptr = *frame_sp_p; + + /* These loads may throw an exception (for MPU faults). We want to + * do them as secure, so work out what MMU index that is. 
+ */ + mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); + oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx)); + newpc = helper_le_ldul_mmu(env, frameptr, oi, 0); + newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0); + + /* Consistency checks on new IPSR */ + newpsr_exc = newpsr & XPSR_EXCP; + if (!((env->v7m.exception == 0 && newpsr_exc == 0) || + (env->v7m.exception == 1 && newpsr_exc != 0))) { + /* Pend the fault and tell our caller to take it */ + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, + "...taking INVPC UsageFault: " + "IPSR consistency check failed\n"); + return false; + } + + *frame_sp_p = frameptr + 8; + } + + /* This invalidates frame_sp_p */ + switch_v7m_security_state(env, true); + env->v7m.exception = newpsr_exc; + env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK; + if (newpsr & XPSR_SFPA) { + env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK; + } + xpsr_write(env, 0, XPSR_IT); + env->thumb = newpc & 1; + env->regs[15] = newpc & ~1; + + qemu_log_mask(CPU_LOG_INT, "...function return successful\n"); + return true; +} + static void arm_log_exception(int idx) { if (qemu_loglevel_mask(CPU_LOG_INT)) { @@ -7034,8 +7123,18 @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) case EXCP_IRQ: break; case EXCP_EXCEPTION_EXIT: - do_v7m_exception_exit(cpu); - return; + if (env->regs[15] < EXC_RETURN_MIN_MAGIC) { + /* Must be v8M security extension function return */ + assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + if (do_v7m_function_return(cpu)) { + return; + } + } else { + do_v7m_exception_exit(cpu); + return; + } + break; default: cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); return; /* Never happens. Keep compiler happy. */ diff --git a/target/arm/translate.c b/target/arm/translate.c index 53694bb..f5cca07 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -960,7 +960,8 @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var) * s->base.is_jmp that we need to do the rest of the work later. */ gen_bx(s, var); - if (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M)) { + if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY) || + (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M))) { s->base.is_jmp = DISAS_BX_EXCRET; } } @@ -969,9 +970,18 @@ static inline void gen_bx_excret_final_code(DisasContext *s) { /* Generate the code to finish possible exception return and end the TB */ TCGLabel *excret_label = gen_new_label(); + uint32_t min_magic; + + if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) { + /* Covers FNC_RETURN and EXC_RETURN magic */ + min_magic = FNC_RETURN_MIN_MAGIC; + } else { + /* EXC_RETURN magic only */ + min_magic = EXC_RETURN_MIN_MAGIC; + } /* Is the new PC value in the magic range indicating exception return? 
*/ - tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], 0xff000000, excret_label); + tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label); /* No: end the TB as we would for a DISAS_JMP */ if (is_singlestepping(s)) { gen_singlestep_exception(s);
From patchwork Fri Sep 22 15:00:07 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 114053
From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Cc: patches@linaro.org Subject: [PATCH 20/20] nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit Date: Fri, 22 Sep 2017 16:00:07 +0100 Message-Id: <1506092407-26985-21-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org> References: <1506092407-26985-1-git-send-email-peter.maydell@linaro.org>
When we added support for the new SHCSR bits in v8M in commit 437d59c17e9 the code to support writing to the new HARDFAULTPENDED bit was accidentally only added for non-secure writes; the secure banked version of the bit should also be writable. Signed-off-by: Peter Maydell --- hw/intc/armv7m_nvic.c | 1 + 1 file changed, 1 insertion(+) -- 2.7.4 Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c index bd1d5d3..22d5e6e 100644 --- a/hw/intc/armv7m_nvic.c +++ b/hw/intc/armv7m_nvic.c @@ -1230,6 +1230,7 @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value, s->sec_vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0; s->sec_vectors[ARMV7M_EXCP_USAGE].enabled = (value & (1 << 18)) != 0; + s->sec_vectors[ARMV7M_EXCP_HARD].pending = (value & (1 << 21)) != 0; /* SecureFault not banked, but RAZ/WI to NS */ s->vectors[ARMV7M_EXCP_SECURE].active = (value & (1 << 4)) != 0; s->vectors[ARMV7M_EXCP_SECURE].enabled = (value & (1 << 19)) != 0;