From patchwork Thu Oct 30 21:28:32 2014
X-Patchwork-Submitter: Greg Bellows
X-Patchwork-Id: 39966
From: Greg Bellows <greg.bellows@linaro.org>
To: qemu-devel@nongnu.org, peter.maydell@linaro.org, serge.fdrv@gmail.com,
    edgar.iglesias@gmail.com, aggelerf@ethz.ch
Date: Thu, 30 Oct 2014 16:28:32 -0500
Message-Id: <1414704538-17103-2-git-send-email-greg.bellows@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1414704538-17103-1-git-send-email-greg.bellows@linaro.org>
References: <1414704538-17103-1-git-send-email-greg.bellows@linaro.org>
Cc: greg.bellows@linaro.org
Subject: [Qemu-devel] [PATCH v8 01/27] target-arm: extend async excp masking

This patch extends arm_excp_unmasked() to use lookup tables for determining
whether IRQ and FIQ exceptions are masked. The lookup tables are based on the
physical interrupt masking tables in the ARMv8 and ARMv7 specifications. If
EL3 is using AArch64, IRQ/FIQ masking is ignored in all exception levels
other than EL3 when SCR.{FIQ|IRQ} is set to 1 (routed to EL3).

Signed-off-by: Greg Bellows <greg.bellows@linaro.org>

---

v7 -> v8
- Add IRQ and FIQ exception masking lookup tables.
- Rewrite patch to use lookup tables for determining whether an exception is
  masked or not.
v5 -> v6
- Globally change Aarch# to AArch#
- Fixed comment termination

v4 -> v5
- Merge with v4 patch 10
---
 target-arm/cpu.h | 218 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 190 insertions(+), 28 deletions(-)

diff --git a/target-arm/cpu.h b/target-arm/cpu.h
index cb6ec5c..be5d022 100644
--- a/target-arm/cpu.h
+++ b/target-arm/cpu.h
@@ -1242,44 +1242,200 @@ bool write_cpustate_to_list(ARMCPU *cpu);
 # define TARGET_VIRT_ADDR_SPACE_BITS 32
 #endif
 
+/* Physical FIQ exception mask lookup table
+ *
+ * [ From ARM ARMv7 B1.8.6 Async exception masking (table B1-12) ]
+ * [ From ARM ARMv8 G1.11.3 Async exception masking (table G1-18) ]
+ *
+ * The below multi-dimensional table is used for looking up the masking
+ * behavior given the specified state conditions. The table values are used
+ * for determining whether the PSTATE.AIF/CPSR.AIF bits control interrupt
+ * masking or not.
+ *
+ * Dimensions:
+ * fiq_excp_mask_table[2][2][2][2][2][2][4]
+ *                     |  |  |  |  |  |  +--- Current EL
+ *                     |  |  |  |  |  +------ Non-secure(0)/Secure(1)
+ *                     |  |  |  |  +--------- HCR mask override
+ *                     |  |  |  +------------ SCR exec state control
+ *                     |  |  +--------------- SCR non-secure masking
+ *                     |  +------------------ SCR mask override
+ *                     +--------------------- 32-bit(0)/64-bit(1) EL3
+ *
+ * The table values are as such:
+ *  0 = Exception is masked depending on PSTATE
+ *  1 = Exception is taken (unmasked) regardless of PSTATE
+ * -1 = Cannot occur
+ * -2 = Exception not taken, left pending
+ *
+ * Notes:
+ * - RW is don't care when EL3 is AArch32
+ * - AW/FW are don't care when EL3 is AArch32
+ * - Exceptions left pending (-2) are informational and should never escape,
+ *   as the correct procedure first checks the current EL against the
+ *   target EL.
+ *
+ *         SCR      HCR
+ *      64  EA      AMO                   From
+ *     BIT IRQ  AW  IMO      Non-secure            Secure
+ *     EL3 FIQ  FW  RW   FMO EL0 EL1 EL2 EL3   EL0 EL1 EL2 EL3
+ */
+static const int8_t fiq_excp_mask_table[2][2][2][2][2][2][4] = {
+    {{{{{/* 0 0 0 0 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 0 0 0 1 */{ 1, 1, 0, -1 },{ 0, -1, -1, 0 },},},
+       {{/* 0 0 0 1 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 0 0 1 1 */{ 1, 1, 0, -1 },{ 0, -1, -1, 0 },},},},
+      {{{/* 0 0 1 0 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 0 1 0 1 */{ 1, 1, 0, -1 },{ 0, -1, -1, 0 },},},
+       {{/* 0 0 1 1 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 0 1 1 1 */{ 1, 1, 0, -1 },{ 0, -1, -1, 0 },},},},},
+     {{{{/* 0 1 0 0 0 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 1 0 0 1 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},},
+       {{/* 0 1 0 1 0 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 1 0 1 1 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},},},
+      {{{/* 0 1 1 0 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 1 1 0 1 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},},
+       {{/* 0 1 1 1 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+        {/* 0 1 1 1 1 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},},},},},
+    {{{{{/* 1 0 0 0 0 */{ 0, 0, 0, -1 },{ 0, 0, -1, -2 },},
+        {/* 1 0 0 0 1 */{ 1, 1, 0, -1 },{ 0, 0, -1, -2 },},},
+       {{/* 1 0 0 1 0 */{ 0, 0, -2, -1 },{ 0, 0, -1, -2 },},
+        {/* 1 0 0 1 1 */{ 1, 1, 0, -1 },{ 0, 0, -1, -2 },},},},
+      {{{/* 1 0 1 0 0 */{ 1, 1, 1, -1 },{ 0, 0, -1, -2 },},
+        {/* 1 0 1 0 1 */{ 1, 1, 0, -1 },{ 0, 0, -1, -2 },},},
+       {{/* 1 0 1 1 0 */{ 0, 0, -2, -1 },{ 0, 0, -1, -2 },},
+        {/* 1 0 1 1 1 */{ 1, 1, 0, -1 },{ 0, 0, -1, -2 },},},},},
+     {{{{/* 1 1 0 0 0 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},
+        {/* 1 1 0 0 1 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},},
+       {{/* 1 1 0 1 0 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},
+        {/* 1 1 0 1 1 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},},},
+      {{{/* 1 1 1 0 0 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},
+        {/* 1 1 1 0 1 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},},
+       {{/* 1 1 1 1 0 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},
+        {/* 1 1 1 1 1 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},},},},},
+};
+
+/* Physical IRQ exception mask lookup table
+ *
+ * [ From ARM ARMv7 B1.8.6 Async exception masking (table B1-13) ]
+ * [ From ARM ARMv8 G1.11.3 Async exception masking (table G1-19) ]
+ *
+ * The below multi-dimensional table is used for looking up the masking
+ * behavior given the specified state conditions. The table values are used
+ * for determining whether the PSTATE.AIF/CPSR.AIF bits control interrupt
+ * masking or not.
+ *
+ * Dimensions:
+ * irq_excp_mask_table[2][2][2][2][2][4]
+ *                     |  |  |  |  |  +--- Current EL
+ *                     |  |  |  |  +------ Non-secure(0)/Secure(1)
+ *                     |  |  |  +--------- HCR mask override
+ *                     |  |  +------------ SCR exec state control
+ *                     |  +--------------- SCR mask override
+ *                     +------------------ 32-bit(0)/64-bit(1) EL3
+ *
+ * The table values are as such:
+ *  0 = Exception is masked depending on PSTATE
+ *  1 = Exception is taken (unmasked) regardless of PSTATE
+ * -1 = Cannot occur
+ * -2 = Exception not taken, left pending
+ *
+ * Notes:
+ * - RW is don't care when EL3 is AArch32
+ * - Exceptions left pending (-2) are informational and should never escape,
+ *   as the correct procedure first checks the current EL against the
+ *   target EL.
+ *
+ *        SCR      HCR
+ *     64  EA      AMO                   From
+ *    BIT IRQ      IMO      Non-secure            Secure
+ *    EL3 FIQ  RW  FMO  EL0 EL1 EL2 EL3   EL0 EL1 EL2 EL3
+ */
+static const int8_t irq_excp_mask_table[2][2][2][2][2][4] = {
+    {{{{/* 0 0 0 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+       {/* 0 0 0 1 */{ 1, 1, 0, -1 },{ 0, -1, -1, 0 },},},
+      {{/* 0 0 1 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+       {/* 0 0 1 1 */{ 1, 1, 0, -1 },{ 0, -1, -1, 0 },},},},
+     {{{/* 0 1 0 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+       {/* 0 1 0 1 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},},
+      {{/* 0 1 1 0 */{ 0, 0, 0, -1 },{ 0, -1, -1, 0 },},
+       {/* 0 1 1 1 */{ 1, 1, 1, -1 },{ 0, -1, -1, 0 },},},},},
+    {{{{/* 1 0 0 0 */{ 0, 0, 0, -1 },{ 0, 0, -1, -2 },},
+       {/* 1 0 0 1 */{ 1, 1, 0, -1 },{ 0, 0, -1, -2 },},},
+      {{/* 1 0 1 0 */{ 0, 0, -2, -1 },{ 0, 0, -1, -2 },},
+       {/* 1 0 1 1 */{ 1, 1, 0, -1 },{ 0, 0, -1, -2 },},},},
+     {{{/* 1 1 0 0 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},
+       {/* 1 1 0 1 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},},
+      {{/* 1 1 1 0 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},
+       {/* 1 1 1 1 */{ 1, 1, 1, -1 },{ 1, 1, -1, 0 },},},},},
+};
+
 static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
 {
     CPUARMState *env = cs->env_ptr;
     unsigned int cur_el = arm_current_el(env);
-    unsigned int target_el = arm_excp_target_el(cs, excp_idx);
-    /* FIXME: Use actual secure state. */
-    bool secure = false;
-    /* If in EL1/0, Physical IRQ routing to EL2 only happens from NS state. */
-    bool irq_can_hyp = !secure && cur_el < 2 && target_el == 2;
-    /* ARMv7-M interrupt return works by loading a magic value
-     * into the PC. On real hardware the load causes the
-     * return to occur. The qemu implementation performs the
-     * jump normally, then does the exception return when the
-     * CPU tries to execute code at the magic address.
-     * This will cause the magic PC value to be pushed to
-     * the stack if an interrupt occurred at the wrong time.
-     * We avoid this by disabling interrupts when
-     * pc contains a magic address.
+    bool secure = arm_is_secure(env);
+    uint32_t rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
+    uint32_t is64 = arm_el_is_aa64(env, 3);
+    uint32_t fw;
+    uint32_t scr;
+    uint32_t hcr;
+    bool pstate_unmasked;
+    int8_t unmasked = 0;
+
+    /* Don't take exceptions if they target a lower EL.
+     * This check should catch any exceptions that would not be taken but left
+     * pending.
      */
-    bool irq_unmasked = !(env->daif & PSTATE_I)
-        && (!IS_M(env) || env->regs[15] < 0xfffffff0);
-
-    /* Don't take exceptions if they target a lower EL. */
-    if (cur_el > target_el) {
+    if (cur_el > arm_excp_target_el(cs, excp_idx)) {
         return false;
     }
 
     switch (excp_idx) {
     case EXCP_FIQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_FMO)) {
-            return true;
-        }
-        return !(env->daif & PSTATE_F);
+        scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
+        hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
+
+        /* The SCR.FW bit only affects masking when the virtualization
+         * extension is present. The unmasked table assumes that the extension
+         * is present, so when it is not present we must set FW to 1 to
+         * remain neutral.
+         */
+        fw = (!arm_feature(env, ARM_FEATURE_EL2) |
+              ((env->cp15.scr_el3 & SCR_FW) == SCR_FW));
+
+        /* FIQs are unmasked if PSTATE.F is clear */
+        pstate_unmasked = !(env->daif & PSTATE_F);
+
+        /* Perform a table lookup on the current state. If the table returns
+         * 1 the exception is taken regardless of PSTATE; if it returns 0,
+         * PSTATE determines whether the interrupt is unmasked or not.
+         */
+        unmasked = fiq_excp_mask_table[is64][scr][fw][rw][hcr][secure][cur_el];
+        break;
+
     case EXCP_IRQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_IMO)) {
-            return true;
-        }
-        return irq_unmasked;
+        scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
+        hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
+
+        /* ARMv7-M interrupt return works by loading a magic value
+         * into the PC. On real hardware the load causes the
+         * return to occur. The qemu implementation performs the
+         * jump normally, then does the exception return when the
+         * CPU tries to execute code at the magic address.
+         * This will cause the magic PC value to be pushed to
+         * the stack if an interrupt occurred at the wrong time.
+         * We avoid this by disabling interrupts when
+         * pc contains a magic address.
+         */
+        pstate_unmasked = !(env->daif & PSTATE_I)
+                          && (!IS_M(env) || env->regs[15] < 0xfffffff0);
+
+        /* Perform a table lookup on the current state. If the table returns
+         * 1 the exception is taken regardless of PSTATE; if it returns 0,
+         * PSTATE determines whether the interrupt is unmasked or not.
+         */
+        unmasked = irq_excp_mask_table[is64][scr][rw][hcr][secure][cur_el];
+        break;
+
     case EXCP_VFIQ:
         if (!secure && !(env->cp15.hcr_el2 & HCR_FMO)) {
             /* VFIQs are only taken when hypervized and non-secure. */
@@ -1291,10 +1447,16 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
             /* VIRQs are only taken when hypervized and non-secure. */
             return false;
         }
-        return irq_unmasked;
+        return !(env->daif & PSTATE_I)
+               && (!IS_M(env) || env->regs[15] < 0xfffffff0);
     default:
         g_assert_not_reached();
     }
+
+    /* We had better not have a negative table value, or something went wrong */
+    assert(unmasked >= 0);
+
+    return unmasked || pstate_unmasked;
 }
 
 static inline CPUARMState *cpu_init(const char *cpu_model)
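
For readers who want to experiment with the lookup-and-combine scheme outside of QEMU, the standalone sketch below mirrors the shape of the logic in the patch. It is not part of the patch: the table is deliberately reduced to four dimensions, its values are placeholders, and the names demo_mask_table and demo_excp_unmasked are illustrative only; the authoritative values are the fiq_excp_mask_table/irq_excp_mask_table entries in the diff above, derived from the ARM ARM masking tables.

```c
/* Illustrative sketch only, not part of the patch. It mimics the shape of
 * the lookup used by arm_excp_unmasked(): index a table by CPU state bits,
 * then combine the result with the PSTATE-based mask check.
 * The table values here are placeholders, NOT the architectural ones.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* demo_mask_table[scr_route][hcr_override][secure][cur_el]
 *   1 -> exception is taken regardless of the PSTATE mask bit
 *   0 -> the PSTATE mask bit decides
 *  -1 -> combination cannot occur (placeholder)
 */
static const int8_t demo_mask_table[2][2][2][4] = {
    /* scr_route = 0 */
    {{{ 0, 0, 0, -1 }, { 0, -1, -1, 0 }},   /* hcr_override = 0 */
     {{ 1, 1, 0, -1 }, { 0, -1, -1, 0 }}},  /* hcr_override = 1 */
    /* scr_route = 1 */
    {{{ 1, 1, 1, -1 }, { 0, -1, -1, 0 }},
     {{ 1, 1, 1, -1 }, { 0, -1, -1, 0 }}},
};

/* Return true if the interrupt is unmasked for the given state. */
static bool demo_excp_unmasked(int scr_route, int hcr_override,
                               int secure, int cur_el, bool pstate_masked)
{
    int8_t v = demo_mask_table[scr_route][hcr_override][secure][cur_el];

    assert(v >= 0);              /* negative entries must never be reached */
    return v || !pstate_masked;  /* the table overrides PSTATE when it is 1 */
}

int main(void)
{
    /* Example: NS EL1, SCR routes the interrupt to EL3, PSTATE bit set. */
    printf("unmasked = %d\n", demo_excp_unmasked(1, 0, 0, 1, true));
    return 0;
}
```

The point the sketch demonstrates is the division of labour in the patch: the table only answers "taken regardless of PSTATE?", and when it returns 0 the ordinary PSTATE.F/PSTATE.I check still applies, which is how the final `return unmasked || pstate_unmasked;` combines the two.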