From patchwork Fri Jun 26 15:14:04 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Maydell <peter.maydell@linaro.org>
X-Patchwork-Id: 191844
Delivered-To: patch@linaro.org
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PULL 37/57] target/arm: Implement helper_mte_check1
Date: Fri, 26 Jun 2020 16:14:04 +0100
Message-Id: <20200626151424.30117-38-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200626151424.30117-1-peter.maydell@linaro.org>
References: <20200626151424.30117-1-peter.maydell@linaro.org>
MIME-Version: 1.0

From: Richard Henderson <richard.henderson@linaro.org>

Fill out the stub that was added earlier.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h  |  48 +++++++++++++++
 target/arm/mte_helper.c | 132 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 179 insertions(+), 1 deletion(-)

-- 
2.20.1

diff --git a/target/arm/internals.h b/target/arm/internals.h
index fb92ef6b840..807830cc400 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1318,6 +1318,10 @@ FIELD(MTEDESC, WRITE, 8, 1)
 FIELD(MTEDESC, ESIZE, 9, 5)
 FIELD(MTEDESC, TSIZE, 14, 10)  /* mte_checkN only */
 
+bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr);
+uint64_t mte_check1(CPUARMState *env, uint32_t desc,
+                    uint64_t ptr, uintptr_t ra);
+
 static inline int allocation_tag_from_addr(uint64_t ptr)
 {
     return extract64(ptr, 56, 4);
@@ -1328,4 +1332,48 @@ static inline uint64_t address_with_allocation_tag(uint64_t ptr, int rtag)
     return deposit64(ptr, 56, 4, rtag);
 }
 
+/* Return true if tbi bits mean that the access is checked.  */
+static inline bool tbi_check(uint32_t desc, int bit55)
+{
+    return (desc >> (R_MTEDESC_TBI_SHIFT + bit55)) & 1;
+}
+
+/* Return true if tcma bits mean that the access is unchecked.  */
+static inline bool tcma_check(uint32_t desc, int bit55, int ptr_tag)
+{
+    /*
+     * We had extracted bit55 and ptr_tag for other reasons, so fold
+     * (ptr<59:55> == 00000 || ptr<59:55> == 11111) into a single test.
+     */
+    bool match = ((ptr_tag + bit55) & 0xf) == 0;
+    bool tcma = (desc >> (R_MTEDESC_TCMA_SHIFT + bit55)) & 1;
+    return tcma && match;
+}
+
+/*
+ * For TBI, ideally, we would do nothing.  Proper behaviour on fault is
+ * for the tag to be present in the FAR_ELx register.  But for user-only
+ * mode, we do not have a TLB with which to implement this, so we must
+ * remove the top byte.
+ */
+static inline uint64_t useronly_clean_ptr(uint64_t ptr)
+{
+    /* TBI is known to be enabled. */
+#ifdef CONFIG_USER_ONLY
+    ptr = sextract64(ptr, 0, 56);
+#endif
+    return ptr;
+}
+
+static inline uint64_t useronly_maybe_clean_ptr(uint32_t desc, uint64_t ptr)
+{
+#ifdef CONFIG_USER_ONLY
+    int64_t clean_ptr = sextract64(ptr, 0, 56);
+    if (tbi_check(desc, clean_ptr < 0)) {
+        ptr = clean_ptr;
+    }
+#endif
+    return ptr;
+}
+
 #endif
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index 907a12b3664..c8a5e7c0edd 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -359,12 +359,142 @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
     }
 }
 
+/* Record a tag check failure.  */
+static void mte_check_fail(CPUARMState *env, int mmu_idx,
+                           uint64_t dirty_ptr, uintptr_t ra)
+{
+    ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
+    int el, reg_el, tcf, select;
+    uint64_t sctlr;
+
+    reg_el = regime_el(env, arm_mmu_idx);
+    sctlr = env->cp15.sctlr_el[reg_el];
+
+    switch (arm_mmu_idx) {
+    case ARMMMUIdx_E10_0:
+    case ARMMMUIdx_E20_0:
+        el = 0;
+        tcf = extract64(sctlr, 38, 2);
+        break;
+    default:
+        el = reg_el;
+        tcf = extract64(sctlr, 40, 2);
+    }
+
+    switch (tcf) {
+    case 1:
+        /*
+         * Tag check fail causes a synchronous exception.
+         *
+         * In restore_state_to_opc, we set the exception syndrome
+         * for the load or store operation.  Unwind first so we
+         * may overwrite that with the syndrome for the tag check.
+         */
+        cpu_restore_state(env_cpu(env), ra, true);
+        env->exception.vaddress = dirty_ptr;
+        raise_exception(env, EXCP_DATA_ABORT,
+                        syn_data_abort_no_iss(el != 0, 0, 0, 0, 0, 0, 0x11),
+                        exception_target_el(env));
+        /* noreturn, but fall through to the assert anyway */
+
+    case 0:
+        /*
+         * Tag check fail does not affect the PE.
+         * We eliminate this case by not setting MTE_ACTIVE
+         * in tb_flags, so that we never make this runtime call.
+         */
+        g_assert_not_reached();
+
+    case 2:
+        /* Tag check fail causes asynchronous flag set.  */
+        mmu_idx = arm_mmu_idx_el(env, el);
+        if (regime_has_2_ranges(mmu_idx)) {
+            select = extract64(dirty_ptr, 55, 1);
+        } else {
+            select = 0;
+        }
+        env->cp15.tfsr_el[el] |= 1 << select;
+        break;
+
+    default:
+        /* Case 3: Reserved. */
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "Tag check failure with SCTLR_EL%d.TCF%s "
+                      "set to reserved value %d\n",
+                      reg_el, el ? "" : "0", tcf);
+        break;
+    }
+}
+
 /*
  * Perform an MTE checked access for a single logical or atomic access.
  */
+static bool mte_probe1_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
+                           uintptr_t ra, int bit55)
+{
+    int mem_tag, mmu_idx, ptr_tag, size;
+    MMUAccessType type;
+    uint8_t *mem;
+
+    ptr_tag = allocation_tag_from_addr(ptr);
+
+    if (tcma_check(desc, bit55, ptr_tag)) {
+        return true;
+    }
+
+    mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
+    type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
+    size = FIELD_EX32(desc, MTEDESC, ESIZE);
+
+    mem = allocation_tag_mem(env, mmu_idx, ptr, type, size,
+                             MMU_DATA_LOAD, 1, ra);
+    if (!mem) {
+        return true;
+    }
+
+    mem_tag = load_tag1(ptr, mem);
+    return ptr_tag == mem_tag;
+}
+
+/*
+ * No-fault version of mte_check1, to be used by SVE for MemSingleNF.
+ * Returns false if the access is Checked and the check failed.  This
+ * is only intended to probe the tag -- the validity of the page must
+ * be checked beforehand.
+ */
+bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr)
+{
+    int bit55 = extract64(ptr, 55, 1);
+
+    /* If TBI is disabled, the access is unchecked. */
+    if (unlikely(!tbi_check(desc, bit55))) {
+        return true;
+    }
+
+    return mte_probe1_int(env, desc, ptr, 0, bit55);
+}
+
+uint64_t mte_check1(CPUARMState *env, uint32_t desc,
+                    uint64_t ptr, uintptr_t ra)
+{
+    int bit55 = extract64(ptr, 55, 1);
+
+    /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
+    if (unlikely(!tbi_check(desc, bit55))) {
+        return ptr;
+    }
+
+    if (unlikely(!mte_probe1_int(env, desc, ptr, ra, bit55))) {
+        int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
+        mte_check_fail(env, mmu_idx, ptr, ra);
+    }
+
+    return useronly_clean_ptr(ptr);
+}
+
 uint64_t HELPER(mte_check1)(CPUARMState *env, uint32_t desc, uint64_t ptr)
 {
-    return ptr;
+    return mte_check1(env, desc, ptr, GETPC());
 }
 
 /*
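
For readers following the series: below is a rough, illustrative sketch
(not part of this patch) of how a translate-time caller might pack the
desc value that HELPER(mte_check1) consumes, using QEMU's FIELD_DP32()
from hw/registerfields.h. The local names mmu_idx, tbi, tcma, is_write
and esize are placeholders; the authoritative field layout is the set
of FIELD(MTEDESC, ...) declarations in target/arm/internals.h.

    /* Illustrative only -- pack the MTEDESC descriptor for mte_check1(). */
    uint32_t desc = 0;
    /* Core mmu index the access will use (MTEDESC.MIDX). */
    desc = FIELD_DP32(desc, MTEDESC, MIDX, mmu_idx);
    /* TBI enable, one bit per address range; consumed by tbi_check(). */
    desc = FIELD_DP32(desc, MTEDESC, TBI, tbi);
    /* TCMA enable, one bit per address range; consumed by tcma_check(). */
    desc = FIELD_DP32(desc, MTEDESC, TCMA, tcma);
    /* 1 for a store, 0 for a load (MTEDESC.WRITE). */
    desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
    /* Access size, as passed on to allocation_tag_mem() (MTEDESC.ESIZE). */
    desc = FIELD_DP32(desc, MTEDESC, ESIZE, esize);

The runtime helper then receives this desc together with the dirty
pointer and performs the tag check via mte_check1() as shown above.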