From patchwork Fri May 10 15:19:44 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 163893
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Fri, 10 May 2019 08:19:44 -0700
Message-Id: <20190510151944.22981-28-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190510151944.22981-1-richard.henderson@linaro.org>
References: <20190510151944.22981-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PULL 27/27] tcg: Use tlb_fill probe from tlb_vaddr_to_host
Cc: peter.maydell@linaro.org

Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.  But for
AArch64 SVE we have an existing emulation bug wherein we would
mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB.  Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception.  (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)

Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Peter Maydell
Signed-off-by: Richard Henderson
---
 include/exec/cpu_ldst.h | 50 ++++++-----------------------
 accel/tcg/cputlb.c      | 69 ++++++++++++++++++++++++++++++++++++-----
 target/arm/sve_helper.c |  6 +---
 3 files changed, 72 insertions(+), 53 deletions(-)

-- 
2.17.1

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index d78041d7a0..7b28a839d2 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -433,50 +433,20 @@ static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
  * @mmu_idx: MMU index to use for lookup
  *
  * Look up the specified guest virtual index in the TCG softmmu TLB.
- * If the TLB contains a host virtual address suitable for direct RAM
- * access, then return it.  Otherwise (TLB miss, TLB entry is for an
- * I/O access, etc) return NULL.
- *
- * This is the equivalent of the initial fast-path code used by
- * TCG backends for guest load and store accesses.
+ * If we can translate a host virtual address suitable for direct RAM
+ * access, without causing a guest exception, then return it.
+ * Otherwise (TLB entry is for an I/O access, guest software
+ * TLB fill required, etc) return NULL.
  */
+#ifdef CONFIG_USER_ONLY
 static inline void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
-                                      int access_type, int mmu_idx)
+                                      MMUAccessType access_type, int mmu_idx)
 {
-#if defined(CONFIG_USER_ONLY)
     return g2h(addr);
-#else
-    CPUTLBEntry *tlbentry = tlb_entry(env, mmu_idx, addr);
-    abi_ptr tlb_addr;
-    uintptr_t haddr;
-
-    switch (access_type) {
-    case 0:
-        tlb_addr = tlbentry->addr_read;
-        break;
-    case 1:
-        tlb_addr = tlb_addr_write(tlbentry);
-        break;
-    case 2:
-        tlb_addr = tlbentry->addr_code;
-        break;
-    default:
-        g_assert_not_reached();
-    }
-
-    if (!tlb_hit(tlb_addr, addr)) {
-        /* TLB entry is for a different page */
-        return NULL;
-    }
-
-    if (tlb_addr & ~TARGET_PAGE_MASK) {
-        /* IO access */
-        return NULL;
-    }
-
-    haddr = addr + tlbentry->addend;
-    return (void *)haddr;
-#endif /* defined(CONFIG_USER_ONLY) */
 }
+#else
+void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
+                        MMUAccessType access_type, int mmu_idx);
+#endif
 
 #endif /* CPU_LDST_H */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index dfcd9ae168..45a5c4e123 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1007,6 +1007,16 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     }
 }
 
+static inline target_ulong tlb_read_ofs(CPUTLBEntry *entry, size_t ofs)
+{
+#if TCG_OVERSIZED_GUEST
+    return *(target_ulong *)((uintptr_t)entry + ofs);
+#else
+    /* ofs might correspond to .addr_write, so use atomic_read */
+    return atomic_read((target_ulong *)((uintptr_t)entry + ofs));
+#endif
+}
+
 /* Return true if ADDR is present in the victim tlb, and has been copied
    back to the main tlb.  */
 static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
@@ -1017,14 +1027,7 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
     assert_cpu_is_self(ENV_GET_CPU(env));
     for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) {
         CPUTLBEntry *vtlb = &env->tlb_v_table[mmu_idx][vidx];
-        target_ulong cmp;
-
-        /* elt_ofs might correspond to .addr_write, so use atomic_read */
-#if TCG_OVERSIZED_GUEST
-        cmp = *(target_ulong *)((uintptr_t)vtlb + elt_ofs);
-#else
-        cmp = atomic_read((target_ulong *)((uintptr_t)vtlb + elt_ofs));
-#endif
+        target_ulong cmp = tlb_read_ofs(vtlb, elt_ofs);
 
         if (cmp == page) {
             /* Found entry in victim tlb, swap tlb and iotlb.  */
@@ -1108,6 +1111,56 @@ void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
     }
 }
 
+void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
+                        MMUAccessType access_type, int mmu_idx)
+{
+    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    uintptr_t tlb_addr, page;
+    size_t elt_ofs;
+
+    switch (access_type) {
+    case MMU_DATA_LOAD:
+        elt_ofs = offsetof(CPUTLBEntry, addr_read);
+        break;
+    case MMU_DATA_STORE:
+        elt_ofs = offsetof(CPUTLBEntry, addr_write);
+        break;
+    case MMU_INST_FETCH:
+        elt_ofs = offsetof(CPUTLBEntry, addr_code);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    page = addr & TARGET_PAGE_MASK;
+    tlb_addr = tlb_read_ofs(entry, elt_ofs);
+
+    if (!tlb_hit_page(tlb_addr, page)) {
+        uintptr_t index = tlb_index(env, mmu_idx, addr);
+
+        if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page)) {
+            CPUState *cs = ENV_GET_CPU(env);
+            CPUClass *cc = CPU_GET_CLASS(cs);
+
+            if (!cc->tlb_fill(cs, addr, 0, access_type, mmu_idx, true, 0)) {
+                /* Non-faulting page table read failed.  */
+                return NULL;
+            }
+
+            /* TLB resize via tlb_fill may have moved the entry.  */
+            entry = tlb_entry(env, mmu_idx, addr);
+        }
+        tlb_addr = tlb_read_ofs(entry, elt_ofs);
+    }
+
+    if (tlb_addr & ~TARGET_PAGE_MASK) {
+        /* IO access */
+        return NULL;
+    }
+
+    return (void *)(addr + entry->addend);
+}
+
 /* Probe for a read-modify-write atomic operation.  Do not allow unaligned
  * operations, or io operations to proceed.  Return the host address.  */
 static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index bc847250dd..fd434c66ea 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -4598,11 +4598,7 @@ static void sve_ldnf1_r(CPUARMState *env, void *vg, const target_ulong addr,
      * in the real world, obviously.)
      *
      * Then there are the annoying special cases with watchpoints...
-     *
-     * TODO: Add a form of tlb_fill that does not raise an exception,
-     * with a form of tlb_vaddr_to_host and a set of loads to match.
-     * The non_fault_vaddr_to_host would handle everything, usually,
-     * and the loads would handle the iomem path for watchpoints.
+     * TODO: Add a form of non-faulting loads using cc->tlb_fill(probe=true).
      */
     host = tlb_vaddr_to_host(env, addr + mem_off, MMU_DATA_LOAD, mmu_idx);
     split = max_for_page(addr, mem_off, mem_max);