From patchwork Fri May 10 20:00:57 2019
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 163952
From: Alex Bennée <alex.bennee@linaro.org>
To: peter.maydell@linaro.org
Cc: Mark Cave-Ayland, Paolo Bonzini, Alex Bennée, qemu-devel@nongnu.org, Richard Henderson
Date: Fri, 10 May 2019 21:00:57 +0100
Message-Id: <20190510200101.31096-2-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 1/5] accel/tcg: demacro cputlb

Instead of expanding a series of macros to generate the load/store
helpers, we move the logic into common functions and rely on the
compiler to eliminate the dead code for each variant.

Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland
--
2.20.1

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index f2f618217d6..12f21865ee7 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1168,26 +1168,421 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 }
 
 #ifdef TARGET_WORDS_BIGENDIAN
-# define TGT_BE(X)  (X)
-# define TGT_LE(X)  BSWAP(X)
+#define NEED_BE_BSWAP 0
+#define NEED_LE_BSWAP 1
 #else
-# define TGT_BE(X)  BSWAP(X)
-# define TGT_LE(X)  (X)
+#define NEED_BE_BSWAP 1
+#define NEED_LE_BSWAP 0
 #endif
 
-#define MMUSUFFIX _mmu
+/*
+ * Byte Swap Helper
+ *
+ * This should all compile down to dead code, depending on the build
+ * host and access type.
+ */
 
-#define DATA_SIZE 1
-#include "softmmu_template.h"
+static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+{
+    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
+        switch (size) {
+        case 1: return val;
+        case 2: return bswap16(val);
+        case 4: return bswap32(val);
+        case 8: return bswap64(val);
+        default:
+            g_assert_not_reached();
+        }
+    } else {
+        return val;
+    }
+}
 
-#define DATA_SIZE 2
-#include "softmmu_template.h"
+/*
+ * Load Helpers
+ *
+ * We support two different access types. SOFTMMU_CODE_ACCESS is
+ * specifically for reading instructions from system memory. It is
+ * called by the translation loop and in some helpers where the code
+ * is disassembled. It shouldn't be called directly by guest code.
+ */
 
-#define DATA_SIZE 4
-#include "softmmu_template.h"
+static uint64_t load_helper(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr,
+                            size_t size, bool big_endian,
+                            bool code_read)
+{
+    uintptr_t mmu_idx = get_mmuidx(oi);
+    uintptr_t index = tlb_index(env, mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    target_ulong tlb_addr = code_read ? entry->addr_code : entry->addr_read;
+    const size_t tlb_off = code_read ?
+        offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read);
+    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    void *haddr;
+    uint64_t res;
+
+    /* Handle CPU specific unaligned behaviour */
+    if (addr & ((1 << a_bits) - 1)) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr,
+                             code_read ? MMU_INST_FETCH : MMU_DATA_LOAD,
+                             mmu_idx, retaddr);
+    }
 
-#define DATA_SIZE 8
-#include "softmmu_template.h"
+    /* If the TLB entry is for a different page, reload and try again. */
+    if (!tlb_hit(tlb_addr, addr)) {
+        if (!victim_tlb_hit(env, mmu_idx, index, tlb_off,
+                            addr & TARGET_PAGE_MASK)) {
+            tlb_fill(ENV_GET_CPU(env), addr, size,
+                     code_read ? MMU_INST_FETCH : MMU_DATA_LOAD,
+                     mmu_idx, retaddr);
+            index = tlb_index(env, mmu_idx, addr);
+            entry = tlb_entry(env, mmu_idx, addr);
+        }
+        tlb_addr = code_read ? entry->addr_code : entry->addr_read;
+    }
+
+    /* Handle an IO access. */
+    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
+        CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
+        uint64_t tmp;
+
+        if ((addr & (size - 1)) != 0) {
+            goto do_unaligned_access;
+        }
+
+        tmp = io_readx(env, iotlbentry, mmu_idx, addr, retaddr,
+                       tlb_addr & TLB_RECHECK,
+                       code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, size);
+        return handle_bswap(tmp, size, big_endian);
+    }
+
+    /* Handle slow unaligned access (it spans two pages or IO). */
+    if (size > 1
+        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
+                    >= TARGET_PAGE_SIZE)) {
+        target_ulong addr1, addr2;
+        tcg_target_ulong r1, r2;
+        unsigned shift;
+    do_unaligned_access:
+        addr1 = addr & ~(size - 1);
+        addr2 = addr1 + size;
+        r1 = load_helper(env, addr1, oi, retaddr, size, big_endian, code_read);
+        r2 = load_helper(env, addr2, oi, retaddr, size, big_endian, code_read);
+        shift = (addr & (size - 1)) * 8;
+
+        if (big_endian) {
+            /* Big-endian combine. */
+            res = (r1 << shift) | (r2 >> ((size * 8) - shift));
+        } else {
+            /* Little-endian combine. */
+            res = (r1 >> shift) | (r2 << ((size * 8) - shift));
+        }
+        return res & MAKE_64BIT_MASK(0, size * 8);
+    }
+
+    haddr = (void *)((uintptr_t)addr + entry->addend);
+
+    switch (size) {
+    case 1:
+        res = ldub_p(haddr);
+        break;
+    case 2:
+        if (big_endian) {
+            res = lduw_be_p(haddr);
+        } else {
+            res = lduw_le_p(haddr);
+        }
+        break;
+    case 4:
+        if (big_endian) {
+            res = (uint32_t)ldl_be_p(haddr);
+        } else {
+            res = (uint32_t)ldl_le_p(haddr);
+        }
+        break;
+    case 8:
+        if (big_endian) {
+            res = ldq_be_p(haddr);
+        } else {
+            res = ldq_le_p(haddr);
+        }
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    return res;
+}
+
+/*
+ * For the benefit of TCG generated code, we want to avoid the
+ * complication of ABI-specific return type promotion and always
+ * return a value extended to the register size of the host. This is
+ * tcg_target_long, except in the case of a 32-bit host and 64-bit
+ * data, and for that we always have uint64_t.
+ *
+ * We don't bother with this widened value for SOFTMMU_CODE_ACCESS.
+ */
+
+tcg_target_ulong __attribute__((flatten))
+helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                    uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, false);
+}
+
+tcg_target_ulong __attribute__((flatten))
+helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, false);
+}
+
+uint64_t __attribute__((flatten))
+helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                  uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, false, false);
+}
+
+uint64_t __attribute__((flatten))
+helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                  uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, true, false);
+}
+
+/*
+ * Provide signed versions of the load routines as well. We can of course
+ * avoid this for 64-bit data, or for 32-bit data on 32-bit host.
+ */
+
+
+tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr,
+                                     TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return (int8_t)helper_ret_ldub_mmu(env, addr, oi, retaddr);
+}
+
+tcg_target_ulong helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return (int16_t)helper_le_lduw_mmu(env, addr, oi, retaddr);
+}
+
+tcg_target_ulong helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return (int16_t)helper_be_lduw_mmu(env, addr, oi, retaddr);
+}
+
+tcg_target_ulong helper_le_ldsl_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return (int32_t)helper_le_ldul_mmu(env, addr, oi, retaddr);
+}
+
+tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return (int32_t)helper_be_ldul_mmu(env, addr, oi, retaddr);
+}
+
+/*
+ * Store Helpers
+ */
+
+static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
+                         TCGMemOpIdx oi, uintptr_t retaddr, size_t size,
+                         bool big_endian)
+{
+    uintptr_t mmu_idx = get_mmuidx(oi);
+    uintptr_t index = tlb_index(env, mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    target_ulong tlb_addr = tlb_addr_write(entry);
+    const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
+    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    void *haddr;
+
+    /* Handle CPU specific unaligned behaviour */
+    if (addr & ((1 << a_bits) - 1)) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+
+    /* If the TLB entry is for a different page, reload and try again. */
+    if (!tlb_hit(tlb_addr, addr)) {
+        if (!victim_tlb_hit(env, mmu_idx, index, tlb_off,
+                            addr & TARGET_PAGE_MASK)) {
+            tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_STORE,
+                     mmu_idx, retaddr);
+            index = tlb_index(env, mmu_idx, addr);
+            entry = tlb_entry(env, mmu_idx, addr);
+        }
+        tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK;
+    }
+
+    /* Handle an IO access. */
+    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
+        CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
+
+        if ((addr & (size - 1)) != 0) {
+            goto do_unaligned_access;
+        }
+
+        io_writex(env, iotlbentry, mmu_idx,
+                  handle_bswap(val, size, big_endian),
+                  addr, retaddr, tlb_addr & TLB_RECHECK, size);
+        return;
+    }
+
+    /* Handle slow unaligned access (it spans two pages or IO). */
+    if (size > 1
+        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
+                    >= TARGET_PAGE_SIZE)) {
+        int i;
+        uintptr_t index2;
+        CPUTLBEntry *entry2;
+        target_ulong page2, tlb_addr2;
+    do_unaligned_access:
+        /*
+         * Ensure the second page is in the TLB. Note that the first page
+         * is already guaranteed to be filled, and that the second page
+         * cannot evict the first.
+         */
+        page2 = (addr + size) & TARGET_PAGE_MASK;
+        index2 = tlb_index(env, mmu_idx, page2);
+        entry2 = tlb_entry(env, mmu_idx, page2);
+        tlb_addr2 = tlb_addr_write(entry2);
+        if (!tlb_hit_page(tlb_addr2, page2)
+            && !victim_tlb_hit(env, mmu_idx, index2, tlb_off,
+                               page2 & TARGET_PAGE_MASK)) {
+            tlb_fill(ENV_GET_CPU(env), page2, size, MMU_DATA_STORE,
+                     mmu_idx, retaddr);
+        }
+
+        /*
+         * XXX: not efficient, but simple.
+         * This loop must go in the forward direction to avoid issues
+         * with self-modifying code in Windows 64-bit.
+         */
+        for (i = 0; i < size; ++i) {
+            uint8_t val8;
+            if (big_endian) {
+                /* Big-endian extract. */
+                val8 = val >> (((size - 1) * 8) - (i * 8));
+            } else {
+                /* Little-endian extract. */
+                val8 = val >> (i * 8);
+            }
+            store_helper(env, addr + i, val8, oi, retaddr, 1, big_endian);
+        }
+        return;
+    }
+
+    haddr = (void *)((uintptr_t)addr + entry->addend);
+
+    switch (size) {
+    case 1:
+        stb_p(haddr, val);
+        break;
+    case 2:
+        if (big_endian) {
+            stw_be_p(haddr, val);
+        } else {
+            stw_le_p(haddr, val);
+        }
+        break;
+    case 4:
+        if (big_endian) {
+            stl_be_p(haddr, val);
+        } else {
+            stl_le_p(haddr, val);
+        }
+        break;
+    case 8:
+        if (big_endian) {
+            stq_be_p(haddr, val);
+        } else {
+            stq_le_p(haddr, val);
+        }
+        break;
+    default:
+        g_assert_not_reached();
+        break;
+    }
+}
+
+void __attribute__((flatten))
+helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
+                   TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 1, false);
+}
+
+void __attribute__((flatten))
+helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 2, false);
+}
+
+void __attribute__((flatten))
+helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 2, true);
+}
+
+void __attribute__((flatten))
+helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 4, false);
+}
+
+void __attribute__((flatten))
+helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 4, true);
+}
+
+void __attribute__((flatten))
+helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 8, false);
+}
+
+void __attribute__((flatten))
+helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    store_helper(env, addr, val, oi, retaddr, 8, true);
+}
 
 /* First set of helpers allows passing in of OI and RETADDR.  This makes
    them callable from other helpers.  */
@@ -1248,20 +1643,51 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 
 /* Code access functions.  */
 
-#undef MMUSUFFIX
-#define MMUSUFFIX _cmmu
-#undef GETPC
-#define GETPC() ((uintptr_t)0)
-#define SOFTMMU_CODE_ACCESS
+uint8_t __attribute__((flatten))
+helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                    uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, true);
+}
 
-#define DATA_SIZE 1
-#include "softmmu_template.h"
+uint16_t __attribute__((flatten))
+helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, true);
+}
 
-#define DATA_SIZE 2
-#include "softmmu_template.h"
+uint16_t __attribute__((flatten))
+helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, true);
+}
 
-#define DATA_SIZE 4
-#include "softmmu_template.h"
+uint32_t __attribute__((flatten))
+helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, true);
+}
+
+uint32_t __attribute__((flatten))
+helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, true);
+}
+
+uint64_t __attribute__((flatten))
+helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, false, true);
+}
 
-#define DATA_SIZE 8
-#include "softmmu_template.h"
+uint64_t __attribute__((flatten))
+helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+                   uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 8, true, true);
+}
diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
deleted file mode 100644
index e970a8b378e..00000000000
--- a/accel/tcg/softmmu_template.h
+++ /dev/null
@@ -1,454 +0,0 @@
-/*
- * Software MMU support
- *
- * Generate helpers used by TCG for qemu_ld/st ops and code load
- * functions.
- *
- * Included from target op helpers and exec.c.
- *
- * Copyright (c) 2003 Fabrice Bellard
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with this library; if not, see <http://www.gnu.org/licenses/>.
- */
-#if DATA_SIZE == 8
-#define SUFFIX q
-#define LSUFFIX q
-#define SDATA_TYPE  int64_t
-#define DATA_TYPE  uint64_t
-#elif DATA_SIZE == 4
-#define SUFFIX l
-#define LSUFFIX l
-#define SDATA_TYPE  int32_t
-#define DATA_TYPE  uint32_t
-#elif DATA_SIZE == 2
-#define SUFFIX w
-#define LSUFFIX uw
-#define SDATA_TYPE  int16_t
-#define DATA_TYPE  uint16_t
-#elif DATA_SIZE == 1
-#define SUFFIX b
-#define LSUFFIX ub
-#define SDATA_TYPE  int8_t
-#define DATA_TYPE  uint8_t
-#else
-#error unsupported data size
-#endif
-
-
-/* For the benefit of TCG generated code, we want to avoid the complication
-   of ABI-specific return type promotion and always return a value extended
-   to the register size of the host.  This is tcg_target_long, except in the
-   case of a 32-bit host and 64-bit data, and for that we always have
-   uint64_t.  Don't bother with this widened value for SOFTMMU_CODE_ACCESS.  */
-#if defined(SOFTMMU_CODE_ACCESS) || DATA_SIZE == 8
-# define WORD_TYPE  DATA_TYPE
-# define USUFFIX    SUFFIX
-#else
-# define WORD_TYPE  tcg_target_ulong
-# define USUFFIX    glue(u, SUFFIX)
-# define SSUFFIX    glue(s, SUFFIX)
-#endif
-
-#ifdef SOFTMMU_CODE_ACCESS
-#define READ_ACCESS_TYPE MMU_INST_FETCH
-#define ADDR_READ addr_code
-#else
-#define READ_ACCESS_TYPE MMU_DATA_LOAD
-#define ADDR_READ addr_read
-#endif
-
-#if DATA_SIZE == 8
-# define BSWAP(X)  bswap64(X)
-#elif DATA_SIZE == 4
-# define BSWAP(X)  bswap32(X)
-#elif DATA_SIZE == 2
-# define BSWAP(X)  bswap16(X)
-#else
-# define BSWAP(X)  (X)
-#endif
-
-#if DATA_SIZE == 1
-# define helper_le_ld_name  glue(glue(helper_ret_ld, USUFFIX), MMUSUFFIX)
-# define helper_be_ld_name  helper_le_ld_name
-# define helper_le_lds_name glue(glue(helper_ret_ld, SSUFFIX), MMUSUFFIX)
-# define helper_be_lds_name helper_le_lds_name
-# define helper_le_st_name  glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)
-# define helper_be_st_name  helper_le_st_name
-#else
-# define helper_le_ld_name  glue(glue(helper_le_ld, USUFFIX), MMUSUFFIX)
-# define helper_be_ld_name  glue(glue(helper_be_ld, USUFFIX), MMUSUFFIX)
-# define helper_le_lds_name glue(glue(helper_le_ld, SSUFFIX), MMUSUFFIX)
-# define helper_be_lds_name glue(glue(helper_be_ld, SSUFFIX), MMUSUFFIX)
-# define helper_le_st_name  glue(glue(helper_le_st, SUFFIX), MMUSUFFIX)
-# define helper_be_st_name  glue(glue(helper_be_st, SUFFIX), MMUSUFFIX)
-#endif
-
-#ifndef SOFTMMU_CODE_ACCESS
-static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
-                                              size_t mmu_idx, size_t index,
-                                              target_ulong addr,
-                                              uintptr_t retaddr,
-                                              bool recheck,
-                                              MMUAccessType access_type)
-{
-    CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
-    return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
-                    access_type, DATA_SIZE);
-}
-#endif
-
-WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    uintptr_t mmu_idx = get_mmuidx(oi);
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-    target_ulong tlb_addr = entry->ADDR_READ;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-    DATA_TYPE res;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if (!tlb_hit(tlb_addr, addr)) {
-        if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
-                     mmu_idx, retaddr);
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
-        }
-        tlb_addr = entry->ADDR_READ;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering.  We should push the LE/BE request down into io. */
-        res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
-                                    tlb_addr & TLB_RECHECK,
-                                    READ_ACCESS_TYPE);
-        res = TGT_LE(res);
-        return res;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (DATA_SIZE > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
-                    >= TARGET_PAGE_SIZE)) {
-        target_ulong addr1, addr2;
-        DATA_TYPE res1, res2;
-        unsigned shift;
-    do_unaligned_access:
-        addr1 = addr & ~(DATA_SIZE - 1);
-        addr2 = addr1 + DATA_SIZE;
-        res1 = helper_le_ld_name(env, addr1, oi, retaddr);
-        res2 = helper_le_ld_name(env, addr2, oi, retaddr);
-        shift = (addr & (DATA_SIZE - 1)) * 8;
-
-        /* Little-endian combine. */
-        res = (res1 >> shift) | (res2 << ((DATA_SIZE * 8) - shift));
-        return res;
-    }
-
-    haddr = addr + entry->addend;
-#if DATA_SIZE == 1
-    res = glue(glue(ld, LSUFFIX), _p)((uint8_t *)haddr);
-#else
-    res = glue(glue(ld, LSUFFIX), _le_p)((uint8_t *)haddr);
-#endif
-    return res;
-}
-
-#if DATA_SIZE > 1
-WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    uintptr_t mmu_idx = get_mmuidx(oi);
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-    target_ulong tlb_addr = entry->ADDR_READ;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-    DATA_TYPE res;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if (!tlb_hit(tlb_addr, addr)) {
-        if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
-                     mmu_idx, retaddr);
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
-        }
-        tlb_addr = entry->ADDR_READ;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering.  We should push the LE/BE request down into io. */
-        res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
-                                    tlb_addr & TLB_RECHECK,
-                                    READ_ACCESS_TYPE);
-        res = TGT_BE(res);
-        return res;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (DATA_SIZE > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
-                    >= TARGET_PAGE_SIZE)) {
-        target_ulong addr1, addr2;
-        DATA_TYPE res1, res2;
-        unsigned shift;
-    do_unaligned_access:
-        addr1 = addr & ~(DATA_SIZE - 1);
-        addr2 = addr1 + DATA_SIZE;
-        res1 = helper_be_ld_name(env, addr1, oi, retaddr);
-        res2 = helper_be_ld_name(env, addr2, oi, retaddr);
-        shift = (addr & (DATA_SIZE - 1)) * 8;
-
-        /* Big-endian combine. */
-        res = (res1 << shift) | (res2 >> ((DATA_SIZE * 8) - shift));
-        return res;
-    }
-
-    haddr = addr + entry->addend;
-    res = glue(glue(ld, LSUFFIX), _be_p)((uint8_t *)haddr);
-    return res;
-}
-#endif /* DATA_SIZE > 1 */
-
-#ifndef SOFTMMU_CODE_ACCESS
-
-/* Provide signed versions of the load routines as well.  We can of course
-   avoid this for 64-bit data, or for 32-bit data on 32-bit host. */
-#if DATA_SIZE * 8 < TCG_TARGET_REG_BITS
-WORD_TYPE helper_le_lds_name(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return (SDATA_TYPE)helper_le_ld_name(env, addr, oi, retaddr);
-}
-
-# if DATA_SIZE > 1
-WORD_TYPE helper_be_lds_name(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return (SDATA_TYPE)helper_be_ld_name(env, addr, oi, retaddr);
-}
-# endif
-#endif
-
-static inline void glue(io_write, SUFFIX)(CPUArchState *env,
-                                          size_t mmu_idx, size_t index,
-                                          DATA_TYPE val,
-                                          target_ulong addr,
-                                          uintptr_t retaddr,
-                                          bool recheck)
-{
-    CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
-    return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
-                     recheck, DATA_SIZE);
-}
-
-void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
-                       TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    uintptr_t mmu_idx = get_mmuidx(oi);
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-    target_ulong tlb_addr = tlb_addr_write(entry);
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if (!tlb_hit(tlb_addr, addr)) {
-        if (!VICTIM_TLB_HIT(addr_write, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
-                     mmu_idx, retaddr);
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
-        }
-        tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering.  We should push the LE/BE request down into io. */
-        val = TGT_LE(val);
-        glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr,
-                               retaddr, tlb_addr & TLB_RECHECK);
-        return;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (DATA_SIZE > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
-                    >= TARGET_PAGE_SIZE)) {
-        int i;
-        target_ulong page2;
-        CPUTLBEntry *entry2;
-    do_unaligned_access:
-        /* Ensure the second page is in the TLB.  Note that the first page
-           is already guaranteed to be filled, and that the second page
-           cannot evict the first. */
-        page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
-        entry2 = tlb_entry(env, mmu_idx, page2);
-        if (!tlb_hit_page(tlb_addr_write(entry2), page2)
-            && !VICTIM_TLB_HIT(addr_write, page2)) {
-            tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
-                     mmu_idx, retaddr);
-        }
-
-        /* XXX: not efficient, but simple. */
-        /* This loop must go in the forward direction to avoid issues
-           with self-modifying code in Windows 64-bit. */
-        for (i = 0; i < DATA_SIZE; ++i) {
-            /* Little-endian extract. */
-            uint8_t val8 = val >> (i * 8);
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr);
-        }
-        return;
-    }
-
-    haddr = addr + entry->addend;
-#if DATA_SIZE == 1
-    glue(glue(st, SUFFIX), _p)((uint8_t *)haddr, val);
-#else
-    glue(glue(st, SUFFIX), _le_p)((uint8_t *)haddr, val);
-#endif
-}
-
-#if DATA_SIZE > 1
-void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
-                       TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    uintptr_t mmu_idx = get_mmuidx(oi);
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-    target_ulong tlb_addr = tlb_addr_write(entry);
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
-    uintptr_t haddr;
-
-    if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                             mmu_idx, retaddr);
-    }
-
-    /* If the TLB entry is for a different page, reload and try again. */
-    if (!tlb_hit(tlb_addr, addr)) {
-        if (!VICTIM_TLB_HIT(addr_write, addr)) {
-            tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
-                     mmu_idx, retaddr);
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
-        }
-        tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK;
-    }
-
-    /* Handle an IO access. */
-    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
-        }
-
-        /* ??? Note that the io helpers always read data in the target
-           byte ordering.  We should push the LE/BE request down into io. */
-        val = TGT_BE(val);
-        glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr,
-                               tlb_addr & TLB_RECHECK);
-        return;
-    }
-
-    /* Handle slow unaligned access (it spans two pages or IO). */
-    if (DATA_SIZE > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
-                    >= TARGET_PAGE_SIZE)) {
-        int i;
-        target_ulong page2;
-        CPUTLBEntry *entry2;
-    do_unaligned_access:
-        /* Ensure the second page is in the TLB.  Note that the first page
-           is already guaranteed to be filled, and that the second page
-           cannot evict the first. */
-        page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
-        entry2 = tlb_entry(env, mmu_idx, page2);
-        if (!tlb_hit_page(tlb_addr_write(entry2), page2)
-            && !VICTIM_TLB_HIT(addr_write, page2)) {
-            tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
-                     mmu_idx, retaddr);
-        }
-
-        /* XXX: not efficient, but simple */
-        /* This loop must go in the forward direction to avoid issues
-           with self-modifying code. */
-        for (i = 0; i < DATA_SIZE; ++i) {
-            /* Big-endian extract. */
-            uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr);
-        }
-        return;
-    }
-
-    haddr = addr + entry->addend;
-    glue(glue(st, SUFFIX), _be_p)((uint8_t *)haddr, val);
-}
-#endif /* DATA_SIZE > 1 */
-#endif /* !defined(SOFTMMU_CODE_ACCESS) */
-
-#undef READ_ACCESS_TYPE
-#undef DATA_TYPE
-#undef SUFFIX
-#undef LSUFFIX
-#undef DATA_SIZE
-#undef ADDR_READ
-#undef WORD_TYPE
-#undef SDATA_TYPE
-#undef USUFFIX
-#undef SSUFFIX
-#undef BSWAP
-#undef helper_le_ld_name
-#undef helper_be_ld_name
-#undef helper_le_lds_name
-#undef helper_be_lds_name
-#undef helper_le_st_name
-#undef helper_be_st_name
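The payoff of passing size and endianness to load_helper()/store_helper()
as ordinary arguments is constant propagation: every helper above calls
them with compile-time constants, so once inlined the switch and the
untaken endian branch fold away. A minimal standalone sketch of the same
pattern (not QEMU code; the names and the little-endian-host assumption
are illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Generic load parameterized the way load_helper() is: size and
 * endianness arrive as ordinary arguments rather than macro expansions. */
static inline uint64_t generic_load(const void *p, int size, bool big_endian)
{
    uint64_t val = 0;

    memcpy(&val, p, (size_t)size);   /* host-order load of 'size' bytes */
    if (big_endian) {                /* sketch assumes a little-endian host */
        switch (size) {
        case 1: break;
        case 2: val = __builtin_bswap16((uint16_t)val); break;
        case 4: val = __builtin_bswap32((uint32_t)val); break;
        case 8: val = __builtin_bswap64(val); break;
        }
    }
    return val;
}

/* Thin wrappers pass constants, so after inlining each one compiles to
 * roughly the same few instructions the old macro expansion produced. */
uint32_t load_be_u32(const void *p)
{
    return (uint32_t)generic_load(p, 4, true);
}

uint16_t load_le_u16(const void *p)
{
    return (uint16_t)generic_load(p, 2, false);
}

Inspecting the generated code for wrappers like these at -O2 shows the
dead branches eliminated, which is the property the commit message above
relies on.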
From patchwork Fri May 10 20:00:58 2019
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 163951
From: Alex Bennée <alex.bennee@linaro.org>
To: peter.maydell@linaro.org
Cc: Richard Henderson, Mark Cave-Ayland, qemu-devel@nongnu.org, Paolo Bonzini, Alex Bennée
Date: Fri, 10 May 2019 21:00:58 +0100
Message-Id: <20190510200101.31096-3-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 2/5] cputlb: Move TLB_RECHECK handling into load/store_helper

From: Richard Henderson

Having this in io_readx/io_writex meant that we forgot to re-compute
the index after tlb_fill. Moving it into load_helper/store_helper
fixes that bug (a use of index was cached across a tlb_fill) and also
lets us use the normal aligned memory load path.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland
--
2.20.1

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 12f21865ee7..9c04eb1687c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -856,9 +856,8 @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
 }
 
 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
-                         int mmu_idx,
-                         target_ulong addr, uintptr_t retaddr,
-                         bool recheck, MMUAccessType access_type, int size)
+                         int mmu_idx, target_ulong addr, uintptr_t retaddr,
+                         MMUAccessType access_type, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
     hwaddr mr_offset;
@@ -868,30 +867,6 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    if (recheck) {
-        /*
-         * This is a TLB_RECHECK access, where the MMU protection
-         * covers a smaller range than a target page, and we must
-         * repeat the MMU check here. This tlb_fill() call might
-         * longjump out if this access should cause a guest exception.
-         */
-        CPUTLBEntry *entry;
-        target_ulong tlb_addr;
-
-        tlb_fill(cpu, addr, size, access_type, mmu_idx, retaddr);
-
-        entry = tlb_entry(env, mmu_idx, addr);
-        tlb_addr = (access_type == MMU_DATA_LOAD ?
-                    entry->addr_read : entry->addr_code);
-        if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
-            /* RAM access */
-            uintptr_t haddr = addr + entry->addend;
-
-            return ldn_p((void *)haddr, size);
-        }
-        /* Fall through for handling IO accesses */
-    }
-
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -925,9 +900,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
 }
 
 static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
-                      int mmu_idx,
-                      uint64_t val, target_ulong addr,
-                      uintptr_t retaddr, bool recheck, int size)
+                      int mmu_idx, uint64_t val, target_ulong addr,
+                      uintptr_t retaddr, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
     hwaddr mr_offset;
@@ -936,30 +910,6 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    if (recheck) {
-        /*
-         * This is a TLB_RECHECK access, where the MMU protection
-         * covers a smaller range than a target page, and we must
-         * repeat the MMU check here. This tlb_fill() call might
-         * longjump out if this access should cause a guest exception.
-         */
-        CPUTLBEntry *entry;
-        target_ulong tlb_addr;
-
-        tlb_fill(cpu, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
-
-        entry = tlb_entry(env, mmu_idx, addr);
-        tlb_addr = tlb_addr_write(entry);
-        if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
-            /* RAM access */
-            uintptr_t haddr = addr + entry->addend;
-
-            stn_p((void *)haddr, size, val);
-            return;
-        }
-        /* Fall through for handling IO accesses */
-    }
-
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -1218,14 +1168,15 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
     target_ulong tlb_addr = code_read ? entry->addr_code : entry->addr_read;
     const size_t tlb_off = code_read ?
         offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read);
+    const MMUAccessType access_type =
+        code_read ? MMU_INST_FETCH : MMU_DATA_LOAD;
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
     uint64_t res;
 
     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr,
-                             code_read ? MMU_INST_FETCH : MMU_DATA_LOAD,
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, access_type,
                              mmu_idx, retaddr);
     }
 
@@ -1234,8 +1185,7 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
         if (!victim_tlb_hit(env, mmu_idx, index, tlb_off,
                             addr & TARGET_PAGE_MASK)) {
             tlb_fill(ENV_GET_CPU(env), addr, size,
-                     code_read ? MMU_INST_FETCH : MMU_DATA_LOAD,
-                     mmu_idx, retaddr);
+                     access_type, mmu_idx, retaddr);
             index = tlb_index(env, mmu_idx, addr);
             entry = tlb_entry(env, mmu_idx, addr);
         }
@@ -1244,17 +1194,33 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
 
     /* Handle an IO access. */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
-        uint64_t tmp;
-
         if ((addr & (size - 1)) != 0) {
             goto do_unaligned_access;
         }
 
-        tmp = io_readx(env, iotlbentry, mmu_idx, addr, retaddr,
-                       tlb_addr & TLB_RECHECK,
-                       code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, size);
-        return handle_bswap(tmp, size, big_endian);
+        if (tlb_addr & TLB_RECHECK) {
+            /*
+             * This is a TLB_RECHECK access, where the MMU protection
+             * covers a smaller range than a target page, and we must
+             * repeat the MMU check here. This tlb_fill() call might
+             * longjump out if this access should cause a guest exception.
+             */
+            tlb_fill(ENV_GET_CPU(env), addr, size,
+                     access_type, mmu_idx, retaddr);
+            index = tlb_index(env, mmu_idx, addr);
+            entry = tlb_entry(env, mmu_idx, addr);
+
+            tlb_addr = code_read ? entry->addr_code : entry->addr_read;
+            tlb_addr &= ~TLB_RECHECK;
+            if (!(tlb_addr & ~TARGET_PAGE_MASK)) {
+                /* RAM access */
+                goto do_aligned_access;
+            }
+        }
+
+        res = io_readx(env, &env->iotlb[mmu_idx][index], mmu_idx, addr,
+                       retaddr, access_type, size);
+        return handle_bswap(res, size, big_endian);
     }
 
     /* Handle slow unaligned access (it spans two pages or IO). */
@@ -1281,8 +1247,8 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
         return res & MAKE_64BIT_MASK(0, size * 8);
     }
 
+ do_aligned_access:
     haddr = (void *)((uintptr_t)addr + entry->addend);
-
     switch (size) {
     case 1:
         res = ldub_p(haddr);
@@ -1446,15 +1412,33 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
 
     /* Handle an IO access. */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
-
         if ((addr & (size - 1)) != 0) {
             goto do_unaligned_access;
         }
 
-        io_writex(env, iotlbentry, mmu_idx,
+        if (tlb_addr & TLB_RECHECK) {
+            /*
+             * This is a TLB_RECHECK access, where the MMU protection
+             * covers a smaller range than a target page, and we must
+             * repeat the MMU check here. This tlb_fill() call might
+             * longjump out if this access should cause a guest exception.
+             */
+            tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_STORE,
+                     mmu_idx, retaddr);
+            index = tlb_index(env, mmu_idx, addr);
+            entry = tlb_entry(env, mmu_idx, addr);
+
+            tlb_addr = tlb_addr_write(entry);
+            tlb_addr &= ~TLB_RECHECK;
+            if (!(tlb_addr & ~TARGET_PAGE_MASK)) {
+                /* RAM access */
+                goto do_aligned_access;
+            }
+        }
+
+        io_writex(env, &env->iotlb[mmu_idx][index], mmu_idx,
                   handle_bswap(val, size, big_endian),
-                  addr, retaddr, tlb_addr & TLB_RECHECK, size);
+                  addr, retaddr, size);
         return;
     }
 
@@ -1502,8 +1486,8 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
         return;
     }
 
+ do_aligned_access:
     haddr = (void *)((uintptr_t)addr + entry->addend);
-
     switch (size) {
     case 1:
         stb_p(haddr, val);
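The "cached a use of index across a tlb_fill" bug fixed above is an
instance of a general hazard: any value derived from a lookup table must
be recomputed after calling anything that can refill or resize that
table. A self-contained sketch of the hazard and the fix, using
hypothetical types rather than QEMU's real TLB structures:

#include <stddef.h>
#include <stdint.h>

typedef struct { uintptr_t tag; uintptr_t addend; } Entry;
typedef struct { Entry *table; size_t mask; } Tlb;

/* The index depends on tlb->mask, so it is only valid as long as the
 * table has not been resized. */
static size_t tlb_index_of(const Tlb *tlb, uintptr_t addr)
{
    return (addr >> 12) & tlb->mask;
}

/* Stand-in for tlb_fill(); the real thing can also resize the table
 * and change the mask, but this stub just installs the mapping. */
static void tlb_refill(Tlb *tlb, uintptr_t addr)
{
    size_t index = tlb_index_of(tlb, addr);

    tlb->table[index].tag = addr & ~(uintptr_t)0xfff;
    tlb->table[index].addend = 0x1000;
}

uintptr_t translate(Tlb *tlb, uintptr_t addr)
{
    size_t index = tlb_index_of(tlb, addr);
    Entry *entry = &tlb->table[index];

    if (entry->tag != (addr & ~(uintptr_t)0xfff)) {
        tlb_refill(tlb, addr);
        /* The fix: recompute index and entry after the refill instead
         * of reusing the values cached above, because a real refill
         * may move or resize the table. */
        index = tlb_index_of(tlb, addr);
        entry = &tlb->table[index];
    }
    return addr + entry->addend;
}

This is exactly the shape of the recomputation the patch adds after each
tlb_fill() call in load_helper() and store_helper().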
From patchwork Fri May 10 20:00:59 2019
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 163950
From: Alex Bennée <alex.bennee@linaro.org>
To: peter.maydell@linaro.org
Cc: Richard Henderson, Mark Cave-Ayland, qemu-devel@nongnu.org, Paolo Bonzini, Alex Bennée
Date: Fri, 10 May 2019 21:00:59 +0100
Message-Id: <20190510200101.31096-4-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 3/5] cputlb: Drop attribute flatten

From: Richard Henderson

We are going to approach this problem via __attribute__((always_inline))
instead, but the full conversion will take several steps.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland
--
2.20.1

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 9c04eb1687c..ccbb47d8d1c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1291,51 +1291,44 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
  * We don't bother with this widened value for SOFTMMU_CODE_ACCESS.
  */
 
-tcg_target_ulong __attribute__((flatten))
-helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                    uintptr_t retaddr)
+tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
+                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 1, false, false);
 }
 
-tcg_target_ulong __attribute__((flatten))
-helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 2, false, false);
 }
 
-tcg_target_ulong __attribute__((flatten))
-helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 2, true, false);
 }
 
-tcg_target_ulong __attribute__((flatten))
-helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 4, false, false);
 }
 
-tcg_target_ulong __attribute__((flatten))
-helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
+                                    TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 4, true, false);
 }
 
-uint64_t __attribute__((flatten))
-helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                  uintptr_t retaddr)
+uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
+                           TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 8, false, false);
 }
 
-uint64_t __attribute__((flatten))
-helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                  uintptr_t retaddr)
+uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
+                           TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 8, true, false);
 }
@@ -1519,51 +1512,44 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     }
 }
 
-void __attribute__((flatten))
-helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
-                   TCGMemOpIdx oi, uintptr_t retaddr)
+void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
+                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
     store_helper(env, addr, val, oi, retaddr, 1, false);
 }
 
-void __attribute__((flatten))
-helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
-                  TCGMemOpIdx oi, uintptr_t retaddr)
+void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
+                       TCGMemOpIdx oi, uintptr_t retaddr)
 {
     store_helper(env, addr, val, oi, retaddr, 2, false);
 }
 
-void __attribute__((flatten))
-helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
-                  TCGMemOpIdx oi, uintptr_t retaddr)
+void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
+                       TCGMemOpIdx oi, uintptr_t retaddr)
 {
     store_helper(env, addr, val, oi, retaddr, 2, true);
 }
 
-void __attribute__((flatten))
-helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
-                  TCGMemOpIdx oi, uintptr_t retaddr)
+void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                       TCGMemOpIdx oi, uintptr_t retaddr)
 {
     store_helper(env, addr, val, oi, retaddr, 4, false);
 }
 
-void __attribute__((flatten))
-helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
-                  TCGMemOpIdx oi, uintptr_t retaddr)
+void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                       TCGMemOpIdx oi, uintptr_t retaddr)
 {
     store_helper(env, addr, val, oi, retaddr, 4, true);
 }
 
-void __attribute__((flatten))
-helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
-                  TCGMemOpIdx oi, uintptr_t retaddr)
+void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+                       TCGMemOpIdx oi, uintptr_t retaddr)
 {
     store_helper(env, addr, val, oi, retaddr, 8, false);
 }
 
-void __attribute__((flatten))
-helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
-                  TCGMemOpIdx oi, uintptr_t retaddr)
+void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+                       TCGMemOpIdx oi, uintptr_t retaddr)
 {
     store_helper(env, addr, val, oi, retaddr, 8, true);
 }
@@ -1627,51 +1613,44 @@ helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 
 /* Code access functions. */
 
-uint8_t __attribute__((flatten))
-helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                    uintptr_t retaddr)
+uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 1, false, true);
 }
 
-uint16_t __attribute__((flatten))
-helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 2, false, true);
 }
 
-uint16_t __attribute__((flatten))
-helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 2, true, true);
 }
 
-uint32_t __attribute__((flatten))
-helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 4, false, true);
 }
 
-uint32_t __attribute__((flatten))
-helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 4, true, true);
 }
 
-uint64_t __attribute__((flatten))
-helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 8, false, true);
 }
 
-uint64_t __attribute__((flatten))
-helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
-                   uintptr_t retaddr)
+uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
+                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, 8, true, true);
 }
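For readers unfamiliar with the two attributes, here is a minimal
standalone C sketch of the trade-off (hypothetical names, not QEMU
code): flatten on a wrapper inlines everything the wrapper calls,
while always_inline on the shared worker forces the worker into each
wrapper, so per-wrapper constants such as the access size still fold
away.

#include <stdint.h>
#include <string.h>

/* Stand-in for load_helper(): "size" is a compile-time constant in
 * every caller, so the memcpy folds to a single load once inlined. */
static inline uint64_t load_generic(const uint8_t *p, size_t size)
{
    uint64_t val = 0;
    memcpy(&val, p, size);
    return val;
}

/* Caller-side forcing, as removed by this patch: flatten makes the
 * compiler inline every call inside the wrapper's body. */
uint64_t __attribute__((flatten))
load_u32_flat(const uint8_t *p)
{
    return load_generic(p, 4);
}

/* Callee-side forcing, the direction the series moves in: marking
 * load_generic __attribute__((always_inline)) instead would force it
 * into each wrapper without annotating every wrapper. */
uint64_t load_u32(const uint8_t *p)
{
    return load_generic(p, 4);
}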
From patchwork Fri May 10 20:01:00 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 163947
From: Alex Bennée
To: peter.maydell@linaro.org
Date: Fri, 10 May 2019 21:01:00 +0100
Message-Id: <20190510200101.31096-5-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Subject: [Qemu-devel] [PULL 4/5] cputlb: Do unaligned load recursion to outermost function
Cc: Richard Henderson, Mark Cave-Ayland, qemu-devel@nongnu.org,
    Paolo Bonzini, Alex Bennée, Richard Henderson

From: Richard Henderson

If we attempt to recurse from load_helper back to load_helper, even
via an intermediary, we do not get all of the constants expanded away
as desired. But if we recurse back to the original helper (or a shim
that has a consistent function signature), the operands are folded
away as desired.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland
--
2.20.1

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ccbb47d8d1c..e4d0c943011 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1157,10 +1157,13 @@ static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
  * is disassembled. It shouldn't be called directly by guest code.
  */
 
-static uint64_t load_helper(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr,
-                            size_t size, bool big_endian,
-                            bool code_read)
+typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
+                                TCGMemOpIdx oi, uintptr_t retaddr);
+
+static inline uint64_t __attribute__((always_inline))
+load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+            uintptr_t retaddr, size_t size, bool big_endian, bool code_read,
+            FullLoadHelper *full_load)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1233,8 +1236,8 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
     do_unaligned_access:
         addr1 = addr & ~(size - 1);
         addr2 = addr1 + size;
-        r1 = load_helper(env, addr1, oi, retaddr, size, big_endian, code_read);
-        r2 = load_helper(env, addr2, oi, retaddr, size, big_endian, code_read);
+        r1 = full_load(env, addr1, oi, retaddr);
+        r2 = full_load(env, addr2, oi, retaddr);
         shift = (addr & (size - 1)) * 8;
 
         if (big_endian) {
@@ -1291,46 +1294,83 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
  * We don't bother with this widened value for SOFTMMU_CODE_ACCESS.
  */
 
+static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr,
+                              TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, false,
+                       full_ldub_mmu);
+}
+
 tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
                                      TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 1, false, false);
+    return full_ldub_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, false,
+                       full_le_lduw_mmu);
 }
 
 tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, false, false);
+    return full_le_lduw_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, false,
+                       full_be_lduw_mmu);
 }
 
 tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, true, false);
+    return full_be_lduw_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, false,
+                       full_le_ldul_mmu);
 }
 
 tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, false, false);
+    return full_le_ldul_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, false,
+                       full_be_ldul_mmu);
 }
 
 tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, true, false);
+    return full_be_ldul_mmu(env, addr, oi, retaddr);
 }
 
 uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, false, false);
+    return load_helper(env, addr, oi, retaddr, 8, false, false,
+                       helper_le_ldq_mmu);
 }
 
 uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, true, false);
+    return load_helper(env, addr, oi, retaddr, 8, true, false,
+                       helper_be_ldq_mmu);
 }
 
 /*
@@ -1613,44 +1653,81 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 
 /* Code access functions. */
 
+static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
+                               TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, true,
+                       full_ldub_cmmu);
+}
+
 uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 1, false, true);
+    return full_ldub_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, true,
+                       full_le_lduw_cmmu);
 }
 
 uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, false, true);
+    return full_le_lduw_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, true,
+                       full_be_lduw_cmmu);
 }
 
 uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, true, true);
+    return full_be_lduw_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, true,
+                       full_le_ldul_cmmu);
 }
 
 uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, false, true);
+    return full_le_ldul_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, true,
+                       full_be_ldul_cmmu);
 }
 
 uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, true, true);
+    return full_be_ldul_cmmu(env, addr, oi, retaddr);
 }
 
 uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, false, true);
+    return load_helper(env, addr, oi, retaddr, 8, false, true,
+                       helper_le_ldq_cmmu);
 }
 
 uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, true, true);
+    return load_helper(env, addr, oi, retaddr, 8, true, true,
+                       helper_be_ldq_cmmu);
 }
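The shape of the fix, reduced to a minimal standalone C sketch
(hypothetical names, not the QEMU code): the always_inline worker
takes a pointer to its own outermost specialization, so the misaligned
slow path re-enters a function whose size parameter is already folded,
instead of recursing through the generic body with variable arguments.

#include <stddef.h>
#include <stdint.h>

typedef uint64_t FullLoad(const uint8_t *p, int misaligned);

/* Generic worker: "size" is constant in each specialized caller and
 * "full" points at that caller's outermost entry point. */
static inline uint64_t __attribute__((always_inline))
load_generic(const uint8_t *p, int misaligned, size_t size, FullLoad *full)
{
    if (misaligned) {
        /* Slow path: two aligned loads, recombined.  Recursing via
         * "full" keeps "size" a constant inside the recursive calls. */
        uint64_t lo = full(p, 0);
        uint64_t hi = full(p + size, 0);
        return lo | (hi << (size * 8));
    }
    uint64_t val = 0;
    for (size_t i = 0; i < size; i++) {
        val |= (uint64_t)p[i] << (i * 8);   /* little-endian assemble */
    }
    return val;
}

/* Outermost specialization with a fixed signature, playing the role
 * of full_le_ldul_mmu() in the patch. */
uint64_t full_load_u32(const uint8_t *p, int misaligned)
{
    return load_generic(p, misaligned, 4, full_load_u32);
}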
From patchwork Fri May 10 20:01:01 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 163949
From: Alex Bennée
To: peter.maydell@linaro.org
Date: Fri, 10 May 2019 21:01:01 +0100
Message-Id: <20190510200101.31096-6-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Subject: [Qemu-devel] [PULL 5/5] cputlb: Do unaligned store recursion to outermost function
Cc: Richard Henderson, Mark Cave-Ayland, qemu-devel@nongnu.org,
    Paolo Bonzini, Alex Bennée, Richard Henderson

From: Richard Henderson

This is less tricky than for loads, because we always fall back to
single-byte stores to implement unaligned stores.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland
--
2.20.1

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e4d0c943011..a0833247684 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1413,9 +1413,9 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
  * Store Helpers
  */
 
-static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
-                         TCGMemOpIdx oi, uintptr_t retaddr, size_t size,
-                         bool big_endian)
+static inline void __attribute__((always_inline))
+store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
+             TCGMemOpIdx oi, uintptr_t retaddr, size_t size, bool big_endian)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1514,7 +1514,7 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
                 /* Little-endian extract.  */
                 val8 = val >> (i * 8);
             }
-            store_helper(env, addr + i, val8, oi, retaddr, 1, big_endian);
+            helper_ret_stb_mmu(env, addr + i, val8, oi, retaddr);
         }
         return;
     }
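The store-side equivalent, again as a minimal standalone C sketch with
hypothetical names: because a misaligned store can always be decomposed
into byte stores, the slow path only ever needs the fixed byte-store
entry point, and no per-size function pointer has to be threaded
through the worker.

#include <stddef.h>
#include <stdint.h>

/* Fixed byte-store entry point, playing the role of
 * helper_ret_stb_mmu() in the patch. */
void store_u8(uint8_t *p, uint8_t val)
{
    *p = val;
}

/* Generic worker: "size" is constant in each specialized caller. */
static inline void __attribute__((always_inline))
store_generic(uint8_t *p, uint64_t val, size_t size, int misaligned)
{
    if (misaligned) {
        /* Slow path: one byte at a time through the byte entry point;
         * no recursion back into store_generic is needed. */
        for (size_t i = 0; i < size; i++) {
            store_u8(p + i, (uint8_t)(val >> (i * 8)));  /* LE extract */
        }
        return;
    }
    for (size_t i = 0; i < size; i++) {      /* fast path, little-endian */
        p[i] = (uint8_t)(val >> (i * 8));
    }
}

/* Specialized entry point, playing the role of helper_le_stl_mmu(). */
void store_u32(uint8_t *p, uint32_t val)
{
    store_generic(p, val, 4, 0);
}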