From patchwork Fri Nov 23 14:45:40 2018
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 151905
Delivered-To: patch@linaro.org
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Fri, 23 Nov 2018 15:45:40 +0100
Message-Id: <20181123144558.5048-20-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20181123144558.5048-1-richard.henderson@linaro.org>
References: <20181123144558.5048-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PATCH for-4.0 v2 19/37] tcg/arm: Use TCG_TARGET_NEED_LDST_OOL_LABELS
Cc: Alistair.Francis@wdc.com

Signed-off-by: Richard Henderson
---
 tcg/arm/tcg-target.h     |   2 +-
 tcg/arm/tcg-target.inc.c | 314 ++++++++++++++++-----------------------
 2 files changed, 125 insertions(+), 191 deletions(-)

-- 
2.17.2

diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h
index 94b3578c55..02981abdcc 100644
--- a/tcg/arm/tcg-target.h
+++ b/tcg/arm/tcg-target.h
@@ -141,7 +141,7 @@ static inline void flush_icache_range(uintptr_t start, uintptr_t stop)
 
 void tb_target_set_jmp_target(uintptr_t, uintptr_t, uintptr_t);
 
 #ifdef CONFIG_SOFTMMU
-#define TCG_TARGET_NEED_LDST_LABELS
+#define TCG_TARGET_NEED_LDST_OOL_LABELS
 #endif
 #define TCG_TARGET_NEED_POOL_LABELS
 
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 6b89ac7983..5a15f6a546 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1133,7 +1133,7 @@ static TCGCond tcg_out_cmp2(TCGContext *s, const TCGArg *args,
 }
 
 #ifdef CONFIG_SOFTMMU
-#include "tcg-ldst.inc.c"
+#include "tcg-ldst-ool.inc.c"
 
 /* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr,
  *                                     int mmu_idx, uintptr_t ra)
@@ -1356,128 +1356,6 @@ static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
     return t2;
 }
-
-/* Record the context of a call to the out of line helper code for the slow
-   path for a load or store, so that we can later generate the correct
-   helper code.
- */
-static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
-                                TCGReg datalo, TCGReg datahi, TCGReg addrlo,
-                                TCGReg addrhi, tcg_insn_unit *raddr,
-                                tcg_insn_unit *label_ptr)
-{
-    TCGLabelQemuLdst *label = new_ldst_label(s);
-
-    label->is_ld = is_ld;
-    label->oi = oi;
-    label->datalo_reg = datalo;
-    label->datahi_reg = datahi;
-    label->addrlo_reg = addrlo;
-    label->addrhi_reg = addrhi;
-    label->raddr = raddr;
-    label->label_ptr[0] = label_ptr;
-}
-
-static void tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
-{
-    TCGReg argreg, datalo, datahi;
-    TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    void *func;
-
-    reloc_pc24(lb->label_ptr[0], s->code_ptr);
-
-    argreg = tcg_out_arg_reg32(s, TCG_REG_R0, TCG_AREG0);
-    if (TARGET_LONG_BITS == 64) {
-        argreg = tcg_out_arg_reg64(s, argreg, lb->addrlo_reg, lb->addrhi_reg);
-    } else {
-        argreg = tcg_out_arg_reg32(s, argreg, lb->addrlo_reg);
-    }
-    argreg = tcg_out_arg_imm32(s, argreg, oi);
-    argreg = tcg_out_arg_reg32(s, argreg, TCG_REG_R14);
-
-    /* For armv6 we can use the canonical unsigned helpers and minimize
-       icache usage.  For pre-armv6, use the signed helpers since we do
-       not have a single insn sign-extend.
-     */
-    if (use_armv6_instructions) {
-        func = qemu_ld_helpers[opc & (MO_BSWAP | MO_SIZE)];
-    } else {
-        func = qemu_ld_helpers[opc & (MO_BSWAP | MO_SSIZE)];
-        if (opc & MO_SIGN) {
-            opc = MO_UL;
-        }
-    }
-    tcg_out_call(s, func);
-
-    datalo = lb->datalo_reg;
-    datahi = lb->datahi_reg;
-    switch (opc & MO_SSIZE) {
-    case MO_SB:
-        tcg_out_ext8s(s, COND_AL, datalo, TCG_REG_R0);
-        break;
-    case MO_SW:
-        tcg_out_ext16s(s, COND_AL, datalo, TCG_REG_R0);
-        break;
-    default:
-        tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
-        break;
-    case MO_Q:
-        if (datalo != TCG_REG_R1) {
-            tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
-            tcg_out_mov_reg(s, COND_AL, datahi, TCG_REG_R1);
-        } else if (datahi != TCG_REG_R0) {
-            tcg_out_mov_reg(s, COND_AL, datahi, TCG_REG_R1);
-            tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
-        } else {
-            tcg_out_mov_reg(s, COND_AL, TCG_REG_TMP, TCG_REG_R0);
-            tcg_out_mov_reg(s, COND_AL, datahi, TCG_REG_R1);
-            tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_TMP);
-        }
-        break;
-    }
-
-    tcg_out_goto(s, COND_AL, lb->raddr);
-}
-
-static void tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
-{
-    TCGReg argreg, datalo, datahi;
-    TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-
-    reloc_pc24(lb->label_ptr[0], s->code_ptr);
-
-    argreg = TCG_REG_R0;
-    argreg = tcg_out_arg_reg32(s, argreg, TCG_AREG0);
-    if (TARGET_LONG_BITS == 64) {
-        argreg = tcg_out_arg_reg64(s, argreg, lb->addrlo_reg, lb->addrhi_reg);
-    } else {
-        argreg = tcg_out_arg_reg32(s, argreg, lb->addrlo_reg);
-    }
-
-    datalo = lb->datalo_reg;
-    datahi = lb->datahi_reg;
-    switch (opc & MO_SIZE) {
-    case MO_8:
-        argreg = tcg_out_arg_reg8(s, argreg, datalo);
-        break;
-    case MO_16:
-        argreg = tcg_out_arg_reg16(s, argreg, datalo);
-        break;
-    case MO_32:
-    default:
-        argreg = tcg_out_arg_reg32(s, argreg, datalo);
-        break;
-    case MO_64:
-        argreg = tcg_out_arg_reg64(s, argreg, datalo, datahi);
-        break;
-    }
-
-    argreg = tcg_out_arg_imm32(s, argreg, oi);
-    argreg = tcg_out_arg_reg32(s, argreg, TCG_REG_R14);
-
-    /* Tail-call to the helper, which will return to the fast path.  */
-    tcg_out_goto(s, COND_AL, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]);
-}
 #endif /* SOFTMMU */
 
 static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
@@ -1602,14 +1480,12 @@ static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
 
 static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 {
-    TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
+    TCGReg addrlo __attribute__((unused));
+    TCGReg addrhi __attribute__((unused));
+    TCGReg datalo __attribute__((unused));
+    TCGReg datahi __attribute__((unused));
     TCGMemOpIdx oi;
     TCGMemOp opc;
-#ifdef CONFIG_SOFTMMU
-    int mem_index, avail;
-    TCGReg addend, t0, t1;
-    tcg_insn_unit *label_ptr;
-#endif
 
     datalo = *args++;
     datahi = (is64 ? *args++ : 0);
@@ -1619,32 +1495,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     opc = get_memop(oi);
 
 #ifdef CONFIG_SOFTMMU
-    mem_index = get_mmuidx(oi);
-
-    avail = 0xf;
-    avail &= ~(1 << addrlo);
-    if (TARGET_LONG_BITS == 64) {
-        avail &= ~(1 << addrhi);
-    }
-    tcg_debug_assert(avail & 1);
-    t0 = TCG_REG_R0;
-    avail &= ~1;
-    tcg_debug_assert(avail != 0);
-    t1 = ctz32(avail);
-
-    addend = tcg_out_tlb_read(s, addrlo, addrhi, opc, mem_index, 1,
-                              t0, t1, TCG_REG_TMP);
-
-    /* This a conditional BL only to load a pointer within this opcode into LR
-       for the slow path.  We will not be using the value for a tail call.
-     */
-    label_ptr = s->code_ptr;
-    tcg_out_bl_noaddr(s, COND_NE);
-
-    tcg_out_qemu_ld_index(s, opc, datalo, datahi, addrlo, addend);
-
-    add_qemu_ldst_label(s, true, oi, datalo, datahi, addrlo, addrhi,
-                        s->code_ptr, label_ptr);
-#else /* !CONFIG_SOFTMMU */
+    add_ldst_ool_label(s, true, is64, oi, R_ARM_PC24, 0);
+    tcg_out_bl_noaddr(s, COND_AL);
+#else
     if (guest_base) {
         tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_TMP, guest_base);
         tcg_out_qemu_ld_index(s, opc, datalo, datahi, addrlo, TCG_REG_TMP);
@@ -1746,14 +1599,12 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
 
 static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 {
-    TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
+    TCGReg addrlo __attribute__((unused));
+    TCGReg addrhi __attribute__((unused));
+    TCGReg datalo __attribute__((unused));
+    TCGReg datahi __attribute__((unused));
     TCGMemOpIdx oi;
     TCGMemOp opc;
-#ifdef CONFIG_SOFTMMU
-    int mem_index, avail;
-    TCGReg addend, t0, t1;
-    tcg_insn_unit *label_ptr;
-#endif
 
     datalo = *args++;
     datahi = (is64 ? *args++ : 0);
@@ -1763,35 +1614,9 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     opc = get_memop(oi);
 
 #ifdef CONFIG_SOFTMMU
-    mem_index = get_mmuidx(oi);
-
-    avail = 0xf;
-    avail &= ~(1 << addrlo);
-    avail &= ~(1 << datalo);
-    if (TARGET_LONG_BITS == 64) {
-        avail &= ~(1 << addrhi);
-    }
-    if (is64) {
-        avail &= ~(1 << datahi);
-    }
-    tcg_debug_assert(avail & 1);
-    t0 = TCG_REG_R0;
-    avail &= ~1;
-    tcg_debug_assert(avail != 0);
-    t1 = ctz32(avail);
-
-    addend = tcg_out_tlb_read(s, addrlo, addrhi, opc, mem_index, 0,
-                              t0, t1, TCG_REG_TMP);
-
-    tcg_out_qemu_st_index(s, COND_EQ, opc, datalo, datahi, addrlo, addend);
-
-    /* The conditional call must come last, as we're going to return here.
-     */
-    label_ptr = s->code_ptr;
-    tcg_out_bl_noaddr(s, COND_NE);
-
-    add_qemu_ldst_label(s, false, oi, datalo, datahi, addrlo, addrhi,
-                        s->code_ptr, label_ptr);
-#else /* !CONFIG_SOFTMMU */
+    add_ldst_ool_label(s, false, is64, oi, R_ARM_PC24, 0);
+    tcg_out_bl_noaddr(s, COND_AL);
+#else
     if (guest_base) {
         tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_TMP, guest_base);
         tcg_out_qemu_st_index(s, COND_AL, opc, datalo,
@@ -1802,6 +1627,115 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 #endif
 }
 
+#ifdef CONFIG_SOFTMMU
+static tcg_insn_unit *tcg_out_qemu_ldst_ool(TCGContext *s, bool is_ld,
+                                            bool is_64, TCGMemOpIdx oi)
+{
+    TCGReg addrlo, addrhi, datalo, datahi, addend, argreg, t0, t1;
+    TCGMemOp opc = get_memop(oi);
+    int mem_index = get_mmuidx(oi);
+    tcg_insn_unit *thunk = s->code_ptr;
+    tcg_insn_unit *label;
+    uintptr_t func;
+    int avail;
+
+    /* Pick out where the arguments are located.  A 64-bit address is
+     * aligned in the register pair R2:R3.  Loads return into R0:R1.
+     * A 32-bit store with a 32-bit address has room at R2, but
+     * otherwise uses R4:R5.
+     */
+    if (TARGET_LONG_BITS == 64) {
+        addrlo = TCG_REG_R2, addrhi = TCG_REG_R3;
+    } else {
+        addrlo = TCG_REG_R1, addrhi = -1;
+    }
+    if (is_ld) {
+        datalo = TCG_REG_R0;
+    } else if (TARGET_LONG_BITS == 64 || is_64) {
+        datalo = TCG_REG_R4;
+    } else {
+        datalo = TCG_REG_R2;
+    }
+    datahi = (is_64 ? datalo + 1 : -1);
+
+    /* We need 3 call-clobbered temps.  One of them is always R12,
+     * one of them is always R0.  The third is somewhere in R[1-3].
+     */
+    avail = 0xf;
+    avail &= ~(1 << addrlo);
+    if (TARGET_LONG_BITS == 64) {
+        avail &= ~(1 << addrhi);
+    }
+    if (!is_ld) {
+        avail &= ~(1 << datalo);
+        if (is_64) {
+            avail &= ~(1 << datahi);
+        }
+    }
+    tcg_debug_assert(avail & 1);
+    t0 = TCG_REG_R0;
+    avail &= ~1;
+    tcg_debug_assert(avail != 0);
+    t1 = ctz32(avail);
+
+    addend = tcg_out_tlb_read(s, addrlo, addrhi, opc, mem_index, is_ld,
+                              t0, t1, TCG_REG_TMP);
+
+    label = s->code_ptr;
+    tcg_out_b_noaddr(s, COND_NE);
+
+    /* TCG Hit. */
+    if (is_ld) {
+        tcg_out_qemu_ld_index(s, opc, datalo, datahi, addrlo, addend);
+    } else {
+        tcg_out_qemu_st_index(s, COND_AL, opc, datalo, datahi, addrlo, addend);
+    }
+    tcg_out_bx(s, COND_AL, TCG_REG_R14);
+
+    /* TLB Miss. */
+    reloc_pc24(label, s->code_ptr);
+
+    tcg_out_arg_reg32(s, TCG_REG_R0, TCG_AREG0);
+    /* addrlo and addrhi are in place -- see above */
+    argreg = addrlo + (TARGET_LONG_BITS / 32);
+    if (!is_ld) {
+        switch (opc & MO_SIZE) {
+        case MO_8:
+            argreg = tcg_out_arg_reg8(s, argreg, datalo);
+            break;
+        case MO_16:
+            argreg = tcg_out_arg_reg16(s, argreg, datalo);
+            break;
+        case MO_32:
+            argreg = tcg_out_arg_reg32(s, argreg, datalo);
+            break;
+        case MO_64:
+            argreg = tcg_out_arg_reg64(s, argreg, datalo, datahi);
+            break;
+        default:
+            g_assert_not_reached();
+        }
+    }
+    argreg = tcg_out_arg_imm32(s, argreg, oi);
+    argreg = tcg_out_arg_reg32(s, argreg, TCG_REG_R14);
+
+    /* Tail call to the helper.  */
+    if (is_ld) {
+        func = (uintptr_t)qemu_ld_helpers[opc & (MO_BSWAP | MO_SSIZE)];
+    } else {
+        func = (uintptr_t)qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)];
+    }
+    if (use_armv7_instructions) {
+        tcg_out_movi32(s, COND_AL, TCG_REG_TMP, func);
+        tcg_out_bx(s, COND_AL, TCG_REG_TMP);
+    } else {
+        tcg_out_movi_pool(s, COND_AL, TCG_REG_PC, func);
+    }
+
+    return thunk;
+}
+#endif
+
 static tcg_insn_unit *tb_ret_addr;
 
 static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,