From patchwork Sat Sep 16 21:41:04 2023
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 723614
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: philmd@linaro.org, anjo@rev.ng
Subject: [PATCH v3 20/39] accel/tcg: Use CPUState in atomicity helpers
Date: Sat, 16 Sep 2023 14:41:04 -0700
Message-Id: <20230916214123.525796-21-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230916214123.525796-1-richard.henderson@linaro.org>
References: <20230916214123.525796-1-richard.henderson@linaro.org>

From: Anton Johansson <anjo@rev.ng>

Makes ldst_atomicity.c.inc almost target-independent, with the
exception of TARGET_PAGE_MASK, which will be addressed in a future
patch.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-8-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c             | 20 ++++----
 accel/tcg/user-exec.c          | 16 +++----
 accel/tcg/ldst_atomicity.c.inc | 88 +++++++++++++++++-----------
 3 files changed, 62 insertions(+), 62 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 3703443b5c..17fa7a514c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2172,7 +2172,7 @@ static uint64_t do_ld_whole_be8(CPUState *cpu, uintptr_t ra,
                                 MMULookupPageData *p, uint64_t ret_be)
 {
     int o = p->addr & 7;
-    uint64_t x = load_atomic8_or_exit(cpu_env(cpu), ra, p->haddr - o);
+    uint64_t x = load_atomic8_or_exit(cpu, ra, p->haddr - o);
 
     x = cpu_to_be64(x);
     x <<= o * 8;
@@ -2192,7 +2192,7 @@ static Int128 do_ld_whole_be16(CPUState *cpu, uintptr_t ra,
                                MMULookupPageData *p, uint64_t ret_be)
 {
     int o = p->addr & 15;
-    Int128 x, y = load_atomic16_or_exit(cpu_env(cpu), ra, p->haddr - o);
+    Int128 x, y = load_atomic16_or_exit(cpu, ra, p->haddr - o);
     int size = p->size;
 
     if (!HOST_BIG_ENDIAN) {
@@ -2329,7 +2329,7 @@ static uint16_t do_ld_2(CPUState *cpu, MMULookupPageData *p, int mmu_idx,
         }
     } else {
         /* Perform the load host endian, then swap if necessary. */
-        ret = load_atom_2(cpu_env(cpu), ra, p->haddr, memop);
+        ret = load_atom_2(cpu, ra, p->haddr, memop);
         if (memop & MO_BSWAP) {
             ret = bswap16(ret);
         }
@@ -2349,7 +2349,7 @@ static uint32_t do_ld_4(CPUState *cpu, MMULookupPageData *p, int mmu_idx,
         }
     } else {
         /* Perform the load host endian. */
-        ret = load_atom_4(cpu_env(cpu), ra, p->haddr, memop);
+        ret = load_atom_4(cpu, ra, p->haddr, memop);
         if (memop & MO_BSWAP) {
             ret = bswap32(ret);
         }
@@ -2369,7 +2369,7 @@ static uint64_t do_ld_8(CPUState *cpu, MMULookupPageData *p, int mmu_idx,
         }
     } else {
         /* Perform the load host endian. */
-        ret = load_atom_8(cpu_env(cpu), ra, p->haddr, memop);
+        ret = load_atom_8(cpu, ra, p->haddr, memop);
         if (memop & MO_BSWAP) {
             ret = bswap64(ret);
         }
@@ -2528,7 +2528,7 @@ static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr,
             }
         } else {
             /* Perform the load host endian. */
-            ret = load_atom_16(cpu_env(cpu), ra, l.page[0].haddr, l.memop);
+            ret = load_atom_16(cpu, ra, l.page[0].haddr, l.memop);
             if (l.memop & MO_BSWAP) {
                 ret = bswap128(ret);
             }
@@ -2880,7 +2880,7 @@ static void do_st_2(CPUState *cpu, MMULookupPageData *p, uint16_t val,
         if (memop & MO_BSWAP) {
             val = bswap16(val);
         }
-        store_atom_2(cpu_env(cpu), ra, p->haddr, memop, val);
+        store_atom_2(cpu, ra, p->haddr, memop, val);
     }
 }
 
@@ -2899,7 +2899,7 @@ static void do_st_4(CPUState *cpu, MMULookupPageData *p, uint32_t val,
         if (memop & MO_BSWAP) {
             val = bswap32(val);
         }
-        store_atom_4(cpu_env(cpu), ra, p->haddr, memop, val);
+        store_atom_4(cpu, ra, p->haddr, memop, val);
     }
 }
 
@@ -2918,7 +2918,7 @@ static void do_st_8(CPUState *cpu, MMULookupPageData *p, uint64_t val,
         if (memop & MO_BSWAP) {
             val = bswap64(val);
         }
-        store_atom_8(cpu_env(cpu), ra, p->haddr, memop, val);
+        store_atom_8(cpu, ra, p->haddr, memop, val);
     }
 }
 
@@ -3045,7 +3045,7 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
             if (l.memop & MO_BSWAP) {
                 val = bswap128(val);
             }
-            store_atom_16(cpu_env(cpu), ra, l.page[0].haddr, l.memop, val);
+            store_atom_16(cpu, ra, l.page[0].haddr, l.memop, val);
         }
         return;
     }
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index d2daeafbab..f9f5cd1770 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1002,7 +1002,7 @@ static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
     tcg_debug_assert((mop & MO_SIZE) == MO_16);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_2(env, ra, haddr, mop);
+    ret = load_atom_2(env_cpu(env), ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1040,7 +1040,7 @@ static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
     tcg_debug_assert((mop & MO_SIZE) == MO_32);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_4(env, ra, haddr, mop);
+    ret = load_atom_4(env_cpu(env), ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1078,7 +1078,7 @@ static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
     tcg_debug_assert((mop & MO_SIZE) == MO_64);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_8(env, ra, haddr, mop);
+    ret = load_atom_8(env_cpu(env), ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1110,7 +1110,7 @@ static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
     tcg_debug_assert((mop & MO_SIZE) == MO_128);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_16(env, ra, haddr, mop);
+    ret = load_atom_16(env_cpu(env), ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1175,7 +1175,7 @@ static void do_st2_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
     if (mop & MO_BSWAP) {
         val = bswap16(val);
     }
-    store_atom_2(env, ra, haddr, mop, val);
+    store_atom_2(env_cpu(env), ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
@@ -1204,7 +1204,7 @@ static void do_st4_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
     if (mop & MO_BSWAP) {
         val = bswap32(val);
     }
-    store_atom_4(env, ra, haddr, mop, val);
+    store_atom_4(env_cpu(env), ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
@@ -1233,7 +1233,7 @@ static void do_st8_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
     if (mop & MO_BSWAP) {
         val = bswap64(val);
     }
-    store_atom_8(env, ra, haddr, mop, val);
+    store_atom_8(env_cpu(env), ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
@@ -1262,7 +1262,7 @@ static void do_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
     if (mop & MO_BSWAP) {
         val = bswap128(val);
     }
-    store_atom_16(env, ra, haddr, mop, val);
+    store_atom_16(env_cpu(env), ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc
index 1b793e6935..1cf5b92166 100644
--- a/accel/tcg/ldst_atomicity.c.inc
+++ b/accel/tcg/ldst_atomicity.c.inc
@@ -26,7 +26,7 @@
  * If the operation must be split into two operations to be
  * examined separately for atomicity, return -lg2.
  */
-static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop)
+static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop)
 {
     MemOp atom = memop & MO_ATOM_MASK;
     MemOp size = memop & MO_SIZE;
@@ -93,7 +93,7 @@ static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop)
      * host atomicity in order to avoid racing. This reduction
      * avoids looping with cpu_loop_exit_atomic.
      */
-    if (cpu_in_serial_context(env_cpu(env))) {
+    if (cpu_in_serial_context(cpu)) {
         return MO_8;
     }
     return atmax;
@@ -139,14 +139,14 @@ static inline uint64_t load_atomic8(void *pv)
 
 /**
  * load_atomic8_or_exit:
- * @env: cpu context
+ * @cpu: generic cpu state
  * @ra: host unwind address
  * @pv: host address
  *
  * Atomically load 8 aligned bytes from @pv.
  * If this is not possible, longjmp out to restart serially.
  */
-static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
+static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
 {
     if (HAVE_al8) {
         return load_atomic8(pv);
@@ -168,19 +168,19 @@ static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
 #endif
 
     /* Ultimate fallback: re-execute in serial context. */
-    cpu_loop_exit_atomic(env_cpu(env), ra);
+    cpu_loop_exit_atomic(cpu, ra);
 }
 
 /**
  * load_atomic16_or_exit:
- * @env: cpu context
+ * @cpu: generic cpu state
  * @ra: host unwind address
  * @pv: host address
  *
  * Atomically load 16 aligned bytes from @pv.
  * If this is not possible, longjmp out to restart serially.
  */
-static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
+static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
 {
     Int128 *p = __builtin_assume_aligned(pv, 16);
 
@@ -212,7 +212,7 @@ static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
     }
 
     /* Ultimate fallback: re-execute in serial context. */
-    cpu_loop_exit_atomic(env_cpu(env), ra);
+    cpu_loop_exit_atomic(cpu, ra);
 }
 
 /**
@@ -263,7 +263,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
 
 /**
  * load_atom_extract_al8_or_exit:
- * @env: cpu context
+ * @cpu: generic cpu state
  * @ra: host unwind address
  * @pv: host address
  * @s: object size in bytes, @s <= 4.
@@ -273,7 +273,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
  * 8-byte load and extract.
  * The value is returned in the low bits of a uint32_t.
  */
-static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra,
+static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
                                               void *pv, int s)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -281,12 +281,12 @@ static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra,
     int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8;
 
     pv = (void *)(pi & ~7);
-    return load_atomic8_or_exit(env, ra, pv) >> shr;
+    return load_atomic8_or_exit(cpu, ra, pv) >> shr;
 }
 
 /**
  * load_atom_extract_al16_or_exit:
- * @env: cpu context
+ * @cpu: generic cpu state
  * @ra: host unwind address
  * @p: host address
  * @s: object size in bytes, @s <= 8.
@@ -299,7 +299,7 @@ static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t ra,
  *
  * If this is not possible, longjmp out to restart serially.
  */
-static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra,
+static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra,
                                                void *pv, int s)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -312,7 +312,7 @@ static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_t ra,
      * Provoke SIGBUS if possible otherwise.
      */
     pv = (void *)(pi & ~7);
-    r = load_atomic16_or_exit(env, ra, pv);
+    r = load_atomic16_or_exit(cpu, ra, pv);
 
     r = int128_urshift(r, shr);
     return int128_getlo(r);
@@ -394,7 +394,7 @@ static inline uint64_t load_atom_8_by_8_or_4(void *pv)
  *
  * Load 2 bytes from @p, honoring the atomicity of @memop.
  */
-static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
+static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
                             void *pv, MemOp memop)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -410,7 +410,7 @@ static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
         }
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
     switch (atmax) {
     case MO_8:
         return lduw_he_p(pv);
@@ -421,9 +421,9 @@ static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
             return load_atomic4(pv - 1) >> 8;
         }
         if ((pi & 15) != 7) {
-            return load_atom_extract_al8_or_exit(env, ra, pv, 2);
+            return load_atom_extract_al8_or_exit(cpu, ra, pv, 2);
         }
-        return load_atom_extract_al16_or_exit(env, ra, pv, 2);
+        return load_atom_extract_al16_or_exit(cpu, ra, pv, 2);
     default:
         g_assert_not_reached();
     }
@@ -436,7 +436,7 @@ static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
  *
  * Load 4 bytes from @p, honoring the atomicity of @memop.
  */
-static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
+static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
                             void *pv, MemOp memop)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -452,7 +452,7 @@ static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
         }
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
     switch (atmax) {
     case MO_8:
     case MO_16:
@@ -466,9 +466,9 @@ static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
         return load_atom_extract_al4x2(pv);
     case MO_32:
         if (!(pi & 4)) {
-            return load_atom_extract_al8_or_exit(env, ra, pv, 4);
+            return load_atom_extract_al8_or_exit(cpu, ra, pv, 4);
         }
-        return load_atom_extract_al16_or_exit(env, ra, pv, 4);
+        return load_atom_extract_al16_or_exit(cpu, ra, pv, 4);
     default:
         g_assert_not_reached();
     }
@@ -481,7 +481,7 @@ static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
  *
  * Load 8 bytes from @p, honoring the atomicity of @memop.
  */
-static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
+static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
                             void *pv, MemOp memop)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -498,12 +498,12 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
         return load_atom_extract_al16_or_al8(pv, 8);
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
     if (atmax == MO_64) {
         if (!HAVE_al8 && (pi & 7) == 0) {
-            load_atomic8_or_exit(env, ra, pv);
+            load_atomic8_or_exit(cpu, ra, pv);
         }
-        return load_atom_extract_al16_or_exit(env, ra, pv, 8);
+        return load_atom_extract_al16_or_exit(cpu, ra, pv, 8);
     }
     if (HAVE_al8_fast) {
         return load_atom_extract_al8x2(pv);
@@ -519,7 +519,7 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
         if (HAVE_al8) {
             return load_atom_extract_al8x2(pv);
         }
-        cpu_loop_exit_atomic(env_cpu(env), ra);
+        cpu_loop_exit_atomic(cpu, ra);
     default:
         g_assert_not_reached();
     }
@@ -532,7 +532,7 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra,
  *
  * Load 16 bytes from @p, honoring the atomicity of @memop.
  */
-static Int128 load_atom_16(CPUArchState *env, uintptr_t ra,
+static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
                            void *pv, MemOp memop)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -548,7 +548,7 @@ static Int128 load_atom_16(CPUArchState *env, uintptr_t ra,
         return atomic16_read_ro(pv);
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
     switch (atmax) {
     case MO_8:
         memcpy(&r, pv, 16);
@@ -563,20 +563,20 @@ static Int128 load_atom_16(CPUArchState *env, uintptr_t ra,
         break;
     case MO_64:
         if (!HAVE_al8) {
-            cpu_loop_exit_atomic(env_cpu(env), ra);
+            cpu_loop_exit_atomic(cpu, ra);
         }
         a = load_atomic8(pv);
         b = load_atomic8(pv + 8);
         break;
     case -MO_64:
         if (!HAVE_al8) {
-            cpu_loop_exit_atomic(env_cpu(env), ra);
+            cpu_loop_exit_atomic(cpu, ra);
         }
         a = load_atom_extract_al8x2(pv);
         b = load_atom_extract_al8x2(pv + 8);
         break;
     case MO_128:
-        return load_atomic16_or_exit(env, ra, pv);
+        return load_atomic16_or_exit(cpu, ra, pv);
     default:
         g_assert_not_reached();
     }
@@ -857,7 +857,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
  *
  * Store 2 bytes to @p, honoring the atomicity of @memop.
  */
-static void store_atom_2(CPUArchState *env, uintptr_t ra,
+static void store_atom_2(CPUState *cpu, uintptr_t ra,
                          void *pv, MemOp memop, uint16_t val)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -868,7 +868,7 @@ static void store_atom_2(CPUArchState *env, uintptr_t ra,
         return;
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
     if (atmax == MO_8) {
         stw_he_p(pv, val);
         return;
@@ -897,7 +897,7 @@ static void store_atom_2(CPUArchState *env, uintptr_t ra,
         g_assert_not_reached();
     }
 
-    cpu_loop_exit_atomic(env_cpu(env), ra);
+    cpu_loop_exit_atomic(cpu, ra);
 }
 
 /**
@@ -908,7 +908,7 @@ static void store_atom_2(CPUArchState *env, uintptr_t ra,
  *
  * Store 4 bytes to @p, honoring the atomicity of @memop.
  */
-static void store_atom_4(CPUArchState *env, uintptr_t ra,
+static void store_atom_4(CPUState *cpu, uintptr_t ra,
                          void *pv, MemOp memop, uint32_t val)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -919,7 +919,7 @@ static void store_atom_4(CPUArchState *env, uintptr_t ra,
         return;
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
     switch (atmax) {
     case MO_8:
         stl_he_p(pv, val);
@@ -961,7 +961,7 @@ static void store_atom_4(CPUArchState *env, uintptr_t ra,
                 return;
             }
         }
-        cpu_loop_exit_atomic(env_cpu(env), ra);
+        cpu_loop_exit_atomic(cpu, ra);
     default:
         g_assert_not_reached();
     }
@@ -975,7 +975,7 @@ static void store_atom_4(CPUArchState *env, uintptr_t ra,
  *
  * Store 8 bytes to @p, honoring the atomicity of @memop.
  */
-static void store_atom_8(CPUArchState *env, uintptr_t ra,
+static void store_atom_8(CPUState *cpu, uintptr_t ra,
                          void *pv, MemOp memop, uint64_t val)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -986,7 +986,7 @@ static void store_atom_8(CPUArchState *env, uintptr_t ra,
         return;
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
     switch (atmax) {
     case MO_8:
         stq_he_p(pv, val);
@@ -1029,7 +1029,7 @@ static void store_atom_8(CPUArchState *env, uintptr_t ra,
     default:
         g_assert_not_reached();
     }
-    cpu_loop_exit_atomic(env_cpu(env), ra);
+    cpu_loop_exit_atomic(cpu, ra);
 }
 
 /**
@@ -1040,7 +1040,7 @@ static void store_atom_8(CPUArchState *env, uintptr_t ra,
  *
  * Store 16 bytes to @p, honoring the atomicity of @memop.
  */
-static void store_atom_16(CPUArchState *env, uintptr_t ra,
+static void store_atom_16(CPUState *cpu, uintptr_t ra,
                           void *pv, MemOp memop, Int128 val)
 {
     uintptr_t pi = (uintptr_t)pv;
@@ -1052,7 +1052,7 @@ static void store_atom_16(CPUArchState *env, uintptr_t ra,
         return;
     }
 
-    atmax = required_atomicity(env, pi, memop);
+    atmax = required_atomicity(cpu, pi, memop);
 
     a = HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val);
     b = HOST_BIG_ENDIAN ? int128_getlo(val) : int128_gethi(val);
@@ -1111,5 +1111,5 @@ static void store_atom_16(CPUArchState *env, uintptr_t ra,
     default:
         g_assert_not_reached();
     }
-    cpu_loop_exit_atomic(env_cpu(env), ra);
+    cpu_loop_exit_atomic(cpu, ra);
 }
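
A note on why this conversion is essentially free: the generic CPUState and
the per-target CPUArchState live inside the same ArchCPU allocation, so
env_cpu() and cpu_env() are constant-time pointer arithmetic, not lookups.
The standalone sketch below models that round trip with toy types; the
struct layout and field names here are illustrative stand-ins rather than
QEMU's real definitions (only the accessor roles of env_cpu()/cpu_env()
correspond to the diff above).

/*
 * Toy model of the CPUState <-> CPUArchState round trip that lets this
 * patch pass CPUState everywhere yet still recover env at the boundary.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

typedef struct CPUState { int cpu_index; } CPUState;
typedef struct CPUArchState { unsigned long pc; } CPUArchState;

/* Toy arch CPU object: generic state first, per-target env after it. */
typedef struct ArchCPU {
    CPUState parent_obj;
    CPUArchState env;
} ArchCPU;

/* cpu -> env: env sits at a fixed offset inside the same object. */
static CPUArchState *toy_cpu_env(CPUState *cpu)
{
    return &((ArchCPU *)cpu)->env;
}

/* env -> cpu: subtract the same offset back out (container_of style). */
static CPUState *toy_env_cpu(CPUArchState *env)
{
    return (CPUState *)((char *)env - offsetof(ArchCPU, env));
}

int main(void)
{
    ArchCPU c = { .parent_obj = { .cpu_index = 0 }, .env = { .pc = 0x1000 } };
    CPUState *cpu = &c.parent_obj;

    /* The two accessors are exact inverses of each other. */
    assert(toy_env_cpu(toy_cpu_env(cpu)) == cpu);
    printf("pc = 0x%lx\n", toy_cpu_env(cpu)->pc);
    return 0;
}

Given that, taking CPUState in the atomicity helpers costs nothing at the
cputlb.c call sites, which already hold a CPUState, while user-exec.c does
the env_cpu(env) conversion once per helper call at the boundary.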