From patchwork Thu Nov 2 11:33:20 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 117797
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: patches@linaro.org, Richard Henderson, christophe.lyon@linaro.org
Subject: [PATCH v2] translate.c: Fix usermode big-endian AArch32 LDREXD and STREXD
Date: Thu, 2 Nov 2017 11:33:20 +0000
Message-Id: <1509622400-13351-1-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.7.4

For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the
lowest address is always Rt and the one at addr+4 is Rt2, even if the
CPU is big-endian. Our implementation does these with a single 64-bit
access, so if we're big-endian then we need to put the two 32-bit
halves together in the opposite order to little-endian, so that they
end up in the right places.
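
To illustrate the required ordering, here is a minimal standalone C
sketch (a hypothetical pack_strexd() helper, for illustration only;
it is not code from QEMU or from this patch):

    #include <stdint.h>

    /* Illustration only (not QEMU code): build the 64-bit value that a
     * single store must write so that Rt ends up at the lower address
     * and Rt2 at addr+4, whichever endianness the access uses.
     */
    uint64_t pack_strexd(uint32_t rt, uint32_t rt2, int big_endian)
    {
        if (big_endian) {
            /* A BE 64-bit store puts the high half at the lower
             * address, so Rt must occupy the high 32 bits.
             */
            return ((uint64_t)rt << 32) | rt2;
        } else {
            /* An LE 64-bit store puts the low half at the lower
             * address, so Rt must occupy the low 32 bits.
             */
            return ((uint64_t)rt2 << 32) | rt;
        }
    }

The patch expresses the same ordering with tcg_gen_concat_i32_i64(),
choosing the operand order based on s->be_data.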
We were trying to do this with the gen_aa32_frob64() function, but
that is not correct for the usermode emulator, where there is a
distinction between "load a 64-bit value" (which does a BE 64-bit
access and doesn't need swapping) and "load two 32-bit values as one
64-bit access" (where we still need to do the swapping, like system
mode BE32).

Fixes: https://bugs.launchpad.net/qemu/+bug/1725267
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell
---
Changes v1->v2:
 * use correct "s->be_data == MO_BE" check for bigendian
 * don't mangle the data from the atomic-cmpxchg before comparing
   against expected value
 * tcg_temp_free() the TCGv from gen_aa32_addr()
 * name that TCGv "taddr" rather than "a"...

 target/arm/translate.c | 39 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 34 insertions(+), 5 deletions(-)

--
2.7.4

Reviewed-by: Richard Henderson

diff --git a/target/arm/translate.c b/target/arm/translate.c
index 6ba4ae9..0ed03d7 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -7903,9 +7903,27 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         TCGv_i32 tmp2 = tcg_temp_new_i32();
         TCGv_i64 t64 = tcg_temp_new_i64();
 
-        gen_aa32_ld_i64(s, t64, addr, get_mem_index(s), opc);
+        /* For AArch32, architecturally the 32-bit word at the lowest
+         * address is always Rt and the one at addr+4 is Rt2, even if
+         * the CPU is big-endian. That means we don't want to do a
+         * gen_aa32_ld_i64(), which invokes gen_aa32_frob64() as if
+         * for an architecturally 64-bit access, but instead do a
+         * 64-bit access using MO_BE if appropriate and then split
+         * the two halves.
+         * This only makes a difference for BE32 user-mode, where
+         * frob64() must not flip the two halves of the 64-bit data
+         * but this code must treat BE32 user-mode like BE32 system.
+         */
+        TCGv taddr = gen_aa32_addr(s, addr, opc);
+
+        tcg_gen_qemu_ld_i64(t64, taddr, get_mem_index(s), opc);
+        tcg_temp_free(taddr);
         tcg_gen_mov_i64(cpu_exclusive_val, t64);
-        tcg_gen_extr_i64_i32(tmp, tmp2, t64);
+        if (s->be_data == MO_BE) {
+            tcg_gen_extr_i64_i32(tmp2, tmp, t64);
+        } else {
+            tcg_gen_extr_i64_i32(tmp, tmp2, t64);
+        }
         tcg_temp_free_i64(t64);
 
         store_reg(s, rt2, tmp2);
@@ -7954,15 +7972,26 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
         TCGv_i64 n64 = tcg_temp_new_i64();
 
         t2 = load_reg(s, rt2);
-        tcg_gen_concat_i32_i64(n64, t1, t2);
+        /* For AArch32, architecturally the 32-bit word at the lowest
+         * address is always Rt and the one at addr+4 is Rt2, even if
+         * the CPU is big-endian. Since we're going to treat this as a
+         * single 64-bit BE store, we need to put the two halves in the
+         * opposite order for BE to LE, so that they end up in the right
+         * places.
+         * We don't want gen_aa32_frob64() because that does the wrong
+         * thing for BE32 usermode.
+         */
+        if (s->be_data == MO_BE) {
+            tcg_gen_concat_i32_i64(n64, t2, t1);
+        } else {
+            tcg_gen_concat_i32_i64(n64, t1, t2);
+        }
         tcg_temp_free_i32(t2);
-        gen_aa32_frob64(s, n64);
 
         tcg_gen_atomic_cmpxchg_i64(o64, taddr, cpu_exclusive_val, n64,
                                    get_mem_index(s), opc);
         tcg_temp_free_i64(n64);
 
-        gen_aa32_frob64(s, o64);
         tcg_gen_setcond_i64(TCG_COND_NE, o64, o64, cpu_exclusive_val);
         tcg_gen_extrl_i64_i32(t0, o64);
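
For the load side the same reasoning applies in reverse when splitting
the loaded 64-bit value back into Rt and Rt2. A minimal standalone C
sketch (a hypothetical unpack_ldrexd() helper, for illustration only;
not code from QEMU or from this patch):

    #include <stdint.h>

    /* Illustration only (not QEMU code): split a 64-bit value loaded
     * from addr into Rt (the word at the lower address) and Rt2 (the
     * word at addr+4), mirroring the store-side packing.
     */
    void unpack_ldrexd(uint64_t val, int big_endian,
                       uint32_t *rt, uint32_t *rt2)
    {
        if (big_endian) {
            /* A BE 64-bit load puts the word from the lower address in
             * the high half, so Rt comes from the high 32 bits.
             */
            *rt  = (uint32_t)(val >> 32);
            *rt2 = (uint32_t)val;
        } else {
            /* An LE 64-bit load puts the word from the lower address in
             * the low half, so Rt comes from the low 32 bits.
             */
            *rt  = (uint32_t)val;
            *rt2 = (uint32_t)(val >> 32);
        }
    }

This is the split that the "if (s->be_data == MO_BE)" branch in
gen_load_exclusive() performs with tcg_gen_extr_i64_i32().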