From patchwork Fri Oct 20 17:56:10 2017
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 116534
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Cc: patches@linaro.org, Richard Henderson, christophe.lyon@linaro.org
Subject: [PATCH] translate.c: Fix usermode big-endian AArch32 LDREXD and STREXD
Date: Fri, 20 Oct 2017 18:56:10 +0100
Message-Id: <1508522170-22539-1-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 2.7.4

For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the
lowest address is always Rt and the one at addr+4 is Rt2, even if the
CPU is big-endian. Our implementation does these with a single 64-bit
store, so if we're big-endian then we need to put the two 32-bit halves
together in the opposite order to little-endian, so that they end up in
the right places. We were trying to do this with the gen_aa32_frob64()
function, but that is not correct for the usermode emulator, because
there a distinction exists between "load a 64 bit value" (which does a
BE 64-bit access and doesn't need swapping) and "load two 32 bit values
as one 64 bit access" (where we still need to do the swapping, like
system mode BE32).
Fixes: https://bugs.launchpad.net/qemu/+bug/1725267
Signed-off-by: Peter Maydell
---
This is very much "last thing on Friday", but I'm going to be
travelling next week so thought I'd shove it out for review and
testing. It does fix the test case in the bug. I have not tested:
 * BE32 system mode
 * BE8
 * little endian :-)

This should probably be cc: stable when it gets through review and
testing.
---
 target/arm/translate.c | 40 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 5 deletions(-)

-- 
2.7.4

diff --git a/target/arm/translate.c b/target/arm/translate.c
index 4da1a4c..88892e0 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -7902,9 +7902,25 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         TCGv_i32 tmp2 = tcg_temp_new_i32();
         TCGv_i64 t64 = tcg_temp_new_i64();
 
-        gen_aa32_ld_i64(s, t64, addr, get_mem_index(s), opc);
+        /* For AArch32, architecturally the 32-bit word at the lowest
+         * address is always Rt and the one at addr+4 is Rt2, even if
+         * the CPU is big-endian. That means we don't want to do a
+         * gen_aa32_ld_i64(), which invokes gen_aa32_frob64() as if
+         * for an architecturally 64-bit access, but instead do a
+         * 64-bit access using MO_BE if appropriate and then split
+         * the two halves.
+         * This only makes a difference for BE32 user-mode, where
+         * frob64() must not flip the two halves of the 64-bit data
+         * but this code must treat BE32 user-mode like BE32 system.
+         */
+        TCGv a = gen_aa32_addr(s, addr, opc);
+        tcg_gen_qemu_ld_i64(t64, a, get_mem_index(s), opc);
         tcg_gen_mov_i64(cpu_exclusive_val, t64);
-        tcg_gen_extr_i64_i32(tmp, tmp2, t64);
+        if (s->be_data) {
+            tcg_gen_extr_i64_i32(tmp2, tmp, t64);
+        } else {
+            tcg_gen_extr_i64_i32(tmp, tmp2, t64);
+        }
         tcg_temp_free_i64(t64);
 
         store_reg(s, rt2, tmp2);
@@ -7953,15 +7969,29 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
         TCGv_i64 n64 = tcg_temp_new_i64();
 
         t2 = load_reg(s, rt2);
-        tcg_gen_concat_i32_i64(n64, t1, t2);
+        /* For AArch32, architecturally the 32-bit word at the lowest
+         * address is always Rt and the one at addr+4 is Rt2, even if
+         * the CPU is big-endian. Since we're going to treat this as a
+         * single 64-bit BE store, we need to put the two halves in the
+         * opposite order for BE to LE, so that they end up in the right
+         * places.
+         * We don't want gen_aa32_frob64() because that does the wrong
+         * thing for BE32 usermode.
+         */
+        if (s->be_data) {
+            tcg_gen_concat_i32_i64(n64, t2, t1);
+        } else {
+            tcg_gen_concat_i32_i64(n64, t1, t2);
+        }
         tcg_temp_free_i32(t2);
-        gen_aa32_frob64(s, n64);
         tcg_gen_atomic_cmpxchg_i64(o64, taddr, cpu_exclusive_val, n64,
                                    get_mem_index(s), opc);
         tcg_temp_free_i64(n64);
-        gen_aa32_frob64(s, o64);
+        if (s->be_data) {
+            tcg_gen_rotri_i64(o64, o64, 32);
+        }
 
         tcg_gen_setcond_i64(TCG_COND_NE, o64, o64, cpu_exclusive_val);
         tcg_gen_extrl_i64_i32(t0, o64);