
[PULL,27/52] cputlb: use uint64_t for interim values for unaligned load

Message ID 20190607090552.12434-28-alex.bennee@linaro.org
State Accepted
Commit 8c79b288513587e960b6b7257a9d955d5592f209
Series testing, gdbstub and cputlb fixes

Commit Message

Alex Bennée June 7, 2019, 9:05 a.m. UTC
When running on 32-bit TCG backends a wide unaligned load ends up
truncating data before returning to the guest. We specifically have
the return type as uint64_t to avoid any premature truncation, so we
should use the same type for the interim values.

Fixes: https://bugs.launchpad.net/qemu/+bug/1830872
Fixes: eed5664238e

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Igor Mammedov <imammedo@redhat.com>


-- 
2.20.1

Patch

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index cdcc377102..b796ab1cbe 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1303,7 +1303,7 @@  load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
         && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
                     >= TARGET_PAGE_SIZE)) {
         target_ulong addr1, addr2;
-        tcg_target_ulong r1, r2;
+        uint64_t r1, r2;
         unsigned shift;
     do_unaligned_access:
         addr1 = addr & ~(size - 1);
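
The hunk above only shows the declaration change, so here is a minimal,
self-contained C sketch (not QEMU code) of why the interim type matters.
It is loosely modelled on the little-endian combine in load_helper for an
8-byte access that straddles a page boundary; narrow_ulong, combine_narrow
and combine_wide are hypothetical names standing in for tcg_target_ulong
and the before/after behaviour on a 32-bit backend.

/*
 * Sketch of the truncation the patch fixes.  On a 32-bit host QEMU's
 * tcg_target_ulong is only 32 bits wide; narrow_ulong below is a
 * hypothetical stand-in for it.  Holding the two halves of an unaligned
 * 64-bit load in that type discards their upper 32 bits before they are
 * combined, which is the data loss guests observed.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t narrow_ulong;   /* stand-in for tcg_target_ulong on 32-bit */

/* Buggy pattern: interim values truncated to 32 bits before the combine. */
static uint64_t combine_narrow(uint64_t r1, uint64_t r2, unsigned shift)
{
    narrow_ulong n1 = r1, n2 = r2;              /* upper halves lost here */
    return ((uint64_t)n1 >> shift) | ((uint64_t)n2 << (64 - shift));
}

/* Fixed pattern: interim values kept as uint64_t, as in the patch. */
static uint64_t combine_wide(uint64_t r1, uint64_t r2, unsigned shift)
{
    return (r1 >> shift) | (r2 << (64 - shift));
}

int main(void)
{
    /* Pretend these are the two aligned 64-bit loads either side of an
     * unaligned access that straddles a page boundary by three bytes. */
    uint64_t r1 = 0x1122334455667788ULL;
    uint64_t r2 = 0x99aabbccddeeff00ULL;
    unsigned shift = 3 * 8;                     /* nonzero for a real straddle */

    printf("narrow interim:   0x%016" PRIx64 "\n", combine_narrow(r1, r2, shift));
    printf("uint64_t interim: 0x%016" PRIx64 "\n", combine_wide(r1, r2, shift));
    return 0;
}

Built with any C99 compiler, the two functions print different results:
the narrow variant has already discarded the upper 32 bits of r1 and r2 at
the assignment, which is exactly the premature truncation described in the
commit message.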