diff mbox series

[PULL,v2,77/91] target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrn

Message ID 20230309200550.3878088-78-richard.henderson@linaro.org
State Accepted
Commit 1b7bc9b5c8bf374dd37e49cc258e4ab3447b7148
Headers show
Series [PULL,v2,01/91] target/mips: Drop tcg_temp_free from micromips_translate.c.inc | expand

Commit Message

Richard Henderson March 9, 2023, 8:05 p.m. UTC
It is easy enough to use mov instead of or-with-zero,
rather than relying on the optimizer to fold away the or.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/tcg/translate-a64.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

Comments

Peter Maydell Jan. 23, 2024, 3:09 p.m. UTC | #1
On Thu, 9 Mar 2023 at 20:10, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> It is easy enough to use mov instead of or-with-zero,
> rather than relying on the optimizer to fold away the or.
>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/tcg/translate-a64.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
> index 2ad7c48901..082a8b82dd 100644
> --- a/target/arm/tcg/translate-a64.c
> +++ b/target/arm/tcg/translate-a64.c
> @@ -8459,7 +8459,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
>      tcg_rn = tcg_temp_new_i64();
>      tcg_rd = tcg_temp_new_i64();
>      tcg_rd_narrowed = tcg_temp_new_i32();
> -    tcg_final = tcg_const_i64(0);
> +    tcg_final = tcg_temp_new_i64();
>
>      if (round) {
>          tcg_round = tcg_constant_i64(1ULL << (shift - 1));
> @@ -8473,7 +8473,11 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
>                                  false, is_u_shift, size+1, shift);
>          narrowfn(tcg_rd_narrowed, cpu_env, tcg_rd);
>          tcg_gen_extu_i32_i64(tcg_rd, tcg_rd_narrowed);
> -        tcg_gen_deposit_i64(tcg_final, tcg_final, tcg_rd, esize * i, esize);
> +        if (i == 0) {
> +            tcg_gen_mov_i64(tcg_final, tcg_rd);
> +        } else {
> +            tcg_gen_deposit_i64(tcg_final, tcg_final, tcg_rd, esize * i, esize);
> +        }

So, it turns out that this causes a regression:
https://gitlab.com/qemu-project/qemu/-/issues/2089

The change here is fine for the vector case, because when
we loop round, the subsequent deposit ops will overwrite
the bits of tcg_final above the initial element, whatever
they happen to be in tcg_rd. However, for the scalar case
we execute this loop only once, so after this change the
high bits of the result are left as whatever they were in
tcg_rd instead of being 0. If the narrow is a signed one
and the result was negative, those high bits will now be
1 instead of the 0 they should be.

Using
 tcg_gen_extract_i64(tcg_final, tcg_rd, 0, esize);
instead of the tcg_gen_mov_i64() should fix this.

I'll send a patch later this afternoon.

thanks
-- PMM

Patch

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 2ad7c48901..082a8b82dd 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -8459,7 +8459,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     tcg_rn = tcg_temp_new_i64();
     tcg_rd = tcg_temp_new_i64();
     tcg_rd_narrowed = tcg_temp_new_i32();
-    tcg_final = tcg_const_i64(0);
+    tcg_final = tcg_temp_new_i64();
 
     if (round) {
         tcg_round = tcg_constant_i64(1ULL << (shift - 1));
@@ -8473,7 +8473,11 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
                                 false, is_u_shift, size+1, shift);
         narrowfn(tcg_rd_narrowed, cpu_env, tcg_rd);
         tcg_gen_extu_i32_i64(tcg_rd, tcg_rd_narrowed);
-        tcg_gen_deposit_i64(tcg_final, tcg_final, tcg_rd, esize * i, esize);
+        if (i == 0) {
+            tcg_gen_mov_i64(tcg_final, tcg_rd);
+        } else {
+            tcg_gen_deposit_i64(tcg_final, tcg_final, tcg_rd, esize * i, esize);
+        }
     }
 
     if (!is_q) {