[26/36] target/arm: Convert Neon VQSHL, VRSHL, VQRSHL 3-reg-same insns to decodetree

Message ID 20200430181003.21682-27-peter.maydell@linaro.org
State Superseded
Series target/arm: Convert Neon to decodetree (part 1)

Commit Message

Peter Maydell April 30, 2020, 6:09 p.m. UTC
Convert the VQSHL, VRSHL and VQRSHL insns in the 3-reg-same
group to decodetree. We have already implemented the size==0b11
case of these insns; this commit handles the remaining sizes.

TODO: find out from rth why decodetree insists on VSHL going
into the group...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

---
 target/arm/translate-neon.inc.c | 93 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 23 ++------
 target/arm/neon-dp.decode       | 30 ++++++++---
 3 files changed, 120 insertions(+), 26 deletions(-)

-- 
2.20.1

Comments

Richard Henderson May 1, 2020, 1:55 a.m. UTC | #1
On 4/30/20 11:09 AM, Peter Maydell wrote:
> +static bool do_3same_qs32(DisasContext *s, arg_3same *a, NeonGenTwoOpEnvFn *fn)
> +{
> +    /*
> +     * Saturating shift operations handled elementwise 32 bits at a
> +     * time which need to pass cpu_env to the helper and where the rn
> +     * and rm operands are reversed from the usual do_3same() order.
> +     */

Perhaps better to handle this as you did in "Convert Neon 64-bit element
3-reg-same insns", by adding a shim expander that adds env?

It would appear we can then merge

> +{
> +  VQSHL_S64_3s   1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64
> +  VQSHL_S_3s     1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same
> +}

back into a single pattern:

void gen_gvec_srshl(unsigned vece, uint32_t rd_ofs,
                    uint32_t rn_ofs, uint32_t rm_ofs,
                    uint32_t oprsz, uint32_t maxsz)
{
    static const GVecGen3 ops[4] = {
        { .fni4 = gen_helper_neon_rshl_s8 },
        { .fni4 = gen_helper_neon_rshl_s16 },
        { .fni4 = gen_helper_neon_rshl_s32 },
        { .fni8 = gen_helper_neon_rshl_s64 }
    };
    tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
                   oprsz, maxsz, &ops[vece]);
}
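
The "shim expander that adds env" suggested above could look something
like this for the saturating variants, whose helpers take cpu_env -- a
rough, untested sketch (the wrapper name is made up):

static void gen_neon_sqshl32_shim(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m)
{
    /* supply cpu_env so the env-taking helper fits the .fni4 slot */
    gen_helper_neon_qshl_s32(d, cpu_env, n, m);
}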

I'm not 100% sure how best to handle the swapped operands issue.  I don't think
we want to do it here in gen_gvec_srshl, because we don't have the same reverse
operand problem in the aarch64 encoding, and I'm looking forward to re-using
this generator function in aa64 and sve2.

Maybe it would be better to have

@3same     .... ... . . . size:2 .... .... .... . q:1 . . .... \
           &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
@3same_rev .... ... . . . size:2 .... .... .... . q:1 . . .... \
           &3same vn=%vm_dp vm=%vn_dp vd=%vd_dp

and swap the operands to "normal" during decode.
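
With the operands normalized at decode time, the DO_3SAME_SHIFT32 macro
in this patch could presumably drop its manual exchange of a->vn and
a->vm -- sketch only, untested:

#define DO_3SAME_SHIFT32(INSN, func)                                    \
    static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a)        \
    {                                                                   \
        static NeonGenTwoOpFn * const fns[] = {                         \
            gen_helper_neon_##func##8,                                  \
            gen_helper_neon_##func##16,                                 \
            gen_helper_neon_##func##32,                                 \
        };                                                              \
        if (a->size > 2) {                                              \
            return false;                                               \
        }                                                               \
        /* no swap needed: @3same_rev already exchanged vn and vm */    \
        return do_3same_32(s, a, fns[a->size]);                         \
    }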

FWIW, over in sve.decode, I prepared for reversed operands from the start (to
handle things like SUBR), so the formats have the register names in order:
@rd_rn_rm vs @rd_rm_rn.


r~
Peter Maydell May 1, 2020, 6:10 p.m. UTC | #2
On Fri, 1 May 2020 at 02:55, Richard Henderson
<richard.henderson@linaro.org> wrote:
> I'm not 100% sure how best to handle the swapped operands issue.  I don't think
> we want to do it here in gen_gvec_srshl, because we don't have the same reverse
> operand problem in the aarch64 encoding, and I'm looking forward to re-using
> this generator function in aa64 and sve2.
>
> Maybe it would be better to have
>
> @3same     .... ... . . . size:2 .... .... .... . q:1 . . .... \
>            &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
> @3same_rev .... ... . . . size:2 .... .... .... . q:1 . . .... \
>            &3same vn=%vm_dp vm=%vn_dp vd=%vd_dp
>
> and swap the operands to "normal" during decode.

Yeah, I guess so. It's a little confusing because the operands
are going to appear with the "wrong" names in the trans_ functions,
but we can hopefully deflect some of that with a suitable comment
by the @3same_rev format definition.

I think that all the affected insns have asm formats like
 VSHL <Dd>, <Dm>, <Dn>
in contrast to eg
 VSUB <Dd>, <Dn>, <Dm>

so it's effectively just that the field names in the official
insn definition are backwards from what you'd expect.
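
A comment along these lines next to the @3same_rev definition might do
(wording just a suggestion):

# For shift-by-register insns the asm form is "Vop <Dd>, <Dm>, <Dn>",
# so the encoding's Vn/Vm fields are deliberately cross-assigned here
# to give the trans_ functions operands in the usual do_3same() order.
@3same_rev .... ... . . . size:2 .... .... .... . q:1 . . .... \
           &3same vn=%vm_dp vm=%vn_dp vd=%vd_dp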

thanks
-- PMM

Patch

diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index bdd5f33214e..084c78eea58 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -1035,3 +1035,96 @@  DO_3SAME_32(VHADD, hadd)
 DO_3SAME_32(VHSUB, hsub)
 DO_3SAME_32(VRHADD, rhadd)
 DO_3SAME_32(VABD, abd)
+
+static bool do_3same_qs32(DisasContext *s, arg_3same *a, NeonGenTwoOpEnvFn *fn)
+{
+    /*
+     * Saturating shift operations handled elementwise 32 bits at a
+     * time which need to pass cpu_env to the helper and where the rn
+     * and rm operands are reversed from the usual do_3same() order.
+     */
+    TCGv_i32 tmp, tmp2;
+    int pass;
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vn | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if ((a->vn | a->vm | a->vd) & a->q) {
+        return false;
+    }
+
+    if (a->size == 3) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
+        /* Note reversal of operand order */
+        tmp = neon_load_reg(a->vm, pass);
+        tmp2 = neon_load_reg(a->vn, pass);
+        fn(tmp, cpu_env, tmp, tmp2);
+        tcg_temp_free_i32(tmp2);
+        neon_store_reg(a->vd, pass, tmp);
+    }
+    return true;
+}
+
+/*
+ * Handling for shifts with sizes 8/16/32 bits. 64-bit shifts are
+ * covered by the *_S64_3s and *_U64_3s patterns and the grouping in
+ * the decode file means those functions are called first for
+ * size==0b11. Note that we must 'return false' here for the
+ * size==0b11 case rather than asserting, because where the 64-bit
+ * function has an UNDEF case and returns false the decoder will fall
+ * through to trying these functions.
+ */
+#define DO_3SAME_QS32(INSN, func)                                       \
+    static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a)        \
+    {                                                                   \
+        static NeonGenTwoOpEnvFn * const fns[] = {                      \
+            gen_helper_neon_##func##8,                                  \
+            gen_helper_neon_##func##16,                                 \
+            gen_helper_neon_##func##32,                                 \
+        };                                                              \
+        if (a->size > 2) {                                              \
+            return false;                                               \
+        }                                                               \
+        return do_3same_qs32(s, a, fns[a->size]);                       \
+    }
+
+DO_3SAME_QS32(VQSHL_S,qshl_s)
+DO_3SAME_QS32(VQSHL_U,qshl_u)
+DO_3SAME_QS32(VQRSHL_S,qrshl_s)
+DO_3SAME_QS32(VQRSHL_U,qrshl_u)
+
+#define DO_3SAME_SHIFT32(INSN, func) \
+    static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a)        \
+    {                                                                   \
+        static NeonGenTwoOpFn * const fns[] = {                         \
+            gen_helper_neon_##func##8,                                  \
+            gen_helper_neon_##func##16,                                 \
+            gen_helper_neon_##func##32,                                 \
+        };                                                              \
+        int rtmp;                                                       \
+        if (a->size > 2) {                                              \
+            return false;                                               \
+        }                                                               \
+        /* Shift operand order is reversed */                           \
+        rtmp = a->vn;                                                   \
+        a->vn = a->vm;                                                  \
+        a->vm = rtmp;                                                   \
+        return do_3same_32(s, a, fns[a->size]);                         \
+    }
+
+DO_3SAME_SHIFT32(VRSHL_S, rshl_s)
+DO_3SAME_SHIFT32(VRSHL_U, rshl_u)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 29301061ca5..4406fe54647 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -4790,6 +4790,9 @@  static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         case NEON_3R_VRHADD:
         case NEON_3R_VHSUB:
         case NEON_3R_VABD:
+        case NEON_3R_VQSHL:
+        case NEON_3R_VRSHL:
+        case NEON_3R_VQRSHL:
             /* Already handled by decodetree */
             return 1;
         }
@@ -4800,17 +4803,6 @@  static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         }
         pairwise = 0;
         switch (op) {
-        case NEON_3R_VQSHL:
-        case NEON_3R_VRSHL:
-        case NEON_3R_VQRSHL:
-            {
-                int rtmp;
-                /* Shift instruction operands are reversed.  */
-                rtmp = rn;
-                rn = rm;
-                rm = rtmp;
-            }
-            break;
         case NEON_3R_VPADD_VQRDMLAH:
         case NEON_3R_VPMAX:
         case NEON_3R_VPMIN:
@@ -4870,15 +4862,6 @@  static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tmp2 = neon_load_reg(rm, pass);
         }
         switch (op) {
-        case NEON_3R_VQSHL:
-            GEN_NEON_INTEGER_OP_ENV(qshl);
-            break;
-        case NEON_3R_VRSHL:
-            GEN_NEON_INTEGER_OP(rshl);
-            break;
-        case NEON_3R_VQRSHL:
-            GEN_NEON_INTEGER_OP_ENV(qrshl);
-            break;
         case NEON_3R_VABA:
             GEN_NEON_INTEGER_OP(abd);
             tcg_temp_free_i32(tmp2);
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index 4b15e52221b..ae442071ef1 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -80,12 +80,30 @@  VSHL_U_3s        1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same
 @3same_64        .... ... . . . 11 .... .... .... . q:1 . . .... \
                  &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=3
 
-VQSHL_S64_3s     1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64
-VQSHL_U64_3s     1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64
-VRSHL_S64_3s     1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_64
-VRSHL_U64_3s     1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_64
-VQRSHL_S64_3s    1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64
-VQRSHL_U64_3s    1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_64
+{
+  VQSHL_S64_3s   1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64
+  VQSHL_S_3s     1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same
+}
+{
+  VQSHL_U64_3s   1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64
+  VQSHL_U_3s     1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same
+}
+{
+  VRSHL_S64_3s   1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_64
+  VRSHL_S_3s     1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same
+}
+{
+  VRSHL_U64_3s   1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_64
+  VRSHL_U_3s     1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same
+}
+{
+  VQRSHL_S64_3s  1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64
+  VQRSHL_S_3s    1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same
+}
+{
+  VQRSHL_U64_3s  1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_64
+  VQRSHL_U_3s    1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same
+}
 
 VMAX_S_3s        1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
 VMAX_U_3s        1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same