From patchwork Sat May 4 05:52:47 2019
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 163361
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, david@redhat.com
Date: Fri, 3 May 2019 22:52:47 -0700
Message-Id: <20190504055300.18426-19-richard.henderson@linaro.org>
In-Reply-To: <20190504055300.18426-1-richard.henderson@linaro.org>
References: <20190504055300.18426-1-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
Subject: [Qemu-devel] [PATCH v3 18/31] tcg: Add gvec expanders for vector shift by scalar

Allow expansion either via shift by scalar or by replicating the
scalar for shift by vector.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v3: Use a private structure for do_gvec_shifts.
---
 tcg/tcg-op-gvec.h |   7 ++
 tcg/tcg-op.h      |   4 +
 tcg/tcg-op-gvec.c | 214 ++++++++++++++++++++++++++++++++++++++++++++++
 tcg/tcg-op-vec.c  |  54 ++++++++++++
 4 files changed, 279 insertions(+)

--
2.17.1

diff --git a/tcg/tcg-op-gvec.h b/tcg/tcg-op-gvec.h
index 84a6247b16..6ee98f3378 100644
--- a/tcg/tcg-op-gvec.h
+++ b/tcg/tcg-op-gvec.h
@@ -318,6 +318,13 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
 void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
                        int64_t shift, uint32_t oprsz, uint32_t maxsz);
 
+void tcg_gen_gvec_shls(unsigned vece, uint32_t dofs, uint32_t aofs,
+                       TCGv_i32 shift, uint32_t oprsz, uint32_t maxsz);
+void tcg_gen_gvec_shrs(unsigned vece, uint32_t dofs, uint32_t aofs,
+                       TCGv_i32 shift, uint32_t oprsz, uint32_t maxsz);
+void tcg_gen_gvec_sars(unsigned vece, uint32_t dofs, uint32_t aofs,
+                       TCGv_i32 shift, uint32_t oprsz, uint32_t maxsz);
+
 /*
  * Perform vector shift by vector element, modulo the element size.
  * E.g. D[i] = A[i] << (B[i] % (8 << vece)).
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index 833c6330b5..472b73cb38 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -986,6 +986,10 @@ void tcg_gen_shli_vec(unsigned vece, TCGv_vec r, TCGv_vec a, int64_t i);
 void tcg_gen_shri_vec(unsigned vece, TCGv_vec r, TCGv_vec a, int64_t i);
 void tcg_gen_sari_vec(unsigned vece, TCGv_vec r, TCGv_vec a, int64_t i);
 
+void tcg_gen_shls_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_i32 s);
+void tcg_gen_shrs_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_i32 s);
+void tcg_gen_sars_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_i32 s);
+
 void tcg_gen_shlv_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_vec s);
 void tcg_gen_shrv_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_vec s);
 void tcg_gen_sarv_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_vec s);
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 061ef329f1..c69c7960b8 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -2555,6 +2555,220 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
     }
 }
 
+/*
+ * Specialized generation vector shifts by a non-constant scalar.
+ */
+
+typedef struct {
+    void (*fni4)(TCGv_i32, TCGv_i32, TCGv_i32);
+    void (*fni8)(TCGv_i64, TCGv_i64, TCGv_i64);
+    void (*fniv_s)(unsigned, TCGv_vec, TCGv_vec, TCGv_i32);
+    void (*fniv_v)(unsigned, TCGv_vec, TCGv_vec, TCGv_vec);
+    gen_helper_gvec_2 *fno[4];
+    TCGOpcode s_list[2];
+    TCGOpcode v_list[2];
+} GVecGen2sh;
+
+static void expand_2sh_vec(unsigned vece, uint32_t dofs, uint32_t aofs,
+                           uint32_t oprsz, uint32_t tysz, TCGType type,
+                           TCGv_i32 shift,
+                           void (*fni)(unsigned, TCGv_vec, TCGv_vec, TCGv_i32))
+{
+    TCGv_vec t0 = tcg_temp_new_vec(type);
+    uint32_t i;
+
+    for (i = 0; i < oprsz; i += tysz) {
+        tcg_gen_ld_vec(t0, cpu_env, aofs + i);
+        fni(vece, t0, t0, shift);
+        tcg_gen_st_vec(t0, cpu_env, dofs + i);
+    }
+    tcg_temp_free_vec(t0);
+}
+
+static void
+do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
+               uint32_t oprsz, uint32_t maxsz, const GVecGen2sh *g)
+{
+    TCGType type;
+    uint32_t some;
+
+    check_size_align(oprsz, maxsz, dofs | aofs);
+    check_overlap_2(dofs, aofs, maxsz);
+
+    /* If the backend has a scalar expansion, great. */
+    type = choose_vector_type(g->s_list, vece, oprsz, vece == MO_64);
+    if (type) {
+        const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
+        switch (type) {
+        case TCG_TYPE_V256:
+            some = QEMU_ALIGN_DOWN(oprsz, 32);
+            expand_2sh_vec(vece, dofs, aofs, some, 32,
+                           TCG_TYPE_V256, shift, g->fniv_s);
+            if (some == oprsz) {
+                break;
+            }
+            dofs += some;
+            aofs += some;
+            oprsz -= some;
+            maxsz -= some;
+            /* fallthru */
+        case TCG_TYPE_V128:
+            expand_2sh_vec(vece, dofs, aofs, oprsz, 16,
+                           TCG_TYPE_V128, shift, g->fniv_s);
+            break;
+        case TCG_TYPE_V64:
+            expand_2sh_vec(vece, dofs, aofs, oprsz, 8,
+                           TCG_TYPE_V64, shift, g->fniv_s);
+            break;
+        default:
+            g_assert_not_reached();
+        }
+        tcg_swap_vecop_list(hold_list);
+        goto clear_tail;
+    }
+
+    /* If the backend supports variable vector shifts, also cool. */
+    type = choose_vector_type(g->v_list, vece, oprsz, vece == MO_64);
+    if (type) {
+        const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
+        TCGv_vec v_shift = tcg_temp_new_vec(type);
+
+        if (vece == MO_64) {
+            TCGv_i64 sh64 = tcg_temp_new_i64();
+            tcg_gen_extu_i32_i64(sh64, shift);
+            tcg_gen_dup_i64_vec(MO_64, v_shift, sh64);
+            tcg_temp_free_i64(sh64);
+        } else {
+            tcg_gen_dup_i32_vec(vece, v_shift, shift);
+        }
+
+        switch (type) {
+        case TCG_TYPE_V256:
+            some = QEMU_ALIGN_DOWN(oprsz, 32);
+            expand_2s_vec(vece, dofs, aofs, some, 32, TCG_TYPE_V256,
+                          v_shift, false, g->fniv_v);
+            if (some == oprsz) {
+                break;
+            }
+            dofs += some;
+            aofs += some;
+            oprsz -= some;
+            maxsz -= some;
+            /* fallthru */
+        case TCG_TYPE_V128:
+            expand_2s_vec(vece, dofs, aofs, oprsz, 16, TCG_TYPE_V128,
+                          v_shift, false, g->fniv_v);
+            break;
+        case TCG_TYPE_V64:
+            expand_2s_vec(vece, dofs, aofs, oprsz, 8, TCG_TYPE_V64,
+                          v_shift, false, g->fniv_v);
+            break;
+        default:
+            g_assert_not_reached();
+        }
+        tcg_temp_free_vec(v_shift);
+        tcg_swap_vecop_list(hold_list);
+        goto clear_tail;
+    }
+
+    /* Otherwise fall back to integral... */
+    if (vece == MO_32 && check_size_impl(oprsz, 4)) {
+        expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
+    } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
+        TCGv_i64 sh64 = tcg_temp_new_i64();
+        tcg_gen_extu_i32_i64(sh64, shift);
+        expand_2s_i64(dofs, aofs, oprsz, sh64, false, g->fni8);
+        tcg_temp_free_i64(sh64);
+    } else {
+        TCGv_ptr a0 = tcg_temp_new_ptr();
+        TCGv_ptr a1 = tcg_temp_new_ptr();
+        TCGv_i32 desc = tcg_temp_new_i32();
+
+        tcg_gen_shli_i32(desc, shift, SIMD_DATA_SHIFT);
+        tcg_gen_ori_i32(desc, desc, simd_desc(oprsz, maxsz, 0));
+        tcg_gen_addi_ptr(a0, cpu_env, dofs);
+        tcg_gen_addi_ptr(a1, cpu_env, aofs);
+
+        g->fno[vece](a0, a1, desc);
+
+        tcg_temp_free_ptr(a0);
+        tcg_temp_free_ptr(a1);
+        tcg_temp_free_i32(desc);
+        return;
+    }
+
+ clear_tail:
+    if (oprsz < maxsz) {
+        expand_clr(dofs + oprsz, maxsz - oprsz);
+    }
+}
+
+void tcg_gen_gvec_shls(unsigned vece, uint32_t dofs, uint32_t aofs,
+                       TCGv_i32 shift, uint32_t oprsz, uint32_t maxsz)
+{
+    static const GVecGen2sh g = {
+        .fni4 = tcg_gen_shl_i32,
+        .fni8 = tcg_gen_shl_i64,
+        .fniv_s = tcg_gen_shls_vec,
+        .fniv_v = tcg_gen_shlv_vec,
+        .fno = {
+            gen_helper_gvec_shl8i,
+            gen_helper_gvec_shl16i,
+            gen_helper_gvec_shl32i,
+            gen_helper_gvec_shl64i,
+        },
+        .s_list = { INDEX_op_shls_vec, 0 },
+        .v_list = { INDEX_op_shlv_vec, 0 },
+    };
+
+    tcg_debug_assert(vece <= MO_64);
+    do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
+}
+
+void tcg_gen_gvec_shrs(unsigned vece, uint32_t dofs, uint32_t aofs,
+                       TCGv_i32 shift, uint32_t oprsz, uint32_t maxsz)
+{
+    static const GVecGen2sh g = {
+        .fni4 = tcg_gen_shr_i32,
+        .fni8 = tcg_gen_shr_i64,
+        .fniv_s = tcg_gen_shrs_vec,
+        .fniv_v = tcg_gen_shrv_vec,
+        .fno = {
+            gen_helper_gvec_shr8i,
+            gen_helper_gvec_shr16i,
+            gen_helper_gvec_shr32i,
+            gen_helper_gvec_shr64i,
+        },
+        .s_list = { INDEX_op_shrs_vec, 0 },
+        .v_list = { INDEX_op_shrv_vec, 0 },
+    };
+
+    tcg_debug_assert(vece <= MO_64);
+    do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
+}
+
+void tcg_gen_gvec_sars(unsigned vece, uint32_t dofs, uint32_t aofs,
+                       TCGv_i32 shift, uint32_t oprsz, uint32_t maxsz)
+{
+    static const GVecGen2sh g = {
+        .fni4 = tcg_gen_sar_i32,
+        .fni8 = tcg_gen_sar_i64,
+        .fniv_s = tcg_gen_sars_vec,
+        .fniv_v = tcg_gen_sarv_vec,
+        .fno = {
+            gen_helper_gvec_sar8i,
+            gen_helper_gvec_sar16i,
+            gen_helper_gvec_sar32i,
+            gen_helper_gvec_sar64i,
+        },
+        .s_list = { INDEX_op_sars_vec, 0 },
+        .v_list = { INDEX_op_sarv_vec, 0 },
+    };
+
+    tcg_debug_assert(vece <= MO_64);
+    do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
+}
+
 /*
  * Expand D = A << (B % element bits)
  *
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index 96317dbd10..16062f5995 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -598,3 +598,57 @@ void tcg_gen_sarv_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_vec b)
 {
     do_op3(vece, r, a, b, INDEX_op_sarv_vec);
 }
+
+static void do_shifts(unsigned vece, TCGv_vec r, TCGv_vec a,
+                      TCGv_i32 s, TCGOpcode opc_s, TCGOpcode opc_v)
+{
+    TCGTemp *rt = tcgv_vec_temp(r);
+    TCGTemp *at = tcgv_vec_temp(a);
+    TCGTemp *st = tcgv_i32_temp(s);
+    TCGArg ri = temp_arg(rt);
+    TCGArg ai = temp_arg(at);
+    TCGArg si = temp_arg(st);
+    TCGType type = rt->base_type;
+    const TCGOpcode *hold_list;
+    int can;
+
+    tcg_debug_assert(at->base_type >= type);
+    tcg_assert_listed_vecop(opc_s);
+    hold_list = tcg_swap_vecop_list(NULL);
+
+    can = tcg_can_emit_vec_op(opc_s, type, vece);
+    if (can > 0) {
+        vec_gen_3(opc_s, type, vece, ri, ai, si);
+    } else if (can < 0) {
+        tcg_expand_vec_op(opc_s, type, vece, ri, ai, si);
+    } else {
+        TCGv_vec vec_s = tcg_temp_new_vec(type);
+
+        if (vece == MO_64) {
+            TCGv_i64 s64 = tcg_temp_new_i64();
+            tcg_gen_extu_i32_i64(s64, s);
+            tcg_gen_dup_i64_vec(MO_64, vec_s, s64);
+            tcg_temp_free_i64(s64);
+        } else {
+            tcg_gen_dup_i32_vec(vece, vec_s, s);
+        }
+        do_op3(vece, r, a, vec_s, opc_v);
+        tcg_temp_free_vec(vec_s);
+    }
+    tcg_swap_vecop_list(hold_list);
+}
+
+void tcg_gen_shls_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_i32 b)
+{
+    do_shifts(vece, r, a, b, INDEX_op_shls_vec, INDEX_op_shlv_vec);
+}
+
+void tcg_gen_shrs_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_i32 b)
+{
+    do_shifts(vece, r, a, b, INDEX_op_shrs_vec, INDEX_op_shrv_vec);
+}
+
+void tcg_gen_sars_vec(unsigned vece, TCGv_vec r, TCGv_vec a, TCGv_i32 b)
+{
+    do_shifts(vece, r, a, b, INDEX_op_sars_vec, INDEX_op_sarv_vec);
+}
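
Usage sketch (not part of the patch): a target front end would drive the
new expander roughly as below. It assumes a hypothetical CPU with 16-byte
vector registers kept in its CPUState; everything here except
tcg_gen_gvec_shls() itself (the CPUHypotheticalState layout, vreg_offset(),
the 32-register file and 16-byte register size) is invented for
illustration.

    /* Sketch only: made-up target state, 32 vector regs of 16 bytes each. */
    typedef struct CPUHypotheticalState {
        uint64_t vregs[32][2];
    } CPUHypotheticalState;

    static long vreg_offset(int reg)
    {
        /* Offset of one vector register relative to cpu_env. */
        return offsetof(CPUHypotheticalState, vregs[reg]);
    }

    /* D[i] = N[i] << shift for every 32-bit element of one register.
     * 'shift' is a run-time TCGv_i32, assumed already in range for MO_32. */
    static void gen_vec_shl32(int rd, int rn, TCGv_i32 shift)
    {
        tcg_gen_gvec_shls(MO_32, vreg_offset(rd), vreg_offset(rn),
                          shift, 16, 16);
    }

tcg_gen_gvec_shrs() and tcg_gen_gvec_sars() are called the same way;
do_gvec_shifts() then chooses between the shls_vec expansion, the
shlv_vec expansion with the scalar duplicated across a vector, and the
integral/helper fallback, depending on what the backend advertises.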