From patchwork Thu Jun  5 22:04:27 2014
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 31448
From: Christophe Lyon <christophe.lyon@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [Patch ARM/testsuite 07/22] Add binary saturating operators: vqadd, vqsub.
Date: Fri, 6 Jun 2014 00:04:27 +0200
Message-Id: <1402005882-31597-8-git-send-email-christophe.lyon@linaro.org>
In-Reply-To: <1402005882-31597-7-git-send-email-christophe.lyon@linaro.org>
References: <1402005882-31597-1-git-send-email-christophe.lyon@linaro.org>

diff --git a/gcc/testsuite/gcc.target/arm/neon-intrinsics/binary_sat_op.inc b/gcc/testsuite/gcc.target/arm/neon-intrinsics/binary_sat_op.inc
new file mode 100644
index 0000000..35d7701
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/neon-intrinsics/binary_sat_op.inc
@@ -0,0 +1,91 @@
+/* Template file for saturating binary operator validation.
+
+   This file is meant to be included by the relevant test files, which
+   have to define the intrinsic family to test. If a given intrinsic
+   supports variants which are not supported by all the other
+   saturating binary operators, these can be tested by providing a
+   definition for EXTRA_TESTS.  */
+
+#include <arm_neon.h>
+#include "arm-neon-ref.h"
+#include "compute-ref-data.h"
+
+#define FNNAME1(NAME) exec_ ## NAME
+#define FNNAME(NAME) FNNAME1(NAME)
+
+void FNNAME (INSN_NAME) (void)
+{
+  /* vector_res = OP(vector1,vector2), then store the result.  */
+
+#define TEST_BINARY_SAT_OP1(INSN, Q, T1, T2, W, N, EXPECTED_CUMULATIVE_SAT, CMT) \
+  Set_Neon_Cumulative_Sat(0); \
+  VECT_VAR(vector_res, T1, W, N) = \
+    INSN##Q##_##T2##W(VECT_VAR(vector1, T1, W, N), \
+                      VECT_VAR(vector2, T1, W, N)); \
+  vst1##Q##_##T2##W(VECT_VAR(result, T1, W, N), \
+                    VECT_VAR(vector_res, T1, W, N)); \
+  CHECK_CUMULATIVE_SAT(TEST_MSG, T1, W, N, EXPECTED_CUMULATIVE_SAT, CMT)
+
+#define TEST_BINARY_SAT_OP(INSN, Q, T1, T2, W, N, EXPECTED_CUMULATIVE_SAT, CMT) \
+  TEST_BINARY_SAT_OP1(INSN, Q, T1, T2, W, N, EXPECTED_CUMULATIVE_SAT, CMT)
+
+  DECL_VARIABLE_ALL_VARIANTS(vector1);
+  DECL_VARIABLE_ALL_VARIANTS(vector2);
+  DECL_VARIABLE_ALL_VARIANTS(vector_res);
+
+  clean_results ();
+
+  /* Initialize input "vector1" from "buffer".  */
+  TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
+
+  /* Choose arbitrary initialization values.  */
+  VDUP(vector2, , int, s, 8, 8, 0x11);
+  VDUP(vector2, , int, s, 16, 4, 0x22);
+  VDUP(vector2, , int, s, 32, 2, 0x33);
+  VDUP(vector2, , int, s, 64, 1, 0x44);
+  VDUP(vector2, , uint, u, 8, 8, 0x55);
+  VDUP(vector2, , uint, u, 16, 4, 0x66);
+  VDUP(vector2, , uint, u, 32, 2, 0x77);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+
+  VDUP(vector2, q, int, s, 8, 16, 0x11);
+  VDUP(vector2, q, int, s, 16, 8, 0x22);
+  VDUP(vector2, q, int, s, 32, 4, 0x33);
+  VDUP(vector2, q, int, s, 64, 2, 0x44);
+  VDUP(vector2, q, uint, u, 8, 16, 0x55);
+  VDUP(vector2, q, uint, u, 16, 8, 0x66);
+  VDUP(vector2, q, uint, u, 32, 4, 0x77);
+  VDUP(vector2, q, uint, u, 64, 2, 0x88);
+
+  /* Apply a saturating binary operator named INSN_NAME.  */
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 8, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 16, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 32, 2, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 8, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 16, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 32, 2, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat, "");
+
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 8, 16, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 16, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 32, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 8, 16, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 16, 8, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 32, 4, expected_cumulative_sat, "");
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat, "");
+
+  CHECK_RESULTS (TEST_MSG, "");
+
+#ifdef EXTRA_TESTS
+  EXTRA_TESTS();
+#endif
+}
+
+int main (void)
+{
+  FNNAME (INSN_NAME) ();
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/arm/neon-intrinsics/vqadd.c b/gcc/testsuite/gcc.target/arm/neon-intrinsics/vqadd.c
new file mode 100644
index 0000000..c07f5ff
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/neon-intrinsics/vqadd.c
@@ -0,0 +1,278 @@
+#define INSN_NAME vqadd
+#define TEST_MSG "VQADD/VQADDQ"
+
+/* Extra tests for special cases:
+   - some requiring intermediate types larger than 64 bits to
+     compute saturation flag.
+   - corner case saturations with types smaller than 64 bits.
+*/
+void vqadd_extras(void);
+#define EXTRA_TESTS vqadd_extras
+
+#include "binary_sat_op.inc"
+
+/* Expected values of cumulative_saturation flag.  */
+int VECT_VAR(expected_cumulative_sat,int,8,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,2) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,8) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,16,4) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,32,2) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat,int,8,16) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,16) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,16,8) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,32,4) = 1;
+int VECT_VAR(expected_cumulative_sat,uint,64,2) = 1;
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0x1, 0x2, 0x3, 0x4,
+                                       0x5, 0x6, 0x7, 0x8 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0x12, 0x13, 0x14, 0x15 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0x23, 0x24 };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0x34 };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                        0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xffff, 0xffff, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0xffffffffffffffff };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0x1, 0x2, 0x3, 0x4,
+                                        0x5, 0x6, 0x7, 0x8,
+                                        0x9, 0xa, 0xb, 0xc,
+                                        0xd, 0xe, 0xf, 0x10 };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0x12, 0x13, 0x14, 0x15,
+                                        0x16, 0x17, 0x18, 0x19 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0x23, 0x24, 0x25, 0x26 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0x34, 0x35 };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff,
+                                         0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xffff, 0xffff, 0xffff, 0xffff,
+                                         0xffff, 0xffff, 0xffff, 0xffff };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                         0xffffffff, 0xffffffff };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0xffffffffffffffff,
+                                         0xffffffffffffffff };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+
+/* 64-bits types, with 0 as second input.  */
+int VECT_VAR(expected_cumulative_sat_64,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,2) = 0;
+VECT_VAR_DECL(expected_64,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,int,64,2) [] = { 0xfffffffffffffff0,
+                                           0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_64,uint,64,2) [] = { 0xfffffffffffffff0,
+                                            0xfffffffffffffff1 };
+
+/* 64-bits types, some cases causing cumulative saturation.  */
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,2) = 1;
+VECT_VAR_DECL(expected_64_2,int,64,1) [] = { 0x34 };
+VECT_VAR_DECL(expected_64_2,uint,64,1) [] = { 0xffffffffffffffff };
+VECT_VAR_DECL(expected_64_2,int,64,2) [] = { 0x34, 0x35 };
+VECT_VAR_DECL(expected_64_2,uint,64,2) [] = { 0xffffffffffffffff,
+                                              0xffffffffffffffff };
+
+/* 64-bits types, all causing cumulative saturation.  */
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,2) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,2) = 1;
+VECT_VAR_DECL(expected_64_3,int,64,1) [] = { 0x8000000000000000 };
+VECT_VAR_DECL(expected_64_3,uint,64,1) [] = { 0xffffffffffffffff };
+VECT_VAR_DECL(expected_64_3,int,64,2) [] = { 0x7fffffffffffffff,
+                                             0x7fffffffffffffff };
+VECT_VAR_DECL(expected_64_3,uint,64,2) [] = { 0xffffffffffffffff,
+                                              0xffffffffffffffff };
+
+/* Smaller types, corner cases causing cumulative saturation. (1)  */
+int VECT_VAR(expected_csat_lt_64_1,int,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,4) = 1;
+VECT_VAR_DECL(expected_lt_64_1,int,8,8) [] = { 0x80, 0x80, 0x80, 0x80,
+                                               0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,4) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,2) [] = { 0x80000000, 0x80000000 };
+VECT_VAR_DECL(expected_lt_64_1,int,8,16) [] = { 0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,8) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,4) [] = { 0x80000000, 0x80000000,
+                                                0x80000000, 0x80000000 };
+
+/* Smaller types, corner cases causing cumulative saturation. (2)  */
+int VECT_VAR(expected_csat_lt_64_2,uint,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,4) = 1;
+VECT_VAR_DECL(expected_lt_64_2,uint,8,8) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,4) [] = { 0xffff, 0xffff,
+                                                 0xffff, 0xffff };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,2) [] = { 0xffffffff,
+                                                 0xffffffff };
+VECT_VAR_DECL(expected_lt_64_2,uint,8,16) [] = { 0xff, 0xff, 0xff, 0xff,
+                                                 0xff, 0xff, 0xff, 0xff,
+                                                 0xff, 0xff, 0xff, 0xff,
+                                                 0xff, 0xff, 0xff, 0xff };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,8) [] = { 0xffff, 0xffff,
+                                                 0xffff, 0xffff,
+                                                 0xffff, 0xffff,
+                                                 0xffff, 0xffff };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,4) [] = { 0xffffffff, 0xffffffff,
+                                                 0xffffffff, 0xffffffff };
+
+void vqadd_extras(void)
+{
+  DECL_VARIABLE_ALL_VARIANTS(vector1);
+  DECL_VARIABLE_ALL_VARIANTS(vector2);
+  DECL_VARIABLE_ALL_VARIANTS(vector_res);
+
+  /* Initialize input "vector1" from "buffer".  */
+  TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
+
+  /* Use a second vector full of 0.  */
+  VDUP(vector2, , int, s, 64, 1, 0);
+  VDUP(vector2, , uint, u, 64, 1, 0);
+  VDUP(vector2, q, int, s, 64, 2, 0);
+  VDUP(vector2, q, uint, u, 64, 2, 0);
+
+#define MSG "64 bits saturation adding zero"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64, MSG);
+
+  /* Another set of tests with non-zero values, some chosen to create
+     overflow.  */
+  VDUP(vector2, , int, s, 64, 1, 0x44);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+  VDUP(vector2, q, int, s, 64, 2, 0x44);
+  VDUP(vector2, q, uint, u, 64, 2, 0x88);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_2, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_2, MSG);
+
+  /* Another set of tests, with input values chosen to set
+     cumulative_sat in all cases.  */
+  VDUP(vector2, , int, s, 64, 1, 0x8000000000000003LL);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+  /* To check positive saturation, we need to write a positive value
+     in vector1.  */
+  VDUP(vector1, q, int, s, 64, 2, 0x4000000000000000LL);
+  VDUP(vector2, q, int, s, 64, 2, 0x4000000000000000LL);
+  VDUP(vector2, q, uint, u, 64, 2, 0x22);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (3)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_3, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_3, MSG);
+
+  /* To improve coverage, check saturation with less than 64 bits
+     too.  */
+  VDUP(vector2, , int, s, 8, 8, 0x81);
+  VDUP(vector2, , int, s, 16, 4, 0x8001);
+  VDUP(vector2, , int, s, 32, 2, 0x80000001);
+  VDUP(vector2, q, int, s, 8, 16, 0x81);
+  VDUP(vector2, q, int, s, 16, 8, 0x8001);
+  VDUP(vector2, q, int, s, 32, 4, 0x80000001);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (1)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 8, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 16, 4, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 32, 2, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 8, 16, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 16, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 32, 4, expected_csat_lt_64_1, MSG);
+
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_lt_64_1, MSG);
+
+  /* Another set of tests with large vector1 values.  */
+  VDUP(vector1, , uint, u, 8, 8, 0xF0);
+  VDUP(vector1, , uint, u, 16, 4, 0xFFF0);
+  VDUP(vector1, , uint, u, 32, 2, 0xFFFFFFF0);
+  VDUP(vector1, q, uint, u, 8, 16, 0xF0);
+  VDUP(vector1, q, uint, u, 16, 8, 0xFFF0);
+  VDUP(vector1, q, uint, u, 32, 4, 0xFFFFFFF0);
+
+  VDUP(vector2, , uint, u, 8, 8, 0x20);
+  VDUP(vector2, , uint, u, 16, 4, 0x20);
+  VDUP(vector2, , uint, u, 32, 2, 0x20);
+  VDUP(vector2, q, uint, u, 8, 16, 0x20);
+  VDUP(vector2, q, uint, u, 16, 8, 0x20);
+  VDUP(vector2, q, uint, u, 32, 4, 0x20);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 8, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 16, 4, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 32, 2, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 8, 16, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 16, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 32, 4, expected_csat_lt_64_2, MSG);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_lt_64_2, MSG);
+}
diff --git a/gcc/testsuite/gcc.target/arm/neon-intrinsics/vqsub.c b/gcc/testsuite/gcc.target/arm/neon-intrinsics/vqsub.c
new file mode 100644
index 0000000..04df5fe
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/neon-intrinsics/vqsub.c
@@ -0,0 +1,278 @@
+#define INSN_NAME vqsub
+#define TEST_MSG "VQSUB/VQSUBQ"
+
+/* Extra tests for special cases:
+   - some requiring intermediate types larger than 64 bits to
+     compute saturation flag.
+   - corner case saturations with types smaller than 64 bits.
+*/
+void vqsub_extras(void);
+#define EXTRA_TESTS vqsub_extras
+
+#include "binary_sat_op.inc"
+
+
+/* Expected results.  */
+VECT_VAR_DECL(expected,int,8,8) [] = { 0xdf, 0xe0, 0xe1, 0xe2,
+                                       0xe3, 0xe4, 0xe5, 0xe6 };
+VECT_VAR_DECL(expected,int,16,4) [] = { 0xffce, 0xffcf,
+                                        0xffd0, 0xffd1 };
+VECT_VAR_DECL(expected,int,32,2) [] = { 0xffffffbd, 0xffffffbe };
+VECT_VAR_DECL(expected,int,64,1) [] = { 0xffffffffffffffac };
+VECT_VAR_DECL(expected,uint,8,8) [] = { 0x9b, 0x9c, 0x9d, 0x9e,
+                                        0x9f, 0xa0, 0xa1, 0xa2 };
+VECT_VAR_DECL(expected,uint,16,4) [] = { 0xff8a, 0xff8b,
+                                         0xff8c, 0xff8d };
+VECT_VAR_DECL(expected,uint,32,2) [] = { 0xffffff79, 0xffffff7a };
+VECT_VAR_DECL(expected,uint,64,1) [] = { 0xffffffffffffff68 };
+VECT_VAR_DECL(expected,poly,8,8) [] = { 0x33, 0x33, 0x33, 0x33,
+                                        0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,4) [] = { 0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x33333333, 0x33333333 };
+VECT_VAR_DECL(expected,int,8,16) [] = { 0xdf, 0xe0, 0xe1, 0xe2,
+                                        0xe3, 0xe4, 0xe5, 0xe6,
+                                        0xe7, 0xe8, 0xe9, 0xea,
+                                        0xeb, 0xec, 0xed, 0xee };
+VECT_VAR_DECL(expected,int,16,8) [] = { 0xffce, 0xffcf, 0xffd0, 0xffd1,
+                                        0xffd2, 0xffd3, 0xffd4, 0xffd5 };
+VECT_VAR_DECL(expected,int,32,4) [] = { 0xffffffbd, 0xffffffbe,
+                                        0xffffffbf, 0xffffffc0 };
+VECT_VAR_DECL(expected,int,64,2) [] = { 0xffffffffffffffac,
+                                        0xffffffffffffffad };
+VECT_VAR_DECL(expected,uint,8,16) [] = { 0x9b, 0x9c, 0x9d, 0x9e,
+                                         0x9f, 0xa0, 0xa1, 0xa2,
+                                         0xa3, 0xa4, 0xa5, 0xa6,
+                                         0xa7, 0xa8, 0xa9, 0xaa };
+VECT_VAR_DECL(expected,uint,16,8) [] = { 0xff8a, 0xff8b, 0xff8c, 0xff8d,
+                                         0xff8e, 0xff8f, 0xff90, 0xff91 };
+VECT_VAR_DECL(expected,uint,32,4) [] = { 0xffffff79, 0xffffff7a,
+                                         0xffffff7b, 0xffffff7c };
+VECT_VAR_DECL(expected,uint,64,2) [] = { 0xffffffffffffff68,
+                                         0xffffffffffffff69 };
+VECT_VAR_DECL(expected,poly,8,16) [] = { 0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33,
+                                         0x33, 0x33, 0x33, 0x33 };
+VECT_VAR_DECL(expected,poly,16,8) [] = { 0x3333, 0x3333, 0x3333, 0x3333,
+                                         0x3333, 0x3333, 0x3333, 0x3333 };
+VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0x33333333, 0x33333333,
+                                           0x33333333, 0x33333333 };
+
+/* Expected values of cumulative saturation flag.  */
+int VECT_VAR(expected_cumulative_sat,int,8,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,2) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,8) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,16,4) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,32,2) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat,int,8,16) = 0;
+int VECT_VAR(expected_cumulative_sat,int,16,8) = 0;
+int VECT_VAR(expected_cumulative_sat,int,32,4) = 0;
+int VECT_VAR(expected_cumulative_sat,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,8,16) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,16,8) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,32,4) = 0;
+int VECT_VAR(expected_cumulative_sat,uint,64,2) = 0;
+
+/* 64-bits types, with 0 as second input.  */
+VECT_VAR_DECL(expected_64,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_64,int,64,2) [] = { 0xfffffffffffffff0,
+                                           0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_64,uint,64,2) [] = { 0xfffffffffffffff0,
+                                            0xfffffffffffffff1 };
+int VECT_VAR(expected_cumulative_sat_64,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64,uint,64,2) = 0;
+
+/* 64-bits types, other cases.  */
+VECT_VAR_DECL(expected_64_2,int,64,1) [] = { 0xffffffffffffffac };
+VECT_VAR_DECL(expected_64_2,uint,64,1) [] = { 0xffffffffffffff68 };
+VECT_VAR_DECL(expected_64_2,int,64,2) [] = { 0xffffffffffffffac,
+                                             0xffffffffffffffad };
+VECT_VAR_DECL(expected_64_2,uint,64,2) [] = { 0xffffffffffffff68,
+                                              0xffffffffffffff69 };
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,1) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,int,64,2) = 0;
+int VECT_VAR(expected_cumulative_sat_64_2,uint,64,2) = 0;
+
+/* 64-bits types, all causing cumulative saturation.  */
+VECT_VAR_DECL(expected_64_3,int,64,1) [] = { 0x8000000000000000 };
+VECT_VAR_DECL(expected_64_3,uint,64,1) [] = { 0x0 };
+VECT_VAR_DECL(expected_64_3,int,64,2) [] = { 0x7fffffffffffffff,
+                                             0x7fffffffffffffff };
+VECT_VAR_DECL(expected_64_3,uint,64,2) [] = { 0x0, 0x0 };
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,1) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,int,64,2) = 1;
+int VECT_VAR(expected_cumulative_sat_64_3,uint,64,2) = 1;
+
+/* Smaller types, corner cases causing cumulative saturation. (1)  */
+VECT_VAR_DECL(expected_lt_64_1,int,8,8) [] = { 0x80, 0x80, 0x80, 0x80,
+                                               0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,4) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,2) [] = { 0x80000000, 0x80000000 };
+VECT_VAR_DECL(expected_lt_64_1,int,8,16) [] = { 0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80,
+                                                0x80, 0x80, 0x80, 0x80 };
+VECT_VAR_DECL(expected_lt_64_1,int,16,8) [] = { 0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000,
+                                                0x8000, 0x8000 };
+VECT_VAR_DECL(expected_lt_64_1,int,32,4) [] = { 0x80000000, 0x80000000,
+                                                0x80000000, 0x80000000 };
+int VECT_VAR(expected_csat_lt_64_1,int,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_1,int,32,4) = 1;
+
+/* Smaller types, corner cases causing cumulative saturation. (2)  */
+VECT_VAR_DECL(expected_lt_64_2,uint,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,2) [] = { 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,8,16) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
+                                                 0x0, 0x0, 0x0, 0x0 };
+VECT_VAR_DECL(expected_lt_64_2,uint,32,4) [] = { 0x0, 0x0, 0x0, 0x0 };
+int VECT_VAR(expected_csat_lt_64_2,uint,8,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,4) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,2) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,8,16) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,16,8) = 1;
+int VECT_VAR(expected_csat_lt_64_2,uint,32,4) = 1;
+
+void vqsub_extras(void)
+{
+  DECL_VARIABLE_ALL_VARIANTS(vector1);
+  DECL_VARIABLE_ALL_VARIANTS(vector2);
+  DECL_VARIABLE_ALL_VARIANTS(vector_res);
+
+  /* Initialize input "vector1" from "buffer".  */
+  TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
+
+  /* Use a second vector full of 0.  */
+  VDUP(vector2, , int, s, 64, 1, 0x0);
+  VDUP(vector2, , uint, u, 64, 1, 0x0);
+  VDUP(vector2, q, int, s, 64, 2, 0x0);
+  VDUP(vector2, q, uint, u, 64, 2, 0x0);
+
+#define MSG "64 bits saturation when subtracting zero"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64, MSG);
+
+  /* Another set of tests with non-zero values.  */
+  VDUP(vector2, , int, s, 64, 1, 0x44);
+  VDUP(vector2, , uint, u, 64, 1, 0x88);
+  VDUP(vector2, q, int, s, 64, 2, 0x44);
+  VDUP(vector2, q, uint, u, 64, 2, 0x88);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_2, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_2, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_2, MSG);
+
+  /* Another set of tests, with input values chosen to set
+     cumulative_sat in all cases.  */
+  VDUP(vector2, , int, s, 64, 1, 0x7fffffffffffffffLL);
+  VDUP(vector2, , uint, u, 64, 1, 0xffffffffffffffffULL);
+  /* To check positive saturation, we need to write a positive value
+     in vector1.  */
+  VDUP(vector1, q, int, s, 64, 2, 0x3fffffffffffffffLL);
+  VDUP(vector2, q, int, s, 64, 2, 0x8000000000000000LL);
+  VDUP(vector2, q, uint, u, 64, 2, 0xffffffffffffffffULL);
+
+#undef MSG
+#define MSG "64 bits saturation cumulative_sat (3)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 64, 1, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 64, 2, expected_cumulative_sat_64_3, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 64, 2, expected_cumulative_sat_64_3, MSG);
+
+  CHECK(TEST_MSG, int, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, int, 64, 2, PRIx64, expected_64_3, MSG);
+  CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected_64_3, MSG);
+
+  /* To improve coverage, check saturation with less than 64 bits
+     too.  */
+  VDUP(vector2, , int, s, 8, 8, 0x7F);
+  VDUP(vector2, , int, s, 16, 4, 0x7FFF);
+  VDUP(vector2, , int, s, 32, 2, 0x7FFFFFFF);
+  VDUP(vector2, q, int, s, 8, 16, 0x7F);
+  VDUP(vector2, q, int, s, 16, 8, 0x7FFF);
+  VDUP(vector2, q, int, s, 32, 4, 0x7FFFFFFF);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (1)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 8, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 16, 4, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , int, s, 32, 2, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 8, 16, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 16, 8, expected_csat_lt_64_1, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, int, s, 32, 4, expected_csat_lt_64_1, MSG);
+
+  CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 32, 2, PRIx32, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_lt_64_1, MSG);
+  CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_lt_64_1, MSG);
+
+  /* Another set of tests with vector1 values smaller than
+     vector2.  */
+  VDUP(vector1, , uint, u, 8, 8, 0x10);
+  VDUP(vector1, , uint, u, 16, 4, 0x10);
+  VDUP(vector1, , uint, u, 32, 2, 0x10);
+  VDUP(vector1, q, uint, u, 8, 16, 0x10);
+  VDUP(vector1, q, uint, u, 16, 8, 0x10);
+  VDUP(vector1, q, uint, u, 32, 4, 0x10);
+
+  VDUP(vector2, , uint, u, 8, 8, 0x20);
+  VDUP(vector2, , uint, u, 16, 4, 0x20);
+  VDUP(vector2, , uint, u, 32, 2, 0x20);
+  VDUP(vector2, q, uint, u, 8, 16, 0x20);
+  VDUP(vector2, q, uint, u, 16, 8, 0x20);
+  VDUP(vector2, q, uint, u, 32, 4, 0x20);
+
+#undef MSG
+#define MSG "less than 64 bits saturation cumulative_sat (2)"
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 8, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 16, 4, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, , uint, u, 32, 2, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 8, 16, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 16, 8, expected_csat_lt_64_2, MSG);
+  TEST_BINARY_SAT_OP(INSN_NAME, q, uint, u, 32, 4, expected_csat_lt_64_2, MSG);
+
+  CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_lt_64_2, MSG);
+  CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_lt_64_2, MSG);
+}